# TrustMeImWorking

Automatically consume your AI API token quota on a schedule — so your usage metrics always look healthy.

## What it does
TrustMeImWorking runs as a background daemon that calls your configured LLM API at controlled intervals, consuming tokens according to your daily/weekly budget. It supports three run modes:
| Mode | Behaviour |
|---|---|
| ⭐ Work Simulation (recommended) | Fires during working hours on weekdays, using realistic job-related prompts (code review, architecture design, data analysis…). Looks exactly like real usage. |
| Spread | Distributes today's budget evenly across the remaining hours, so consumption is smooth and natural-looking. |
| Immediate | Starts consuming right now; resets at 00:00 each day. |
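The spread mode's budgeting can be sketched roughly like this (the function name and exact logic are illustrative, not the tool's actual internals):

```python
from datetime import datetime

def spread_allocation(daily_budget: int, consumed: int, now: datetime) -> float:
    """Evenly divide today's remaining token budget across the remaining hours.

    Hypothetical helper for illustration: `daily_budget` and `consumed`
    are token counts; the day resets at 00:00, as in Immediate mode.
    """
    hours_left = 24 - now.hour            # whole hours until the 00:00 reset
    remaining = max(daily_budget - consumed, 0)
    return remaining / max(hours_left, 1)

# e.g. at 18:00 with a 14,000-token budget and 5,200 already used:
# 8,800 tokens left over 6 hours -> ~1,466 tokens/hour
```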
## Quick start

```bash
# 1. Clone
git clone https://github.com/pengtianhao48-lab/TrustMeImWorking
cd TrustMeImWorking

# 2. Install dependencies
pip install -r requirements.txt

# 3. Configure (interactive wizard)
python3 tmw.py wizard

# 4. Start (real-time dashboard)
python3 tmw.py start
```
## Wizard

The interactive wizard (`python3 tmw.py wizard`) walks you through:
- Language — English or Chinese (中文)
- Platform — OpenAI, DeepSeek, Anthropic, Gemini, Groq, or Custom
- Base URL — official endpoint or a third-party relay / company gateway
- Model — with platform defaults pre-filled
- API Key — stored only in a local `config.json`, never committed
- Weekly budget — min/max token range
- Job description — used to generate realistic work-simulation prompts
- Run mode — work / spread / immediate
- Work hours — start/end time, timezone (work mode only)
- Enterprise gateway — custom headers, proxy, mTLS, JWT (optional)
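For the enterprise-gateway case, a request routed through a relay with custom headers might be built like this (the URL, header names, and model are placeholders, not the tool's real configuration keys):

```python
import json
import urllib.request

def build_request(base_url: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Illustrative only: an OpenAI-compatible chat request sent through a
    company gateway with an extra custom header. All names are placeholders."""
    body = json.dumps({
        "model": "gpt-4o-mini",  # hypothetical default model
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "X-Company-Gateway": "llm-relay",  # hypothetical custom header
        },
        method="POST",
    )

# Build (but don't send) a request against a hypothetical gateway endpoint:
req = build_request("https://gateway.example.com/v1", "sk-test", "hello")
```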
## Dashboard

`python3 tmw.py start` opens a live dashboard that refreshes every 2 seconds:
```
╭──────────────────────────────────────────────────────────────────────╮
│ TrustMeImWorking Platform: DeepSeek Mode: Work-Simulation │
│ Uptime: 01m 23s Config: config.json │
╰──────────────────────────────────────────────────────────────────────╯
╭─────────────────────────── Consumption ──────────────────────────────╮
│ Today ████████████████████████░░░░ 12,345 / 14,000 (88%) │
│ This week ███████████████████████░░░░░ 67,890 / 50,000–80,000 │
╰──────────────────────────────────────────────────────────────────────╯
╭──────────────────────────── Session ─────────────────────────────────╮
│ ● ACTIVE This session: 1,234 / 8,800 (14%) │
│ ███░░░░░░░░░░░░░░░░░░░░░░░░░ │
│ Today's progress: 5,200 / 14,000 (37%) │
│ Prompt: How does Kubernetes handle container orchestration… │
╰──────────────────────────────────────────────────────────────────────╯
```

Press Ctrl+C to stop.
## Commands

| Command | Description |
|---|---|
| `python3 tmw.py wizard` | Interactive setup wizard |
| `python3 tmw.py start` | Start daemon with live dashboard |
| `python3 tmw.py start -b` | Start as background daemon (silent) |
| `python3 tmw.py stop` | Stop background daemon |
| `python3 tmw.py logs` | View recent log (last 50 lines) |
| `python3 tmw.py logs -n 200` | View more log lines |
| `python3 tmw.py status` | Show consumption stats |
| `python3 tmw.py run` | Run one session immediately |
| `python3 tmw.py run --dry-run` | Simulate without calling the API |
## Requirements

- Python 3.8+
- `pip install -r requirements.txt` (rich, openai, requests)
- An API key for any OpenAI-compatible LLM service
## Configuration

All settings are stored in `config.json` (auto-generated by the wizard). The file is listed in `.gitignore` and never committed.
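A generated file might look roughly like this. The field names below are guesses inferred from the wizard's prompts, not the tool's actual schema — run the wizard to produce the real file:

```json
{
  "platform": "deepseek",
  "base_url": "https://api.deepseek.com",
  "model": "deepseek-chat",
  "api_key": "sk-...",
  "weekly_budget": { "min": 50000, "max": 80000 },
  "mode": "work",
  "work_hours": { "start": "09:00", "end": "18:00", "timezone": "Asia/Shanghai" },
  "job_description": "Backend engineer working on Kubernetes services"
}
```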
## Supported platforms
OpenAI · DeepSeek · Anthropic · Google Gemini · Groq · Any OpenAI-compatible endpoint (Ollama, vLLM, LM Studio, SiliconFlow, company gateways, etc.)