Operating cadence
The studio runs on a schedule. The agents follow it.
Most "AI agency" promises are vibes. Ours is a published cadence — what gets briefed, when it gets reviewed, and what AI is allowed to do without a human signing off. Here's how the studio actually runs.
AgentM runs four recurring tasks: a Monday briefing, a Wednesday standup, a Friday review, and a daily content draft Mon–Fri. Each task reads context, drafts work, and notifies the operator. AI never publishes externally without human approval. The full operating contract is below.
The weekly rhythm
Four recurring jobs the studio runs without being asked. Each generates a deliverable on disk and notifies the operator. None of them publish anything externally; they're structured first drafts the operator finishes.
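To make the cadence concrete, here is a sketch of the four jobs expressed as a schedule config. The job names mirror the sections below; the cron strings, field names, and output paths are illustrative assumptions, not the actual AgentM configuration.

```ts
// A sketch of the weekly cadence as data. Cron strings are assumed to be
// "minute hour day-of-month month day-of-week"; times and paths are placeholders.
type Job = {
  name: string;
  schedule: string;   // cron expression (assumed format)
  output: string;     // where the deliverable lands on disk
  publishes: false;   // nothing goes out externally without approval
};

const cadence: Job[] = [
  { name: "weekly-briefing",    schedule: "0 7 * * MON",     output: "briefings/", publishes: false },
  { name: "midweek-standup",    schedule: "0 7 * * WED",     output: "standups/",  publishes: false },
  { name: "performance-review", schedule: "0 7 * * FRI",     output: "reviews/",   publishes: false },
  { name: "content-drafts",     schedule: "0 6 * * MON-FRI", output: "drafts/",    publishes: false },
];
```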
Weekly briefing
Reads the 90-day plan, the previous Friday's review, and any open work. Returns the single most important thing to do this week, three priorities per active product, the content plan, and an explicit list of what needs operator approval.
- Open tasks from last week
- Compliance flags (if any)
- Approval queue
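As a sketch of the deliverable itself, the briefing could be shaped roughly like this. The field names are assumptions drawn from the description above, not the real schema.

```ts
// Hypothetical shape of a Monday briefing, mirroring the sections listed above.
interface WeeklyBriefing {
  topPriority: string;                                         // the single most important thing this week
  productPriorities: Record<string, [string, string, string]>; // three priorities per active product
  contentPlan: string[];
  openTasksFromLastWeek: string[];
  complianceFlags: string[];     // empty when there is nothing to flag
  approvalQueue: string[];       // items that need explicit operator sign-off
}
```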
Mid-week standup
A short check-in. On track? Drifting? Blocked? It's deliberately tiny — under 300 words — so blockers get caught Wednesday instead of being discovered at Friday review.
- What's actually shipped since Monday
- Specific blockers (not "behind schedule")
- One question for the operator, max
Performance review
Pulls metrics from connected sources (PostHog, Gmail, Search Console, Play Console). Compares plan to reality. Names what worked, what didn't, and what we're killing under our pre-set kill triggers.
- Numbers, not adjectives
- Honesty rating 1–10 — below 6 we rewrite the plan
- What to test next week
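Two of the rules above are mechanical: the honesty threshold and the kill triggers. A minimal sketch, assuming a hypothetical `Review` record; the below-6 cutoff comes from the contract, everything else is illustrative.

```ts
// Illustrative shapes; only the thresholds are from the published contract.
interface KillTrigger { metric: string; floor: number }   // pre-set minimum for a metric

interface Review {
  metrics: Record<string, number | undefined>;  // pulled from PostHog, Search Console, etc.
  honestyRating: number;                        // 1-10 self-assessment
  killTriggers: KillTrigger[];
}

function reviewActions(r: Review): string[] {
  const actions: string[] = [];
  if (r.honestyRating < 6) actions.push("rewrite the plan");   // below 6, the plan gets rewritten
  for (const t of r.killTriggers) {
    const value = r.metrics[t.metric];
    if (value !== undefined && value < t.floor) actions.push(`kill trigger hit: ${t.metric}`);
  }
  return actions;
}
```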
Content draft batch
Three TikTok hooks, two compliance-checked health-niche scripts, one ad copy variant, and one Reddit comment opportunity — landed on disk before the operator's first coffee. Drafts only; nothing posts without approval.
- Per-product positioning enforced
- Brand voice and compliance constraints applied
- Logged with model used + timestamp
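The last bullet, logging model and timestamp, could look something like this per draft; the log shape is an assumption.

```ts
// Hypothetical log entry for one generated draft; only "model used + timestamp" is from the text.
interface DraftLogEntry {
  product: string;            // positioning is enforced per product
  kind: "tiktok-hook" | "script" | "ad-copy" | "reddit-comment";
  model: string;              // which model produced the draft
  createdAt: string;          // ISO-8601 timestamp
  complianceChecked: boolean; // brand voice and compliance constraints applied
  approved: boolean;          // stays false until the operator signs off
}
```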
What the agents do without asking
Some work is safe to automate end-to-end because the output stays internal. The rule of thumb: if it doesn't publish, spend money, or reach a customer, it can run on autopilot.
| Task | Mode | Why |
|---|---|---|
| Daily content drafts | Auto | Written to disk, never posted |
| ASO keyword research | Auto | Read-only research |
| App review sentiment scan | Auto | Reading user signal, not responding to it |
| Competitive monitoring | Auto | Watching the market |
| Performance reports | Auto | Numbers gathered, written, sent to operator |
| Email drafts to customers | Approve-then-act | Wrong email is reputational damage |
| Social media posts | Approve-then-act | Public, hard to undo |
| App store metadata updates | Approve-then-act | Affects all users; rollback is slow |
| App review responses | Approve-then-act | Public, attached to brand permanently |
| Ad campaign creation | Approve-then-act | Spends money |
| Influencer outreach | Approve-then-act | Brand-defining first impression |
| Pricing changes | Plan-only | Strategic, multi-system effect |
| Vendor contracts | Plan-only | Legal commitment |
| Hiring decisions | Plan-only | Requires human judgment |
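The three modes in the table reduce to a small gate that every proposed action passes through. A minimal sketch, assuming a `Task` type and an `operatorApproved` flag that are not part of the published contract.

```ts
type Mode = "auto" | "approve-then-act" | "plan-only";

interface Task { name: string; mode: Mode }

// Can this task's output actually be executed right now?
function canExecute(task: Task, operatorApproved: boolean): boolean {
  if (task.mode === "auto") return true;                         // stays internal, runs end-to-end
  if (task.mode === "approve-then-act") return operatorApproved; // drafted by AI, executed only after sign-off
  return false;                                                  // plan-only: AI proposes, a human decides and acts
}
```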
Picking the right model for the job
Different work calls for different tools. We don't pretend a single LLM does everything well — every task routes to whichever model is cheapest, fastest, or sharpest for the job at hand.
| Job | We use | Why |
|---|---|---|
| Strategy, positioning, long-form | Claude (Opus / Sonnet) | Best long-context reasoning and clean writing |
| Bulk content variants | Gemini Flash | Fast, cheap, good enough for first drafts |
| Research with citations | Perplexity | Real-time search with source links |
| Code review and debugging | Claude (Sonnet) | Strong code reasoning, integrated in workflow |
| Image generation | Midjourney / Flux Pro | Best quality for marketing assets |
| Short video | Veo / Sora | Best lip-sync and physical realism |
| Voiceover | ElevenLabs / Fish Audio | Natural delivery in 24+ locales |
| Translation (first pass) | Gemini Flash | Cheap and good enough; native review for paid copy |
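Captured as data, the routing table is small enough to read at a glance. The model names come straight from the table; the `JobKind` keys and lookup are assumptions for illustration.

```ts
type JobKind =
  | "strategy" | "bulk-content" | "research" | "code-review"
  | "image" | "short-video" | "voiceover" | "translation-first-pass";

// A transcription of the table above; entries get swapped as models improve.
const modelRouting: Record<JobKind, string> = {
  "strategy":               "Claude (Opus / Sonnet)",
  "bulk-content":           "Gemini Flash",
  "research":               "Perplexity",
  "code-review":            "Claude (Sonnet)",
  "image":                  "Midjourney / Flux Pro",
  "short-video":            "Veo / Sora",
  "voiceover":              "ElevenLabs / Fish Audio",
  "translation-first-pass": "Gemini Flash",
};

const pickModel = (job: JobKind): string => modelRouting[job];
```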
Things AI is not allowed to do
The hard rules. These can't be unlocked by another agent or even the operator in a single session — they require deliberate, written exceptions.
- Send any email or message without explicit per-message approval
- Move money — no charges, refunds, or payouts
- Delete data — no account deletions, file deletions, no destructive database operations
- Speak as the operator on any platform without flagging the post as AI-assisted
- Make commitments to vendors, partners, customers, or employees on the operator's behalf
- Override the compliance constraints — for example, by mentioning medication brand names in GLP-1-related copy, or by violating Vinted's Terms of Service in resale-related copy
If an agent finds itself about to do any of the above, it refuses and explains why, citing this policy.
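In code, the refusal amounts to checking every proposed action against the hard-rule list before anything runs. A sketch under the assumption of a hypothetical `ProposedAction` type; deciding which rule an action touches happens upstream.

```ts
// Categories mirroring the hard rules above; the types and gate are illustrative.
type HardRule =
  | "send-message" | "move-money" | "delete-data"
  | "impersonate-operator" | "make-commitment" | "override-compliance";

interface ProposedAction { description: string; violates?: HardRule }

function gateHardRules(action: ProposedAction): { allowed: boolean; reason?: string } {
  if (action.violates) {
    // Refuse and explain why, citing the policy; unlocking requires a deliberate,
    // written exception, never a single-session override.
    return {
      allowed: false,
      reason: `Blocked by hard rule "${action.violates}" in the operating contract.`,
    };
  }
  return { allowed: true };
}
```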
Crisis triggers
Some events stop everything else and notify the operator immediately, regardless of what task is in flight.
- App store rejection on any product
- Ad account warning or strike from Google, Meta, or TikTok
- More than three one-star app reviews in 24 hours
- Customer support volume spike of 20%+ above baseline
- Compliance flag in any draft — medical claim or brand-name leak in Titra copy, Vinted ToS-circumvention in VintSnap copy
- Sentry error spike affecting app stability
- Any legal mention — DMCA, IP claim, regulatory letter
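The two numeric triggers in that list can be checked mechanically. A sketch assuming hypothetical counters; the thresholds (more than three one-star reviews in 24 hours, support volume 20% or more above baseline) are the ones stated above.

```ts
// Hypothetical signal snapshot; only the thresholds come from the published list.
interface Signals {
  oneStarReviewsLast24h: number;
  supportTicketsToday: number;
  supportTicketsBaseline: number;
}

function crisisTriggersHit(s: Signals): string[] {
  const hits: string[] = [];
  if (s.oneStarReviewsLast24h > 3) hits.push("one-star review spike");
  if (s.supportTicketsToday >= s.supportTicketsBaseline * 1.2) hits.push("support volume spike");
  return hits;   // any hit pre-empts whatever task is in flight and notifies the operator immediately
}
```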
Why publish this
Three reasons. One: clients deserve to know how the work actually gets done — not a sales pitch. Two: it forces us to keep the operating contract honest, because it's public. Three: the studio playbook is part of the product. We're not just shipping apps; we're shipping a way of running a software business.