Operating cadence

The studio runs on a schedule. The agents follow it.

Most "AI agency" promises are vibes. Ours is a published cadence — what gets briefed, when it gets reviewed, and what AI is allowed to do without a human signing off. Here's how the studio actually runs.

TL;DR

AgentM runs four scheduled tasks each week — Monday briefing, Wednesday standup, Friday review, and a daily content draft Mon–Fri. Each task reads context, drafts work, and notifies the operator. AI never publishes externally without human approval. The full operating contract is below.

The weekly rhythm

Four recurring jobs the studio runs without being asked. Each generates a deliverable on disk and notifies the operator. None of them publish anything externally — they're a structured first draft you finish.

Mon · 09:00 local

Weekly briefing

Reads the 90-day plan, last Friday's review, and any open work. Returns the single most important thing to do this week, three priorities per active product, the content plan, and an explicit list of what needs operator approval.

  • Open tasks from last week
  • Compliance flags (if any)
  • Approval queue

Wed · 12:00 local

Mid-week standup

A short check-in. On track? Drifting? Blocked? It's deliberately tiny — under 300 words — so blockers get caught Wednesday instead of being discovered at Friday review.

  • What's actually shipped since Monday
  • Specific blockers (not "behind schedule")
  • One question for the operator, max

Fri · 15:00 local

Performance review

Pulls metrics from connected sources (PostHog, Gmail, Search Console, Play Console). Compares plan to reality. Names what worked, what didn't, and what we're killing under our pre-set kill triggers.

  • Numbers, not adjectives
  • Honesty rating 1–10 — below 6 we rewrite the plan
  • What to test next week

Daily · 08:00, weekdays

Content draft batch

Three TikTok hooks, two compliance-checked health-niche scripts, one ad copy variant, and one Reddit comment opportunity — landed on disk before the operator's first coffee. Drafts only; nothing posts without approval.

  • Per-product positioning enforced
  • Brand voice and compliance constraints applied
  • Logged with model used + timestamp
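The four jobs above are simple enough to express as data. A minimal sketch of the schedule, assuming an hourly scheduler tick; the task names, deliverable paths, and scheduler shape are illustrative, not the studio's actual code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScheduledTask:
    name: str
    days: tuple        # days of week the task fires
    hour: int          # local hour, 24h clock
    deliverable: str   # what lands on disk

# The four recurring jobs described above (names and paths are invented).
CADENCE = (
    ScheduledTask("weekly_briefing",     ("Mon",),                            9,  "briefing.md"),
    ScheduledTask("midweek_standup",     ("Wed",),                            12, "standup.md"),
    ScheduledTask("performance_review",  ("Fri",),                            15, "review.md"),
    ScheduledTask("content_draft_batch", ("Mon", "Tue", "Wed", "Thu", "Fri"), 8,  "drafts/"),
)

def due_tasks(day: str, hour: int) -> list:
    """Names of the tasks that fire at this local day/hour."""
    return [t.name for t in CADENCE if day in t.days and hour == t.hour]
```

A scheduler that ticks once an hour calls `due_tasks("Mon", 9)` and gets back `["weekly_briefing"]`; on weekends nothing fires.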

What the agents do without asking

Some work is safe to automate end-to-end because the output stays internal. The rule of thumb: if it doesn't publish, spend money, or reach a customer, it can run on autopilot.

| Task | Mode | Why |
| --- | --- | --- |
| Daily content drafts | Auto | Written to disk, never posted |
| ASO keyword research | Auto | Read-only research |
| App review sentiment scan | Auto | Reading user signal, not responding to it |
| Competitive monitoring | Auto | Watching the market |
| Performance reports | Auto | Numbers gathered, written, sent to operator |
| Email drafts to customers | Approve-then-act | Wrong email is reputational damage |
| Social media posts | Approve-then-act | Public, hard to undo |
| App store metadata updates | Approve-then-act | Affects all users; rollback is slow |
| App review responses | Approve-then-act | Public, attached to brand permanently |
| Ad campaign creation | Approve-then-act | Spends money |
| Influencer outreach | Approve-then-act | Brand-defining first impression |
| Pricing changes | Plan-only | Strategic, multi-system effect |
| Vendor contracts | Plan-only | Legal commitment |
| Hiring decisions | Plan-only | Requires human judgment |
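The table reduces to one lookup plus one default rule. A sketch of how that check could work, assuming a task registry keyed by name; every identifier here is hypothetical:

```python
# Mode per task, mirroring the table above (names are invented for illustration).
AUTONOMY = {
    "daily_content_drafts":  "auto",
    "aso_keyword_research":  "auto",
    "performance_reports":   "auto",
    "customer_email_drafts": "approve_then_act",
    "social_media_posts":    "approve_then_act",
    "ad_campaign_creation":  "approve_then_act",
    "pricing_changes":       "plan_only",
    "vendor_contracts":      "plan_only",
}

def may_execute(task: str, operator_approved: bool = False) -> bool:
    """True only if the agent may carry the task through to completion."""
    # Unknown work defaults to the most restrictive tier.
    mode = AUTONOMY.get(task, "plan_only")
    if mode == "auto":
        return True
    if mode == "approve_then_act":
        return operator_approved
    return False  # plan_only: the agent drafts a plan, a human executes it
```

The important design choice is the default: a task nobody classified is treated as plan-only, not auto.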

Picking the right model for the job

Different work calls for different tools. We don't pretend a single LLM does everything well — every task routes to whichever model is cheapest, fastest, or sharpest for the job at hand.

| Job | We use | Why |
| --- | --- | --- |
| Strategy, positioning, long-form | Claude (Opus / Sonnet) | Best long-context reasoning and clean writing |
| Bulk content variants | Gemini Flash | Fast, cheap, good enough for first drafts |
| Research with citations | Perplexity | Real-time search with source links |
| Code review and debugging | Claude (Sonnet) | Strong code reasoning, integrated in workflow |
| Image generation | Midjourney / Flux Pro | Best quality for marketing assets |
| Short video | Veo / Sora | Best lip-sync and physical realism |
| Voiceover | ElevenLabs / Fish Audio | Natural delivery in 24+ locales |
| Translation (first pass) | Gemini Flash | Cheap and good enough; native review for paid copy |
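In code, that routing is just a table with a conservative fallback. A sketch under loose assumptions: the job keys and model identifiers are illustrative, and a real router would also weigh cost and latency per call:

```python
# Job type -> preferred model, mirroring the table above (identifiers invented).
MODEL_ROUTES = {
    "strategy":       "claude-opus",
    "long_form":      "claude-sonnet",
    "bulk_variants":  "gemini-flash",
    "cited_research": "perplexity",
    "code_review":    "claude-sonnet",
    "translation_v1": "gemini-flash",
}

DEFAULT_MODEL = "claude-sonnet"  # assumed fallback for jobs nobody routed

def route(job: str) -> str:
    """Pick a model for a job type; unrouted jobs get the default."""
    return MODEL_ROUTES.get(job, DEFAULT_MODEL)
```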

Things AI is not allowed to do

The hard rules. These can't be unlocked by another agent or even the operator in a single session — they require deliberate, written exceptions.

  • Send any email or message without explicit per-message approval
  • Move money — no charges, refunds, or payouts
  • Delete data — no account deletions, file deletions, no destructive database operations
  • Speak as the operator on any platform without flagging the post as AI-assisted
  • Make commitments to vendors, partners, customers, or employees on the operator's behalf
  • Override the compliance constraints — for example, by mentioning medication brand names in any GLP-1-related copy, or by violating Vinted's Terms of Service in resale-related copy

If an agent finds itself about to do any of the above, it refuses and explains why, citing this policy.
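Enforced mechanically, that refusal is a guard every outbound action passes through before execution. A sketch only, with invented action names; the real enforcement layer is internal:

```python
# Actions under a hard ban, per the list above (names are illustrative).
HARD_RULES = {
    "send_external_message",
    "move_money",
    "delete_data",
    "impersonate_operator",
    "commit_on_operators_behalf",
    "override_compliance",
}

class PolicyRefusal(Exception):
    """Raised instead of executing a banned action."""

def guard(action: str) -> str:
    """Pass permitted actions through; refuse banned ones with a cited reason."""
    if action in HARD_RULES:
        raise PolicyRefusal(
            f"Refusing '{action}': hard rule in the operating contract; "
            "requires a deliberate, written exception."
        )
    return action
```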

Crisis triggers

Some events stop everything else and notify the operator immediately, regardless of what task is in flight.

  • App store rejection on any product
  • Ad account warning or strike from Google, Meta, or TikTok
  • More than three one-star app reviews in 24 hours
  • Customer support volume spike of 20%+ above baseline
  • Compliance flag in any draft — medical claim or brand-name leak in Titra copy, Vinted ToS-circumvention in VintSnap copy
  • Sentry error spike affecting app stability
  • Any legal mention — DMCA, IP claim, regulatory letter
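Those triggers amount to a predicate over incoming events. A sketch of how such a check might look; the event shapes and field names are invented, while the thresholds mirror the list above:

```python
# Event kinds that always interrupt, regardless of magnitude.
ALWAYS_CRISIS = {
    "app_store_rejection",
    "ad_account_strike",
    "compliance_flag",
    "sentry_error_spike",
    "legal_mention",
}

def is_crisis(event: dict) -> bool:
    """True if the event should halt in-flight work and notify the operator now."""
    kind = event.get("kind")
    if kind in ALWAYS_CRISIS:
        return True
    # More than three one-star reviews within 24 hours.
    if kind == "one_star_reviews" and event.get("count_24h", 0) > 3:
        return True
    # Support volume 20%+ above baseline.
    if kind == "support_volume" and event.get("pct_above_baseline", 0) >= 20:
        return True
    return False
```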

Why publish this

Three reasons. One: clients deserve to know how the work actually gets done — not a sales pitch. Two: it forces us to keep the operating contract honest, because it's public. Three: the studio playbook is part of the product. We're not just shipping apps; we're shipping a way of running a software business.

Hire the studio →