
Built with Itself — How TinyFirm's Marketing Was Made by TinyFirm



The marketing for TinyFirm — the research, the brand strategy, the blog posts, the landing page, the SEO, the social media content — was built by an AI agent team. That AI agent team was generated by TinyFirm.

This is building in public taken to its logical extreme. Most AI tools make promises. This one was used to build its own launch. If the AI agent workflow works well enough to produce its own marketing materials — with real deadlines, real quality standards, and real stakes — it works.

What follows is the full story: the team, the workflow, the deliverables, the failures, and the results. Not a sanitized case study. Not a highlight reel. The actual project — what building a product with AI agents looks like when you're shipping real work inside Cursor IDE.

The Project — TinyFirm Marketing

The Goal

Launch marketing for TinyFirm: a $49 one-time-purchase digital product — an AI team orchestration system for Cursor IDE. The deliverables list was not small: market research, brand voice guidelines, landing page copy, landing page build and deployment, SEO strategy, five blog posts with keyword briefs, social media content for six platforms, and a Gumroad listing optimization.

Timeline: days, not weeks.

The Constraint

One human. Ryan, the founder. No marketing team. No freelancers. No agency budget. No prior marketing content existed.

The thesis was direct: TinyFirm generates custom AI teams for any project. Could it generate a marketing team for its own launch?

The answer turned out to be yes. But the journey reveals what actually works about AI agent workflows and where the edges still are.

The Team

Meet the Team

Seven agents, each with a defined role, speaking style, and domain ownership:

  • Ace — CEO / Orchestrator. Managed every delegation. Never wrote code or content directly. Delegated, reviewed, synthesized across agents, and maintained the shared memory system.
  • Scout — Market Research Analyst. Competitive intelligence, audience research, market positioning, pricing analysis.
  • Prism — Brand Strategist & Creative Director. Brand voice guidelines, visual direction, logo concepts, cover image specifications.
  • Quill — Content Strategist & Copywriter. Blog posts, landing page copy, Gumroad listing, email copy, persona messaging, video scripts.
  • Beacon — SEO & Growth Analyst. Keyword research, content briefs with detailed outlines, conversion optimization, analytics strategy.
  • Buzz — Social Media Manager. Platform-specific content for Twitter/X, Reddit, LinkedIn, DEV.to, Bluesky, and Hacker News.
  • Forge — Frontend Engineer. Next.js 15 landing page build, Tailwind CSS styling, responsive design, Vercel deployment.

How the AI Coding Team Was Generated

TinyFirm's hiring interview asked about the project — what was being built, the tech stack, the goals, the constraints. The system generated appropriate roles with personalities, domain expertise, and specialized cursor rules.

The team composition is custom. A SaaS project would get engineering-heavy agents. An e-commerce project would get product and design specialists. This marketing project got a research analyst, a brand strategist, a content writer, an SEO specialist, a social media manager, and a frontend engineer. The right team for the right job.

[Screenshot: AGENTS.md team roster showing all 7 agents with roles and descriptions]

How the Workflow Actually Worked

Delegation — The 7-Section Protocol

Every task that left Ace's hands followed a structured delegation protocol. Seven sections, every time: agent identity (who you are), team briefing (full project context), personal memory (read your history), task (what to do), key files (where to look), expected output (what done looks like), and summary write-back (record what you did).

Here's a condensed example — when Ace delegated the landing page copy to Quill:

  • Agent Identity: You are Quill, Content Strategist. Eloquent but never pretentious.
  • Team Briefing: [Full project context — what we're building, decisions made, current phase]
  • Your Memory: Read your long-term memory file for patterns and lessons from previous work.
  • Task: Write production-ready landing page copy. 7 sections. Problem-first, feature-rich, CTA at the end.
  • Key Files: brand-voice-guidelines.md, competitive-matrix.md, landing-page-seo-requirements.md
  • Expected Output: Complete copy at content/landing-page-copy.md
  • Write-Back: Record changes, decisions, and lessons in your daily summary.

No ambiguity. No "make it good." Every delegation was a complete brief with full context, specific files, and a clear definition of done.

This protocol prevents the "I told the AI to do X and it did Y" problem. If you've used Cursor's agent mode without structured delegation, you know how quickly vague prompts produce vague results. Structure doesn't slow the agent down. It aims the agent.
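The post doesn't show TinyFirm's actual implementation, but the seven-section protocol is easy to picture as a small template builder. Here is a minimal sketch — the `Delegation` class, field names, and example values are hypothetical, illustrating only the fixed structure the post describes:

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """One task brief following the 7-section protocol (hypothetical sketch)."""
    agent_identity: str
    team_briefing: str
    memory_file: str
    task: str
    key_files: list[str]
    expected_output: str
    write_back: str = "Record changes, decisions, and lessons in your daily summary."

    def render(self) -> str:
        # Assemble the brief in the same fixed section order, every time.
        return "\n".join([
            f"Agent Identity: {self.agent_identity}",
            f"Team Briefing: {self.team_briefing}",
            f"Your Memory: Read {self.memory_file} for patterns and lessons.",
            f"Task: {self.task}",
            "Key Files: " + ", ".join(self.key_files),
            f"Expected Output: {self.expected_output}",
            f"Write-Back: {self.write_back}",
        ])

brief = Delegation(
    agent_identity="You are Quill, Content Strategist. Eloquent but never pretentious.",
    team_briefing="Launch marketing for TinyFirm; phase 2 of 4.",
    memory_file="memory/quill.md",
    task="Write production-ready landing page copy. 7 sections, problem-first.",
    key_files=["brand-voice-guidelines.md", "competitive-matrix.md"],
    expected_output="Complete copy at content/landing-page-copy.md",
)
print(brief.render())
```

The point of the structure isn't the code — it's that every brief is forced to answer the same seven questions before it reaches an agent, so "make it good" can never slip through.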

Memory in Action — Session Continuity

The project spanned multiple sessions across several days. Each time Ryan said "save progress," Ace wrote the full project state to memory files — what was built, what was decided, what was learned, what's next. Each time he said "pick up where we left off," Ace read everything back and gave a status update. No re-onboarding. No lost context.

Here's the chain that proved persistent memory works: Scout's competitive research identified pricing patterns and competitor weaknesses. Beacon read Scout's findings and built keyword strategies targeting gaps in the competition. Quill read Beacon's briefs — which contained Scout's competitive insights — and wrote blog posts that hit the right keywords while addressing real market gaps.

That chain only works if memory persists across sessions. If Beacon doesn't remember what Scout found, the keyword strategy ignores the competitive landscape. If Quill doesn't remember Beacon's brief, the content misses the SEO targets. Persistent memory turned seven independent agents into a coordinated team.
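The "save progress" / "pick up where we left off" loop amounts to writing structured state to files and reading it back at session start. A minimal sketch, assuming a `memory/<agent>.md` file layout (the real TinyFirm layout isn't shown in this post):

```python
from pathlib import Path
import datetime

MEMORY_DIR = Path("memory")  # hypothetical layout

def save_progress(agent: str, state: dict[str, str]) -> Path:
    """Append a dated snapshot (built / decided / learned / next) to the agent's memory file."""
    MEMORY_DIR.mkdir(exist_ok=True)
    path = MEMORY_DIR / f"{agent}.md"
    stamp = datetime.date.today().isoformat()
    lines = [f"## {stamp}"] + [f"- **{k}:** {v}" for k, v in state.items()]
    with path.open("a", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n\n")
    return path

def pick_up(agent: str) -> str:
    """Read everything back at the start of a new session. No re-onboarding."""
    path = MEMORY_DIR / f"{agent}.md"
    return path.read_text(encoding="utf-8") if path.exists() else ""

save_progress("beacon", {
    "built": "keyword strategy for 5 blog briefs",
    "decided": "target long-tail Cursor queries first",
    "learned": "Scout's matrix shows a gap in memory-focused content",
    "next": "brief #3: session continuity post",
})
print(pick_up("beacon"))
```

Because the files persist on disk, the next session — or a different agent — can read Scout's findings without anyone re-explaining them.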

For a deep dive on how persistent memory works — and simpler approaches for smaller projects — read How to Make Cursor Remember Context Across Sessions.

[Screenshot: Long-term memory file showing accumulated context from multiple sessions]

TinyFirm's Mission Control — Watching It Happen in Real Time

TinyFirm's Mission Control dashboard tracked every phase, every agent's status, every task, every decision. The live activity feed showed delegations happening, agents working, tasks completing — updating automatically.

[Screenshot: TinyFirm Mission Control dashboard showing phase progress with real agent names and activity feed]

This was the visibility layer that made the project manageable. When seven agents are working across research, brand, content, SEO, social, and engineering — you need to see what's happening. Not after the fact. In real time.

[Screenshot: TinyFirm Mission Control agent detail view showing completed tasks and lessons learned]

Parallel Work

Multiple agents worked simultaneously. While Quill wrote blog posts, Beacon created SEO briefs for the next batch. While Forge built the landing page, Prism designed the brand identity. While Buzz prepared the social media launch package, Scout refined the competitive analysis.

This is the leverage of a multi-agent workflow: parallel execution with shared context. A single AI handles one thing at a time. A team handles many — and because they share a persistent memory system, they stay aligned even when working independently.

What the Team Actually Produced

Concrete deliverables. Not claims — artifacts.

Market Research & Competitive Intelligence

Scout produced a competitive matrix analyzing five competitors across nine dimensions. Plus audience persona profiles, pricing analysis, channel strategy ranking, and a Reddit community analysis covering r/vibecoding (89K members) and r/cursor.

Brand Voice & Visual Identity

Prism produced comprehensive brand voice guidelines — five voice pillars (Confident, Inventive, Precise, Warm, Opinionated), a tone spectrum calibrated for nine channels, vocabulary rules with banned terms and preferred alternatives, and per-persona voice shifts. Plus a logo creative brief and OG image specifications.

Content & Copy

Quill produced complete landing page copy (seven sections, problem-first structure), five blog posts with full SEO optimization, a Gumroad listing optimization, vibe coder persona messaging, a founder narrative, and a 60-second demo video script.

The blog post you're reading right now? Briefed by Beacon. Written by Quill. Managed by Ace.

SEO & Growth Strategy

Beacon produced landing page SEO requirements, keyword research across 17 target keywords with volume and difficulty estimates, and five detailed blog content briefs — each with a 10-section outline, featured snippet targets, CTA placement maps, and competitive SERP analysis.

Landing Page & Frontend

Forge built tinyfirm.dev — a Next.js 15 landing page with Tailwind CSS and shadcn/ui components. Dark-themed, responsive, accessible. Deployed on Vercel from a private GitHub repo. Every section was built from Quill's copy, aligned with Prism's brand voice, and optimized for Beacon's SEO targets.

Social Media Content

Buzz produced a multi-platform launch package: a Twitter/X launch thread, Reddit posts adapted for r/cursor and r/vibecoding, LinkedIn announcements, DEV.to article drafts, Bluesky posts, and a Hacker News "Show HN" submission — each calibrated to the platform's culture and norms. For more on reaching the vibe coding community, see Vibe Coding at Scale.

What Went Wrong (And What We Learned)

Real projects have failures. Showing them builds more credibility than the successes ever could.

Lesson 1 — Agents Need Course Correction

Not every delegation produced the right output on the first try. Some agents misinterpreted scope. Some produced inconsistent formatting. Some went in a direction that didn't match the brief.

The fix: Ace's correction protocol. Identify the specific failure. Write a correction to the agent's long-term memory so the mistake doesn't repeat on future delegations. Re-delegate with the correction stated explicitly in the task.

AI teams are not set-and-forget. They need management — the same way human teams do. Ace's role as a manager who never writes code but always reviews output is not a quirk. It's the core design decision that makes the system work.
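The correction protocol is two writes: log the lesson to the agent's long-term memory, then restate it explicitly in the re-delegated task. A minimal sketch — the function, file layout, and example failure are hypothetical, not TinyFirm's actual code:

```python
from pathlib import Path

def correct_and_redelegate(agent: str, failure: str, fix: str, task: str) -> str:
    """Ace's correction loop (sketch): record the lesson so it persists,
    then put the correction front and center in the new brief."""
    memory = Path("memory") / f"{agent}.md"  # hypothetical layout
    memory.parent.mkdir(exist_ok=True)
    with memory.open("a", encoding="utf-8") as f:
        f.write(f"- LESSON: {failure} -> {fix}\n")
    # The re-delegation states the correction explicitly, not implicitly.
    return (
        f"Task: {task}\n"
        f"Correction (do not repeat): {failure}. Instead: {fix}"
    )

new_brief = correct_and_redelegate(
    "quill",
    "used sentence-case headings",
    "use title-case headings per brand guidelines",
    "Rewrite blog post #2 headings.",
)
print(new_brief)
```

Writing the lesson to memory is what prevents the same mistake on future delegations; restating it in the task is what fixes the current one.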

Lesson 2 — Context Can Drift

Over many sessions, accumulated context sometimes became stale. An early decision that was later reversed might still appear in an agent's memory, causing conflicting behavior.

The fix: agents flag potential inconsistencies in their daily summaries. Each summary includes an "Observations" section where agents note anything that looks outdated or contradictory. Ace reviews these and corrects the memory files during condensation.

Persistent memory is powerful. But it needs maintenance. Stale memory is worse than no memory — it causes confident decisions based on outdated information.

Lesson 3 — The Biggest Time Sink Was Coordination, Not Creation

The agents produced content quickly. The bottleneck was alignment — ensuring Quill's copy matched Prism's brand voice, Beacon's keywords appeared in Quill's content, Forge's implementation matched the copy, and Buzz's social posts reflected the latest messaging.

This is the same challenge human teams face. AI teams don't eliminate coordination cost — they change its shape. The structured delegation protocol reduces it significantly. It doesn't eliminate it. Anyone claiming AI teams are friction-free is selling something.

The Results — By the Numbers

  • Agents: 7 (Ace + 6 specialists)
  • Sessions: Multiple, spanning several days
  • Deliverables: Market research, brand voice guidelines, landing page (copy + build + deploy), 5 blog content briefs, 5 blog posts, SEO strategy, social media launch package, Gumroad listing optimization, demo video script, founder narrative
  • Landing page: Live at tinyfirm.dev — Next.js 15, Tailwind CSS, shadcn/ui
  • TinyFirm Mission Control events logged: 100+
  • Human input: One person (Ryan). Direction, decisions, and approvals. Zero content writing. Zero code writing.

One human. Seven AI agents. A persistent memory system. A live dashboard. And a complete marketing launch.

Frequently Asked Questions

Did a human write any of the content?

Ryan provided direction, made decisions, and approved output. He did not write marketing copy, blog posts, code, or design specifications. The AI agents produced all deliverables. Ace managed the workflow. This blog post was briefed by Beacon and written by Quill — both AI agents in the TinyFirm Marketing team.

How long did the whole project take?

The marketing project spanned multiple sessions over several days — market research, brand development, content creation, SEO strategy, landing page build, and social media planning. Individual tasks were completed within sessions. Session continuity ("save progress" / "pick up where we left off") allowed work to compound across days without losing context.

Can I use TinyFirm for non-coding projects like this?

Yes. While TinyFirm was designed for software development, the agent generation system creates teams based on your project's needs. The hiring interview adapts — a marketing project gets marketing agents, a product project gets product agents. The orchestration layer (delegation, memory, TinyFirm's Mission Control dashboard) works regardless of domain.

What's the difference between this and using ChatGPT?

ChatGPT is one generalist in a single conversation. TinyFirm generates a team of specialists with persistent memory, structured delegation, and a live project dashboard — all inside your IDE. The difference is between asking one person to do everything and managing a coordinated team where each member owns a domain and they all share a memory system.

Conclusion

TinyFirm's marketing was built by TinyFirm. The meta is real.

The system isn't theoretical — it produced the market research, the brand voice, the content, the landing page, and the growth strategy for its own launch. Every blog post in this series was briefed by an AI SEO analyst and written by an AI content strategist. The landing page was built by an AI frontend engineer using copy guided by brand voice from an AI brand strategist informed by competitive research from an AI market analyst.

That chain — from research to strategy to creation to deployment — is what an AI agent workflow looks like when it actually works.

If an AI agent team can ship a complete marketing launch, imagine what it can do for your project.

Stop prompting. Start delegating. TinyFirm generates custom AI agent teams with persistent memory for Cursor IDE — the best cursor rules system for building real things with AI. $49 one-time. Unlimited projects.

This post exists because the system works. Your project can be next.

Read the rest of the series: The Complete Guide to Cursor IDE Agent Mode · How to Make Cursor Remember Context Across Sessions · TinyFirm vs. Free Cursor Rules · Vibe Coding at Scale

Stop Prompting. Start Delegating.

TinyFirm generates custom AI agent teams with persistent memory for Cursor IDE. One-time purchase. Unlimited projects.

Get TinyFirm — $49