Surviving AI

The 4-Month Reality Check

We thought automating content with AI would be fast. It took us four man-months of brutal manual work to build our first course. Here is the unvarnished truth about why.

Mark Jones · Collab365

Before we burned down our legacy systems to build the Collab365 Spaces architecture natively, we had to prove a hypothesis. Could we actually use AI to generate world-class, rigorous problem-solving content at scale?

To find out, we ran two major internal experiments. We duct-taped together a hybrid process and forced ourselves to run it entirely manually.

It nearly broke us. And it was the most important thing we ever did.

Experiment 1: The AI Authority System

For our first experiment, we decided to build an entire flagship course. We called it The AI Authority System. We used AI tools, but humans managed every single step of the pipeline.

We hosted the course in WordPress on Thrive Apprentice. But instead of authoring inside that clunky editor, we built a private local knowledge base. We authored the entire curriculum in pristine Markdown files using an AI assistant called Antigravity.

The AI Authority System source control using Antigravity
Mastering the curriculum in local Markdown files using Antigravity gave us complete control over the structure before syncing it up to WordPress.

We wrote incredibly detailed prompts targeting specific avatars to ground the AI. We ingested thousands of research documents and scripts. We ran podcast transcripts through Google's NotebookLM to extract synthetic insights.

Then, we wrote a script. Anytime a human or an AI changed a local Markdown file, it automatically synced straight over the API to Thrive Apprentice.

For the first time, we had a master source of truth in version control that was completely disconnected from the LMS platform. It was vastly quicker than hand-cranking a course. But it still took two of us two whole months of rigorous manual verification to finish. That is four man-months of labor.

Experiment 2: The 150-Hour Documentary

At the same time, we wanted to see how far we could push video production using purely generative AI. I decided to produce a 15-minute documentary film about the reality of AI job displacement.

Every image, the voiceover, the music, and elements of the script were all generated by AI.

It still took 150 hours of human labor.

Our fully AI-generated documentary. Every image, voiceover, and motion sequence was generated by AI, yet it still took 150 hours of human labor to orchestrate.

Why? Because out of the box, AI produces slop. If you ask a video generator for a corporate office, you get an unsettling, brand-less nightmare. To get exactly what we needed, I had to run the pipeline 177 times, once for each unique 8-second scene. For every single one, I had to:

  • Brainstorm the concept and have the AI write a specific image generation prompt.
  • Generate the image. Fix the prompt. Regenerate.
  • Upload that image into an AI motion generator and write the physics prompt.
  • Generate four video variants. Reject three. Pull the best one into the editing suite.
  • Layer on an AI voiceover and sync the audio to the synthetic motion.
  • Manually add the scene to the timeline and check continuity against every preceding clip.
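The loop above can be written out as orchestration pseudocode, which makes the scale of the human effort obvious. Every step here is a stand-in for a manual action in a separate GUI tool; the tool names and counts per step are illustrative, with only the 177 scenes and four variants per scene taken from the process described above.

```python
# Orchestration sketch of the manual video pipeline. Each log entry stands
# in for one human-driven step in one of the five GUI tools.
SCENES = 177
VARIANTS_PER_SCENE = 4


def produce_scene(scene_id: int, log: list) -> str:
    """One full pass of the manual pipeline for a single 8-second scene."""
    log.append("write image prompt")
    log.append("generate and fix image")
    log.append("write motion/physics prompt")
    for _ in range(VARIANTS_PER_SCENE):
        log.append("generate video variant")
    log.append("select best variant")  # reject three, keep one
    log.append("layer voiceover and sync audio")
    log.append("place on timeline, check continuity")
    return f"scene-{scene_id:03d}.mp4"


def produce_film() -> tuple:
    log = []
    clips = [produce_scene(i, log) for i in range(1, SCENES + 1)]
    return clips, log
```

Even in this toy model, 177 scenes at roughly ten human touch points each is well over 1,500 decisions, before counting the regeneration loops inside each step.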

Five different tools, used 177 times. Hundreds of decisions a day. Thousands of refinements. All conducted by a single human. It was still immensely faster than hiring a film crew, which would have taken ten times longer and cost ten times more. But it proved a fundamental law of AI production.

AI does not replace humans. It supercharges the humans who conduct the orchestra.

Six Lessons That Birthed Spaces

By the time we had finished the two-month course build and the 150-hour film, the content we had generated was genuinely incredible. But the manual, hybrid process of constant verification across five different tools was entirely unsustainable.

If we had not gone through that excruciating pain, we would never have conceived Collab365 Spaces. The manual prototype process forced us to confront six unshakeable lessons that became the architectural blueprint for our new platform:

  1. The Human Consistency Problem. As the two of us built the AI Authority System, we found it incredibly hard to apply the exact same processes consistently. Even with pristine prompts and a local knowledge base, we were constantly tweaking our approaches, making it nearly impossible for multiple humans to follow a rigid standard.
  2. The "Copy/Paste" RAG Hell. Having AI create the content was fantastic, but dragging files and "knowledge" into the chat interface manually was a disaster. Keeping that knowledge current turned into a copy-and-paste nightmare, and we quickly lost track of what file belonged where. We realised we desperately needed proper, automated Retrieval-Augmented Generation (RAG).
  3. The Quality Drift. Catching AI hallucination, thematic drift, and repetition across an entire course was brutally difficult. We had prompts specifically designed to catch these errors, but they required us to manually drag the validation prompt in alongside the lesson draft and explicitly ask the AI to fix it.
  4. The Synchronization Game-Changer. The script we wrote to automatically sync our local filesystem up to WordPress saved us hundreds of hours. It taught us a vital lesson: the second you let a CMS (like WordPress) become the master editor, you fall back into copy/paste hell. You must own the source of truth, otherwise you are stuck relying on brittle automation to update a live site.
  5. The Video Reality Check. AI video generation is incredible, but it is expensive and painfully time consuming. Keeping 8-second video chunks consistent and on-brand requires immense human orchestration.
  6. The Automation Ceiling. The AI Authority System has since been sold to hundreds of people and the feedback has been exceptional. It validated our core hypothesis: AI accelerates the expert, it does not replace them. We will probably never reach a state where an AI can generate a perfect, 100% reliable technical curriculum without any human oversight. And that is exactly the point. The value is not in replacing the expert. The value is in giving the expert an architecture that removes all friction.
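The "automated RAG" we wished for in lesson 2 boils down to one idea: index the knowledge base once, then retrieve the most relevant chunks for each prompt automatically instead of pasting files into a chat window. Production systems do this with vector embeddings; the toy sketch below uses dependency-free bag-of-words cosine similarity just to show the retrieval shape, and all the document names are invented.

```python
# Toy retrieval sketch: rank knowledge-base documents against a query.
# Real RAG uses embedding vectors; this uses word counts to stay stdlib-only.
import math
import re
from collections import Counter


def _vectorise(text: str) -> Counter:
    """Lowercased bag-of-words term counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, docs: dict, k: int = 2) -> list:
    """Return the names of the k documents most similar to the query."""
    qv = _vectorise(query)
    ranked = sorted(
        docs,
        key=lambda name: _cosine(qv, _vectorise(docs[name])),
        reverse=True,
    )
    return ranked[:k]
```

The retrieved chunks then get injected into the generation prompt automatically, which is exactly the step we were doing by hand with drag-and-drop.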

We knew exactly what we had to do. We needed to take everything that worked about the AI Authority System build and eliminate the copy and paste hell. We needed to natively automate the entire pipeline inside a single system. From research to markdown generation. From RAG to quality testing and syncing.

We proved the theory. We knew humans had to stay in the loop, and we knew the architecture had to be unified to eliminate friction. It was time to throw the traditional LMS model in the bin and build the engine.