The messy work embedded in AI-assisted creativity

Author
  Ben Lesh

The Starting Point Is Always Small

Every book begins with a single question. Book 1: "Who am I really?" Book 5: "Can love transcend all boundaries?"

Not a plot. Not a character sheet. A thematic question that everything else has to answer. Then I sketch beats — maybe fifteen items for an entire book. Each beat is one sentence: "Sera discovers her mother is alive." "Prime almost dies." "The villain's identity is revealed." The whole book fits on a page.

That's the seed. Everything after that is conversation.

The Conversation Is the Work

It stops being "use a tool" and starts being collaboration. I don't say "write me a chapter." I throw an idea at Claude and we talk about it.

The best example I have is one session where I needed to figure out the series villain. I had nothing. Just a gap in the story where an antagonist needed to exist. So I started throwing spaghetti:

"I have to build an entire subplot around the Architect. I'm at a loss right now. Lost wife? Kid killed by dragon? AI gone rogue? Pure scientist who can't see why magic? Military commander lost in ideology?"

That's it. Five half-formed ideas in a single message. Claude came back with five paragraphs of backstory options, each one creating a different mirror for my protagonist, plus a mix-and-match section exploring how those ideas could combine, and a comparison matrix showing which options served which story needs. It was amazing to see a slightly more fleshed-out version of each of my crazy ideas: where each one led, where it faltered, what problems could crop up.

I read through all of it and landed on a combination of two: the military commander who lost his way to ideology after merging his mind with an AI.

My feedback was:

"The loss of humanity and individuality is baked into the mindset of the military. So the move from commander to AI-human hybrid makes the most sense."

I started with a bunch of ideas and ended on two sentences. Claude grabbed that and ran — it took my worldbuilding spec, character bibles, and geopolitical map specifications from earlier conversations and built a backstory for the series villain and the people who follow him.

Then the real work started. Claude and I both began poking holes in the story, questioning whether parts were realistic or just wishful thinking. It was a back-and-forth question-and-answer session where we explored tangents and really stress-tested the idea. Both of us asked questions at times: I forced it to provide a plausible technical reason for the AI-human merger, while it forced me to answer why someone would do this. Together we ironed out a meaningful backstory for the series villain.

An example: Claude asked about the timeline. When did the Architect emerge from the black hole?

I ignored the question entirely. Instead, I changed course:

"Can you reference when I first said the portals were dying? What timeline is embedded in the story already?"

This is the part that's hard to explain to people who haven't done it. I didn't answer the question because a different question was suddenly more important. The existing canon had timeline constraints I couldn't remember, and I needed those facts before I could think about emergence timing.

Claude pulled the timeline from my own documents — portal degradation started twenty years before Book 1. Then it connected that back to the villain's emergence window, and the question I had ignored got answered, grounded in established facts pulled from material we had discussed earlier.

When the Threads Catch Fire

That same conversation is where the series found its deepest truth. And I didn't plan it.

After we'd nailed the villain's backstory and timeline, Claude laid out the dimensional physics for the endgame (asking an AI to hallucinate on purpose and make something sound plausibly realistic is actually quite entertaining!). This led me to ask how the heroes could use it to their advantage. Phase-transition physics was the answer: same device, different frequency. One destroys, the other unifies.

I responded with what felt like a casual observation:

"I like it. Especially if magic and science turn out to be different aspects of the same thing like water and ice are."

That sentence changed the entire series. Claude took that idea and followed it to its conclusion: if magic and science are literally the same underlying reality in different phases, then the villain spent millennia perfecting a solution to a problem that doesn't exist. His whole ideology (destroy magic) is built on a false premise. And then the cascade:

Science as scaffolding for magic to create life from. A universe of logic and rules, overlaid with a universe of will and emotion. One gives form, the other gives meaning. Both originate from the same thing; they just present differently.

Every prior decision in the series snapped into alignment with a single thematic revelation that emerged from an offhand comment during an iterative conversation.

The Pattern

Looking back, the process has a consistent shape:

I throw out a messy idea. Half-formed, contradictory, sometimes just a list of fragments. Not polished. Not organized. Raw creative impulse.

Claude explores the ramifications. Takes each fragment seriously, builds it out, shows me what it would look like if I committed to it. Asks questions I haven't considered.

I don't always answer those questions. Sometimes I answer directly. Sometimes I pivot to something completely different because the exploration triggered a new thought. Sometimes I challenge the premise: "What if the portal experience is peaceful instead of dramatic? The silence IS the portal." That wasn't in any plan. It emerged because seeing Claude's version made me realize what I actually wanted.

Claude gathers the chaos. After I've zigged and zagged through five tangents, said "actually no" three times, and introduced a constraint from a document I'd forgotten about, Claude synthesizes all of it into a cohesive whole. A decisions document. A specification. A framework that accounts for everything I said, even the contradictions, resolved into something that works.

I march through it. This is the part that feels most like traditional writing. I read the synthesis, challenge every assumption, and confirm or reject each element. "Sera should survive. The event is fast. The device itself survives but can't be used." Rapid-fire decisions. The synthesis gives me concrete substance to react to instead of a blank page or a sprawling conversation. Many times I've scrapped it entirely and gone back to ideation because it just didn't feel right.

Why Conversation Beats Solo Thinking

I couldn't have gotten to "magic and science are the same thing" alone. Not because I lack the creativity — because the iteration speed isn't possible inside my own head.

When I'm thinking solo, I can hold maybe three branches of a decision at once. In conversation, Claude holds the full context of five books, 110 chapters, and hundreds of named characters and can explore ten branches in the time it takes me to articulate one. When I say "what about this?" and it contradicts something from Book 3 that I forgot about, Claude catches it immediately. When I change course mid-thought, Claude doesn't lose the thread I abandoned — it's still there when I circle back to it two hours later.

The AI isn't generating ideas for me. It's giving my ideas room to breathe. It explores what I suggest faster than I can, shows me the consequences, and holds everything in place while I decide.

The Messy Middle

What I've described sounds elegant in retrospect. In practice, it's chaotic.

A real session looks like this: I start by asking about chapter structure, detour into villain psychology, pivot to questioning whether a character should even exist, realize the answer to that question solves a problem I had three books later, go back to chapter structure with completely different requirements, change my mind about the ending, then spend twenty minutes on what a portal sounds like.

Claude's job during all of that is to keep track of which decisions are still active, which got overridden, and which contradictions need resolving. At the end of the session, everything gets consolidated into a decisions document — the permanent record of what we agreed on.

The decisions document is the real output. Not prose. Not outlines. Decisions. Because decisions, once made, are what let everything else happen fast.

From Chaos to Specification

Eventually the conversations produce enough decisions to build a chapter specification. That's a structured document with character states, plot beats, dialogue requirements, and constraints. It's what the AI generates prose from.
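To make "structured document" concrete, here is a minimal sketch of what a chapter spec could look like as data. The field names and example values are illustrative only, assembled from details mentioned above — this is not my actual format:

```python
# Illustrative only: one possible shape for a chapter specification.
# Field names are hypothetical; the values echo examples from this post.
chapter_spec = {
    "chapter": 12,
    "character_states": {
        "Sera": "knows her mother is alive",
        "Prime": "recovering after nearly dying",
    },
    "plot_beats": [
        "The villain's identity is revealed",
        "Sera confronts the Architect's followers",
    ],
    "dialogue_requirements": [
        "Sera must not reveal the timeline discovery yet",
    ],
    "constraints": [
        "Portal degradation began twenty years before Book 1",
        "The device survives but can't be used",
    ],
}

# The spec acts as a contract: every decision made in conversation
# should land in one of these fields before prose generation begins.
for field in ("character_states", "plot_beats", "constraints"):
    assert field in chapter_spec
```

The exact shape doesn't matter; what matters is that each decision from the messy conversations has a slot to live in.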

But the spec isn't where the creative work happens. The creative work happened in all those messy conversations. The spec is just the contract that captures it.

I think of it like architecture. The conversations are the design process — sketching, arguing, changing your mind, discovering what you actually want. The specification is the blueprint. The prose is the building. Each stage matters, but the building is only as good as the conversations that shaped it.

What I'd Tell Someone Starting This

Don't try to have the whole idea before you start talking. Start with a theme and a handful of beats. Let the conversation expand them.

Be willing to ignore questions. If something more important just occurred to you, chase that instead. A good AI collaborator won't lose your place.

Pay attention to your offhand comments. The deepest insights in my series came from casual observations I almost didn't type. "Especially if magic and science turn out to be the same thing" was a throwaway line that restructured five books.

Document every decision. Not for the AI — for yourself. You will forget why you made a choice three months from now.

And accept that the process looks chaotic from the inside. It's supposed to. The coherence emerges from iteration, not planning.

The best ideas didn't come from me or the AI. They came from the space between us — in the back-and-forth, the tangents, the moments where I said "actually, what if" and everything shifted.

It started small. Everything does.