(We) don't use LLMs for writing

Meta
2 min read

All of our internal (values, playbook, proposals) and external (blog posts, product + architecture specs, emails, proposals) writing is generated the old-fashioned way: bashing your fingers (and sometimes, it feels, your head) across a keyboard. It's not because LLMs aren't capable, but because writing is a fundamental form of thinking.

Given a personal data set, well-crafted prompt, and agentic "peer review" feedback loop, LLMs can match tone and craft logical arguments exceptionally well. The process:

  1. Have an idea
  2. Prompt the LLM to write
  3. Tune / refactor prompt
  4. Copy pasta
  5. Profit

While the output of this can be (jarringly) good, that's not the point. What's lost is critical: friction. And in this case, friction drives understanding, discovery, and humility.

Understanding

You know the feeling - you've just read a book, go to explain a core concept to someone for the first time, and quickly realize you don't understand it as well as you thought you did. This is partly due to the generation effect: material generated from your own mind is both understood and remembered better than material merely read from a source.

Writing is the best way to uncover gaps in your understanding of a topic. Forcing yourself to synthesize through your own words will quickly expose cracks in the facade of your grip on a subject. Feynman demonstrated this through teaching - the ability to take something complex and distill it down to its atomic, simple components. Writing is a vehicle to teach yourself in that same capacity.

Also important is the muscle of filtering signal from noise. Writing forces us to take multiple sources (be it literature, shower thoughts, conversations, etc.) and bundle up insight into simple, compact reasoning. This is painful, but drives deeper understanding.

Discovery

Writing forces you to make connections between ideas that are floating around in your brain. Your lived context, thoughts, opinions, and understanding form an ecosystem where ideas can collide and form novel connections. The process of writing slows us down, probes these connections, and helps facilitate those collisions (see: Where Good Ideas Come From).

LLMs force a flow from thesis -> argument. In reality, good writing and novel ideas come from the inverse. Connecting the dots between ideas through writing can surface surprising opinions that often deviate from the original thesis.

By their nature, LLMs will statistically converge arguments toward their most popular representation. This goes against the human "milieu" of outlier connections, ideas, and representations. As the majority of published writing shifts from human to AI, the human element of taste and weirdness will be what sets writing apart from the crowd.

Humility

There's a pervasive cognitive bias where limited understanding / competence leads to an overestimation of ability (The Dunning-Kruger Effect). Put another way - understanding breeds humility. And humility is essential to working together as humans.

By skipping out on writing, we're collectively generating unearned confidence.


AI has dizzying, industry-shifting applications - which we use extensively to help build products and companies. We're just careful to apply it to areas where friction isn't a lever. Creative reasoning and weird, novel ideas are pretty important in a world where execution is cheap.