AI · Philosophy · Culture · 10 min read

The Clone Problem

When everything can be generated, the edges die first.

In 1870, a man named William Morris looked at what industrial mass production was doing to England and decided he’d seen enough. The factories were churning out furniture, textiles, wallpaper - all of it cheap, all of it identical, all of it soulless. The workers who made these things had no relationship to the finished product. They performed one repetitive task in a chain of repetitive tasks, over and over, disconnected from the thing they were creating. The objects themselves reflected this disconnection. They were functional. They were affordable. And they were dead.

Morris started what became the Arts and Crafts movement - not because he was against machines, but because he understood that something essential was being lost in the name of efficiency. The hand of the maker. The imperfections that proved something had been touched by a human who gave a damn. The aura of the thing, to use Walter Benjamin’s word [1] - that quality of presence and authenticity that can only exist in something that was made, not manufactured.

That was 150 years ago. The machines have gotten better. The problem has gotten worse. And now we’re feeding the output back into the input.

What Mastery Actually Looks Like

Between 1489 and 1513, Leonardo da Vinci dissected more than thirty human corpses in the crypt of Santa Maria Nuova hospital in Florence. He worked by candlelight, in the stench of decomposing flesh, meticulously teasing apart muscle fibres and tracing blood vessels until the body was too degraded to continue. Then he drew what he found - not as a medical illustrator, but as someone trying to understand the machinery of life from the inside out. His anatomical drawings used exploded and multiple views that wouldn’t become standard in technical illustration for another four hundred years. He did this because he believed you couldn’t paint the human form honestly without knowing what was underneath the skin.

Nobody asked him to do this. There was no client brief, no sprint deadline, no product roadmap. It was an act of obsessive, unreasonable commitment to depth, driven by the conviction that surface-level understanding produces surface-level work.

Michelangelo spent four years painting the Sistine Chapel ceiling, working twelve-hour days on scaffolding he’d designed himself, head craned backward, paint dripping into his eyes. He wrote a poem to his friend Giovanni about the experience: “My stomach’s squashed under my chin, my beard’s pointing at heaven, my brain’s crushed in a casket, my breast twists like a harpy’s. My brush, above me all the time, dribbles paint so my face makes a fine floor for droppings.” He damaged his eyesight so badly that for years afterward he had to hold letters over his head, neck bent backward, just to read them. The poem ends with him declaring “I am not a painter” - which, coming from the man who painted the most famous ceiling in human history, tells you something about the relationship between mastery and self-assessment.

Before any of this, both Leonardo and Michelangelo spent years in a bottega - a Renaissance workshop where apprentices started at twelve years old grinding pigments, preparing panels, copying the master’s drawings. You didn’t touch a real commission for years. You earned the right to paint a background figure, then a minor figure, then maybe - if the master judged you ready - a central one. Leonardo trained under Verrocchio. The same workshop produced Botticelli, Perugino, Ghirlandaio. The system was slow, unglamorous, and it worked, because it understood something about mastery that we’ve almost entirely forgotten: you can’t skip the grinding.

This is what depth looks like. Not talent. Not efficiency. Not having the right tools. A willingness to go further into the work than anyone else is prepared to go, for longer than anyone else is prepared to endure, because you understand that the surface can only be as good as the foundation beneath it.

Hold that thought while we look at what we’re building now.

The Convergence Machine

Open ten AI-generated websites in a row. The same hero section. The same gradient. The same three-column feature grid. The same testimonial carousel. Swap the logos and nobody would notice. Open ten AI-generated blog posts. The same cadence. The same hedging. The same structure of “here’s a problem, here are three points, here’s a conclusion.” None of it is bad, exactly. All of it is the same.

This isn’t a coincidence. It’s mathematics.

Every generative AI model works the same way at a fundamental level. It ingests vast quantities of human-created work and learns the statistical patterns that make that work recognisable. Then it generates new output by predicting what’s most likely to come next. The key word is “likely.” Not surprising. Not original. Not inspired. Likely.

Probability engines converge. They have to. They’re optimised to produce what’s statistically most probable, which by definition is what’s most common, which by definition is what’s most average. Feed in a million websites and ask for a new one, and what you get is the Platonic ideal of a website - the average of everything that came before. It will be competent. It will follow best practices. And it will be indistinguishable from the million that came before it.
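The pull toward the average can be sketched with a toy simulation - hypothetical token names and frequencies, nothing from any real model. Give a thousand slightly different “models” the same underlying corpus statistics, let each one pick its most probable output, and watch the individual variation vanish:

```python
import random
from collections import Counter

random.seed(0)

# Toy "models": each learns token frequencies from the same shared
# corpus, plus a small amount of individual variation.
TOKENS = ["common", "typical", "standard", "unusual", "strange", "singular"]
BASE_WEIGHTS = [40, 30, 20, 5, 3, 2]  # what's "likely" in the corpus

def make_model():
    # Each model perturbs the shared distribution slightly.
    return [w + random.uniform(-1, 1) for w in BASE_WEIGHTS]

def generate(weights):
    # Mode-seeking generation: emit the single most probable token,
    # as greedy decoding does.
    return TOKENS[weights.index(max(weights))]

outputs = Counter(generate(make_model()) for _ in range(1000))
print(outputs)  # a thousand "different" models, one output: 'common'
```

The perturbations make every model technically unique, but because each one optimises for the most probable token, the differences never reach the output. That is the convergence argument in six tokens.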

Researchers demonstrated this in late 2025 with a beautifully simple experiment [2]. They connected a text-to-image system with an image-to-text system and let them iterate - generating images from text, then text from images, then images from text, over and over. Regardless of how diverse the starting prompts were, within a few cycles everything converged onto the same narrow set of generic themes. Atmospheric cityscapes. Grandiose buildings. Pastoral landscapes. The machine ate variety and shat out uniformity.

AI is the ultimate junior practitioner: technically proficient, expressively empty. It knows every technique, every pattern, every template - but it operates from no depth. It has no foundation of understanding from which to improvise. It can only recombine. And it produces junior practitioner output at industrial scale.

This isn’t a bug. It’s how these systems work. And we’re applying them to everything.

The Dead Internet

By some estimates, more than half of all newly published English-language content on the web is now AI-generated. “Slop” was the word of the year for 2025, which tells you something about how far the rot has spread when even the dictionary people feel compelled to name it.

The Dead Internet Theory - the idea that most online content is generated by bots rather than humans - used to be a fringe conspiracy. It’s now an observable reality. The internet is drowning in content that nobody asked for, nobody needs, and nobody can distinguish from the content next to it. A city where every building is a slightly different shade of beige. Technically varied. Experientially identical.

People can feel the difference even when they cannot explain it. They might not be able to articulate exactly what’s wrong, but something in the hindbrain knows. There’s no weight to it. No risk. No sense that someone had to actually think before they put words on a page. It’s the uncanny valley applied not to faces but to judgement.

New startups launch every day, and it’s a rare thing that their presentation amounts to more than a clone of whatever template is currently trending. It’s like sitting through a Hollywood movie - the formulaic predictability detracts from any substance that might be lurking beneath the surface. The difference now is scale. What used to be a human tendency toward imitation has been industrialised. The AI doesn’t copy one template - it ingests all of them and produces the statistical average. The Platonic clone. The clone of clones.

The Death Spiral

Here’s where it gets genuinely disturbing.

The models are trained on the internet. The internet is increasingly made of model output. The next generation of models will be trained on the output of the current generation. Researchers at Oxford published a paper in Nature demonstrating what happens when you do this recursively [3]: the models collapse. They first lose the long tail - the rare, unusual, interesting stuff at the edges of the distribution. Then different modes blur together. Then the outputs stop resembling human-created data altogether. They called it “model collapse,” and even the smallest fraction of synthetic data in the training set - as little as one in a thousand - can trigger it.
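A minimal sketch of the collapse dynamic, under one loud assumption - that generation favours likely outputs, so the tails of the learned distribution are under-sampled. (This is an illustration of the recursive-training loop, not the Oxford paper’s method.) Fit a simple distribution to data, sample from it with the tails clipped, refit on the samples, repeat:

```python
import random
import statistics

random.seed(1)

# Generation 0: "human-made" data with a long tail of unusual values.
data = [random.gauss(0, 10) for _ in range(5000)]

spreads = []
for generation in range(8):
    # "Train" a model on the current data: estimate mean and spread.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    spreads.append(sigma)
    # Generate the next dataset from the model. Generation favours
    # likely outputs, so values far from the mean - the long tail -
    # are rarely emitted: drop anything beyond 1.5 standard deviations.
    samples = [random.gauss(mu, sigma) for _ in range(20000)]
    data = [x for x in samples if abs(x - mu) <= 1.5 * sigma][:5000]

print([round(s, 1) for s in spreads])  # spread shrinks every generation
```

The spread shrinks geometrically, generation after generation; the rare values at the edges are the first to go. The machine never produces something it considers unlikely, so unlikeliness itself drains out of the data.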

Think about what this means in the context of Leonardo’s anatomical drawings. Those drawings exist at the extreme edge of human expression - the product of one person’s unreasonable commitment to understanding something deeply. They’re the long tail. They’re the weird, obsessive, deeply personal work that only exists because a specific human being spent years in a specific crypt doing something that no rational cost-benefit analysis would ever justify. In a world of model collapse, this is precisely the kind of work that disappears first. The edges dissolve. The extremes flatten. The average consumes everything, and with each generation the machine becomes less capable of producing anything that surprises, because it has less and less surprising data to learn from.

Michelangelo’s ceiling survives because it’s unreproducible. Not because the technique is complex - technique can be studied and replicated. It survives because the conditions that produced it are unrepeatable: four years of physical suffering, a genius working at the limit of his capabilities, a poem of despair written in the middle of it. The work contains the struggle. The aura is the struggle, made visible. A probability engine will never produce the Sistine Chapel ceiling, not because it can’t generate something that looks similar, but because it has no capacity for the suffering that made the original what it is.

In code, the death spiral plays out in its own way. AI coding assistants have produced an 8x increase in duplicated code blocks [4]. Code churn has doubled since 2021 [5] - code gets accepted fast and then needs to be rewritten. Over 90% of issues found in AI-generated code aren’t obvious bugs but “code smells” [6] - subtle architectural problems that compound over time. The code works. It passes the tests. And it’s all the same code, making the same architectural mistakes, because it was all generated by systems trained on the same patterns.

“Vibe coding” they call it. Accepting whatever the model gives you because it looks right and runs without errors. Not questioning the structure. Not considering the architecture. It’s the clone problem applied to engineering itself. Leonardo spent years in the crypt so he could understand what was beneath the surface. Vibe coding doesn’t even look.

The deeper problem isn’t correctness. The generated output often works. It passes. It ships. What it does not reliably carry is judgement. The abstraction is a little too generic. The boundary is in the wrong place. The naming is technically fine and spiritually dead. The system gets more code and less shape. That is the clone problem in engineering form.

We rebuilt our own documentation tooling because the existing options kept converging on the same output: functional, acceptable, forgettable. That’s the clone problem in miniature. Once the average is cheap, taste stops being decoration and becomes the whole game.

The Fork

When everything can be polished to a mirror shine by a machine, perfection becomes suspicious. It tells you nothing about who made it or why. It could have been made by anyone, which means it was made by no one.

You know the difference when you feel it. You’re scrolling through generated slop - competent, frictionless, dead - and then you hit something that stops you. A sentence that risks too much. A design choice that shouldn’t work but does. Code architecture where someone clearly agonised over the right abstraction instead of accepting the first suggestion. The rough edge, the human choice, the thing that didn’t need to be there but is. Fingerprints. Proof of life.

That’s the fork. Not between using AI and not using AI - that’s a false choice, as pointless as rejecting the printing press. The fork is between letting the machine do your thinking and using the machine to amplify thinking you’ve already done. Between accepting the most probable output and insisting on the right one. Between grinding the pigments and skipping straight to the canvas.

In practice this means refusing first-draft architecture. It means reworking the copy until it carries a mind. It means building documentation, interfaces, and systems that reflect judgement instead of template gravity. The machine can get you to competent. It cannot tell you what deserves to exist.

As more clones saturate the internet and worldwide media, people will recognise the true point of difference, because it makes them feel something. If they ain’t feeling it, then your idea might not be as good as you think.

William Morris was right in 1870. The response to mass production isn’t to reject machines. It’s to insist that the human hand still matters - that the maker’s presence in the work is not an inefficiency to be optimised away, but the very thing that gives the work its value.

Master your craft properly, or you’re just spreading the virus.

References

  1. Benjamin, W. (1935). The Work of Art in the Age of Mechanical Reproduction. In Illuminations: Essays and Reflections (H. Arendt, Ed.; H. Zohn, Trans.). Schocken Books, 1969.

  2. Hintze, A., Proschinger Astrom, F., & Schossau, J. (2025). Autonomous language-image generation loops converge to generic visual motifs. Patterns, 7, 101451.

  3. Shumailov, I. et al. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755-759.

  4. GitClear. (2025). AI Copilot Code Quality: 2025 Data Suggests 4x Growth in Code Clones. Analysis of 211 million changed lines of code, 2020-2024.

  5. GitClear. (2024). Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality. Analysis of 153 million changed lines of code, 2020-2023.

  6. SonarSource. (2025). The Coding Personalities of Leading LLMs. State of Code Report.