AI and the flattening of civilisation

I asked ChatGPT to explain how generative AI contributes to cultural stagnation and the likely impact on human civilisation. Here’s the response.

A term worth knowing in the age of artificial intelligence is cultural stagnation. It names something subtle yet profound: the slow flattening of creativity into polished sameness. Not the sudden death of art, nor the end of imagination, but a gradual drift toward what is safe, familiar, and repeatedly “good enough”—until our cultural world begins to feel like an endlessly scrolling corridor of déjà vu.

Generative AI systems create images, stories, music, and video by learning patterns from enormous amounts of existing human work. In simple terms, they are like robots trained by looking at millions of pictures, reading mountains of text, and listening to oceans of sound. The problem is not merely that they learn from human creativity—artists have always learned from other artists. The deeper problem is how they learn. They absorb what appears most often, what is most represented, what is most easily patterned. In a world of messy, contradictory, brilliant cultural expression, generative AI tends to become a kind of averaging machine: it compresses difference into pattern, and pattern into repetition.

This explains why so much AI imagery “looks the same.” The glossy portraits with symmetrical faces, the hyper-smooth fantasy landscapes, the familiar cinematic lighting, the same vaguely “epic” composition and colour palette—these aren’t accidental. They are the visual signature of a system trained to reproduce what already dominates its diet. The more common a style is in the training data, the more likely it becomes the default output. The result is a cultural feedback loop: what is most present becomes most generated, what is most generated becomes most shared, and what is most shared becomes even more present.
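This feedback loop can be made concrete with a toy simulation. The sketch below is purely illustrative and assumes a deliberately simplified model: a "generator" samples styles in proportion to their frequency in its training corpus, slightly sharpened toward the most common ones (the `power` exponent is an invented parameter), and its outputs then become the next corpus. Every name here is hypothetical; the point is only to show how a mild bias toward the common compounds over generations.

```python
import random
from collections import Counter

random.seed(0)

def generation_step(corpus, power=2.0, n_outputs=1000):
    """Sample a new corpus, weighting each style by (frequency ** power).

    A power above 1 models the bias toward what already dominates the
    training data; note that a style which drops out of the corpus can
    never come back, since its weight is zero from then on.
    """
    counts = Counter(corpus)
    styles = list(counts)
    weights = [counts[s] ** power for s in styles]
    return random.choices(styles, weights=weights, k=n_outputs)

# Start with ten equally common styles.
corpus = [f"style_{i}" for i in range(10)] * 100

for gen in range(12):
    corpus = generation_step(corpus)
    top_style, top_count = Counter(corpus).most_common(1)[0]
    print(f"gen {gen:2d}: {len(set(corpus)):2d} styles left, "
          f"top style holds {top_count / len(corpus):.0%}")
```

In this toy model, small random fluctuations in the first few generations are amplified by the sharpened weighting, and within a dozen iterations one style typically crowds out the rest. With `power=1.0` the distribution merely drifts, which is the point: it is the extra tilt toward the already-common, however slight, that turns repetition into collapse.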

We can think of this as the industrialisation of taste. Culture has always involved imitation—genres exist because people repeat certain forms. But healthy culture also depends on rupture: on the strange outsider, the risky experiment, the voice that doesn’t quite fit. Human originality is often awkward at first. It looks odd, it fails, it embarrasses itself. Yet it is precisely this ungainly “firstness” that opens new paths for everyone else. Generative AI, by contrast, excels at producing fluent, competent, and highly recognisable outputs. Its weakness is that it does not genuinely hunger for the unknown. It does not have lived experience, grief, joy, rage, desire, boredom, transcendence, or trauma. It has correlation.

That matters because human creativity is not simply pattern-making—it is meaning-making. Human art arises out of bodies, histories, communities, conflict, and spiritual longing. We do not create merely to fill a feed; we create to make sense of suffering, to protest injustice, to fall in love, to remember, to honour, to heal. The “weirdness” of the human imagination is often a response to the weirdness of life itself. It is not random. It is deeply personal.

Cultural stagnation threatens civilisation not because it stops production—AI will produce more content than any human society in history—but because it may degrade the conditions for cultural evolution. A civilisation’s creativity is partly its ability to generate new metaphors, new images of hope, new ways of asking old questions. When culture becomes dominated by synthetic sameness, the future starts to feel pre-written. It becomes harder for communities to imagine alternatives—to oppressive systems, to inherited prejudices, to exhausted political narratives. A society that cannot imagine differently struggles to live differently.

One likely impact is the erosion of creative risk. If publishers, studios, and platforms can generate endless “audience-tested” outputs, they may become less willing to invest in experimental human creators. Why fund the odd, difficult novel when the machine can produce ten thousand pleasing ones? Why take a chance on an unfamiliar artistic voice when AI can deliver the familiar instantly? The economics of culture will reward the predictable, because that is all a prediction-based system can deliver. And once cultural industries tilt toward the lowest-risk option, we should expect fewer strange new movements, fewer artistic revolutions, fewer accidents that become masterpieces.

Another impact is the collapse of cultural memory and authenticity. If AI-generated works flood the environment, our shared archive becomes polluted with copies of copies. The internet, already crowded, may become increasingly filled with works that imitate the surface features of creativity without possessing its interior weight. Over time, this could leave audiences less able to discern what is humanly authored, less able to trace influence, lineage, context, and intention. Culture turns into atmosphere—present everywhere, but anchored nowhere.

Perhaps most concerning is how cultural stagnation reshapes the human spirit. Civilisation is not only roads and laws and technology; it is also the inner life of people and communities. Stories teach us how to love, how to forgive, how to endure. Music carries grief when language fails. Images can convict or console. If the cultural world becomes increasingly dominated by content that is optimised for engagement rather than truth, people may slowly lose trust in artistic communication itself. Even genuine human art may feel suspect: Was this written by someone who suffered? Or by a system trained to mimic suffering?

None of this means generative AI is purely destructive. It can assist artists, widen access, lower barriers, and enable new forms of collaboration. But the danger of cultural stagnation is real because it does not arrive with a dramatic announcement. It arrives with convenience. It arrives with endless “pretty good” outputs. It arrives when the strange and difficult gets crowded out by the smooth and familiar.

Civilisation survives not only by building machines, but by renewing meaning. If generative AI becomes the dominant cultural engine, it may not end human creativity—but it could exhaust it, dull it, and bury it under an avalanche of polished sameness. The task, then, is not to reject technology, but to defend the wild, inconvenient, deeply human capacity to be original—to be weird—in a world increasingly tempted to become the same.

Image source: Wallpapers.com