The Irony of AI’s Evolution
I feel a core irony in the evolution of AI. Much of what we interact with today comes from large language models trained on vast amounts of human-generated content accumulated online over the past few decades.
Now AI is capable enough to produce video, imagery, and the written word itself, and this trend will only accelerate. So is AI, in turn, learning from what AI creates?
Blog writers, influencers, and video makers increasingly lean on the same LLMs, and their published output is then crawled and learned from. Could this form a dangerous echo chamber, where a thread of information, whether factual or misinformation, spirals into a self-perpetuating falsehood?
Key Considerations
- How much do LLMs learn from human-generated content versus their own outputs?
- What happens when AI-generated content feeds back into training data? (The toy sketch after this list illustrates the loop.)
- Could an echo chamber amplify misinformation across generations of AI content?
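To make the feedback loop concrete, here is a minimal sketch of my own, assuming a toy Gaussian stand-in for a generative model (this is an illustration, not how any real lab trains): each generation fits a "model" only to the previous generation's samples, so nothing in the loop ever checks back against the original data.

```python
# Toy feedback-loop illustration (an assumption for this post, not a real training pipeline):
# fit a Gaussian "model" to data, sample from it, and let the next generation
# train only on those samples. Over generations the statistics drift and
# diversity tends to shrink -- a crude analogue of an AI-trains-on-AI echo chamber.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 100     # "training set" size per generation
generations = 50

# Generation 0: the "human-generated" data, drawn from a standard normal.
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)

for gen in range(1, generations + 1):
    mu, sigma = data.mean(), data.std()        # fit the toy "model"
    data = rng.normal(mu, sigma, n_samples)    # next generation sees only model output
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Even in this crude stand-in, the loop never corrects itself; it only re-amplifies whatever the previous generation produced, which is the worry behind the echo-chamber question above.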
It’s worth watching how AI ecosystems shape what we see next, and how responsible design, data provenance, and model governance can help mitigate these risks. OpenAI and other leading labs sit at the center of that conversation.
Further reading: OpenAI and, for image generation tools, Midjourney.

