And it’s worse than we thought.
UNESCO’s September 2025 report on AI and Culture landed quietly, but its findings should concern everyone working in media, the arts, museums, publishing, film, music, and design.
The core issue: AI systems trained predominantly on Western data aren’t just recommending our culture—they’re actively reshaping global creativity into algorithmic uniformity.

What UNESCO Found
The report warns we’re creating what it calls a “cultural mono-culture.” When AI generates images of cities, Stockholm looks like San Francisco. When it creates music, regional styles drift toward Anglo-American templates. Local traditions fade into what algorithms recognise.
Here’s the uncomfortable part: AI training datasets reflect existing power structures.
Hollywood films. American literature. European art. These aren’t neutral—they’re the dominant voices in systems now “creating” content for the entire world.
The result? What UNESCO describes as “Cultural Imperialism 2.0.”
The Question That’s Dividing Creative Professionals
Is AI-driven homogenisation an existential threat to cultural diversity—or are we romanticising cultural purity in the face of inevitable technological evolution?
I’ve been wrestling with this because both perspectives hold truth:
On one hand: Cultures have always influenced each other. Trade routes, colonialism, globalisation, the internet—every era brings new forms of cultural exchange and, yes, dominance. AI might just be the latest medium.
On the other hand: Previous cultural exchanges involved human dialogue, adaptation, resistance. When Moroccan raï music influenced French pop in the 1980s, artists made choices. When AI “mixes” cultures using imbalanced training data, it’s extraction without exchange.
Where This Gets Personal (And Complicated)
For those of us in creative industries, this hits differently depending on where we sit:
If you’re from a dominant culture: Your aesthetic becomes the default. Your work trains the systems. But do you want algorithmic versions of jazz, blues, or hip-hop representing these art forms globally? Stripped of context, flattened by optimisation?
If you’re from an underrepresented culture: You face a double bind. AI tools can democratise access—a musician in rural Andalucía can now produce studio-quality tracks without expensive equipment. But that same AI subtly pushes output toward the Western styles it was trained on. Is this empowerment or homogenisation disguised as democratisation?
If you’re a cultural institution: You provide the expertise and content. Silicon Valley provides the infrastructure and captures the value. Museums experiment with AI curation while Big Tech controls the underlying systems trained on your collections. Is that a fair exchange?
The Part UNESCO Emphasises (That We’re Not Discussing Enough)
The report identifies something crucial: the collective dimension of cultural data.
When AI systems train on millions of creative works, they’re not just using copyrighted material—they’re extracting patterns, values, and ways of seeing that belong to entire communities. Traditional IP frameworks protect individual creators, but they don’t address the appropriation of collective cultural patterns.
Think about it: When an AI generates “flamenco-style” music without understanding duende, or “African village” images that erase the architectural diversity of 54 distinct countries, it’s commodifying cultural commons without consent or compensation.
Two Futures (Which Do You Want?)
World A: Governments mandate balanced, multilingual, culturally diverse training datasets. AI systems are required to disclose data sources. Cultural institutions receive benefit-sharing agreements. Regional AI development is publicly funded.
In 20 years: Music streaming serves truly localised recommendations. Architectural AI draws on regional styles. Publishing platforms promote minority-language literature. Cultural professionals use AI tools that reflect their traditions.
World B: Market forces dominate. AI homogenisation continues unchecked. Economic efficiency determines what culture gets encoded into future systems.
In 20 years: 82% of AI-generated content is in English, Mandarin, or Spanish. Regional dialects are “corrected” by autocomplete. Algorithmic uniformity becomes the aesthetic norm. Cultural professionals compete with AI that undercuts their livelihoods while misrepresenting their traditions.
The Practical Question
If you could implement ONE policy change tomorrow to address this, what would it be?
UNESCO’s report suggests several paths:
- Mandating transparent disclosure of AI training data sources
- Establishing “cultural data trusts” for collective licensing
- Requiring diversity audits for AI recommendation systems (see the sketch after this list)
- Creating public AI infrastructure to reduce computational barriers
- Implementing benefit-sharing mechanisms for communities whose cultural data trains AI models
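To make the diversity-audit idea less abstract, here’s a minimal sketch of one slice of what such an audit could measure. Everything in it is hypothetical: the metadata fields (“language”, “region”) and the counts are invented for illustration, and none of it comes from the UNESCO report.

```python
# Toy "diversity audit" over training-data metadata.
# HYPOTHETICAL: the field names ("language", "region") and all counts
# are invented for illustration; a real audit needs real provenance data.
from collections import Counter
from math import log2

def representation_shares(records, field):
    """Fraction of records per value of `field` (e.g. language or region)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

def shannon_diversity(shares):
    """Shannon entropy in bits: higher means more evenly spread representation."""
    return -sum(p * log2(p) for p in shares.values() if p > 0)

# Invented corpus metadata: 70% English / 20% Spanish / 10% Wolof
corpus = (
    [{"language": "en", "region": "North America"}] * 700
    + [{"language": "es", "region": "Europe"}] * 200
    + [{"language": "wo", "region": "West Africa"}] * 100
)

shares = representation_shares(corpus, "language")
print(shares)                     # {'en': 0.7, 'es': 0.2, 'wo': 0.1}
print(shannon_diversity(shares))  # ~1.16 bits, vs log2(3) ≈ 1.58 if balanced
```

Even a toy metric like this makes the policy conversation concrete: an auditor could publish representation shares and set disclosure thresholds against them, though real audits would need provenance data that most AI developers don’t currently release.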
But I want to hear from this network:
→ Cultural professionals: Are you seeing homogenization in your work already? What would actually help?
→ Technologists: What’s technically feasible that respects cultural sovereignty?
→ Policy folks: What regulatory approaches could work without stifling innovation?
→ Creatives using AI tools: How do you navigate the empowerment/appropriation tension?
Why This Matters Right Now
MONDIACULT 2025 is happening in Barcelona next month. UNESCO Member States will shape global cultural policy for the next decade. The decisions made there will determine whether AI becomes a tool for cultural preservation and pluralism—or an accelerator of homogenisation.
For those of us in creative industries, this isn’t abstract. It’s about whether our work, our traditions, and our ways of seeing the world survive in recognisable form—or get flattened into whatever engagement-optimising algorithms reward.
My Take (Still Evolving)
I don’t think AI is inherently the villain here. The UNESCO report actually highlights examples of AI preserving endangered languages and supporting heritage conservation.
The problem is power concentration + data imbalance + lack of governance.
We built these systems at scale before asking who benefits and who gets erased. Now we’re retrofitting ethics onto deployed infrastructure. That’s backwards.
The question isn’t whether to use AI in cultural production—that ship has sailed.
The question is: Who controls it, who benefits, and whose culture gets to survive the algorithmic age?
I’m genuinely curious about your perspective on this—especially if you disagree with UNESCO’s framing.
Drop a comment.
Let’s think through this together…
