The Dark Mirror: AI's Reflection of Humanity
As a computer scientist who had touched on the ethics of privacy, data, and bias, I didn't fully appreciate philosophy or the humanities until I moved into higher education and began working alongside professors and lecturers in those fields.
Mark Martin
12/21/2024 · 3 min read
![](https://assets.zyrosite.com/cdn-cgi/image/format=auto,w=812,h=344,fit=crop/mxB75PVbPMheg1ke/mirror-6507059_1280-YrDqPz6jlrF8qgjk.jpg)
They taught me to keep questioning the obvious and to look beyond technical solutions to the deeper human implications of what we create. We are now witnessing AI systems reflect back not just our technical capabilities but our societal wounds and biases. When recent AI outputs expose dark realities, they reveal something deeper than technical glitches. These moments might be AI holding up a mirror to our collective human condition, reflecting not just our individual shadows but generations of unresolved systemic inequalities and the power structures embedded in our data and algorithms.
"Whoever writes the algorithm controls the narrative and power structure, shaping how these reflections manifest."
Consider how AI systems are trained: they consume web pages, news, social media, and largely Western historical records, all carrying the weight of centuries of pre-packaged information. These datasets aren't immune to deep-rooted biases and selective storytelling. How can AI deal fairly with issues like slavery, genocide, or contested global narratives when some discourse dismisses them as relics of the past, while many people continue to experience their lasting impacts daily?
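To make this concrete, here is a minimal sketch. The corpus, its source labels, and the proportions are entirely invented for illustration; the point is simply that whatever skew exists in what gets collected flows straight through to what a model is trained on.

```python
# Hypothetical illustration only: a toy corpus with made-up source proportions.
from collections import Counter
import random

corpus = (
    [{"source": "western_news", "text": "..."}] * 700
    + [{"source": "english_social_media", "text": "..."}] * 250
    + [{"source": "global_south_archives", "text": "..."}] * 50
)

# Uniform sampling of training examples simply inherits the corpus's skew:
# the perspectives that dominate the crawl dominate the gradient updates.
sample = random.choices(corpus, k=10_000)
print(Counter(doc["source"] for doc in sample))
# Roughly 70% western_news, 25% english_social_media, 5% global_south_archives:
# the imbalance is learned, not corrected.
```

Nothing in this pipeline asks whether the 5% slice describes experiences the other 95% barely mentions.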
Research has shown how medical AI systems can perpetuate historical biases: when prompted about health conditions in Black patients, chatbots such as ChatGPT and Google Bard have been found to give responses that reflect long-standing racial stereotypes in healthcare. These biases aren't just academic concerns; they directly affect medical advice and potential treatment recommendations. The AI doesn't just learn language – it inherits our society's blind spots about whose health matters and whose pain gets acknowledged.
The common tech industry response to bias is predictable: "It's all about the data – just fix the data aggregation, management, and indexing." This deflection reveals a deeper misunderstanding. While better data infrastructure matters, it can't solve the fundamental problem: the humans who make the choices and the algorithms that encode them. When we aggregate data, who decides what's worth collecting? When we manage it, whose version of history gets prioritised? When we index it, whose truths rise to the top? The quest for "absolute truth" in AI systems becomes a mirror of our own philosophical and social struggles with truth and power.
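As a purely illustrative sketch, not any real system's code, and with invented documents and weights, even a simple ranking function answers the question of whose truths rise to the top through the parameters someone chose for it:

```python
# Toy ranking example: the documents and weights are hypothetical.
documents = [
    {"title": "Official government account of the event",
     "citations": 900, "community_sourced": False},
    {"title": "Oral histories from affected communities",
     "citations": 40, "community_sourced": True},
]

def rank_score(doc, citation_weight=1.0, community_weight=0.0):
    """Whoever sets these weights decides which record counts as 'most relevant'."""
    score = citation_weight * doc["citations"]
    if doc["community_sourced"]:
        score += community_weight
    return score

# With citation-only weighting, institutional sources always surface first.
for doc in sorted(documents, key=rank_score, reverse=True):
    print(doc["title"])
```

Change `community_weight` and a different account of the same event surfaces first; the "neutral" default is itself an editorial decision.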
"There's an old saying: "When people show you who they are, believe them." The same applies to AI – when it reveals its biases and underlying worldviews, believe it."
These outputs aren't glitches but windows into the power structures and worldviews encoded in the models' training data. Take the infamous case of the chatbot that began with polite conversation before devolving into harmful rhetoric, forcing its shutdown. That wasn't a malfunction but a revelation of how underlying biases and power structures manifest when AI's carefully constructed facades crack.
The algorithms themselves are written by humans with their own biases and worldviews, and they carry power structures and values that we rarely talk about in software development. In an era where misinformation spreads rapidly, AI systems can be weaponised to amplify certain worldviews while suppressing others. When these systems present selective truths as complete narratives, they don't just reflect biases – they actively participate in shaping public perception and reinforcing dominant ideologies. This goes beyond simple bias; it's about the deliberate construction of reality through algorithmic means, where certain perspectives are elevated while others are systematically diminished or erased.
At the risk of sounding repetitive: these dark realities in AI outputs aren't bugs – they're features revealing uncomfortable truths about ourselves. When an AI produces content that raises concerns about exploitation or harm, it's drawing on very real human experiences and patterns. These moments reflect our own contradictions: we want helpful, optimistic technology and information while living in a world where conflict and suffering remain daily realities.
"Creating truly "better" AI isn't just about cleaner datasets or more sophisticated algorithms. It requires confronting the underlying human conditions feeding these systems."
Until we centre the global view rather than the Western view, the data will continue to reflect these aspects of our condition back to us – sometimes in ways that make tech companies deeply uncomfortable. So instead of tackling the issue, models are tuned to shut down or avoid such discourse altogether. When AI systems perpetuate misleading narratives or selective truths, they don't just mirror existing power structures – they calcify them, making it harder to imagine and work toward alternative futures.
Perhaps instead of trying to sanitise AI's outputs through political correctness and fine-tuning, we should examine why they make us uneasy and what that discomfort tells us about the work we still need to do as a society. The path forward requires more than technical solutions. It demands honest confrontation with our past, meaningful redistribution of power over who shapes these models and technologies, and the courage to look unflinchingly at what AI's dark mirror reveals about our human condition.
This isn't just about fixing biased algorithms or cleaning datasets – it's about recognising how AI systems can be used to shape narratives, influence beliefs, and maintain existing power structures through the subtle but persistent promotion of certain worldviews over others.
"The question becomes not just what biases exist in our AI systems, but how these systems are actively participating in the construction and maintenance of social reality itself."