When Philosophy Devours Artificial Intelligence
- Bruno Vide
- May 29
- 3 min read
What happens when a machine thinks without knowing why?
This isn't a futuristic paradox; it's a pressing contemporary concern. That’s the provocation Michael Schrage and David Kiron offer in their bold article "Philosophy Eats AI". If software devoured the world and AI consumed software, then philosophy is now entering the scene, not as ornamentation but as foundation.
The article’s central idea is disruptive: philosophy is devouring AI because, as AI gains autonomy and influence, it can no longer avoid deep philosophical questions. Artificial intelligence, the authors argue, is not merely a technical construct; it is a construct grounded in epistemology, ontology, and teleology. It is inherently embedded in decisions about what is true, what is real, and what ought to be done.
Three Fundamental Philosophical Domains
Schrage and Kiron identify three core areas where philosophy infiltrates — or should infiltrate — the foundations of AI:
Teleology (Purpose): What is the ultimate goal of an AI system? What is it for, who benefits, and what is it aiming to achieve? These questions can’t be answered with code alone.
Epistemology (Knowledge): What qualifies as knowledge for AI? How does it validate what it "knows"? What sources or structures of trust underpin that validation?
Ontology (Representation of Reality): How does an AI system represent the world? What categories does it construct? How does it distinguish what exists from what is noise?
These three dimensions aren’t theoretical in a pejorative sense; they are structural. They are the bedrock upon which AI decisions, inferences, and interactions are built.
Practical Implications
The article illustrates this point with a real-world case: the errors made by image-generation systems such as Google Gemini, which failed in their attempts to represent human diversity. The issue was not solely technical; it was philosophical. The confusion between historical representation and contemporary justice revealed an ontological vacuum and a lack of teleological clarity: what was the image meant to do? Represent the past faithfully, or correct it symbolically?
The absence of philosophical clarity leads to incoherent, even dangerous, decisions. The risk, the authors warn, is that we build "intelligent" systems operating on invisible assumptions, often inherited from unexamined cultural, historical, or ideological contexts.
The Strategic Role of Philosophy in Organisations
The article’s proposal is not philosophical in the academic sense. It is strategic. Schrage and Kiron argue that business leaders must integrate philosophy into the heart of AI strategy, not as a layer of ethical varnish but as a decision-making engine. This requires (see the sketch after this list):
Clarifying objectives based on values.
Understanding the knowledge models guiding systems.
Choosing representational structures that make sense for the organisation and its audiences.
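To make this concrete, here is a purely hypothetical sketch, my own rather than anything from the article, of what writing those three commitments down explicitly might look like. Every name in it (AICharter, check, the example values) is an illustrative invention, not a real framework:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one way a team might record the three philosophical
# commitments explicitly, so they can be reviewed, instead of leaving them
# as the "invisible assumptions" the article warns about.

@dataclass
class AICharter:
    # Teleology: what the system is for, and who it serves.
    purpose: str
    beneficiaries: list[str]

    # Epistemology: what counts as knowledge, and how it is validated.
    trusted_sources: list[str]
    validation_policy: str

    # Ontology: the categories the system uses to represent the world.
    categories: dict[str, str] = field(default_factory=dict)

    def check(self) -> list[str]:
        """Flag any of the three commitments left implicit."""
        gaps = []
        if not self.purpose:
            gaps.append("No teleology: the system has no stated goal.")
        if not self.trusted_sources:
            gaps.append("No epistemology: nothing defines what it 'knows'.")
        if not self.categories:
            gaps.append("No ontology: world-representation is implicit.")
        return gaps


charter = AICharter(
    purpose="Summarise internal reports for analysts",
    beneficiaries=["analysts", "their readers"],
    trusted_sources=["vetted internal corpus"],
    validation_policy="cite the source passage for every claim",
)
print(charter.check())  # -> ["No ontology: world-representation is implicit."]
```

The point of such a structure is not the code itself but the discipline: a purpose, a validation policy, and a set of categories that are written down can be questioned; ones left implicit cannot.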
Without this clarity, the authors say, AI becomes erratic: powerful, but blind.
Beyond Ethics
Much of the AI debate centres on ethics: what is allowed or forbidden. But philosophy goes deeper, asking why, to what end, and how we know what we know. That is what separates responsible use of AI from merely functional use.
Ethics focuses on boundaries. Philosophy on foundations. And without foundations, any boundary is arbitrary.
Think Before You Automate
Now more than ever, leading with AI requires more than engineers and data scientists. It requires thinkers. It requires philosophy. Because the future of artificial intelligence is not decided in silicon, but in semantics.
"Philosophy Eats AI" is not just a provocative title, it’s a strategic warning: without philosophy, AI consumes itself.
Sources:
Michael Schrage & David Kiron, "Philosophy Eats AI", MIT Sloan Management Review, 2024. https://sloanreview.mit.edu/article/philosophy-eats-ai/