“When readers discovered the truth about how the book was created, many were hurt. I deeply regret that, but it was necessary,” he says.
WIRED interviewed Colamedici in a conversation that explored the nuances of his project.
This interview has been edited for length and clarity.
WIRED: What was the inspiration for the philosophical experiment?
Andrea Colamedici: First of all, I teach prompt thinking at the European Institute of Design and I lead a research project on artificial intelligence and thought systems at the University of Foggia. Working with my students, I realized that they were using ChatGPT in the worst possible way: to copy from it. I observed that they were losing an understanding of life by relying on AI, which is alarming, because we live in an era where we have access to an ocean of knowledge, but we don’t know what to do with it. I’d often warn them: “You can get good grades, even build a great career using ChatGPT to cheat, but you’ll become empty.” I have trained professors from several Italian universities, and many ask me: “When can I stop learning how to use ChatGPT?” The answer is never. It’s not about completing an education in AI, but about how you learn when using it.
We must keep our curiosity alive while using this tool correctly and teaching it to work how we want it to. It all starts from a crucial distinction: There is information that makes you passive, that erodes your ability to think over time, and there is information that challenges you, that makes you smarter by pushing you beyond your limits. This is how we should use AI: as an interlocutor that helps us think differently. Otherwise, we won’t understand that these tools are designed by big tech companies that impose a certain ideology. They choose the data, the connections among them, and, above all, they treat us as customers to be satisfied. If we use AI this way, it will only confirm our biases. We will think we are right, but in reality we will not be thinking; we will be digitally embraced. We can’t afford this numbness. This was the starting point of the book.

The second challenge was how to describe what is happening now. For Gilles Deleuze, philosophy is the ability to create concepts, and today we need new ones to understand our reality. Without them, we are lost. Just look at Trump’s Gaza video—generated by AI—or the provocations of figures like Musk. Without solid conceptual tools, we are shipwrecked. A good philosopher creates concepts that are like keys allowing us to understand the world.
What was your goal with the new book?
The book seeks to do three things: to help readers become AI literate, to invent a new concept for this era, and to be theoretical and practical at the same time. When readers discovered the truth about how the book was created, many were hurt. I deeply regret that, but it was necessary. Some people have said, “I wish this author existed.” Well, he doesn’t. We must understand that we build our own narratives. If we don’t, the far right will monopolize the narratives, create myths, and we will spend our lives fact-checking while they write history. We can’t allow that to happen.
How did you use AI to help you write this philosophical essay?
I want to clarify that AI didn’t write the essay. Yes, I used artificial intelligence, but not in a conventional way. I developed a method that I teach at the European Institute of Design, based on creating opposition. It’s a way of thinking and of using machine learning in an antagonistic way. I didn’t ask the machine to write for me; instead, I generated the ideas and then used GPT and Claude to critique them, to give me perspectives on what I had written. Everything written in the book is mine. Artificial intelligence is a tool that we must learn to use, because if we misuse it—and “misuse” includes treating it as a sort of oracle, asking it to “tell me the answer to the world’s questions; explain to me why I exist”—then we lose our ability to think. We become stupid. Nam June Paik, the great pioneer of video art, said: “I use technology in order to hate it properly.” And that is what we must do: understand it, because if we don’t, it will use us. AI will become the tool that big tech uses to control us and manipulate us. We must learn to use these tools correctly; otherwise, we’ll be facing a serious problem.