Not everything that replies, writes, creates images, or speaks to you with a human voice is truly “intelligent.” And yet we continue to call “artificial intelligence” something that, fundamentally, doesn’t think, doesn’t feel emotions, doesn’t understand.
This is the starting point, and the breaking point, for philosopher Luciano Floridi, professor at Yale University and director of its Digital Ethics Center, who in a recent paper proposes abandoning the label "AI" in favor of a more precise one: Artificial Agency.
According to Floridi, what we currently call artificial intelligence lacks consciousness, intentionality, and understanding. It doesn’t think, doesn’t feel, and isn’t aware of its own knowledge. It operates. That’s all. It’s a sophisticated form of automatic action—capable of interaction, autonomy, and adaptation—but entirely devoid of interiority or meaning. Calling it “intelligent” is tantamount to attributing, even unconsciously, human traits to it. It’s a semantic error, but also a political one: it creates expectations, fears, illusions. In a word, confusion.
The greatest risk? Anthropomorphizing what is purely algorithmic. We project onto machines desires, fears, and abilities they do not possess. In doing so, we end up delegating responsibility—responsibility that remains entirely human. If an AI makes a mistake, who is to blame? The AI? The person who programmed it? The company that marketed it? To stop calling it “intelligent,” Floridi argues, is not a linguistic game but a crucial step toward more effective, more sober, more ethical governance.
Supporting this perspective is HI! Human Intelligence (2025), the documentary by director Joe Casini, which explores, through interviews, social experiments, and hybrid narratives, the many forms of human intelligence, from the emotional to the collective, the creative to the relational. The film moves across neuroscience, art, philosophy, and education to show that intelligence is not just logical calculation or memory, but also empathy, intuition, creativity, and collaboration. From this angle, using the same word, intelligence, to refer both to our complex and multifaceted cognitive universe and to the predictive efficiency of a language model seems, at the very least, reductive. And perhaps dangerous.
Speaking of artificial agency does not mean diminishing the achievements of AI, but placing them in the right context. We are not building new intelligences—we are building new forms of automated action. Powerful, pervasive, influential—but not sentient.
This is not merely a matter of labels. It is a cultural, political, and even ethical issue. Because, as Floridi writes, "a more accurate definition doesn't just change the vocabulary. It changes how we face the future."
Alessandro Mancini
A graduate in Publishing and Writing from La Sapienza University in Rome, he is a freelance journalist, content creator, and social media manager. Between 2018 and 2020, he was editorial director of Artwave.it, the online magazine specialising in contemporary art and culture that he founded in 2016. He writes and speaks mainly about contemporary art, labour, inequality, and social rights.