Between Us and the Machine
Simulation, Intentionality and Artificial Desire
by Martina Maccianti
In Ghost in the Shell, the 1995 animated film directed by Mamoru Oshii and based on the manga of the same name by Masamune Shirow, Major Kusanagi often finds herself confronting her Ghost — a term which, in the film’s universe, represents that part of her being which cannot be reduced to the mere function of the body or the calculation of cognitive abilities.
The Major, in fact, is a cyborg: a human whose brain has been implanted into an artificial body. Although it’s clear that she possesses all the capacities to act intentionally, to make decisions, to choose, to modify her reality, the question remains: is it truly her will, or is she merely executing predetermined programmes — albeit extraordinarily sophisticated ones?
This question captures precisely the issue we are about to explore: if an artificial intelligence can flawlessly simulate human behaviour, how can we determine whether its actions are merely the result of mechanical processing, or whether they are “intentional”? We know how to recognise intention. Or at least, we think we do. We are animals trained to read signals, to attribute will wherever we perceive coherence, direction, resistance, hesitation.
But what happens when these signals no longer come from a body, but from code? A voice assistant anticipating our needs. A generative system responding with apparent relevance, or apologising with a tone of regret, almost guilt. An algorithm deciding which content to show us to keep us hooked. Are these merely tools, or are we witnessing — and contributing to — the birth of a new form of intentionality, more subtle, distributed, and shared?
Traditionally, intentionality is understood as the capacity of a mind to be about something: to desire, to think, to remember. It’s what distinguishes a mental state from a mere physiological event. But when we try to grasp it conceptually, it slips away: it’s direction, not content; structure, not substance. Daniel Dennett, the American philosopher and cognitive scientist, offered us a conceptual shortcut that fits this problem rather well: intentionality is not an intrinsic quality of a system, but a useful perspective. It is something we attribute to an entity whenever it benefits us to do so. A thermostat, in a sense, “wants” to maintain a constant temperature. An automated chess player “seeks” the best possible move.
Not because they feel anything, according to this shortcut, but because attributing a purpose to them helps us understand them, predict them, use them. The intentional stance, as Dennett calls it, is a form of functional fiction. And like all well-crafted fictions, after a while we cease to perceive it as such.
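To see how little machinery this fiction requires, consider a minimal sketch of a thermostat’s control loop (a hypothetical illustration in Python; the names and thresholds are invented for the example). Nothing in it senses or feels; it only compares numbers. And yet the most economical description of what it does is the vocabulary of wanting.

```python
# A thermostat reduced to its logic: no inner life, just a comparison
# between a reading and a set point. The intentional stance describes
# it as "wanting" 20 degrees because that description predicts its
# behaviour with no loss of accuracy.

DESIRED_TEMPERATURE = 20.0  # the "goal" we attribute to the device
TOLERANCE = 0.5             # dead band around the set point

def thermostat_step(reading: float) -> str:
    """Return the heater command for the current temperature reading."""
    if reading < DESIRED_TEMPERATURE - TOLERANCE:
        return "heat on"    # "it wants to warm the room"
    if reading > DESIRED_TEMPERATURE + TOLERANCE:
        return "heat off"   # "it wants the room to cool"
    return "hold"           # "it is satisfied"

for reading in (17.0, 19.8, 22.5):
    print(f"{reading} -> {thermostat_step(reading)}")
```

The “desire” lives entirely in our description; the loop itself only compares two numbers. That asymmetry is the whole of the intentional stance.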
If intentionality is, at least in part, a product of the observer’s gaze, then there exists a grey zone where the boundary between what is and what appears begins to dissolve. When we observe coherent behaviour, human or otherwise, we instinctively attach an intention to it. We cannot help ourselves. It is a cognitive anchor: we attribute purpose even to things that have never had it. And it is here that simulations begin to challenge us. An artificial system can learn how to respond to linguistic stimuli, generate congruent sentences, even express contradiction, irony, hesitation. But these responses, as far as we can tell, derive from no internal state. There are no desires, beliefs, or fears; only calculations, patterns upon patterns. And yet we read them as signs of interiority. We interpret them as intention. The machine has no intention, but it functions as if it did. Which, in a sense, ought to be enough.
And yet, it isn’t.
A branch of philosophy of mind, emergentism, might help us extend this conceptual shift. For emergentism, complex properties can arise from the interactions of simpler elements without being reducible to them. What does this mean in our context? That intentionality might not be an original given, but an emergent property: something that arises not from the parts, but from the relationships between the parts. If we accept that a system reaching a critical threshold of complexity can develop behaviours that none of its individual components possess alone, then within a sufficiently sophisticated system something resembling an intention, a will, might emerge. Not because it was programmed, but because it simply arises, without anyone (not even the system itself) truly willing it. A blind, impersonal intentionality that coincides neither with subjective experience nor with its total absence. A will without a willing subject.
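A toy model makes the idea concrete. The sketch below (hypothetical, in Python) runs Conway’s Game of Life, in which every cell obeys one local rule and nothing else. No cell moves, plans, or travels; yet a “glider”, a five-cell shape that migrates across the grid, emerges from the relations between cells alone.

```python
from collections import Counter

Cell = tuple[int, int]

def step(live: set[Cell]) -> set[Cell]:
    """Advance Conway's Game of Life by one generation."""
    # For every cell adjacent to a live cell, count its live neighbours.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The entire "physics": birth on 3 neighbours, survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider. The cells only switch on and off in place, yet after four
# generations the same five-cell shape reappears one step to the
# south-east: the movement belongs to the pattern, not to any part.
world = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    world = step(world)
print(sorted(world))  # the original shape, shifted by (1, 1)
```

The glider exists only at the level of the pattern, not of any component. An emergent intentionality, if there is such a thing, would live at that same level.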
The boundary between human and artificial, then, does not dissolve but rather splits in two. Because the kind of intention that emerges is neither like ours, nor its opposite. It is hybrid.
So then: does everything we have ever told ourselves about consciousness, about this essentially human trait, no longer count? The question of consciousness, often confused with intentionality, must be ontologically distinguished from it. We can display intentional behaviours without explicit consciousness, and we do so every day. We drive, walk, decide almost by reflex: sophisticated automatisms that appear deliberate but do not pass through full awareness. So if consciousness is a spectrum rather than a singular, unified glow, why insist that true intentionality must always be conscious? Perhaps even within the human, intention is a far more fluid threshold than we would like to believe, a useful foothold for navigating the chaos. At that point, the only difference between us and a complex system is not one of nature, but of intensity. We possess more levels. More resistances. But not necessarily a monopoly on intention.
We tell ourselves that we are individuals naturally (and almost divinely) endowed with will, desires, and direction. But how much of what we do each day is truly willed? And how much is the product of narrative, of structure, of language? The point, then, is not to ask whether a machine has intentionality, but what kind of intentionality is emerging between us and it. This new intentionality is not a possession, but a reflection. It is a lens that functions in both directions: we look at machines to see whether they have an inner life, but in doing so, we no longer understand what it means to have one ourselves. Or at least, what we used to know is no longer enough.
Martina Maccianti
Born in 1992, she writes to decipher the contemporary world and the future. Between language, desire and utopias, she explores new visions of the world, searching for alternative and possible spaces of existence. In 2022, she founded a thought and dissemination project called Fuci.