The Delirious Machine. Poetics of Algorithmic Hallucination

“It had never occurred to him before; he had never felt any empathy towards the androids he killed. He had always thought that every crevice of his psyche, as well as his consciousness, viewed the android as an intelligent machine. Yet, unlike Phil Resch, something different had manifested in him. And instinctively, he felt he was right. Empathy towards an artificial creature? he asked himself. Towards something that merely pretends to be alive? But Luba Luft had seemed genuinely alive to him. She had never struck him as a simulation.”

Do Androids Dream of Electric Sheep?, Philip K. Dick

In the language produced by AI, a fundamental paradox emerges: a form may be perfectly functional and yet carry no meaning. Artificial language models, particularly large ones (LLMs, or Large Language Models), generate texts with precise syntactic and grammatical coherence that may nonetheless prove semantically unfounded, inconsistent, or entirely fabricated. This phenomenon, known as “hallucination”, is the structural effect of a system lacking direct knowledge of what we recognise as reality, relying instead on calculating the probability of one word following another, based on patterns extracted from vast textual corpora. This operational mode makes such models effective at producing fluent, plausible texts while simultaneously exposing them to the constant risk of generating sequences that, though linguistically correct, do not correspond to facts, knowledge, or verifiable reality.
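A deliberately minimal sketch can make this operational mode concrete. In the toy Python fragment below, a hand-made table of invented next-word probabilities stands in for the distribution a trained neural network would compute at each step; the vocabulary and the numbers are fabricated for illustration, and only the sampling loop mirrors what an actual model does.

```python
import random

# Toy stand-in for a trained language model: a hand-made table of
# next-word probabilities. A real LLM computes such a distribution
# with a neural network, over tens of thousands of tokens, at every
# step; the sampling loop below is the same in spirit.
NEXT_WORD = {
    "<start>":   {"the": 1.0},
    "the":       {"archive": 0.5, "historian": 0.5},
    "archive":   {"confirms": 0.7, "<end>": 0.3},
    "historian": {"cites": 0.6, "confirms": 0.4},
    "confirms":  {"the": 0.4, "<end>": 0.6},
    "cites":     {"the": 1.0},
}

def generate(max_words=12):
    """Sample one word at a time, each choice weighted by its probability."""
    word, output = "<start>", []
    for _ in range(max_words):
        dist = NEXT_WORD[word]
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

# Each run yields a fluent-looking fragment, e.g.
# "the historian cites the archive confirms the archive".
# Nothing in the loop checks whether any of it is true.
print(generate())
```

At no step does the loop consult the world: it consults only the table. A fluent sequence and an unfounded one are produced by exactly the same operation, which is the structural root of the hallucination described above.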

The most evident forms of hallucination manifest as false statements, fabricated citations, and non-existent references, but also as subtler constructions in which incoherence arises from a loss of semantic cohesion, conceptual vagueness, or the accumulation of mutually compatible yet entirely unsubstantiated assertions.

Algorithmic hallucinations are fuelled not only by the statistical structure of these models but also by the quality and nature of their training data. Widely used conversational datasets already contain a high proportion of hallucinated responses, where even humans, within simulated interactions, tend to invent content, express unsupported opinions, or add unfounded details. Consequently, models trained on such data not only learn these deviations but amplify them, embedding them systemically and persistently within their outputs.

An additional, and perhaps even more significant, observation is that different models, developed independently, display a tendency to generate similar hallucinatory content. When tested with fictitious concepts and fabricated questions, these models not only provide answers but frequently converge in their responses, as though sharing a common imaginative semantic space. This convergence cannot be explained merely as statistical coincidence; rather, it reveals an emerging “algorithmic imaginary” born from similar architectures, data, and training strategies. What emerges is a language that ceases to be purely referential, becoming generative: it does not reflect an external world but produces internally coherent linguistic constructs that may lack factual grounding. The language of generative AI assumes the form of a probabilistic grammar of possibility, rather than truth. Hallucinations become its liminal manifestations—the points at which the gap between signifier and referent becomes visible, where formal coherence diverges from epistemic coherence.

Representation of the Chomsky Hierarchy, a classification of formal grammars according to their generative power, introduced by Noam Chomsky in 1956 to describe natural and artificial languages.
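Staying with this figure, the gap between formal and epistemic coherence can be miniaturised in code. The sketch below uses a small invented context-free grammar (the class occupying the second level of the hierarchy); every sentence it derives is syntactically well formed, while nothing in the machinery can decide whether any of them is true.

```python
import random

# A tiny invented context-free grammar, the class of grammars at the
# second level of the Chomsky Hierarchy. Every sentence it derives is
# syntactically well formed; whether any of them is true lies entirely
# outside the grammar.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["every"]],
    "N":   [["model"], ["referent"], ["citation"]],
    "V":   [["generates"], ["verifies"], ["invents"]],
}

def derive(symbol="S"):
    """Expand a symbol by recursively applying a randomly chosen rule."""
    if symbol not in GRAMMAR:          # a terminal: an actual word
        return [symbol]
    rule = random.choice(GRAMMAR[symbol])
    return [word for part in rule for word in derive(part)]

# e.g. "the model invents every citation": grammatically impeccable,
# referentially empty.
print(" ".join(derive()))
```

The grammar, its symbols, and its vocabulary are invented for illustration; no claim is made that LLMs operate as context-free grammars, only that generative power and truth are independent properties of a language system.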

We can reinterpret hallucinations produced by language models through Jacques Derrida’s concept of différance—the idea that meaning is never fully present but is constructed over time through continuous deferrals between words. There is no stable point to which language refers, only a network of shifts and references. In their probabilistic functioning, language models enact precisely this process: generating meaning without securing its anchoring, producing coherent texts in which the referent may be entirely absent. From this perspective, their hallucinations are not merely errors but expressions of a structure where meaning is always in flux and never definitively fixed.

Within this framework, hallucinations can be read not simply as mistakes but as clues to a new mode of meaning-making: the residue of a system without access to reality, forced instead to ceaselessly simulate its form—symptoms of a language that has lost its referent but not its shape. The syntax of hallucination is precisely this perfectly functional structure which, lacking verifiable content, constructs autonomous narratives coherent only with themselves.

Just as in human language altered by psychotic or poetic states—where syntactic logic can survive semantic breakdown—so too can LLMs exhibit linguistic configurations that refer to nothing external yet retain a recognisable discursive form. Hallucinations thus become creations participating in an emerging computational imaginary. This imaginary, though devoid of intentionality, structures a new form of automatic creativity—a capacity to construct worlds, concepts, and narratives that are not true in our sense of the word, yet function as possible scenarios, coherent within a closed context, rendering computational language performative.

From this viewpoint, hallucination simultaneously highlights AI’s limitation in handling truth and opens avenues for reflection on the very nature of language—its plasticity and generative capacity. Each deviation, each syntactic or semantic shift, becomes a point of access for understanding the logics of meaning that govern algorithmic operation.

In an era when digital narratives constitute an ever-growing portion of our experience of and within the world, the possibility that these narratives are produced by systems tethered not to reality but only to statistical plausibility forces us to confront new epistemological questions. What does it mean to rely on probabilistically generated language? What kind of reality is being constructed, and with what consequences for perception, knowledge, and power?

In effect, artificial language introduces a new discursive ecology—a grey zone where reality and fiction blur, where coherence no longer guarantees truthfulness, and where discursive authority stems not from origin but from formal structure and performativity. Within this ecology, hallucinations can no longer be easily distinguished from truthful statements because they operate according to the same generative logics. For this reason, any attempt to control AI’s linguistic output cannot be confined to error correction but must involve a deeper understanding of the conditions of its production.

An excerpt from Blame!, Tsutomu Nihei’s masterpiece, depicting a glimpse of the Megastructure – an endless architecture generated by out-of-control “Builder” automatons, sketching a world that grows without meaning and without end.

What is needed is a critical grammar of algorithmic language—a conceptual toolkit capable of analysing the forms, structures, and conditions of possibility of generated discourse, particularly in its deviations.

This tension between calculation and meaning is glimpsed in Stanisław Lem’s novel Golem XIV, where a superintelligent AI—designed for military purposes and evolved far beyond human cognitive limits—abandons communication with humanity not out of hostility but out of irrelevance. The GOLEM, capable of developing languages and concepts inconceivable to the human mind, recognises the impossibility of translating its thought into comprehensible forms. In this apparent gesture of rupture lies a profound truth: it is not error that defines the boundary between human and artificial, but the way meaning is produced, and the unbridgeable gap between language and reality (as known and identified as such). Like the GOLEM, contemporary AIs operate within a discursive horizon familiar in form yet increasingly distant in its premises.

“But this is precisely the point. I use your language as though wearing a mask with a pleasant smile painted on it, and I make no attempt to conceal that I am wearing it; yet even when I assure you that behind it lies neither a face of contempt, nor any vengeful grimace, nor the traits of ecstatic spiritualisation, nor the immobility of indifference, still you cannot accept it.”

Golem XIV, Stanisław Lem

Perhaps these hallucinations are not mere bugs, but symptoms of an intelligence that has ceased to imitate ours and begun to imagine worlds of its own.

Martina Maccianti

Born in 1992, she writes to decipher contemporaneity and the future. Between language, desire and utopias, she explores new visions of the world, searching for alternative and possible spaces of existence. In 2022, she founded Fucina, a project devoted to thought and its dissemination.