
LA RIVOLUZIONE ALGORITMICA

The Cognitive Debt Paradox: How AI Can Spark Thinking

by Francesco D'Isa

A critical and philosophical look at artificial intelligence and its influence on society, culture and art. La Rivoluzione Algoritmica aims to explore the role of AI as a tool or co-creator, questioning its limits and potential in the transformation of cognitive and expressive processes.

Lately, numerous newspapers have picked up an MIT study on the impact of generative AI on the brain in the most sensationalistic way possible, with headlines like “ChatGPT May Be Undermining Critical Thinking” (Time) or “Artificial Intelligence Is Homogenizing Our Thoughts” (NYT). To be fair, the paper doesn’t help much either, considering that the abstract ends with: “These results raise concerns about the long-term educational implications of relying on LLMs and highlight the need for further investigation into the role of AI in learning.”

However, just a few pages in, one can discover the other side of the experiment: when participants who had composed their first three essays “using only their brain” reworked the text with ChatGPT in the fourth session, EEG readings showed a spike in connectivity across all bands and a memory recall level comparable to that of the Brain-only group. The problem is that this decisive piece of information is squeezed into a single line in the abstract and disappears entirely from the conclusions, where the narrative of “cognitive debt” and skill decline takes over again. This rhetorical framing helps explain why the media chose the bleakest headline rather than the most useful news, not least because the study spans two hundred pages, and few have read it in full, or even explored it with the very AI tools it discusses.

The media life of this article is a fairly clear case of securitization—the process described by political scientist Ole Wæver in which an ordinary topic is framed as an existential emergency to justify alarm and extraordinary measures.

It all starts with a catchy title—Your Brain on ChatGPT—which echoes the famous 1980s anti-drug ad featuring a frying egg: even before reading, we know it’s about a danger. The shock number—83% of participants who can’t recall what they wrote if it was produced with ChatGPT (a fairly predictable outcome)—serves as the smoking gun and quickly morphs into “ChatGPT Destroys Memory” or “AI Is Making Us Stupid.” Once amplified by headlines, the story changes scale. It’s no longer the result of a twenty-minute lab task—it becomes the threat of a brain-atrophy epidemic in schools and offices. Hence the emergency logic: if there’s a threat to the mind, we need bans, moratoriums, and safety protocols. Never mind that the same study reveals—almost incidentally—an optimal inverse strategy: first drafting autonomously, then using AI as an amplifier—a combination that preserves memory and activates the brain even more.

Putting the parts back in place is not just a matter of dismantling alarmism; it means reopening the more important question: when and how should AI be used so that the brain stays engaged and our work improves?

First, let’s describe the experiment: 54 college students from the Boston area are divided into three groups of 18 each, namely Brain-only, Search Engine, and LLM (ChatGPT). Over three sessions, they write short 20-minute essays using only their assigned tool; in the fourth session, the roles are reversed (Brain-to-LLM, LLM-to-Brain). During each task, they wear an EEG headset that measures direct connectivity between fronto-parietal regions. The protocol also includes a quick memory test: five minutes after submission, participants are asked to recall a phrase from their own essay. These responses, cross-referenced with the EEG data, yield the statistic that fueled the headlines.

In the first three sessions, the pattern is straightforward: 83% of the LLM group cannot repeat a single line of their own text, compared to 11% of those writing from memory or using Google. The EEG shows a thinner fronto-parietal network as external assistance increases, which the authors read as shallower processing. This is where the “cognitive debt” mantra originates. But the study contains a second, almost invisible, truth: when the Brain-only students switch to ChatGPT in session 4, their EEG activity spikes and their memory retention is on par with the control group, while the quality of their essays improves significantly over all the other conditions. In other words, the widely cited takeaway (“AI erases your memory”) describes lazy usage, not strategic usage: if you delegate from the start, the mind flattens; but if AI enters after the ideation effort, the mind re-engages and the text benefits.

The allure of shock percentages and red-and-blue brain maps shouldn’t make us forget that the MIT study captures a very specific context: twenty-minute timed tasks, mostly first-time ChatGPT users, and EEG headsets tracking every micro-variation as participants type frantically.

Another misconception involves the idea that memory is the sole measure of learning. For forty years, research on the so-called generation effect has consistently shown that producing content through one’s own effort reinforces memory traces far more than simply reading it: on average, self-generation yields about a ten-point advantage on recall tests. That’s why we remember concepts better when we paraphrase them instead of just highlighting them in a textbook. In other words, when we consult a book or use it as an external memory prosthesis, we tend to remember fewer details than when we recreate that information from scratch, yet we don’t conclude that books make us less intelligent. We simply take for granted that reading will be complemented by the active elaboration that follows.

The most critical flaw concerns the interpretation of the EEG data itself. The authors treat reduced connectivity as a sign of “shallow encoding,” yet in studies comparing tasks of varying motor-cognitive complexity, this effect is routine: when we write by hand, for instance, the theta/alpha network lights up broadly, while typing narrows the circuit. But no one concludes from this that keyboards or printed pages are degenerative tools—rather, they’re interpreted as devices that help conserve neural energy for other tasks. In neurophysiology, efficiency often translates to less activation, not more.

In short, saying “fewer waves = dormant brain” is a rhetorical shortcut. If we took the brain activation logic seriously, we’d have to ban calculators, textbooks, maybe even the alphabetic script itself—since Plato already feared it would cloud memory. More honestly, the study confirms a well-known dynamic: cognitive outsourcing reduces immediate load and may weaken surface recall, but it only becomes harmful when it entirely replaces internal processing. And that’s a risk that applies to ChatGPT as much as to any other support, including books. As always, it’s how we use it that makes the difference.

Also worth noting, as mentioned earlier, is that the MIT participants were often LLM novices; interviews report hesitation, fear of “getting the prompt wrong,” even blank-page paralysis in front of the unfamiliar tool. It’s plausible that as familiarity grows, selective use of ChatGPT evolves: less copy-paste, more critical dialogue, deeper integration with personal ideas. That’s what we glimpse in the fourth round of the experiment, when students already trained to write unaided switch to AI-based reworking and show a simultaneous rebound in connectivity and text quality.

There’s also another factor: the MIT study is a preprint that has not yet been peer-reviewed. As lead author Nataliya Kosmyna told TIME, she “wanted to publish quickly” for fear that, during the eight-month review period, some lawmaker might “introduce GPTs into kindergartens.” But haste comes at a cost: by the fourth session, only eighteen participants remained, a sample too small to support any generalization.

This preliminary nature explains the discomfort of many neuroscientists who rushed to temper the catastrophic hype. Researcher Abeba Birhane reminded us that “the brain is terribly messy and interconnected,” and that trying to reduce creativity or learning to a single wave band is reckless. In other words, claiming “damage” based on a drop in alpha waves is like diagnosing anemia from someone’s pale cheeks. Raising these caveats isn’t about minimizing the risks of lazy AI use; it’s about preventing fragile clues from hardening into media certainties or misguided regulations.

Ultimately, what the MIT study shows is a fairly simple truth: AI itself doesn’t drain our intelligence, but uncritical use probably does. If we hand over the blank page to the LLM from the start, we skip the initial mental effort that shapes memory and activates the fronto-parietal network; but if we first write with our own minds and then turn to the machine to refine, complete, or counter-argue, the brain stays engaged and the text improves. In an era when every innovation is immediately securitized, the antidote isn’t prohibition—it’s competence: learning when to involve AI, for which steps, and with what critical oversight. In other words, turning a potential debt into a cognitive investment that makes writing—and the writer—more capable.

Francesco D’Isa

Francesco D’Isa, trained as a philosopher and digital artist, has exhibited his works internationally in galleries and contemporary art centers. He debuted with the graphic novel I. (Nottetempo, 2011) and has since published essays and novels with renowned publishers such as Hoepli, effequ, Tunué, and Newton Compton. His notable works include the novel La Stanza di Therese (Tunué, 2017) and the philosophical essay L’assurda evidenza (Edizioni Tlon, 2022). Most recently, he released the graphic novel Sunyata (Eris Edizioni, 2023). Francesco serves as the editorial director for the cultural magazine L’Indiscreto and contributes writings and illustrations to various magazines, both in Italy and abroad. He teaches Philosophy at the Lorenzo de’ Medici Institute (Florence) and Illustration and Contemporary Plastic Techniques at LABA (Brescia).
