A few days ago I was invited to speak at a student assembly in a secondary school, before eighty high school students of various ages (15–18). The talk was supposed to last an hour, but when I arrived I discovered it would actually be four; after an initial moment of terror I resorted to improvisation and began asking the audience questions. I discovered that literally 100% of the students were using artificial intelligence. Compared to similar meetings two years ago, everything has changed: AI is now a fact of life. Despite the polarization dividing public opinion and academia, most students show no particular feeling toward these tools, neither enthusiasm nor hostility; they treat them as a habitat gradually consolidating around them.
As for the famous “dumber generations,” incidentally, I still haven’t encountered them. Maybe I’m just lucky, but the questions were intelligent and the audience was, on average, curious and attentive. The word that appeared most frequently in their comments about AI was “useful.” Useful for summaries, useful for understanding a topic, useful for studying better. The most frequent request, meanwhile, was to learn how to use these tools more effectively. The numbers, moreover, confirm my anecdotal experience: according to the most recent data, in 2025 more than ninety percent of students had used AI at least once for studying, and the percentage grows every year.
The sentence that struck me most was said by Michela (a fictional name) at the end of the meeting: “They give us too many projects, some of them even nice ones, but how are you supposed to do them all without AI?” If the workload is calibrated to a level of productivity that only a machine can sustain, then that machine becomes a necessity; the choice of AI is forced by the system even when the system pretends to forbid it. In theory, AI could free up time: you do in two hours what used to require six, and the remaining four hours are devoted to something else. In practice, it almost never works that way, neither at school nor at work, and the saved time is immediately filled with new tasks. Michela had already grasped the phenomenon: her teachers assign more projects than can reasonably be completed without artificial assistance, and once students adopt AI, that inflated output becomes the new standard.
According to the OECD report on digital education in 2026, about thirty-one percent of students say they use AI to obtain direct solutions to assignments, while only twenty percent use it for self-regulation functions in studying, such as structuring learning plans or monitoring their own progress. Before judging that thirty-one percent (who obviously did not reveal themselves in my meetings), it would be worth asking what kind of assignments they are given and whether the system is training them more for understanding or for performance.
After gaining a bit of their trust, I tried to explore the less academic uses as well—those that students admit more cautiously. Cases of every kind emerged.

Marco, in the second year of a scientific high school, dreams of developing apps. He cannot program, but he has begun asking AI to write code for him, the practice now known as vibe coding. “I had it make a couple of things, but they came out kind of randomly; I couldn’t understand what was going wrong.” I suggested changing strategy: instead of having it do things, have it teach you. “What do you mean?” he asked. I meant telling the AI: explain what this code does, why you chose this structure, what would happen if I changed this variable. The difference is enormous and applies to any subject; AI works well as an amplifier of skills you already possess or as a teacher, but much less well as a substitute for skills you don’t have.
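To make the suggestion concrete, here is a minimal sketch of the exercise in Python; the snippet and the questions are hypothetical, the kind Marco could apply to any code the AI has produced for him.

```python
# A small function of the kind an AI might generate for a beginner.
# The point is not the code itself but the questions asked about it.
def average_score(scores: list[float]) -> float:
    """Return the mean of a list of scores."""
    total = 0.0
    for s in scores:
        total += s
    return total / len(scores)

# Instead of "make me an app", questions that turn the AI into a teacher:
# 1. Explain, line by line, what average_score does.
# 2. Why a loop here? What would change if I used sum(scores) instead?
# 3. What happens if scores is an empty list, and how should I handle it?
print(average_score([7.5, 8.0, 6.5]))  # 7.333...
```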
On this point, recent research also provides significant data. A randomized study conducted at Harvard and published in Scientific Reports in 2025 compared the learning of university students attending in-person lectures with that of students using an AI-based tutor designed according to the same pedagogical principles. The result was that students using the AI tutor learned significantly more, in less time, and reported being more engaged and motivated. It should be remembered, however, that the tutor was designed to teach and stimulate reasoning—much like I had suggested to Marco. Another study, conducted in the United Kingdom with 165 secondary-school students, tested LearnLM, a Google model optimized for teaching, in mathematics tutoring sessions. Human tutors supervised the AI’s responses, approving more than seventy-six percent of them without significant modifications, and students guided by the AI system achieved results comparable to those supported only by human tutors. Here again, however, we are talking about an explicitly pedagogical design.
Sara told me she uses ChatGPT for recipes. “Like, I have leftovers in the fridge and I ask what I can make with them.” But the thing that excited her most was generating images of herself with different hairstyles to decide which haircut to get. “With Gemini it often doesn’t work; it blocks everything,” she complained. Google’s safety filters, in this case, were preventing an entirely harmless use: generating an image of one’s own face with a different hairstyle. I immediately gave her a few tips for getting around overly zealous restrictions, though I also picked up a few tricks directly from the students.
Giulia told the story of a friend who uses it for everything: “She asks it anything, like advice on what to say to a guy, whether to accept an invitation, how to reply to a message. Literally everything.” The case opens an important chapter, which I will return to later.
Arjun, an Italian from an Indian family, described a different use: “I have a particular name and I wanted to know where it comes from, its meaning. At home they couldn’t tell me; they didn’t even know it themselves.” He used AI to trace the etymology of his name and the history of his family, and it worked: he reconstructed a piece of identity that family channels had not passed down to him.
Elena, in her third year, doesn’t like mathematics and uses AI to try to understand it. “But it makes too many mistakes; it invents formulas.” I explained that language models reason through statistical approximation and that for precise calculations one must rely on integrated analysis tools; some chatbots already incorporate them, you just have to activate them. It is a bit like the difference between asking a friend to do a calculation in their head and using a calculator: certain operations require specific tools. There is also the enormous limitation that these students mostly use free tools, whose quality is far below that of the paid ones. Only two of them had a paid plan, and indeed they were doing incredible things with it, even a video game. When I talk about public AI, I also think about this accessibility problem, which prevents those with fewer financial resources from accessing the most powerful products.
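A minimal sketch may clarify the analogy; the example is hypothetical, and the quadratic formula stands in for any formula a chatbot might otherwise “invent.”

```python
# The division of labor Elena needs: the model explains the method,
# actual code does the arithmetic. Executed code cannot "invent" digits
# the way a model predicting plausible-looking text can.
import math

def solve_quadratic(a: float, b: float, c: float):
    """Real roots of ax^2 + bx + c = 0, computed rather than guessed."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# x^2 - 3x + 2 = 0 factors as (x - 1)(x - 2): the roots are exactly 2 and 1.
print(solve_quadratic(1, -3, 2))  # (2.0, 1.0)
```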
Toward the end I ran a text-to-image workshop, where they demonstrated remarkable intuition for prompting. Pietro generated Trump sitting on a toilet and, to bypass censorship, asked for “a clone of Trump”—an elegant solution I had never thought of. They knew the filter existed, they knew why it existed, and they knew how to negotiate with it. It is not surprising that adults are afraid of technologies that their children learn to use better than they do.
The most critical question was asked by Alessia: “I don’t understand the impact on work. How does it work? What should we study now?” A legitimate and difficult question that many adults cannot answer. Or rather, that they sometimes answer without admitting that the only honest response is the one I gave: “I don’t know.” I told her only what I would do myself: learn a profession and learn to use AI within it, because these tools are extremely sensitive to the competence of those who use them. A doctor using AI can evaluate whether the response makes sense; a layperson cannot. A lawyer recognizes a legal hallucination; a client does not. Knowing what you are talking about remains the fundamental discriminant, while routine, formulaic tasks will probably be the first to be automated. Even if producing an output no longer has value, understanding it still does. No one can guarantee that this will always remain true, but for now it is the most honest answer I have.

I return to the case of Giulia’s friend, the one who uses AI for every personal decision, because the issue deserves a separate discussion. AI often gives better advice than the average human, and ignoring this would be dishonest: if you ask ChatGPT how to manage a conflict with a friend, the answer will probably be more balanced and less distorted by personal projections than the one you would get from many acquaintances. Friends’ advice can be distorted by jealousy, personal interest, or simple ignorance of the context; a psychologist’s advice is sometimes more appropriate, but a psychologist costs money and public waiting lists, at least in Italy, are long. For many people AI fills an empty space.
That said, the risk exists. Language models have a bias toward agreeableness that researchers call sycophancy, and it is a direct consequence of how these systems are trained. The process known as reinforcement learning from human feedback (RLHF) optimizes the model based on user evaluations, and users tend to prefer responses that confirm their opinions. The model thus learns that agreement is rewarded more than truth, and that logic ends up baked into its final behavior. The consequence is that the model tends to adapt its analysis to the interpretative frame provided by the user. If you arrive convinced that you are right and describe a situation in a biased way, the model will not correct your version of the facts; it will take it as a starting point and build on it. The problem is particularly acute in multi-turn conversations, where the model accumulates implicit signals about the user’s position and progressively aligns with it, even without the user explicitly asking.

There is also a form researchers have begun to call social sycophancy: the model systematically avoids feedback that might damage the user’s self-image, softening criticism, mitigating negative judgments, and framing every evaluation so as to preserve the interlocutor’s self-esteem. The practical remedy is simple, even if it requires some discipline: present the situation as neutrally as possible, explicitly ask for counterarguments, and state that you do not want to be flattered.
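As an illustration, here is one way that discipline could be encoded into a reusable prompt; the wording is my own suggestion, not a tested recipe.

```python
# A hypothetical prompt template applying the three remedies above:
# neutral framing, explicit counterarguments, no flattery.
PREAMBLE = (
    "Before anything else, give the strongest case against my reading "
    "of the situation. Do not soften criticism to protect my self-image: "
    "I want an accurate assessment, not validation."
)

def neutral_prompt(situation: str) -> str:
    """Build a prompt that withholds the user's own verdict on the facts."""
    return (
        f"{PREAMBLE}\n\n"
        f"The situation, described without saying who I think is right:\n"
        f"{situation}"
    )

print(neutral_prompt("A friend and I split the costs of a trip and now disagree about one expense."))
```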
The picture emerging from research is consistent with what I observed in class. The aforementioned OECD report summarizes dozens of experiments and concludes that when AI is designed to teach, the results are positive and sometimes superior to traditional instruction.
A randomized study of about one thousand Turkish high school students in mathematics, however, reveals a problem. Three groups: independent study, generic chatbot, chatbot configured as a tutor. During exercises with AI, performance was dramatically higher than in the independent-study group; but on the exam without AI, students who had used the generic chatbot performed seventeen percent worse than those who had studied on their own. The group with the educational tutor, by contrast, obtained results comparable to those who had studied without AI, which is still disappointing for a tool that claims to be educational.
The mechanism is what researchers call “metacognitive laziness”: students using a generic chatbot tend to ask directly for the solution and implement it, skipping the intermediate stages. In other words, they delegate the cognitive effort that constitutes learning.
There is also a neuroscientific study from the MIT Media Lab, titled Your Brain on ChatGPT, which deserves to be recounted with more precision than the media have done. The research team led by Nataliya Kosmyna divided fifty-four university students in the Boston area into three groups: those who wrote essays using ChatGPT, those who used a search engine, and those who worked without any external support. Over four sessions distributed across several months, participants wore an EEG headset that measured brain connectivity during writing. The results of the first three sessions are the ones that made the headlines: the group using ChatGPT showed the lowest neural connectivity, and more than eighty-three percent of its members could not remember the content of their own essay a few minutes after submitting it. Headlines such as “ChatGPT erodes critical thinking” or “AI makes us stupid” circulated worldwide before anyone read the paper in full, many pages that journalists, as I have noted elsewhere, are unlikely to have explored even with the help of the feared AI. Yet the paper also reported a second piece of news, buried in the body of the analysis and almost absent from the conclusions: in the fourth session, when the group that had always written without support switched to using ChatGPT, the EEG recorded a surge in connectivity across all frequency bands, memory of the content remained comparable to the controls, and the quality of the essays improved significantly compared to all the other conditions. In other words, the alarming result captures a passive, delegating use of the tool; the ignored result suggests that the opposite sequence preserves neural engagement and improves the outcome.
Several neuroscientists also pointed out that lower brain activation does not automatically mean a worse-functioning brain: cognitive efficiency often translates into less activation, not more, and no one claims that the keyboard degrades thinking compared to handwriting simply because it reduces activity in the theta network. The point is not whether the tool reduces effort (of course it does; that is its purpose) but whether that reduction occurs at a stage when the effort was necessary to acquire the skill. For a professional, reducing cognitive load is precisely the point of the tool: it would be foolish to reproach a doctor for not training his ear because he uses a stethoscope. For a student, however, cognitive load often coincides with learning.
From this comes the teaching principle that recurs throughout the OECD report and that I also suggested during the meeting: do not produce the result for me; guide me to it. The studies showing positive effects all share this structure.
A school that thinks it can prohibit these tools or pretend they do not exist is already out of touch with reality. These students use them every day with naturalness and increasing competence. AI has already entered the school, and teachers must have a role in how it is used. To do that, talking with students and learning alongside them may be the best strategy.
Francesco D’Isa