Hyperbolic Leaders, Large Language Models, and the Drift Toward Passivity
On hyperbolic leaders, large language models, and the quiet erosion of critical engagement.
It seems that everyone, everywhere, is talking about hyperbolic political leaders, about AI and large language models, and about how both, in different ways, contribute to a feared degradation of culture. On one side are hyperbolic leaders: figures who command attention through exaggeration, spectacle, and emotional manipulation. On the other are large language models: tools that, more quietly but no less profoundly, may be flattening our discourse and homogenizing our ideas. Are we witnessing new tools and tactics, or are we living through a subtle yet profound paradigm shift in how we think, learn, and understand the world?
We are being bombarded by pronouncements from leaders who seem more interested in spectacle than substance. The hyperbole of either utopia or apocalypse distracts from the essential work of shaping technology and culture so that they serve human values. In this environment, vigilance is not optional; it is the cost of freedom when faced with forces we do not yet fully understand.
A recent study out of MIT examined the use of ChatGPT and its effect on subjects’ ability to recall what they wrote, as well as the quality of their writing compared with that of subjects who had no access to AI. Even though I am decades past my graduate training in psychological research design, I couldn’t help but notice flaws in the study. The MIT researchers reported that subjects using AI exhibited less alpha and theta brain wave activity and concluded that they were less engaged. But this misreads what brain waves mean. Engagement isn’t measured by alpha or theta alone; beta waves often reflect cognitive effort. My concern isn’t just the study’s flawed conclusion; it’s also what we risk when we rely on AI, or our leaders, to do the thinking for us.
In The New Yorker this week, the MIT study and Sam Altman’s blog were both cited to argue that large language models have the potential to homogenize thinking. This concern is not unfounded, but perhaps not the whole story. When I speak with people about AI, I notice a generational divide. Younger users often embrace AI to make life easier, sometimes without fully immersing themselves in the material. Older generations — those of us who remember the blood, sweat, and tears of deep study — worry that something essential may be lost.
“The advancement and diffusion of knowledge is the only guardian of true liberty.” — James Madison
I was fortunate, or perhaps unfortunate, depending on how one sees it, to attend a traditional undergraduate college for my first two years. I learned to be a good student, to earn A’s, but not how to learn. On my own initiative, I transferred to a small college “in the woods” where we were encouraged to engage deeply with original sources: Cervantes’s Don Quixote, Montaigne’s essays, Marlowe’s Doctor Faustus, Shakespeare’s comedies and tragedies, Boccaccio’s Decameron, Rabelais’s Gargantua and Pantagruel, Machiavelli’s The Prince, and more. There, I learned what it meant to truly learn — not just to consume for an A, but to engage, question, challenge the author in the margins, and think critically.
That is what I fear AI risks eroding — not because of what Sam Altman calls a gentle singularity, but because of a slow, almost imperceptible drift away from true dialogue with great works, toward derivative, pre-digested content. The danger is not that AI can give us knowledge in seconds, but that we may stop doing the harder, slower work of reading, marking, and challenging what we encounter.
Our task is not to surrender to the loudest voices or the flashiest claims. It is to insist on clarity, honesty, and accountability. The promise of AI lies not in fantasy or alarm, but in the thoughtful application of knowledge. And that knowledge isn’t limited to great books or formal education. It’s found in the work we do, in the families and communities we care for, in the questions we ask, and in the meaning we create through our daily lives. Liberty is protected not by elites, but by engaged citizens—in every walk of life.
Perhaps what we’re experiencing is a Kuhnian paradigm shift — not an explosive rupture, but a gentle, creeping revolution in how we define knowledge, creativity, and participation. The challenge isn’t to resist these new tools, but to use them without letting them use us.
The point is this: even in dialogue with AI — what Sam Altman calls “more powerful than any human who has ever lived” — we must remain vigilant. We must not let it do our thinking for us. Just as we must challenge our leaders, we must challenge AI, and resist any passivity that would dull our capacity to think, question, and create.
Disclosure: How I Used AI to Participate in Crafting This Post
When I read the MIT study and The New Yorker article, I felt disturbed; important elements seemed overlooked. I quickly drafted this reflection on hyperbolic leaders and AI. I saw an unlikely connection and recalled my reading of The Structure of Scientific Revolutions. Once I had finished my post, I asked ChatGPT to help ensure I hadn’t misrepresented the articles, to check my memory of EEG brain wave activity, and to revisit Kuhn’s paradigm shift theory. We also discussed title options and structure. Each time I flagged something that felt off to me, the AI responded with encouragement, such as, “Good catch — that does improve clarity.” Ultimately, the collaboration helped give me confidence in my ideas, and I take full responsibility for the connections drawn and conclusions reached.
The initial idea, the working through of it, and this final piece are all my own. AI is not a substitute for the hard work of writing and reflection.
References
MIT study on ChatGPT and cognitive engagement, 2025.
Sam Altman, “The Gentle Singularity,” 2025.
The New Yorker, AI and Cultural Homogenization, 2025.
Thomas Kuhn, The Structure of Scientific Revolutions, 1962.
This is a lovely musing about what we lose when we don't do our own critical thinking. I love how you used AI to check your facts and generate titles, a good use of this tool, I think.
Thanks, Susan. So much important here, I think, and I would love to talk with you more about it. I've resisted "talking" with ChatGPT thus far, but have observed others jumping to use it...more so in the younger generation, but even my twin brother seems kind of enamored (having been shown how "amazing" it is by his oldest son). I love your thoughts about what we might/must be missing, and your call to slow down and open ourselves and others to deeper critical thinking. But perhaps I will be left behind, stuck in ruts? Nahhh, I think I shouldn't get too hung up in such worries. And I won't find myself loving Big Brother, a la 1984.