The Convenience Trap: Is AI Quietly Replacing Your Thinking?
Before we can understand what AI is doing to the world, we need to understand what technology has already done to us.
That was the thought I kept coming back to during the opening panel at the Tech Show, where Hannah Fry and Louis Theroux had a genuinely fascinating conversation about AI and human nature. Hannah mentioned her new BBC series, including the story of a man who developed a relationship with an AI girlfriend. It’s easy to frame that as a cautionary tale about technology. But is it really about the technology, or is it about what twenty years of frictionless, on-demand digital life has quietly done to our appetite for effort, discomfort, and real human connection?
Because if we’re honest, technology has been making us more comfortable for a long time. Deliveroo, Netflix, Google Maps, Uber: each one removed a layer of inconvenience, and each one removed a layer of effort with it. AI is simply the next, and far more powerful, iteration of that pattern. And the question nobody is asking loudly enough is: what happens when we apply that same comfort-seeking logic to our thinking?
A recent Anthropic study suggests we already are. Their AI Fluency Index found that polished AI output actually makes users less likely to check for errors. In conversations that produced finished outputs (documents, small apps), fact-checking dropped by 3.7 percentage points and critical questioning by 3.1. The better AI looks, the less we scrutinise it. That is a problem.

Which is precisely why the human in the loop isn’t a nice-to-have; it’s the whole point. Google DeepMind’s research team has proposed a governance framework addressing exactly this, including the idea that AI systems should occasionally assign tasks to humans that AI could handle itself, specifically to preserve the human competence needed to intervene when it matters. Their core principle, that a task should only be delegated if its outcome can be verified, should honestly be the first thing anyone learns before using a large language model. It made me wonder: should AI tools come with something like a user guide, the way medication does? Not to frighten people off, but to set realistic expectations about what oversight actually requires.
There’s also a quieter but important shift happening at the language level. Yann LeCun and his team are challenging the term “Artificial General Intelligence,” arguing it’s too broad and too loaded. They propose “Superhuman Adaptable Intelligence” instead, a framing that emphasises what AI actually does well: excelling at specific, adaptable tasks rather than replacing human judgement wholesale. For businesses, this matters. It’s a signal to stop chasing a mythical all-knowing AI and start deploying it strategically, within a human-centred framework. It’s also a reminder that how we name things shapes how we think about them, and right now, we need to think about this very carefully.
We are so busy trying to make AI more human that we are forgetting to keep ourselves human. If we allow convenience to replace our critical thinking, we aren’t using AI; we are being managed by it.
My advice to leaders is simple: don’t bring AI in to replace the human in the loop. Bring it in to challenge the human to be better, sharper, and more rigorous. The goal was never superhuman technology. It was always a more capable humanity.
Azahara Corrales