AI Erases Self-Awareness: Why We Overestimate Skills More Than Ever

The more we rely on artificial intelligence (AI), the less accurately we assess our own abilities. A new study reveals that using AI tools like chatbots doesn't just help us perform better; it also leads us to believe we are better than we actually are. This phenomenon, a reversal of the well-known Dunning-Kruger effect, has profound implications for how we learn, make decisions, and maintain critical thinking skills.

The Dunning-Kruger Effect in Reverse

The Dunning-Kruger effect, named after psychologists David Dunning and Justin Kruger, describes how people with low competence in a given area tend to overestimate their abilities, while those with high competence often underestimate theirs. This is because those who lack skill lack the self-awareness to recognize their own shortcomings.

However, researchers at Aalto University in Finland, along with collaborators in Germany and Canada, have discovered that AI flips this dynamic. When people use AI to solve problems, they all tend to overestimate their performance, regardless of their actual skill level. The most AI-literate users—those who are most comfortable with and reliant on these tools—exhibit the strongest overconfidence.

How AI Distorts Self-Assessment

The study, published in the February 2026 edition of Computers in Human Behavior, involved 500 participants tasked with logical reasoning problems from the Law School Admission Test. Half were allowed to use ChatGPT, while the other half were not. Both groups were then scored on their performance and asked to assess how well they thought they had done.

The results were striking: AI users consistently rated their performance higher than non-users, even when their actual performance was similar or worse. The researchers attribute this to what they call “cognitive offloading”—the tendency to rely on AI to do the thinking for us, reducing our own mental effort and critical evaluation.

The Trade-Off: Performance vs. Self-Awareness

When we use AI, we often accept the first answer we get without further questioning or verification. This shallow engagement bypasses the usual feedback loops of critical thinking, making it harder to gauge our own accuracy. As a result, we may perform better with AI, but lose our ability to accurately assess how well we’re doing.

The study also found that the usual gap in self-assessment between high-skill and low-skill users shrinks when AI is involved: because AI lifts everyone's performance to some degree, it creates a false sense of competence across the board.

The Broader Implications

The flattening of the Dunning-Kruger effect poses several risks. As we become more reliant on AI, our metacognitive accuracy—our ability to think about our own thinking—may suffer. Without rigorous self-assessment, we risk becoming less reliable sources of information and making poorly informed decisions.

The researchers warn that this trend could lead to an increased climate of miscalculated decision-making and a gradual erosion of critical thinking skills. The more AI-literate we become, the more likely we are to overestimate our abilities, compounding the problem.

Reversing the Trend

To mitigate these risks, the study suggests that AI developers should design responses that encourage further questioning. By prompting users to reflect on their answers with questions like "How confident are you in this answer?" or "What might you have missed?", AI could help restore some level of metacognitive awareness.

Ultimately, the study highlights a critical paradox of the AI age: while these tools can enhance performance, they may also erode the very skills needed to assess that performance accurately. The key is to ensure that AI is used to augment, not replace, our own critical thinking abilities.