Why comparisons between AI and human intelligence miss the point

Claims that artificial intelligence (AI) is on the verge of surpassing human intelligence have become commonplace. According to some commentators, rapid advances in large language models signal an imminent tipping point – often framed as “superintelligence” – that will fundamentally reshape society.

But comparing AI to individual intelligence misses something essential about what human intelligence is. Our intelligence doesn’t operate primarily at the level of isolated individuals. It is social, embodied and collective. Once this is taken seriously, the claim that AI is set to surpass human intelligence becomes far less convincing.

These claims rest on a particular comparison: AI systems are measured against individual human cognitive performance. Can a machine write an essay, pass an exam, diagnose disease, or compose music as well as a person? On these narrow benchmarks, AI appears impressive.

Yet this framing mirrors the limitations of traditional intelligence testing itself: it is culturally biased and rewards familiarity and practice. The rise of AI should therefore prompt deeper reflection on what we mean by intelligence, pushing us beyond narrow cognitive metrics – and even beyond popular expansions such as emotional intelligence – toward richer, more contextual definitions.

Intelligence is not individual brilliance

Human cognitive achievements are often attributed to exceptional individuals, but this is misleading. Research in cognitive science and anthropology shows that even our most advanced ideas emerge from collective processes: shared language, cultural transmission, cooperation and cumulative learning across generations.

No scientist, engineer or artist works alone. Scientific discovery depends on shared methods, peer review and institutions. Language itself – arguably humanity’s most powerful cognitive technology – is a collective achievement, refined and modified over thousands of years through social interaction.

Studies of “collective intelligence” consistently show that groups can outperform even their most capable members when diversity of perspectives, communication and coordination are present. This collective capacity is not an optional add-on to human intelligence; it is its foundation.

AI systems, by contrast, do not cooperate, negotiate meaning, form social bonds or engage in shared moral reasoning. They process information in isolation, responding to prompts without awareness, intention or accountability.

Embodiment and social understanding matter

Human intelligence is also embodied. Our thinking is shaped by physical experience, emotion and social interaction. Developmental psychology shows that learning begins in infancy through touch, movement, imitation and shared attention with others. These embodied experiences ground abstract reasoning later in life.

AI lacks this grounding. Language models learn statistical patterns from text, not meaning from lived experience. They do not understand concepts in the way humans do; they approximate linguistic responses based on correlations in data.

This limitation becomes clear in social and ethical contexts. Humans navigate norms, values and emotional cues through interaction and shared cultural understandings we are socialised into. Machines do not.

A narrow slice of humanity

Proponents of AI progress often point to the vast amounts of data used to train modern systems. Yet this data represents a remarkably narrow slice of humanity.

Around 80% of online content is produced in just ten languages. Although more than 7,000 languages are spoken worldwide, only a few hundred are consistently represented on the internet – and far fewer in high-quality, machine-readable form.

This matters because language carries culture, values and ways of thinking. Training AI on a largely homogenised data set means embedding the perspectives, assumptions and biases of a relatively small portion of the world’s population.

Human intelligence, by contrast, is defined by diversity. Eight billion people, living in different environments and social systems, contribute to a shared but plural cognitive landscape.

AI does not have access to this richness, nor can it generate it independently. The data on which it is trained is a highly biased sample, representing only a fraction of the world's knowledge.

The limits of scaling

Another issue rarely addressed in claims about “superhuman” AI is data scarcity. Large models improve by ingesting more high-quality data, but this is a finite resource. Researchers have already warned that models are approaching the limits of available human-generated text suitable for training.

One proposed solution is to train AI on data generated by other AI systems. But this risks creating a feedback loop in which errors, biases and simplifications are amplified rather than corrected. Instead of learning from the world, models learn from distorted reflections of themselves.

This is not a path to deeper understanding. It is closer to an echo chamber.
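The shrinking-diversity dynamic can be sketched with a toy simulation (purely illustrative, not a model of any real training pipeline): treat each generation's "model" as nothing more than the empirical distribution of the previous generation's data, and sample the next training set from it. Any item absent from one generation's sample can never reappear, so the number of distinct items can only hold steady or fall.

```python
import random

def distinct_over_generations(seed_data, generations=30, sample_size=100):
    """Toy 'model collapse' sketch: each generation's model is the
    empirical distribution of the previous generation's data, and the
    next training set is sampled from that model. Items missing from
    one sample can never reappear, so diversity only shrinks."""
    data = list(seed_data)
    distinct = [len(set(data))]
    for _ in range(generations):
        # Sample the next generation's "training data" from the
        # current data's empirical distribution.
        data = random.choices(data, k=sample_size)
        distinct.append(len(set(data)))
    return distinct

random.seed(42)
# A long-tailed "world": a few common items, many rare ones.
world = [f"item{i}" for i in range(50) for _ in range(51 - i)]
counts = distinct_over_generations(world)
print("distinct items, first vs last generation:", counts[0], "->", counts[-1])
```

Rare items, like under-represented languages or perspectives, disappear first and are gone for good; the sequence of distinct counts never increases, which is the echo-chamber effect in miniature.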

Useful tools, not superior minds

None of this is to deny that AI systems are powerful tools. They can increase efficiency, assist research, support decision-making and expand access to information. Used carefully and with oversight, they can be socially beneficial.

But usefulness is not the same as intelligence in the human sense. AI remains narrow, derivative and dependent on human input, evaluation and correction. It does not form intentions, participate in collective reasoning or contribute to the cultural processes that make human intelligence what it is.

The rapid progress of AI has generated excitement – and, in some quarters, exaggerated expectations. The danger is not that machines will out-think us tomorrow, but that inflated narratives distract from real issues: bias, governance, labour impacts and the responsible integration of these tools into society.

A category error

Comparing AI to human intelligence as though they are competing on the same terms is ultimately a category error. Humans are not isolated information processors. We are social beings whose intelligence emerges from cooperation, diversity and shared meaning.

Until machines can participate in that collective, embodied and ethical dimension of cognition – and there is no evidence they can – the idea that AI will surpass human intelligence remains more hype than insight.

By Celeste Rodriguez Louro, Associate Professor, Chair of Linguistics and Director of Language Lab, The University of Western Australia
