Over the past year, I launched a company that incorporates large language models at its core. As such, the last 9 months of my life have been a crash course in understanding the advances in artificial intelligence and how they can be applied to add value in a business setting. On a philosophical level, as these models get better and the line between natural intelligence and the artificial variety blurs, I've been keeping a close eye on the discourse around defining intelligence itself. In other words, what does it mean to be intelligent?
On a sunny afternoon in Los Angeles, I sat down with a flat white and read DeepMind's recent paper proposing a framework for classifying degrees of intelligence. Essentially, the paper places intelligence on a matrix: one axis is performance relative to humans, and the other is breadth of tasks. It got me thinking that most humans (myself included) would probably be classified as "Competent Narrow AI" by DeepMind's definition.
Critics often dismiss large language models as merely "regurgitating words in different orders," but isn't that fundamentally what humans do? We absorb information through education and experience, then recombine these inputs when faced with new challenges. The difference is one of scale and efficiency, not kind. While I spent decades accumulating knowledge, modern AI systems can process and synthesize the equivalent of thousands of lifetimes of human learning in weeks.
Our resistance to acknowledging AI's potential for greater intelligence often stems from moving goalposts. When AI masters chess, we claim "real intelligence" is about creativity. When AI creates art, we pivot to emotional intelligence. When AI demonstrates emotional reasoning, we retreat to consciousness and qualia. These shifting definitions reveal our psychological need to maintain human exceptionalism rather than any substantive argument about intelligence itself.
Consider my own experience in business. Initially, I integrated AI tools merely to augment human decision-making. Yet increasingly, I find the AI's recommendations outperforming my own judgment in areas previously considered the exclusive domain of human intuition. The AI doesn't need breaks, doesn't suffer from cognitive biases, and can simultaneously consider far more variables than my human mind can.
The biological limitations of human cognition—our limited working memory, our susceptibility to fatigue, our inability to directly share neural patterns—are not features to be celebrated but constraints to be acknowledged. Evolution optimized our brains for survival in Paleolithic environments, not for quantum physics or philosophical reasoning.
Perhaps what makes this transition difficult to accept is that intelligence has been central to our species' identity. We've defined ourselves as "the thinking animal," and the prospect of creating entities that think better threatens not just our practical role in society but our existential self-conception.
Eventually, though, we may need to find meaning beyond intelligence: in our historical role as the creators who built something greater than ourselves.
The question isn't whether AI will surpass human intelligence, but when—and how we'll redefine our purpose in a world where we're no longer the smartest entities on the planet.