
How ChatGPT Is Redefining Human Expertise: Or How To Be Smart When AI Is Smarter Than You

Technology has always pushed humans to upgrade themselves; AI is no exception.

Tomas Chamorro-Premuzic Contributor

I write about the psychology of leadership, tech and entrepreneurship.



Humans are the most adaptable creatures on Earth, not least according to humans! A big part of our adaptability rests on our ability to create tools for “doing more with less”, which is a good definition of technology.

Artificial intelligence (AI) is no exception. As I illustrate in my latest book, I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique, AI is the salient technology of our times because it has, for the first time, enabled us to automate intellectual (cognitively complex) tasks; and this is just the beginning.

Take ChatGPT, which has been the subject of much discussion in the past few months (and could soon secure $10bn in investment from Microsoft). In its current version, this conversational user interface has impressed with its ability to make human-like interpretations of questions, display human-like understanding, and provide expert-like answers to virtually anything we ask, even when the answer is “I’m not qualified to answer this question” (something only humble humans seem capable of saying).

Think of this chatbot as an on-demand version of Wikipedia that creates all content on the fly, carefully tailoring it to your questions. This AI engine achieves this by leveraging the latest advances in Natural Language Processing, a set of techniques that enables computers to mine and interpret verbal communication by matching words and sentences to vast language databases, closely replicating some of the interactional patterns found in human communication.

While far from perfect, ChatGPT’s impressive capacity for addressing an unimaginably large range of topics and questions with remarkable relevance (and, at times, accuracy) positions it as the technological equivalent of a polymath genius, and this in an age when the overall sum of humanity’s knowledge seems impossible to quantify or grasp.

An important question that arises, therefore, is what this means for human expertise. Is there a future for human expertise in an age of omnisapient machines? And if so, how should we rethink human expertise so that it remains valid, valuable, and relevant, now that the prospect of our knowledge being at best minimal and microscopic next to AI’s has become an undeniable reality?

Although nobody (not even ChatGPT) has the definitive answers to these questions, there are at least three main ways in which human expertise could evolve in the age of expert machines. They each highlight the need to harness some narrow dimensions of knowledge or human expertise, namely:

(1) Knowing what questions to ask: It’s simple: when the answers to all questions are openly available and accessible to all, what matters is the ability to ask the right questions. That is, in the future smart humans will differentiate themselves not in terms of what they know, but what they want to know. Long before ChatGPT was conceived, scientists argued that expertise, and even creativity, is not so much a function of answering questions as of asking the right ones. This is exacerbated in the age of AI. In this way, we continue to see a trend whereby what you know is less significant than what you are eager to learn, which means that the retention of facts and information is far less practical than the ability to identify your knowledge gaps and feel uncomfortable enough about them to want to reduce them. In the age of AI, human curiosity (having a hungry mind) matters more than experience and knowledge, because knowledge itself is freely available.

(2) Knowing more than ChatGPT, at least in a narrow domain or subject matter: this equates to having more expertise than the “wisdom of the crowds” combined, or being able to detect errors in ChatGPT’s “crowdsourced” knowledge. In a way, this is no different from identifying errors in Wikipedia or Google search, both of which are significantly “wiser” than any human when it comes to breadth, while also being more “ignorant” than expert humans when it comes to depth in a particular subject matter. For instance, if you don’t know anything about Medieval philosophy, you will rapidly go from no knowledge to some if you invest 30 minutes or so on Google, Wikipedia, or ChatGPT. But if you have devoted many years to studying this subject, you will not need more than 30 minutes to spot errors and inaccuracies on those platforms… and perhaps the benchmark of real expertise will be the ability to produce content that outperforms what we find there, both in accuracy and usefulness. In short, it is not that human expertise will become obsolete, but that, in order to be valuable, human expertise needs to be quite exceptional.

(3) Knowing how to turn insights into actions: George Bernard Shaw famously quipped that “those who can, do; those who can’t, teach”. With so many AI tools, including ChatGPT, advancing our ability to access knowledge, it’s the ability to turn that knowledge into actions – going from theory to practice – that will epitomize human experts. This is in line with what Ajay Agrawal and colleagues argue in their brilliant book, Prediction Machines, which defines AI as a prediction engine: when X happens, Y will happen, and so on. Thus, when AI has the answers, what is there for humans to do? To work out what to do with those answers, in order to make smarter decisions and act in more intelligent ways.

A final consideration: since most technologies are designed for efficiency purposes, there is a general tendency for most technological innovations to encourage human laziness. Think of fast food, the microwave, cars, elevators, and all on-demand services, which are lubricated by AI’s personalized marketing nudges. The risk is therefore clear: the more we outsource thinking, including knowledge acquisition, to machines, the less incentivized we are to think. So, while techno-enthusiasts have always argued that when technology takes care of human tasks, it “frees up” humans to pursue other, more creative and intellectual endeavours, this is less clear when what we automate is our thinking.

In the past decade, we’ve seen AI’s capabilities advance significantly (if not exponentially), while humans have been happy to downgrade themselves to smartphone and social media addicts, spending most of their time inadvertently training AI on how to predict their choices, desires, and thoughts. Perhaps it is not so much AI’s impressive capabilities as our unimpressive, monotonous, repetitive, and predictable patterns of behavior (staring at a screen and clicking on boxes) that have enabled AI to think and act like us.

If we want to retain an edge over machines, it is advisable that we avoid acting like them. Unless we do this, we may find that the real threat is not so much AI automation, but our tendency to automate ourselves, or turn into automata.


Follow me on Twitter. Check out my website.

Dr. Tomas Chamorro-Premuzic is the Chief Innovation Officer at ManpowerGroup, a professor of business psychology at University College London and Columbia University, co-founder of, and an associate at Harvard’s Entrepreneurial Finance Lab. He is the author of Why Do So Many Incompetent Men Become Leaders? (and How to Fix It), upon which his TEDx talk was based. His latest book is I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. Find him on Twitter: @drtcp.


Source: Forbes Media LLC.
