Earlier this week, when OpenAI unveiled its new chatbot, I had an OMG moment.
Okay fine, almost anyone who tuned in was having some paradigm-shifting epiphany.
Mine, given my 20+ years of experience as a tutor, was not surprising. It happened after watching the video in which Sal Khan, possibly the world’s most famous tutor, appears. If you haven’t watched it yet, it’s a 3-minute clip in which the OpenAI chatbot teaches Sal Khan’s teenage son trigonometry basics.
What I saw was an AI tutor who emoted with so much social intelligence that it made many of us (often overworked tutors) seem groggy-eyed and aloof by comparison.
Now, this probably sounds like I’m ringing the death knell for human tutors, and that my holy cow moment was witnessing a chatbot that would upend the tutoring industry.
But that’s not quite it. My OMG moment was:
Average tutors are going to be displaced overnight, and high-quality human tutoring is going to be in higher demand than ever.
You’re probably thinking, wait, won’t OpenAI’s tutor simply supplant every tutor? After all, it has a personality so winsome it’s as if you’re interacting with Scarlett Johansson, as she’s depicted in the movie Her.
While that may be true, there’s a level of pedagogical insight and domain expertise that the AI tutor likely won’t come to possess, at least for the foreseeable future.
Human Tutor vs. ChatGPT in dissecting a question
To show you what I mean, I’m going to focus on the SAT. I have over 20 years’ experience teaching test prep, where I made a name for myself as the “SAT and GRE guy” at the online test prep company Magoosh. From all that experience, I believe I’ve developed a deep sense of how the test makers write their questions, and how they try to “trick” the test taker.
Here’s an actual question from an official SAT exam. And don’t worry, there won’t be a quiz afterwards!
The point is to show you how the chatbot will likely come up short both on pedagogical insight and domain expertise.
If you answered this question incorrectly, you probably picked A) rather than the correct answer, B). A) is the trap answer because it preys on the test taker’s tendency to confuse the overall context of the sentence with what the blank itself is asking for.
Let me explain. The context of the sentence is that binary star systems make it difficult for planets to form. They’re in a sense indiscernible, if they even exist at all. So our brain, when it sees “lacked” and “discernible”, might end up thinking back to this part of the sentence.
But the blank in the sentence is focused on the explanation, not on the formation of the planets.
We know that these planets shouldn’t form, yet they do. These two scientists come along and show that there are a lot of complex factors involved. So it’s not surprising that the existence of these kinds of planets has lacked a straightforward explanation.
In fact, I can imagine a student being tripped up by thinking that the complex factors involved make the explanation not discernible (though this logic would result from a shaky understanding of the word “discernible”).
In the last few paragraphs, I did two things.
I got inside the test taker’s head, as shown above.
I relied on my domain knowledge of SAT questions. In this case, it’s knowing that one “trick” the test writers use for harder fill-in-the-blank questions is to include a wrong answer choice that describes the context of the sentence but does not describe the exact part of the sentence where the blank appears.
For a student to improve, they have to be aware of the traps and of the thought processes that lead them into those traps. My hunch is that ChatGPT 4o, and by extension the new chatbot, will not address either of these points.
Before we launch into the AI’s response, a quick disclaimer!
Since the chatbot has yet to be released to the public (or at least a vast majority of the public), it might seem that I’ll have to indulge in some serious speculation. However, we do know that the chatbot is running on the “brain” of ChatGPT 4o, the latest model, which I do have access to.
The chatbot is not going to spit out the answer the way ChatGPT 4o would (the chatbot engages in a real-time conversation with the user). But its answers and the direction it takes will be similar to the response below.
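If you want to try something similar yourself, here’s a minimal sketch of how you might send this kind of prompt to GPT-4o through OpenAI’s Python client. The placeholder question text and the exact prompt wording below are illustrative assumptions, not my actual setup (I simply used the ChatGPT interface).

```python
# Minimal sketch: asking GPT-4o to explain a missed SAT question.
# Assumes OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()

# Placeholder: paste the full SAT question and its answer choices here.
sat_question = "..."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": (
                f"{sat_question}\n\n"
                "I picked (A), but the correct answer is (B). "
                "Why is (B) right, and why is (A) wrong?"
            ),
        }
    ],
)

print(response.choices[0].message.content)
```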
Now onto ChatGPT’s explanation
With that out of the way, the prompt I added is one that reflects what most of us want to know when we miss a question.
What is ChatGPT 4o’s response?
I like the analysis in that it is accurate and concise. But it’s an answer that a smart pupil would give; it’s not an explanation an experienced tutor would give. In other words, it explains its reasoning but makes no attempt to understand the typical thought processes of someone who would fall for A). It just spits out its logic.
Gone is the pedagogical insight. And because it doesn’t even don the tutor’s hat in the first place, there’s no domain expertise to draw from.
Granted, it’s not as simple as that. After all, a tutee could ask the AI tutor a follow-up question, such as “I don’t really understand your explanation” or “Could you please elaborate on answer choice (A)?”
Let’s see what we get:
How did the AI do this time?
This is a little better because it gives an in-depth explanation of the use of “discernible” in this context. And I think someone who missed this question would likely “get it” by now.
So isn’t understanding an explanation the Holy Grail of learning?
Not at all.
There’s a decent chance that the student will fall for the same trap again because they were never made aware of the flawed reasoning that led them to the wrong answer in the first place. And they won’t further learn — at least from the AI — that this is one of five or six traps that the SAT uses. When traps are categorized, students are better able to identify them and thus avoid falling for them.
In the end, with ChatGPT 4o, we got very little pedagogical insight.
Of course, it is possible that the AI chatbot will ask right off the bat why a student picked the answer they did. But even then, a student often isn’t quite aware of their reasoning, or at least has trouble articulating it. And even if a student can articulate their reasoning, the AI won’t draw on the domain expertise of “SAT question traps” the way an expert tutor would.
Expanding our pedagogical horizons
While the use case above focused on the SAT and test prep, you can see how, especially with more complex material, having a tutor highly experienced in a specific area can make a huge difference.
But most tutors don’t have that depth themselves. They’d give an explanation very much like the one ChatGPT gave (explaining their logic and reasoning, often repeating it with slight variations), but they wouldn’t try to unpack the student’s thinking. And for that reason, I believe the OpenAI tutor is going to replace most tutors once the idea of having an AI tutor becomes more palatable to most people, especially parents.
High-quality human tutors, on the other hand, will be in more demand than ever. They will combine pedagogical insight and domain expertise in a way that a chatbot won’t, unless that chatbot is specifically trained on a proprietary corpus. But even that corpus would have to be vetted to ensure it is high quality, or at least better than what the current AI bot offers. And from my experience, both in test prep and beyond, there’s a sea of mediocre pedagogical material out there.
That is not to say domain-specific AI tutors aren’t on the horizon. But even then, there’s more to an exceptional tutor than just domain expertise and pedagogical insight. They inspire, mentor, and often become role models for their students. They bring real-world experience and personal anecdotes that make learning relatable and engaging. This human element is crucial, especially for students who thrive on personal interaction and motivation.
So again, the demand for top-tier human tutors will increase. Parents and students will seek out those who can provide the added value of personal engagement, mentorship, and bespoke learning experiences that AI cannot fully emulate.
Final Thoughts
In essence, the advent of AI tutors will (perhaps paradoxically) raise the bar for human tutors. The profession will evolve, and the best educators will adapt, leveraging AI as a tool to enhance their teaching. The future of tutoring, then, lies in a hybrid model where AI supports and complements the work of skilled human tutors, creating an enriched learning environment that benefits from the strengths of both.
So, while the rise of AI tutors marks a significant shift in the landscape of education, their emergence also highlights the irreplaceable value of human connection and expertise in the learning process. The challenge and opportunity for tutors now is to embrace this technology and learn how to leverage it to augment their craft, all while still leaning on their unique qualities as human tutors, the very qualities that make them indispensable.