Why is ChatGPT really a threat to the education system? In this article, which started as a video essay on YouTube by the channel “Esh Tatla,” the author explores the impact of ChatGPT on modern education, examining the fears it has stirred among educators, the historical context of technological advancements, and how AI might revolutionize learning by addressing the fundamental flaws in our outdated educational system.
When ChatGPT was launched back in 2022, it became the fastest-growing internet app of all time—an AI chatbot that could answer virtually any question within seconds. People began turning to ChatGPT instead of Google. Entrepreneurs began using ChatGPT as their personal assistant. Developers began using ChatGPT to replace themselves. And students—well, students began using ChatGPT to cheat.
- “Teachers are concerned about the new AI program, which can generate answers and write essays with perfect grammar.”
- “Students are using it to cheat on tests and even write essays. This new technology makes it easier to cheat.”
- “… raising questions in schools about its use and what is considered cheating.”
- “… artificial intelligence being used to cheat.”
- “… called ChatGPT.”
Experts are claiming that ChatGPT could put an end to education as we know it. And they might be right, but not for the reason you think.
Let’s start with the education system. At around the age of five, children are sent to school to prepare for the real world. But the real world is changing very fast. Our current education system was modeled back in the 1840s, during the Industrial Age, when the goal of education was to mass-produce factory workers who were docile, obedient, and could follow basic orders. And it worked well: it was an economically viable model that created agreeable workers. But nearly 200 years later, we’ve moved a long way past the Industrial Age, yet classrooms still look identical.
In 1956, educational psychologist Benjamin Bloom outlined a framework for understanding how we learn, in the hope of teaching students more effectively. It became known as Bloom’s Taxonomy. In short, it divided cognitive learning into six categories, ranked hierarchically by how much thinking each requires. Simply memorizing and recalling information demands little brain processing power, whereas tasks that emphasize creativity, like thinking outside the box, demand far more. The Industrial Age didn’t need people to think outside the box, so the education system was built on the lower orders of thinking.
This is the foundation of the classroom we see today: a 30-to-1 student-to-teacher environment where students are expected to passively absorb the information presented to them and are then assessed on their ability to remember it in standardized tests. Yet now, in the 21st century, the world needs innovators, and we’ve forgotten that at the top of Bloom’s Taxonomy sits creativity: the ability to imagine and build something from nothing. The current education system doesn’t acknowledge creativity. It wasn’t designed for it. In fact, it sucks it out of us.
In 1968, a study followed 1,600 children from the age of five until they were 15. The aim was to determine the effect that school had on creativity levels. In the study, creativity was measured by divergent thinking: children were asked how many uses they could think of for everyday objects like a paperclip, and the more uses they could come up with, the more creative they were deemed to be. The average adult can think of maybe 10 to 15 uses, but the study found that before school, 98% of five-year-olds scored at the “genius” level on this measure of creativity. The same children were then retested at 10 years old and again at 15, and the results were astounding: only around 30% still scored at that level at age 10, and just 12% by age 15. “What we concluded,” wrote George Land, the author of the study, “is that non-creative behavior is learned.”
Our education system is not just broken; it’s outdated. It continues to churn out conformist, cookie-cutter people who can’t think for themselves. So let’s fast-forward to the present day. Educators are describing ChatGPT as a tool for cheating, but maybe they just have an outdated definition of what cheating is.
Let me give you an example. In school, you’re told to sit down, stop talking, get out your books, turn to page 69, and complete the questions. No peeking at the answers in the back. You’re rewarded for doing exactly what you’re told. In this scenario, cheating could look like talking to your friend and sharing answers, otherwise known as collaboration. In the real world, cheating could look like googling how to work out a specific question, otherwise known as productive self-learning.
Do you see my point? There is no fixed, agreed-upon definition of cheating. None of this is really about cheating. It fundamentally comes down to one thing: fear. Specifically, a fear of AI.
“It’s a bit of an odd one that we thought that language, in a way, was quite human, quite unique, and quite special. And it turns out that it can be emulated quite effectively based purely on statistical models. That, I think, is a surprise for a lot of us. It’s quite unsettling as well.”
That’s Dr. Lewis Potter. He’s the founder of Geeky Medics (geekymedics.com), the largest medical education platform in the world. They’ve been incorporating ChatGPT into their platform to help health professionals all around the world learn more effectively.
We recently sat down and discussed this fear of AI in education.
“History is littered with these technological innovations that come about, and the current status quo and the people involved in that feel threatened and feel like, ‘Oh, is that going to take away my role, my skill set?’”
Whether it was the airplane, the television, the internet, or even writing on paper (the Greek philosopher Socrates warned that the invention of writing would promote forgetfulness), history keeps repeating itself. The adoption of something transformative is always met with resistance until it crosses something called “the chasm”: the point where the world begins to appreciate an innovation’s transformative value. Only once society has moved past the chasm do we begin to see mass adoption, and depending on the innovation, that can take decades.
So the question is: how do we speed that up? We need to tackle the root cause. Why are people fearful in the first place?
“There’s a great example of the elevator. When that was invented, people previously pulled on ropes to lift you up. When the automated elevator came out, nobody would get in them because they were terrified of putting their lives in the hands of this machine.”
Let’s draw some parallels from the past. Back in 1825, people believed that trains would rip you apart and that high speeds would cause your body to melt.
We now, of course, know this isn’t true. But AI is feared for a very similar reason: people fear what they don’t know or understand. A lack of knowledge about what AI actually is and does, combined with Hollywood pushing a narrative that AI will take over the world, is producing the same rejection we saw with the train. Since the beginning of automation itself, people have feared that their jobs would be replaced by machines.
Two hundred years later, we’re still seeing these exact same headlines.
Perhaps the most relevant example is the introduction of calculators in schools. Back in 1966, there was a huge worry that using a calculator would diminish people’s ability to think for themselves.
Sound familiar? It’s the exact same argument that educators are using when trying to ban ChatGPT.
In every single example throughout history, the same cycle repeats itself.
“If you look back, all that happens is that we adapt. Almost always, human plus technology is better than technology alone.”
With our minds now open, let’s explore how AI like ChatGPT can fix our broken education system. In 1984, Benjamin Bloom identified what he called the Two Sigma Problem: he found that students who received one-on-one tutoring performed two standard deviations better than students who learned in the traditional 30-to-1 classroom environment.
To give some context, that’s enough to turn an average student into an excellent student.
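As a rough sanity check on what “two standard deviations” means, here’s the arithmetic, assuming test scores are approximately normally distributed (an assumption for illustration, not a detail from Bloom’s paper):

$$P(X \le \mu + 2\sigma) = \Phi(2) \approx 0.977$$

In other words, a tutored student who would have sat at the 50th percentile of the conventional classroom now outperforms roughly 98% of it.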
The “problem” part comes from the fact that it isn’t realistic to provide every single person with their very own personal tutor for every single subject, from either an economic or a practical point of view. Since 1984, educators around the world have been scratching their heads, thinking of ways to overcome this. Until now.
AI tools like ChatGPT turn Benjamin Bloom’s Two Sigma Problem into a Two Sigma Opportunity—an opportunity for every single student to have their very own AI personal tutor.
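To make that concrete, here is a minimal sketch of what wiring up such a tutor could look like, assuming the OpenAI Python SDK (v1.x) with an `OPENAI_API_KEY` set in the environment; the model name, system prompt, and student question are illustrative choices, not a prescribed setup:

```python
# Minimal AI-tutor sketch: a single system prompt turns a general chat
# model into a patient, one-on-one tutor for a student's question.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a patient one-on-one tutor. Don't hand over the "
                "final answer immediately; ask guiding questions, check "
                "understanding, and adapt your explanations to the student."
            ),
        },
        {"role": "user", "content": "Why can't I divide by zero?"},
    ],
)

print(response.choices[0].message.content)
```

The point is less the specific API than the design: the same model that educators fear as a cheating machine can, with a single instruction, be pointed at exactly the role Bloom measured as being worth two standard deviations.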