Learning from Things That Are Mostly Smarter Than You
Teaching and learning in an age of effortless answers
The problem with learning from things that are smarter than you is that they sometimes aren’t.
Or, they may actually be smarter in some respects — but often not in the ways that matter most.
We’re entering an era when machines can write essays, solve problems, and even create entire video presentations far faster than any professional could, and so quickly and seamlessly that some students forget to edit the AI prompt and follow-up messages out of their copy-pasted submissions.
One of my programming students recently messaged me, with a mix of delight and disbelief, about his experience of AI’s new coding abilities:
“Mr. Hewlett, I can see why people aren’t bothering to learn coding 😅 — In the space of one hour, I got AI to create a dynamic PHP text adventure, complete with stats & storyline, that was completely bug-free.”
A link followed, along with a note that of course he’d “still make one from my own coding, where I relegate AI to the business of making the program I created look pretty”. (He’s a thoughtful and conscientious student!) I clicked the link and found that the AI had indeed created a program far beyond his abilities, or even my own.
My reply was half encouragement, half warning:
“The issue with AI is that it makes everything look instant and effortless, but if you want to build anything really complex … you’ll need to debug the AI’s code. To debug the AI’s code, you need to know how to code. To know how to code, you have to have done some coding yourself. Hopefully you can see the vicious cycle here…”
He did.
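To make the warning concrete, here is a minimal, hypothetical PHP fragment in the style of a generated text adventure (nothing from his actual game; the names and numbers are mine). The program runs, the output looks plausible, and only someone who can actually read the code will notice that the “game over” branch can never fire:

```php
<?php
// Hypothetical sketch of a subtle, AI-style bug in a text-adventure loop.
// Everything here ($health, the damage value, the turn count) is illustrative.

$health = 10;

for ($turn = 1; $turn <= 5; $turn++) {
    $health = max(0, $health - 3);   // the player takes damage each turn
    echo "Turn $turn - health: $health\n";

    // Bug: '=' assigns 0 to $health instead of comparing with '=='.
    // The expression evaluates to 0 (falsy), so this branch never runs
    // and the player can never actually die.
    if ($health = 0) {
        echo "You have perished.\n";
        break;
    }
}
// Fix: the condition should read  if ($health == 0)  instead.
```

A single missing character, no error message, and a game that quietly stops making sense: exactly the kind of flaw you can only catch if you’ve done enough coding of your own to know what the code is supposed to say.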
Herein lies the educational problem. For the first time, students can produce correct answers faster than they can understand them. The AI can hand them a working program before they’ve learned what a loop is. It can generate a polished essay before they’ve grasped the grammar of thought that essays require.
This creates a not-so-subtle inversion of mastery. We used to learn by gradually approximating the expert’s mind; now we begin by commanding an expert-like machine and work backward toward comprehension — if we ever get there.
And the irony is that to learn effectively from AI, you have to already know enough to know when it’s wrong. And it will be wrong, sometimes spectacularly. But if you can’t tell the difference between brilliance and nonsense, you won’t know when to trust — and when not to.
And that’s why parents, teachers, and the student’s own inquiring mind remain indispensable: not because we can out-calculate the machine, but because, as we bring our hard-won wisdom to bear and as students question in light of what they’ve learned from teachers who have earned their trust, together we can discern. We can see when a perfect answer has missed the point. We can tell when learning has become mere automation.
The danger is not that AI is smarter than us, but that it’s smart enough to make us stop thinking.
To learn from something mostly smarter than you, you have to keep hold of the one thing it can’t imitate: the desire to understand. That means debugging its output, questioning its confidence, and never mistaking fluency for truth.
Because the real intelligence in education isn’t artificial at all. It’s the fragile, persistent human kind — the kind that still asks “why?” even when the machine says “done.”
Every now and then I test the machines to see how good they are getting at writing, or, more importantly, at formulating and articulating actual wisdom. This was one of those tests. After a chat with ChatGPT-5 about the essential principle of keeping a “human-in-the-loop” (a term that apparently originated in military circles, emphasizing human involvement in, and ultimate control over, automated weapons systems, and that has, interestingly, become a key term in AI), a conversation with one of my students got me thinking along different lines, and I decided to try to get ChatGPT to write a reflection piece around our exchange. The result was scintillating on the surface but, as usual, grew somewhat dingier the deeper one dug. Still, it was significantly better than my first test along these lines, so I took ChatGPT’s article as a starting point and edited it for brevity, logic, and consistency. The result is the article above; for anyone interested in how my “human-in-the-loop” edit differs from the original, the AI’s original version can be read here.
More on this, and a completely human-written article, next time, as I share my thoughts on the importance of having a “human-in-the-loop”.