Thanks for the reference. It's funny that you describe my piece as anti-AI just because I'm critical of the new Study mode; the irony is that I think AI is very useful for a lot of things. For the record, I think your observations are fair - I don't have good answers to all those questions, nor do I know precisely how student use of AI (whether teacher-abetted or not) is likely to unfold any differently this year. Those "if" questions are very much conditional - I don't think kids want to learn alone with a chatbot, but they will if the adults in charge of teaching them aren't doing a great job. The Study mode struck me so hard because a) it was such an easy fix that could have been done two years ago, and b) it makes it absolutely clear that OpenAI is racing to corner the education market. My experience with students who are very honest about their AI use is that most don't use it very well, which means most aren't really looking to "study" with it in the traditional sense. Maybe that will change. My other two cents is to be wary of reading too much into the GPT-5 rollout - yes, it seems we may get only incremental improvements for a while (a good thing, since it will give a lot of people time to catch up), but I'm always suspicious of people taking victory laps before the final verdict is in. It's been a couple of weeks; let's see where we are a year from now. But I appreciate your engaging with my writing.
Thanks, Stephen! I felt a bit bad that yours was the piece I was most critical of when I saw that you were the first to like and comment on my piece - especially given the kindness of your response. I agree with your assessment that OpenAI seems to be racing to corner the education market and, like you, I don’t have definitive answers to the very good questions you posed.
My thoughts that AI has plateaued somewhat are based largely on my own instinctive sense of the limitations of what it's built on (namely, the statistical averages of human language, without any real understanding of even its own responses) and on the fact that someone as well-informed as Gary Marcus seems to share that instinctive response. That being said, computer technologies in general, and AI technologies in particular, have plateaued before and then taken off again to even greater heights, so even if it is the case that the technology has plateaued, I agree that the best we can say is that it has plateaued "for now".
Finally, I do appreciate your cautious response to the use of AI in your history classes, especially as I myself started out as a Social Studies-History teacher and, as I’ve noted, can very much see the limitations of AI in that field!
Don't feel bad at all. I've never been a social media person, and putting your thoughts out to the wide world on Substack means they get tested in real time, which is important. Not everyone is going to react the same way or agree with everything; op-ed writers must develop a thick skin early on. I've learned a lot from other perspectives, and it's shaped my own attitudes, hopes, and fears around these fundamental questions - will kids actually learn to use AI effectively, and should we be teaching them to do so? So far, I think AI has done more harm than good in schools, but the devil is always in the details. There are ways it can be effective - even in history I've had some excellent results with Deep Research prompts that are not simply "writing the paper" and can provide genuine and useful material much faster and more efficiently than before. The big question is whether we show kids how to use and test it, which makes all the difference in the world. My issue with the incredible glee that people opposed to AI are taking in GPT-5 is twofold - first, it's not at all clear that the model is as "bad" as people are saying; I've seen a lot of mixed coverage. But, more importantly, OpenAI is one company - where are Google, Meta, Anthropic, and the others on this? Honestly, I find the entire AGI debate tedious - if and when it does arrive, the questions at that point are going to be so much bigger, and we can't even answer the questions posed by what we have now. There is an excellent piece in yesterday's Times by Eric Schmidt and Selina Xu making this very point, comparing the way the US and China are reacting to the current AI systems - it's worth a read. As a society, we still seem extremely lukewarm (at best) on AI and, ironically, a lot of that can be laid at the feet of the people creating it.
Yes, it's not my intent to gleefully dogpile on GPT-5 here. In fact, while I had to push back on its hallucination of a non-existent 1927 book on the Chilcotin uprising in my historical test (and, fortunately, had the requisite knowledge to do so), I was impressed that it made me aware of an apology by our Prime Minister for the event, of which I had not previously been aware (even if it also hallucinated a non-existent apology by a previous Prime Minister), that it did the research necessary to discover and correct its mistakes when I called it out, and that it found an online PDF of my father's thesis. Even if this is more or less the pinnacle of where LLM AIs get to for a while, it's a pretty impressive plateau!
https://www.nytimes.com/2025/08/19/opinion/artificial-general-intelligence-superintelligence.html