We Should Listen to Our Sci-Fi Prophets
It’s all in Sci-Fi, all in Sci-Fi: Bless me, what do they teach them at these schools?
During the COVID era, or at least by the end of it, I found myself often thinking, “It’s all in Sci-Fi, all in Sci-Fi: Bless me, what do they teach them at these schools?” I don’t imagine that I was alone in this (although the Lewisian shape of the thought might have been unique to me): it seems to me to have been the fairly natural response of any lifelong science-fiction fan who finds themselves actually living in a Sci-Fi dystopia.
But the other half of that thought was a devout wish that society at large should have read—and paid attention to—a lot more 1950s and ’60s science fiction.
I was reminded of this just this morning, when I woke up to find Jamie’s insightful comment on my “AI and Student Agency” post, which is well worth quoting in full:
One of my mental models for thinking about AI is that before AI, humans were the sole means of personalization, and systems were the means of scale and efficiency. Systems themselves could not capture exceptions that weren’t already defined in the system. But AI now is essentially systematizing personalization, enabling personalization, scale, and efficiency together, which in theory is definitely something that can revolutionize education.
I don’t want to fall into the trap of this lofty thinking becoming an “AI saves all” mentality, but I do see that theoretically it should enable a radical change in how we see education - I’m imagining AI everywhere, not just in the learning process but in the selection of projects for students and the curriculum and in testing!
I’m not suggesting this is desirable based on the quality of LLMs we have access to today but rather trying to conceive of a future where AI did enable full personalization at scale in education.
For example, I wonder if, empowered and emboldened by a technological revolution, standardization in education can be rethought entirely, including curricula and even how we assess what it means to be educated in the first place.
Do we need to fit everyone into the same mould, or can we imagine education that rewards excellence in a wider variety of different paths learners take - made practical via AI tooling?
Taking this to what I think is the logical conclusion, what if all education becomes homeschooling due to AI - parents as the primary oversight guiding the AIs that guide the development of their children to reach their unique potential - as you say homeschooling is the ultimate example of personalization and so perhaps that is where education will go eventually.
Ironically the scary AI revolution may actually end by enabling the most human form of education by empowering parents to teach their kids - or this may be my naive techno-optimism speaking :)
The thing about thoughtfully and imaginatively engaging with the trends of one’s time, especially when one does so in the company of other thoughtful and imaginative people, is that one finds one is not alone.
Way back in 1951, noted science-fiction author Isaac Asimov (of Foundation fame) wrote a short story called “The Fun They Had” as a personal favour to a friend. It was initially published in a children’s newspaper, and became, as Asimov put it, “probably the biggest surprise of my literary career”, reprinted as it was over 30 times by 1973, with more reprintings planned. It too is worth quoting in full—it’s only two pages long—but, in the interests of avoiding any copyright strikes, I’ll be at least a little bit selective.
Set in 2155, it begins with Margie’s friend Tommy discovering “a real book”:
It was a very old book. Margie’s grandfather once said that when he was a little boy his grandfather told him that there was a time when all stories were printed on paper.
They turned the pages, which were yellow and crinkly, and it was awfully funny to read words that stood still instead of moving the way they were supposed to — on a screen, you know. And then when they turned back to the page before, it had the same words on it that it had had when they read it the first time.
“Gee,” said Tommy, “what a waste. When you’re through with the book, you just throw it away, I guess. Our television screen must have had a million books on it and it’s good for plenty more. I wouldn’t throw it away.”
I’ve commented before on the common misconception that newer forms of technology will necessarily entirely replace and make obsolete older forms, but here Asimov is playing this for laughs—as he is doing throughout the story, surmising that the reason it was so popular was that kids got a bang out of the irony—and, in so doing, defamiliarizing our own experience of printed books. While he of course didn’t get the delivery mechanism for e-books quite right, he did successfully predict the main strength of the e-book—you can store a lot of them in one place—as well as one of the technology’s major weaknesses: the words don’t stay in the same place on the page like they do in a printed book.
But then we get to the meat of the matter. The old book, it turns out, is about school. We then get a glimpse of Margie’s experience of school in 2155 and why she hates it:
Margie always hated school, but now she hated it more than ever. The mechanical teacher had been giving her test after test in geography and she had been doing worse and worse until her mother had shaken her head sorrowfully and sent for the County Inspector.
He was a round little man with a red face and a whole box of tools with dials and wires. He smiled at Margie and gave her an apple, then took the teacher apart. Margie had hoped he wouldn’t know how to put it together again, but he knew how all right, and, after an hour or so, there it was again, large and black and ugly, with a big screen on which all the lessons were shown and the questions were asked. That wasn’t so bad. The part Margie hated most was the slot where she had to put homework and test papers. She always had to write them out in a punch code they made her learn when she was six years old, and the mechanical teacher calculated the marks in no time.
The Inspector had smiled after he was finished and patted Margie’s head. He said to her mother, “It’s not the little girl’s fault, Mrs Jones. I think the geography sector was geared a little too quick. Those things happen sometimes. I’ve slowed it up to an average ten-year level. Actually, the overall pattern of her progress is quite satisfactory.” And he patted Margie’s head again.
Margie was disappointed. She had been hoping they would take the teacher away altogether. They had once taken Tommy’s teacher away for nearly a month because the history sector had blanked out completely.
There is so much here that is fascinating from an historical perspective, as, once again, Asimov gets some key predictions very wrong, even as he sets up a central insight that I think aligns rather well with what Jamie is speculating might happen. But perhaps my favourite detail is Asimov’s ironic inversion of the “an apple for the teacher” trope, when the County Inspector gives Margie an apple and then takes the teacher apart.
My favourite bit that Asimov gets very wrong is that, at the age of six, Margie had to learn a “punch code” to enter her homework and test papers in a form that the mechanical teacher can then mark instantly. This is a classic case of extrapolating from the present—in 1951, punch cards were the main way of entering data into the computers of the day—and, in the process, missing the invention of all sorts of user-interface advancements, even as Asimov gets the main user interface, the “big screen”, largely right. That being said, this is an occupational hazard that all futurists face: it is impossible to anticipate all future inventions and their implications, and if one gets too speculative, one can not only get it very wrong but also, if one gets it very right, find that the speculative details themselves distract from the story’s central and most meaningful predictions. In fact, I would suggest that one of the key failures of Ray Bradbury’s spectacularly insightful Fahrenheit 451 is actually all the technical details he got right: the book is almost better known for its “clamshell radios” and interactive wall-sized TVs than for its central message about the importance of literature. So, I’d suggest we forgive Asimov this particular spectacular miss and go on to engage with his story’s central prediction, which is, of course, that in the future, when education is entirely computerized, learning will be completely personalized (except, of course, when the mechanical teacher gets uncalibrated!) and will be done at home.
Moving on in the story, Margie now encounters one of the most shocking realizations about education in the past, which leads to what is perhaps Asimov’s most important prophetic insight. Looking over Tommy’s shoulder at the book, she comments:
“Anyway, they had a teacher.”
“Sure they had a teacher, but it wasn’t a regular teacher. It was a man.”
“A man? How could a man be a teacher?”
“Well, he just told the boys and girls things and gave them homework and asked them questions.”
“A man isn’t smart enough.”
“Sure he is. My father knows as much as my teacher.”
“He knows almost as much, I betcha.”
This, I would suggest, gets at the heart of what is really the most important part of the current “AI Crisis” in education—at least for us teachers: the computers know more than we do.
There’s a very personal connection for me here. I got my start at the “homeschooling centre” that is now the online school I work for when I was hired as the English and Social Studies specialist for homeschooling parents who were not confident of their ability to teach their children what they needed to know in the upper levels of high school. The plan was that I and my fellow subject specialists—who were trained teachers, and therefore (presumably) a bit more knowledgeable than the parents in our respective subject areas—were to develop a set of online courses that their homeschooled students could work through at home. Knowledge is important in teaching, but what happens when the computer is more knowledgeable than you are?
We then get to the heart of the matter: Asimov’s central insight about the current educational system and its main weakness versus the personalized future of an entirely computerized education done at home. Continuing from where we left off, above:
Margie wasn’t prepared to dispute that. She said, “I wouldn’t want a strange man in my house to teach me.”
Tommy screamed with laughter. “You don’t know much, Margie. The teachers didn’t live in the house. They had a special building and all the kids went there.”
“And all the kids learned the same thing?”
“Sure, if they were the same age.”
“But my mother says a teacher has to be adjusted to fit the mind of each boy and girl it teaches and that each kid has to be taught differently.”
“Just the same they didn’t do it that way then.”
Learning in lock-step with an age-selected cohort of peers has been the central model of our public educational system since its inception. It’s efficient, even as it of course leaves behind a few of the less motivated students and/or those with less aptitude for the specific subject being taught. But, Asimov speculates, what if it doesn’t have to be that way? What if we could leverage the immense body of subject-specific knowledge that is “out there” by bringing it into our computer systems and then presenting it in a manner that is perfectly personalized to the aptitudes, interests, and current skill level of the individual student? This is, in fact, a version of the AI-driven future that Jamie has postulated in his “naive techno-optimism”. And, to be honest, it may not be that far off.
But what I love about Asimov’s presentation of this vision is that, even as he’s predicting a possible techno-utopian future, he’s also reflecting on what is lost as it is left behind. Margie’s newly repaired/recalibrated mechanical teacher’s screen lights up and it says,
“Today’s arithmetic lesson is on the addition of proper fractions. Please insert yesterday’s homework in the proper slot.”
Margie did so with a sigh. She was thinking about the old schools they had when her grandfather’s grandfather was a little boy. All the kids from the whole neighborhood came, laughing and shouting in the schoolyard, sitting together in the schoolroom, going home together at the end of the day. They learned the same things, so they could help one another with the homework and talk about it.
And the teachers were people…
The mechanical teacher was flashing on the screen: “When we add fractions ½ and ¼...”
Margie was thinking about how the kids must have loved it in the old days. She was thinking about the fun they had.
And this is why we should be listening to our science-fiction prophets. Not because they get everything right—they’re not inspired in that way—but because they are imaginatively engaging with the future impact of present technologies with a story-driven focus that is experiential, and thus fundamentally human.
Neither Jamie, nor Asimov, nor I can actually predict the future—we’re not really prophets—but insofar as we reflect imaginatively on current systems and future technological trends in an embodied and story-driven way, we may well stumble across key insights that are at least worth considering as we pick our way through the fantastically cluttered semantic wastelands of our AI-driven future. If we inhabit this post-apocalyptic landscape as people informed by the past predictions of our greatest speculative-fiction authors, we may just be able to pick out, from what the future reveals to us, that which preserves true humanity, even in the face of our future robotic overlords.
OK, so that’s a little bleaker than I intended, so I won’t end quite there. I do think that Jamie is onto something here. As a homeschooling parent, and as an in-person and online teacher, I care deeply about instilling in my students and children the same love of learning that has inspired me throughout my life. And, as a technology aficionado, I don’t see AI as the enemy: it’s just another tool—an oddly shaped one, to be sure, and definitely a disruptive and revolutionary one—but one that is, as always, best used in collaboration and dialogue with those who know us well.
And, as a lover of literature, I would suggest that this lifelong learning community should include not only our parents and teachers, but also some of the greatest prophets of our age: the writers—Sci-Fi, fantasy, and otherwise—who have speculatively engaged with what it means to be truly human, who have thought deeply about the implications of current societal realities and the potential impacts of future technological developments, and who have then embodied these thoughts in the fundamentally human medium of story. None of the members of this community—barring access to genuine divine inspiration—ever gets everything completely right, of course, but that’s why it needs to be a learning community. And, if we have that community as our foundation of learning, AI—and any other future technologies we aren’t yet able to imagine—can be a valuable addition to our educational toolset.
And one group of “prophets” that might actually be more valuable additions to our learning community than we realize, are those from the “golden age” of science fiction, when the “technological revolution” was really just getting started.
Many, many good points. But I doubt that computers are capable of responding to questions that their programmers have not anticipated. The only thing I can imagine computers being reliably good at is storing information. Their ability to return all of that information is yet to be determined. Neither can they yet accurately rate the value, helpful or harmful, of their output. Human ability is limited enough in that respect. The story of Adam and Eve in Genesis 3 indicates that very clearly: not only in the couple’s decision to eat the forbidden fruit, but in the ancient sage’s conception of an omniscient God who 1) put the tree in the garden, 2) told the couple not to eat from it, and 3) created Satan, who could rebel. The argument that God would not have been satisfied with having created what were, functionally, robots can bear only so much weight. Surely an omniscient, omnipotent God could have found a solution.
I really like the point of sci-fi helping inform us as we navigate and engage with these technological changes. I need to read more sci-fi! Everyone does!
I’m generally not a fan of the mindset that there’s an inevitability to how this will all go down, but rather that we ought to participate in it with eyes open.
I like what Steve Jobs said: “Life can be so much broader, once you discover one simple fact, and that is that everything around you that you call ‘life’ was made up by people who were no smarter than you. And you can change it, you can influence it, you can build your own things that other people can use. Once you learn that, you’ll never be the same again.”
We’re not just watching something happen, we can influence it. We can bypass incentives and help push for a future that is most human!