In the weeks since I last posted, OpenAI has put out two “educational resources”: a student guide to writing with ChatGPT and “ChatGPT Foundations for K-12 Educators,” a free online course launched in partnership with Common Sense Education. These are, at best, press releases for ChatGPT: resources that fail to put the tool in proper context and offer no insight into how to mitigate its flaws. OpenAI is, unsurprisingly, more interested in selling the tool than in helping educators understand the technology behind it (or in making them aware of all the other non-OpenAI tools they could use instead).
These resources also reflect a problem schools have long had with educational technology: focusing on tool-based tactics instead of prioritizing the durable, transferable skills that empower people to learn how to investigate technology and use it in service of their own goals.
Think of it this way: In less than two minutes, you can have ten tabs in your browser open, each with a different generative AI tool that makes tempting promises about its benefits to education. Do you know enough to be able to access these tools and try their features? Can you assess how safe each one is? How valuable each one is to you and your students? Can you mix and match features across tools to design meaningful learning experiences or personalize assistance to your own context? Learning how to answer these questions is more valuable than learning “top tips” for one tool owned by one company, a tool that will, by this time next year, look a lot different than it does now.
Over the next couple of months, I’m going to write a few posts about skills that both apply to generative AI and look beyond it. These are skills that we have more research on, that we have more experience practicing (consciously or not), and that do not depend on any single technology or tool. In a complex world defined by rapidly emerging technologies, these skills will continue to matter.
Extending the Mind
To start, I’m going to lean on Annie Murphy Paul’s excellent 2021 book The Extended Mind, a title she takes from a phrase coined by the philosophers Andy Clark and David Chalmers. Paul uses decades of research to debunk our “brainbound” notions of what, why, and how we think. Our brains are not input-output machines like computers; instead, we use our bodies, our surroundings, and our relationships to help us think, as extensions of the mind that strengthen our ability to build new knowledge and skills.
Consider Paul’s argument that experts do not rely solely on their own minds to develop mastery, a rebuttal to the popular (and very brainbound) theory of the “10,000 hours” required for expertise: “Experts are those who have learned how best to marshal and apply extraneural resources to the task before them… They are more apt than novices to make skillful use of their bodies, of physical space, and of relationships with others. In most scenarios, researchers have found, experts are less likely to ‘use their heads’ and more inclined to extend their minds.”
Despite all the compelling evidence of the value of extending the mind, Paul finds that schools and workplaces don’t know about, much less encourage, the skills and strategies people need in order to extend their minds effectively. She outlines a number of them in her book, and I recently reread it in search of potential applications to generative AI.
Offloading can be good for us
One of the primary worries and objections I hear about generative AI is “cognitive offloading,” using our external environment to reduce the cognitive load of a task. This has been a concern with technology for decades. For example, we no longer know how to do certain kinds of math because of the calculator. Or, we no longer know how to navigate effectively because of GPS. Or, nobody knows anybody’s phone number anymore because we have contact lists on our mobile devices. The concern is that offloading results in overreliance on technology and thus loss of skill and knowledge.
But certain kinds of offloading have well-documented benefits. What Paul calls “continuous offloading,” like writing in a journal or taking daily notes, has been shown to result in deeper and more creative thinking. Graphic organizers and mind maps help us rely on sources beyond our own memory. Physical gestures help us remember (think about people who do math on their fingers). Simply talking about or debating an idea with another person reshapes and extends our own understanding.
Generative AI can be used as a tool for positive offloading. I think about a student I met who takes pictures of her handwritten class notes and uploads them to ChatGPT to help check for clarity and generate review questions. I think about how Stefan Bauschard uses AI with his debate students to externalize process arguments as they compose them. I think about a teacher I work with who asks his students to use voice-based AI tools to sharpen their ideas and build confidence for class discussions. And I recently wrote about how using NotebookLM revealed connections and ideas in my own work that had not occurred to me.
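For the technically curious, here is what that notes-to-review-questions workflow might look like in code. This is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and file name are my own illustrative assumptions (the student I met simply used the ChatGPT app):

```python
# A minimal sketch: turn a photo of handwritten class notes into review
# questions. Model name and prompt wording are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def review_questions_from_notes(image_path: str) -> str:
    # Encode the photo of the notes so it can be sent inline.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model would work here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "These are my handwritten class notes. Point out "
                    "anything unclear, then write five review questions "
                    "based on the main ideas."
                )},
                {"type": "image_url", "image_url": {
                    "url": f"data:image/jpeg;base64,{image_b64}"
                }},
            ],
        }],
    )
    return response.choices[0].message.content

print(review_questions_from_notes("notes_page1.jpg"))
```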
Make the abstract concrete
To extend our minds, Paul argues, “we should endeavor to transform information into an artifact, to make data into something real—and then proceed to interact with it, labeling it, mapping it, feeling it, tweaking it, showing it to others.” Humans are more effective at dealing with the concrete than wrestling with the abstract.
In education, so much of effective pedagogy is making the abstract concrete for students. We do demonstrations in science class, we use graphs and visualizations in math, we do reenactments and role plays in language and history classes. The power of artifacts is why examples are such effective learning tools. Paul cites the work of Ron Berger and his emphasis on models of excellence. If you watch his “Austin’s Butterfly” video, you can see the impact a good example has on a young person’s understanding of “excellence.”
This is an area where generative AI can be an assistive technology. At the most basic level, interacting in a text-based chat with AI moves the thought from the brain to the screen. You get to read and react to something rather than simply think about it. I’ve seen teachers create galleries of models and examples with generative AI, where before it would take them hours to create just one or two. I’ve seen both students and teachers use image generators like Adobe Firefly or audio generators like ElevenLabs to see or hear text or concepts in new ways. These uses of generative AI focus on its ability to produce content quickly and in many different formats. Instead of thinking for us, it helps us concretize our own thoughts.
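If you want to see how a teacher might script that “gallery of models” idea, here is a hedged sketch. The assignment text, the quality levels, and the prompt wording are all my own assumptions, not a recipe from any of the teachers I describe:

```python
# A sketch of building a "gallery" of model answers at different quality
# levels for students to compare. Prompt wording and levels are assumptions.
from openai import OpenAI

client = OpenAI()
assignment = "Write a one-paragraph claim about the causes of the Dust Bowl."

gallery = {}
for level in ["developing", "proficient", "exemplary"]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Produce a {level}-quality student response to this "
                f"assignment, so a class can compare examples:\n{assignment}"
            ),
        }],
    )
    gallery[level] = response.choices[0].message.content

# Print the gallery so a teacher can curate before sharing with students.
for level, sample in gallery.items():
    print(f"--- {level} ---\n{sample}\n")
```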
Re-spatialize
Our brains are designed to think spatially. I love Paul’s explanation that humans evolved to look at a landscape and find a path through it. This spatial orientation of our brains is why devices like memory palaces, concept maps, and data visualizations are so appealing and effective: they organize information into spaces that our brains are built to navigate.
In an educational context, Paul cites learning activities like sketching, mapmaking, diagramming, and creating charts and tables as research-based extension tasks that help students process and remember. Using Post-it notes in classes or meetings allows us to visualize our thoughts and literally move them around. Even using our hands to try to make a shape out of an idea has been shown to clarify thinking.
Generative AI’s multimodal capabilities can be helpful in this area. I meet teachers and students who use AI to help them reformat prose into bullets, tables, or outlines to make it easier to understand. AI can take diagrams or other spatial tools we create ourselves and help us improve them, annotate them, or imagine ways to use them for deeper thinking. In my own work, I use AI graphic tools like Claude’s Artifacts and Napkin not to create amazing visualizations—which they never are—but rather to try to see something I’m reading or something I’m thinking about in new ways.
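As a small illustration of that reformatting use, here is a sketch that asks a model to re-spatialize a paragraph into an outline and a table. The sample prose and the prompt are assumptions for illustration only:

```python
# A sketch of "re-spatializing" prose: ask a model to restructure a
# paragraph as an outline and a table. Prompt wording is an assumption.
from openai import OpenAI

client = OpenAI()

prose = (
    "Photosynthesis converts light energy into chemical energy. "
    "The light-dependent reactions occur in the thylakoid membranes, "
    "while the Calvin cycle takes place in the stroma."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Restructure this paragraph twice: first as a nested outline, "
            "then as a two-column markdown table (process | location):\n"
            + prose
        ),
    }],
)
print(response.choices[0].message.content)
```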
Re-embody and re-socialize
This one is a bit of a trick. Not all of Paul’s ideas can or should apply to generative AI; in fact, many of them demand the rejection of technology. For example, one proven extension strategy is to “alter your state,” which means physical movement or changing your environment, a literal stepping away from the task at hand in order to engage your mind and senses in different stimuli. The long walk I take every day with my dog is an essential element of my work; many of the ideas you read about in this Substack occur to me when I’m on that walk, not when I’m sitting at my computer messing around with AI.
Another extension strategy is to “re-socialize,” which essentially means to bring your thinking to another person and talk about it. I suppose you could use this strategy with generative AI by asking it to question or reframe your ideas, but as Leon Furze writes, these tools are trained to be sycophantic, so you would have to explicitly prompt bots to push against your ideas. In addition, Paul emphasizes that learning from interacting with humans includes being aware of emotional, physical, and other non-verbal reactions in us and the other person. For some extension strategies, we simply need a physical environment and other human beings.
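If you do want to experiment with a chatbot as a discussion partner, one way to counteract the sycophancy Furze describes is a system prompt that tells the model to push back. A hedged sketch, with prompt wording that is entirely my own and untested:

```python
# A sketch of prompting a model to push back rather than flatter.
# The system prompt wording is my own assumption, not a proven recipe.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": (
            "You are a skeptical discussion partner. Do not compliment or "
            "agree by default. Identify the weakest point in the user's "
            "idea, offer one counterargument, and ask one probing question."
        )},
        {"role": "user", "content": (
            "My thesis: homework should be abolished in middle school."
        )},
    ],
)
print(response.choices[0].message.content)
```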
An Ecosystem of Assistance
Paul’s book is a good reminder that part of our response to generative AI and to educational technology in general should be learning when not to use it, when a tech-free approach to thinking is not just useful, but preferable. What’s important, Paul argues, is not that we prioritize certain extension strategies over others, but that we embrace the “loopiness” that makes us human: moving in and out of our brains to help us think. We need to make loopiness a habit, embedding extension activities into everyday experiences for ourselves and our students. Some of these might involve technology like generative AI. Many will not.
This was a connection I made between Paul’s book and Ethan Mollick’s book about generative AI, Co-Intelligence. Human brains do not work like computers, and large language models do not work like human brains. Mollick argues that the power of generative AI lies in our ability to use it to extend our own minds. To do this, we must 1) act on an understanding of what AI is good for (and not good for) and 2) exert agency over AI by acting as a “human in the loop.”
Generative AI is a powerful and complex addition to the ecosystem of assistance in which we and our students are immersed, assistance that includes, but is not limited to, technology. When students want assistance, they look to their teachers, but they also look to family members, peers, tutors or coaches, the internet and social media, the library, and, now, generative AI. When educators want assistance, they use many of the same tools. What makes this assistance “right” or “wrong” is a question that is far bigger and more important than generative AI. The long-term question here is not “how to use ChatGPT”; it’s how to teach people to identify meaningful, healthy assistance and use it to extend their minds.
Upcoming Ways to Connect with Me
Speaking, Facilitation, and Consultation
If you want to learn more about my work with schools and nonprofits, reach out for a conversation at eric@erichudson.co or take a look at my website. I’d love to hear about what you’re working on.
In-Person Events
February 24: I’ll be at the National Business Officers Association (NBOA) Annual Meeting in New York City. I’m co-facilitating a session with David Boxer and Stacie Muñoz, “Crafting Equitable AI Guidelines for Your School.”
February 27-28: I’m facilitating a Signature Experience at the National Association of Independent Schools (NAIS) Annual Conference in Nashville, TN, USA. My program is called “Four Priorities for Human-Centered AI in Schools.” This is a smaller, two-day program for those who want to dive more deeply into AI as part of the larger conference.
June 5: I’m a keynote speaker at the Ventures Conference at Mount Vernon School in Atlanta, GA, USA.
June 6: I’ll be delivering a keynote and facilitating two workshops (one on AI, one on student-centered assessment) at the STLinSTL conference at MICDS in St. Louis, MO, USA.
Online Workshops
January 23: I’m facilitating an online session with the California Teacher Development Collaborative (CATDC) called “AI and the Teaching of Writing.” We’ll explore the impact generative AI is having on writing instruction and assessment, and how to respond. Open to all educators, even if you don’t live or work in California.
Links!
I did two online events with Toddle, and the recordings are free to view. The first is a video podcast episode where I discuss the ideas, strategies, and tools I prioritize in my work with schools. The other is a webinar for technology leaders on “Selecting the Right AI Tool for Your School.”
The International Baccalaureate (IB) has released a document with 13 AI scenarios and how IB teachers should respond to each one.
If you’re interested in a direct response to OpenAI’s ChatGPT guide for students, I liked Arthur Perret’s “A Student’s Guide to Not Writing with ChatGPT.”
Pooja Agarwal, an expert on retrieval practice, offers eight prompts for teachers to push student thinking beyond what a chatbot might feed them.
“This is the ongoing blind spot of the ‘AI is fake and sucks’ crowd. This is the problem with telling people over and over again that it’s all a big bubble about to pop. They’re staring at the floor of AI’s current abilities, while each day the actual practitioners are successfully raising the ceiling.” Casey Newton with a helpful reminder that a lot of AI skepticism is its own kind of hype. Look closely, and you’ll see a lot of other, real reasons to be curious and worried.
I am a passionate advocate for student journalism. Here’s just one more example of why it matters: the Harvard Crimson found that almost 10% of Harvard’s recent undergraduates come from just 21 high schools.