Four Priorities for Human-Centered AI in Schools
Or, what I'm going to be working on this coming year
A week into a month-long break from the road, I’m doing what a lot of educators are doing right now: sifting through everything I’ve done over the last year and making decisions about what I want to keep and what I want to create over the next year.
About 80% of my work has been on generative AI. I’ve visited many schools, facilitated many workshops, been in many meetings, had many conversations, and met many people. I’ve learned a ton, and I’ve seen firsthand how challenging this topic has been for schools. It’s not just a strategic or programmatic question; it’s an emotional and cultural one.
So, the question I’m asking myself right now is: how can what I’ve learned shape what I will do in the coming year? I came up with four priorities that will drive my work. I even made a little card that I’ve taped to my desk:

Augmentation over Automation
Literacy over Policy
Design over Technology
Vision over Decisions
What I have seen in schools, for the most part, is the mirror image of these priorities. Schools are highly focused on the prospect that AI can automate what we do (depending on whom you ask, this prospect is either tempting or terrifying). They are trying to write policy when they can barely keep up with AI’s rapidly evolving capabilities. They are looking for technological solutions to the pedagogical problems that AI raises. They are feeling overwhelmed by the number of decisions they think they need to make. These are understandable responses to a sudden disruption: we elevate the urgent over the important.
If I accomplish anything over the coming year, it will be helping schools approach AI in a more sustainable, more human-centered way. That begins with resetting our priorities.
Augmentation over Automation
Effective, ethical, human-centered use of AI should be defined by augmentation, not automation.
I first wrote about this idea last September, in a post about AI and the question of rigor. It comes from an essay by Erik Brynjolfsson at Stanford, “The Turing Trap.” The basic argument is that if we focus too much on building AI that imitates and replaces humans at their tasks (the goal embodied in passing “the Turing Test”), then we both ignore the true potential of AI and make ourselves more vulnerable to its flaws. Instead, we should focus on how AI might augment human capabilities, allowing us to engage actively with the tool to realize goals and ideas that were previously out of reach.
Nearly a year after I first wrote about it, I’ve found augmentation to be more important than ever, not just for philosophical reasons but for a really important practical one: AI still cannot be trusted to automate most tasks. This past year is littered with broken promises about AI’s automation capabilities, from Google’s terrible AI search summaries to McDonald’s incompetent AI ordering tool to error-ridden AI content farms. It’s also an ethical issue: using AI to automate hiring has raised bias and privacy concerns (take note of this when it comes to admissions in higher education), and using AI to automate the grading of student work is rife with problems.
When I think about augmentation, I think about how one educator used ChatGPT to produce better feedback than Khanmigo, an AI tutor. About students who are uploading photos of their class notes into AI for summary and clarification. About teachers using generative AI to create multiple and multimodal versions of instructions and assignments to support differentiation. About administrators who are doing more and better classroom observations because they can speak their thoughts into generative AI and use it to create concise and usable written reports that might have taken them hours to compose on their own. Looking ahead, I’m intrigued by how tools like Claude’s projects will support more effective collaboration among educators, both within and across schools.

These examples are powerful to me because they are examples of humans exerting their agency over AI, using their imaginations and critical thinking skills to partner with AI’s multimodal capabilities and do new or better things. Finding, sharing, and practicing examples of augmentation will be a priority for me this coming year.
Literacy over Policy
Policy is about control. Literacy is about empowerment.
Yes, schools should spend time articulating their definition of and guidelines for responsible use of generative AI at school. In my experience, these can often be grounded in existing policies about academic integrity, academic assistance, acceptable use of technology, and online behavior. It need not be a long, tortured process.
The problem with prioritizing policy is that all policy does is tell people what to do. It doesn’t teach them how to do it. Prioritizing literacy means providing people with the information and practice they need to make good decisions about AI. AI literacy is a durable, transferable, teachable skill that will last far longer than any AI policy.
Let’s return to how Anna Mills defines critical AI literacy in her presentation “AI for Research Assistance: Skeptical Approaches”, which I first wrote about in a post about student literacy:
To successfully work with generative AI
You ask it for what you want.
Then you question what it gives you. You revise, reject, add, start over, tweak.
To do this, you need
critical thinking, reading, and writing skills.
subject-matter expertise.
knowledge of what kinds of weaknesses to look out for in AI. Let’s call that critical AI literacy.
I’d argue that not only should our students know how to do these things, but educators and administrators should know how to do them before they try to write or implement an AI policy. Writing an AI policy without AI literacy is like writing a recipe for something you’ve never actually cooked.
I’d also argue that our emphasis on policy over the last 18 months has created closed cultures around AI rather than open ones. In a culture defined by literacy, we would recognize AI as a tool to be learned and discussed, and people would feel comfortable asking questions and sharing ideas, navigating the evolving boundaries of appropriate use together.
A culture defined by policy replaces open dialogue with furtive and adversarial behaviors: teachers are using unreliable AI detectors to scan student work, students are using online tools that help them deceive the detectors, teachers are embedding hidden text in assignments to sabotage students who copy/paste them into AI, and students are trading intel on which teachers “know about” AI and which don’t. These behaviors are not new; they reflect years of surveillance culture in schools that pre-date generative AI. This approach distracts from learning and erodes trust.
I have spent a lot of time advising schools on policy in the past year. I’m not sure I’ve used that time well when it comes to helping them prioritize literacy. That will change.
Design over Technology
This is the priority I will take into all of my upcoming work on assessment and AI. I’ve spent most of my career at the intersection of technology and teaching, and if there’s one thing I’ve learned, it’s that good pedagogy trumps a good tool every single time.
The concerns teachers have about AI and assessment are not really about technology; they are about friction. Technology tools like generative AI are designed to reduce or eliminate friction in thinking, working, and creating. This is great for efficiency but detrimental to learning.
So, we need to assess how vulnerable our assessments are to AI, and then redesign them to preserve the positive friction that sparks learning. I have been introducing this design challenge to schools using Maha Bali’s four options for educators:
Make AI use impossible.
Discourage AI use by redesigning assessments into forms AI would not perform well on.
Allow AI use within boundaries.
Allow indiscriminate AI use.
Choosing one of these options is less about your AI skill and more about how deeply you understand the design of your assessments and how creatively you can work with it. We can revise inputs, outputs, processes, the roles of students and teachers, and many other pedagogical levers to pursue these options. We can also embrace targeted uses of AI that create friction instead of reducing it, especially if we and our students have some baseline literacy.
Whatever option we choose, we’re going to have to do more than simply ask students to do all their work in class or require them to cite AI or demand that their writing be documented in an online revision history or somehow find the time to review transcripts of their chatbot interactions in addition to their actual work. These are not sustainable solutions, and they don’t address the core design issues that AI raises.
We are past the standalone chatbot phase of generative AI and moving quickly into the integration phase: recent moves by Google, Microsoft, and Apple reveal how deeply embedded AI will become in the platforms and workflows we use every day. It’s going to become so much easier (even frictionless!) to collaborate with AI, and thus so much harder for us to distinguish where our work ends and AI’s contributions begin. This is a deeper issue than cheating. This is about learning how to know when we are thinking for ourselves.
In an AI world, how do the structure, culture, and pedagogy of our classrooms and schools need to change to ensure we know what it looks like to think for ourselves? The answers will vary based on subject area, grade level, learning goals, and the individual teachers and students. If we believe that educators are designers (which I do, deeply), then this is the challenge before us, and supporting teachers in this challenge is where I’ll dedicate a lot of my own design work this coming year.
Vision over Decisions
As I wrote about in my last post, this is a particular priority for school leaders. A troubling trend I’ve observed in schools is inconsistency in students’ experience of classroom AI use. Here’s an example: a student is taught to use AI in science class to process large public datasets and code visualizations. When the student asks to do the same thing for a project in statistics class, they are told that it is unethical and that they will be punished if they use AI for that task. How should we expect the student to process these conflicting experiences? Who can help them resolve the contradiction?
A clear vision for AI should provide everyone in a school with a framework and common language for talking and making decisions about AI. A few months ago, I listened as a head of school posed this question to the senior leadership team: “How can we take an assets-based approach to AI?” Even this simple question suggests a vision and offers guidance on how educators and students should talk about and use AI. A vision can drive culture, and from my perspective it’s the culture of AI at schools that needs the most work right now.
The four priorities I wrote about here capture my own vision: harnessing AI’s potential in education means acting in the most human-centered way possible.
Upcoming Ways to Connect with Me
Speaking, Facilitation, and Consultation. If you want to learn more about my school-based workshops and consulting work, reach out for a conversation at eric@erichudson.co or take a look at my website. I’d love to hear about what you’re working on.
Leadership Institutes. I’m re-teaming with the amazing Kawai Lai for two leadership institutes in August (both offered via CATDC): “Cultivating Trust and Collaboration: A Roadmap for Senior Leadership Teams” will be held in San Francisco, CA, USA, from August 12-13 and “Unlocking Your Facilitation Potential” will be held in Los Angeles, CA, USA, from August 15-16.
Links!
Anthropic has made Claude 3.5 Sonnet available for free to all users. Depending on the use case, it’s as good as or better than GPT-4o. Be sure to try the “Artifacts” feature. Anthropic is one of the most transparent AI companies, releasing white papers like this one that offer insight into how they design and train their models.
Joseph Moxley has designed an undergraduate course called “Writing with Artificial Intelligence,” and if every teacher of writing committed to doing one of these activities (which are easily adaptable to high school and middle school) with their students over the coming school year, I think we’d be in much better shape.
An investigation into how Perplexity steals and fabricates in order to provide its “search-based” answers.
Really useful column from leaders at Hugging Face (an open-source AI platform) on what we can do about deepfakes.
An explainer on the strengths, weaknesses, and underlying design of Google’s LearnLM specifically and other “AI tutors” generally.
A writer’s honest, empathetic account of how she and so many other people use technology, including AI, to process their grief.
If you want to better understand the internet ecosystem of which AI is just a part, I highly recommend two podcast episodes: On The Media’s interview of Cory Doctorow, “The Ensh*ttification of Everything,” and the first two segments of this Hard Fork episode on social media warning labels and current research on disinformation.
“Kids need much more freedom to play, explore, get to know themselves, find and follow their own interests, develop courage, and experience the real world into which they are growing. This is what we have taken away from them and this is why they are suffering.”
A piece on what research tells us about the toxicity of high-achieving schools.