In an AI world, what's the work?
Designing for goals and roles, not tasks and behavior
More than a decade ago, immersed in research on competency-based education, I read Leaders of Their Own Learning, a comprehensive guide to designing student-centered assessments.
The authors make a distinction between “learning goals” and “learning tasks.” Goals “should describe what students will learn as a result of a lesson, learning experience, or unit of study, not what they will do as the task.” It’s the difference between “I can write a five-paragraph essay” and “I can craft an organized, evidence-based argument.” It’s the difference between “I corrected my test” and “I can use critical feedback to improve my work.”
The purpose of phrasing goals in this way (the authors prefer the “I can…” format) is to equip students with language and targets that they can use themselves to assess their work. By understanding the learning (read: the purpose) behind the task, students are better equipped to monitor their progress and, as the book’s title suggests, lead their own learning.
Focusing on goals can also free up both teachers and students to think in new ways about the purpose and design of tasks.
AI threatens many tasks, but not as many goals.
I think about the goals and tasks distinction in the context of AI all the time, especially in the last couple of months, when widespread adoption of agentic AI tools (Claude Cowork, Google’s AI Studio, OpenAI’s Codex, etc.) has revealed how good AI is at completing tasks autonomously and competently.
Because of the arrival and rapid development of generative AI, so many of the goals that undergird our work at schools are more important than ever: judgment, critical thinking, meaning-making, self-awareness, self-regulation, applying feedback, communication, etc. And yet, also because of AI, so many of the tasks that we are used to assigning are no longer reliable proxies we can use to assess those learning goals.
Do the tasks we assign students continue to serve the learning goals we have for them? This is not an AI question. It’s a learning design question made urgent by AI.
When I ask students how they make decisions about AI use, they often respond with answers based in ethics (Is this the right thing to do?), practicality (What do I have time for?), and purpose (Does this work really matter?).
An assignment without value to the student is the most vulnerable to being delegated to AI. It makes sense: if something feels like a task to be completed rather than learning worth experiencing, why not use AI to make time for other, more meaningful priorities? As Colleen Ferguson puts it, “If you could print money, wouldn’t you?”

Instead of “What’s the task?” begin with “What’s the role?”
Tasks need to have meaning for students, and meaning can be found in the different roles students can play in learning.
In their 1989 essay “Intentional Learning as a Goal of Instruction,” education researchers Carl Bereiter and Marlene Scardamalia argue, “The skills a student will acquire in an instructional interaction are those that are required by the student’s role in the process.”
For example, what role is a student asked to play in a lecture-based classroom? Listener and notetaker, for the most part. What role is a student asked to play in a call-and-response classroom? Question answerer, for the most part.
It’s not that these roles are bad for learning. Quite the opposite: listening, note-taking, and answering questions are important ways to absorb, process, and demonstrate knowledge. Bereiter and Scardamalia’s argument is that if students are only asked to play these roles, their learning will be limited by them. They will learn how to listen, not how to argue. They will learn how to record information, not how to find it. They will learn how to answer questions, not how to form them.
What’s more, a lot of the roles we currently ask students to play can now be played by AI.
This is where, for me, goals and roles come together in a meaningful way. Designing in the age of AI begins with a deep understanding of learning goals, but it continues with redefining student roles in ways that activate agency, elevate students’ ownership of the process, and prioritize authentic engagement. Defining roles with these priorities in mind could (and probably should) change the kinds of tasks students do in pursuit of learning.
Using goals and roles to respond to generative AI
In his book Cultures of Thinking in Action, Ron Ritchhart offers a useful breakdown of how shifting the roles of teachers and students can change what and how students learn.
I’ll offer a few examples in an AI context. What role are these educators asking students to play? How does the role align with the goals? How does AI factor into it?
Mike Taub and Scott Kern have built a 25-lesson AI literacy course that begins with students articulating their own personal goals and identifying community or global challenges that matter to them. From there, they learn how AI can and can’t help them.
Byron Philhour vibecoded a fictional research platform called Teria so his astrophysics students could explore and map an unknown universe, using skills they gained studying our universe. Byron believes vibecoding (designing applications using AI’s coding capabilities) is a skill that will give all teachers the chance to create custom tools for and with students.
A colleague came to Douglas Kiang’s computer science class and asked students to develop a communication app for his wife, who suffered from the degenerative illness ALS. The students embarked on a design thinking exercise, and Douglas encouraged them to use AI to accelerate their work building the app. They tried, but the bots’ errors and sometimes baffling edits to the code proved frustrating, so they completed the work themselves. For them, the stakes were too high to delegate the work to AI.
Those are just the educators who are generously publishing their experiences. In the last few weeks, I met a computer science teacher who built a Gem to support her students on independent work while she was in a workshop with me. She asked them to use the Gem to answer clarifying questions, evaluate code, and break down tasks. Every student had to submit a short video message at the end of class sharing their work, reflecting on their learning, and giving feedback on the Gem’s usefulness to both the Gem and the teacher.
I met a history teacher who re-envisioned a writing assignment to put students more in charge of visualizing and analyzing their own processes. She hypothesized this shift would make it more difficult to delegate tasks to AI. The bigger impact was that students produced better work that they could explain with more sophistication.
I met a theater teacher who encouraged his students to use chatbots as part of character studies so that they could generate scenarios the characters might face, expand on or complicate their initial views of the character, and develop more nuanced interpretations.
I met a math teacher who challenged his students to see if they could answer definitively, “Is AI bad at math?” so that they could learn more about chatbots’ capabilities and limitations.
All of these teachers are playing with goals and roles as a way to revise or transform tasks. None of these teachers is “adopting AI” in the edtech integration sense of dropping a specific tool into the classroom. Rather, they are designing with an awareness of, and sometimes in partnership with, AI.
Students and teachers are partners in learning.
The idea that AI is “a student problem” or “a teacher problem” ignores the fact that learning is a process where teachers and students share responsibility. For students, that’s doing the work with integrity and commitment. For teachers, that’s designing meaningful learning experiences with a deep understanding of both students and goals.
If you want to embrace this design mindset and don’t know where to begin, consider David Duncan’s approach in “How Do Workers Develop Good Judgment in an AI Era?” Duncan’s focus is on entry-level and early-career employees, people whose foundational learning comes from performing foundational tasks. The arrival of AI tools that can automate those tasks “simultaneously increases the need for judgment and erodes the experiences that produce it.” I would argue we face the same challenge in education.
Duncan’s suggestion? Redesign the work. He offers four questions to audit current practices:
Who is actually making consequential decisions, and who is merely reviewing work shaped by others or by machines?
Where do people experience the downstream effects of their choices, including failures?
Which roles have lost the repetitive, low-stakes tasks that once built judgment over time?
Where are people being shielded from ambiguity rather than asked to wrestle with it?
There’s a very human element to all of this that supersedes any discussion of whether we should use AI. One of the most interesting pieces I’ve read in the last couple of months was Philippa Hardman’s analysis of an MIT Media Lab study on AI-generated feedback for students. The MIT study finds that AI feedback is useful for information, but not for motivation. Hardman uses the study to further investigate what drives motivation in rigorous learning experiences.
“Learners are more motivated, persist longer, think more deeply, and learn more when they experience visible instructor presence, social connection, being seen as individuals, belonging, and human support for self-regulation. These aren’t soft outcomes or student preferences. They’re robust predictors of learning across hundreds of studies and multiple theoretical frameworks.”
If we want students who are motivated and persist through struggle, we should consider how to make the work matter. That begins with giving them a meaningful, goals-aligned role to play and the confidence that they have access to a human who cares about them and who is committed to guiding them along the way.
Upcoming Ways to Connect With Me
Speaking, Facilitation, and Consultation
If you want to learn more about my work with schools and nonprofits, take a look at my website and reach out for a conversation. I’d love to hear about what you’re working on.
In-Person Events
April 11. I’ll be delivering the opening keynote at “The AI Exchange,” a teaching and learning conference where breakout sessions will be led by classroom educators sharing on-the-ground insights and questions about AI. This free event is open to all educators and is hosted by Buckingham Browne & Nichols School in Cambridge, MA, USA.
June 16-18. I’ll be facilitating a three-day AI program called “Learning and Leading in the Age of AI.” This intensive residential program is designed for school teams to have time and space to design classroom-based and schoolwide AI applications for the next school year. Hosted in partnership with the California Teacher Development Collaborative (CATDC) at the Midland School in Los Olivos, CA, USA.
June 23-26. I’ll be joining the Summer AI Institute at Lakefield College School (Lakefield, Ontario, Canada) as a speaker and coach. This event is for teams of educators from Canadian independent schools to advance their AI work, design classroom and schoolwide AI initiatives, and learn from each other’s work.
Online Workshops
May 13. I’ll be facilitating “AI and the Question of Assessment,” a design workshop for educators who are looking to redesign assessments to be responsive to generative AI. We’ll explore big questions about learning and motivation as well as practical design strategies that do and don’t use AI to address those questions. Offered in partnership with the Association of Independent Schools in New England (AISNE).
Links!
It’s worth reading this post by Mike Perkins, one of the co-developers of the “AI Assessment Scale” that has inspired dozens of school AI policies I’ve seen. He clarifies that the scale was never intended as a policy document, but as an assessment design tool, and he has altered the language to make that more clear.
Jon Ippolito tackles one of the buzziest words in AI in education right now—“friction”—and suggests the kinds of friction we should hold on to, and the kinds we should let go of.
For those struggling with change management in schools, Lori Cohen asks us to stop viewing our skeptical colleagues as the “late majority” we need to move and start viewing them as “the sages of change” we need to consult.
I thought this Ezra Klein interview with Jack Clark, the head of policy at Anthropic, was illuminating. What Clark answers with confidence, what he hedges on, and what he refuses to talk about at all are glimpses into the mindset and goals of the major AI companies.
Julia Freeland Fisher, who is doing interesting research on AI and social networks and connections, offers five charts that illustrate AI’s relational grip on us.
Peps Mccrea with a succinct case for teachers to embrace collective alignment. If we agree to do certain things together and consistently, we don’t just serve students; we save ourselves time, energy, and frustration.
I had two great conversations with heads of preK-8 schools about AI. I spoke with Emily Brown of Chandler School on her school’s podcast about talking with students and parents about AI. I also talked with Sam Shapiro of Marin Montessori about raising humans in an AI age on the “Grounded and Soaring” podcast.