Note: Earlier this week, OpenAI released a GPT-4o-powered image generator inside ChatGPT. It’s an important step up in the variety, quality, and accuracy of images we can now create with AI. I’ve included some examples throughout this post, including two comparisons of what the same prompt produced before and after the update. If you want to learn more, here is a useful overview.
About a month ago, someone asked me during a school workshop, “How has your AI use changed in the last two years?” It’s a great question, and I thought it might be helpful to share an expanded version of my answer.
It’s important to know the kind of user I am: I am a consultant, not a school-based educator or administrator. My work primarily includes writing, research, presentations, instructional design, and advising, and, to a lesser degree, design and data analysis. Nothing I share here is meant as a recommendation; I offer it as a window into one person’s use and hopefully as a spark to get you thinking about how and why you use generative AI.
I rarely use specialized AI tools.
As the general-purpose chatbots (ChatGPT, Gemini, etc.) have improved, I have stopped using specialized AI tools for my own work. I still experiment with new tools as they emerge, and knowing how to use education-specific AI tools (MagicSchool, Diffit, etc.) is part of my job, but I would say more than 90% of my AI use is prompting general-purpose bots. This is one thing that has not changed in two years: prompting still matters. Good prompting maximizes the effectiveness of general-purpose bots, which for me are the most powerful and flexible tools available to the general public right now.
I am not loyal to any one chatbot. Over the past month, I have used ChatGPT, Claude, Gemini, DeepSeek, Perplexity, and Llama for various tasks. I use ChatGPT and Claude nearly every day (I pay for premium subscriptions to ChatGPT, Claude, and Gemini, and my subscription to Perplexity Pro is included in my Zoom account). I can count on one hand the specialized tools I use: NotebookLM for building and using knowledge bases, Elicit and Consensus for curating academic research, Eleven for text-to-speech, and Napkin for first-draft visualizations. I don’t pay for premium access to any of these tools; I don’t use them enough.

I use it as a mind extension tool.
As the models have improved, especially in reducing hallucinations and increasing the sophistication of their reasoning abilities, I often use chatbots to help me “externally process” as I research, write, or prepare for workshops. I’ve written before about the potential of AI to be a mind extension tool, and I now include interacting with AI alongside taking notes, doodling, talking to a human, etc. as one of the many helpful ways I try to move thoughts out of my brain so that I can work with them.
I prefer Claude for this kind of work. I find its output to be the most cogent, concise, and interesting, and sometimes it will even surprise me. Here it is disagreeing with me a couple of months ago, pushback that, as gentle as it is, is not something I saw in the early days of chatbots.
I talk to AI more.
Voice mode has improved dramatically over the past two years, and I speak with AI far more than I used to. It’s especially useful if I am multitasking: as I am writing something or reading something or analyzing something on my computer, I will open voice mode in the ChatGPT or Gemini app on my phone and speak out loud, which is easier than toggling between tabs or devices to type a prompt. My increased use hasn’t alleviated my concerns about the “artificial intimacy” of bot conversations. If anything, it’s deepened them: I can’t go more than five minutes talking to a bot without wondering what this means for the future of human relationships.
And, we’re already moving beyond voice mode. One of the most compelling AI demos I’ve seen recently was Ethan Mollick’s December post where he experiments with “Show Gemini,” a feature of Google’s AI Studio that allows you to share your desktop with Gemini and then interact in voice mode about what’s on your screen. OpenAI has also released a “Live” feature in the ChatGPT mobile app where you can share your camera with ChatGPT and talk to it about what you see. And, of course, there’s Meta’s AI glasses. We are in, or very near, the era of competent, real-time AI assistants that can see, hear, and interact with the same things we do. This has huge implications for how we navigate the world. As a concrete, education-specific example, I used to do a lot of private tutoring, and Show Gemini is the closest thing I’ve seen to AI replicating the kind of work a tutor would actually do. For those educators still committed to using AI detection tools, no AI detector is going to be able to identify when a student has sought assistance in this way.

I explicitly ask for concise answers.
The verbosity of chatbots has finally gotten to me. I now often add this line to my prompts: “Make your answer as concise and precise as possible. I’ll invite you to elaborate if needed.”
I use it for research.
The “Deep Research” options in Gemini, ChatGPT, and Perplexity are not perfect, but they are powerful, and they are now a regular part of my research habits. This is probably the first time since NotebookLM came out that a new AI feature or tool has had a meaningful impact on my day-to-day work. In Deep Research, you submit a prompt, and the AI responds by proposing a research methodology and areas of focus, which you can refine through re-prompting to ensure its approach reflects your goals. It then generates a robust report that includes dozens of linked citations. Kara Kennedy has a good summary with links to useful resources and prompting tips.
There are a few limitations: for now, these tools only search the internet, so you’re not getting access to paywalled databases like JSTOR or Gale. In addition, just like any AI tool, Deep Research can hallucinate and can be biased in any number of ways. For these reasons, AI is one of three or four different things I try when I launch research projects. But, I have generated genuinely useful reports on the cognitive benefits of writing, the history of AI, the digital divide in education, the history of strategic planning, the efficacy of project-based learning, and more. These reports have not only accelerated my research process, but also improved it, connecting me to resources and synthesizing ideas that help me think through what I need to know.

I use it to code.
I am a programming novice, so it’s taken me a while, but I’ve found a few uses for generative AI’s coding capabilities. I’ve used both Claude and ChatGPT to write and execute text-mining analyses (word frequency, sentiment, and contextual analysis) on large amounts of qualitative data, such as survey results. On the recommendation of many people, I use it to write formulas for spreadsheet tools or scripts to automate workflows. I have just started asking it to write HTML for web design tasks.
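To make the text-mining example concrete, here is a minimal sketch of the kind of script a chatbot might produce for a word-frequency analysis of open-ended survey responses. The sample responses, the stop-word list, and the function name are all illustrative assumptions, not taken from the post.

```python
# Sketch: word-frequency analysis of open-ended survey responses,
# using only the Python standard library.
import re
from collections import Counter

# A deliberately tiny, illustrative stop-word list.
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "but", "with"}

def word_frequencies(responses, top_n=5):
    """Return the top_n most common non-stop-words across all responses."""
    words = []
    for response in responses:
        # Lowercase each response and keep only alphabetic tokens.
        words.extend(re.findall(r"[a-z']+", response.lower()))
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

# Hypothetical survey data for demonstration.
survey = [
    "The new schedule gives me more planning time.",
    "Planning time is better, but meetings still run long.",
    "More planning time would help with assessment design.",
]

print(word_frequencies(survey, top_n=3))
```

Part of the appeal the author describes is that chatbots like Claude and ChatGPT can both write code like this and run it on your pasted data, so a non-programmer never has to leave the chat window.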
I don’t make bots anymore.
In the early days of generative AI, I was very excited by tools like GPTs and Poe’s Create-a-Bot tool that allowed you to build custom bots powered by large language models that perform specific functions. I created a lot of bots in those heady days, and I’ve left almost all of them behind. The one exception is a GPT I built in 2023 to write alternative text that aligns with web accessibility guidelines. This remains one of the most reliable AI tools I have (you can look at the alt text on the images in this post, all AI-generated and unedited, to judge it for yourself).
Today, the general-purpose chatbots are easier to prompt, more versatile in capabilities, and less prone to hallucination. I prefer using them to creating new bots, which require ongoing maintenance and updating. I save prompts that work particularly well in a document, so if I need to repeat a task, I have the right prompt at my fingertips.
I am more ethically literate about AI.
In my last post, I wrote about ethical decision-making as one of four essential AI skills that we and our students will need to navigate our AI future. In the last two years, I have prioritized my own ethical AI literacy, learning more about the tradeoffs of using AI and how my choices do and do not reflect my values. I have made no progress in resolving what my own ethical use of generative AI should look like, but high on my list of AI skills to develop this year is reducing my dependency on the frontier models by learning how to install and use local models on my own devices. I want to know more about open-source AI technology and the capabilities of lighter-weight tools that are not connected to the internet and that I own. I will not stop using generative AI, but I know I can refine the way I use it.

Not Replacements. New Categories.
As of this writing, generative AI has not replaced anything I do by myself; instead, it has introduced new categories of ways to work into my daily life. Sometimes I process thoughts with AI. Sometimes I research with AI. Sometimes I talk with AI. And, a lot of the time, I don’t use AI at all.
I have added a Post-It to my desk that says, “Can AI help you with this?” It’s a reminder of the powerful tool at my disposal, and a reminder that I am in an ongoing process of discovering how AI will change how I think about and do my job. This is the biggest change in two years of using generative AI: I am not just experimenting with AI anymore. I’m developing new, useful, AI-powered habits.
Upcoming Ways to Connect With Me
Speaking, Facilitation, and Consultation
If you want to learn more about my work with schools and nonprofits, take a look at my website and reach out for a conversation. I’d love to hear about what you’re working on.
In-Person Events
June 6. I’ll be delivering a keynote and facilitating two workshops (one on AI, one on student-centered assessment) at the STLinSTL conference at MICDS in St. Louis, MO, USA.
June 9. I'll be facilitating workshops on AI at the Summer Teaching and Learning Institute at Randolph School in Huntsville, AL, USA.
Online Workshops
April 14 and 16. In partnership with the California Teacher Development Collaborative (CATDC), I'm offering "AI and the Teaching of Writing: Design Sprint," a two-part, hands-on workshop where teachers will deconstruct and redesign a writing assessment to be responsive to generative AI. We'll use high-quality prompting techniques as well as "AI-resistant" and "AI-assisted" teaching strategies to ensure our writing assessments achieve their intended goals. Open to all educators, even if you don't live or work in California.
Links!
I only recently discovered this 2021 article, “How I Got Smart,” by Jeopardy! champion Amy Schneider, but it couldn’t be more relevant to the conversation about the role AI should play in human thinking and, therefore, in school.
Alison Gopnik’s work on human development, specifically the intelligence of babies and toddlers, has had a real impact on how I think about teaching and learning. This interview about how her research intersects with the design of AI systems is fascinating.
What is happening right now with AI agents and asynchronous online learning is a canary in the coal mine for all of education. Philippa Hardman lays out very clearly how agentic AI can either destroy online learning or help reimagine it.
Melanie Mitchell with a concise and helpful explanation of what a “reasoning model” is and what makes it different from the AI models we’ve been accustomed to using.
Jaron Lanier imagines a future where a good many of us are in love with bots, and what that means for humanity.
Ethan Mollick with some research on AI’s impact on workforce performance and the possible implications for how workplaces are designed in the future.
The instructional coaching expert Jim Knight with five myths about teacher professional learning (especially relevant to building teacher capacity on AI!).