I think you are absolutely correct to hypothesise that the future of education will be (even more than now) about the social element that a school or college can offer as a complement to the online AI-powered element of learning. The best educational experiences will make the most of social interaction both between teachers and students and between the students themselves.
Hi Eric - as always, great thinking here, and I love the shout-out to Star Trek - Vulcan Learning. That said, it seems it would be an extremely lonely way to learn. You couldn't be more correct in quoting Matthew Rascoff of Stanford University, who has said, "School is learning embedded in a social experience."
I would suggest two other elements of this AI in Education puzzle:
1. Maintaining common applications among teachers. We are currently struggling to find common ground for teachers. There are the early adopters who jump at every new AI tool for education they find, and then those who are not so much reluctant as wanting proof of added value. Unfortunately, what I see then is students getting into the 'Teacher A says I can use this tool, and you (Teacher B) are saying I cannot' situation. This split is only growing as the firehose of AI tools is nonstop.
2. From a leadership perspective, the costs are going through the roof. Everything has moved to a subscription model, so we buy what we vet and think is going to be genuinely safe, valuable, and effective for students, but we have students who are buying their own tools, and this creates haves and have-nots - what can we do about this?
Thanks again, Eric, for such great thinking and for getting these conversations out there.
Chris Bell
These are great ideas, Chris! Your first point is something I'm seeing become more common in a troubling way: inconsistent student experience with teacher use and permissions around AI. This results in confusion and, worse, punishment of the student. That's why I'd love to see more transparency and open dialogue across academic departments about AI... the teacher-by-teacher or dept-by-dept policy is going to be tough to sustain.
You write, "If you are going to spend money, spend it on the most flexible tool you can find. Right now, this is probably ChatGPT Plus for $20 a month, which gives you a variety of multimodal capabilities and access to one of the best models on the market."
This seems wise advice. ChatGPT Plus does enough for me that I find I'm beginning to lose interest in all the other services. I figure that if something dramatically better comes out I'll hear about it and can investigate. Until then I'd rather be working on my projects than trying to keep up with every development in the industry.
You write, "Objecting to AI on ethical grounds is valid and important."
I'm less sure about this. Given how little power any of us have to affect the future development of these systems, what can our objecting accomplish?
Teachers ask, "What is the role of teachers in the future?"
When tractors came to the farm, what was the role of the farm workers? Answer: learn how to drive the tractor, or find another line of work. When robots came to the factory, what was the role of the factory workers? Answer: learn how to manage the robots, or find another line of work.
In both of these earlier automation transitions, far less human labor was needed. That was the whole point. This same automation process is now coming to the white-collar world.
Great stuff, Eric!!!
How are you advising schools and districts about privacy, security, etc.? The big models absorb a lot of data, and most of the good AI tools are built on big-model architecture. The thought of students turning over their text and data in exchange for services still makes me a little queasy. How about you?
I share your concerns. I think it is another reason to work with students on critical AI literacy so they can make better decisions rather than trying to manage or control what they can or can't do with AI.