Education was among the first victims of AI panic. Concerns over cheating quickly made the news. But AI optimists like John Bailey are taking a whole different approach. Today on Faster, Please! — The Podcast, I talk with Bailey about what it would mean to raise kids with a personalized AI coach — one that could elevate the efficacy of teachers, tutors, and career advisors to new heights.
John Bailey is a colleague and senior fellow at AEI. He formerly served as special assistant to the president for domestic policy at the White House and as deputy policy director to the US secretary of commerce. He also served as director of educational technology for the Pennsylvania Department of Education and, subsequently, for the US Department of Education.
In This Episode
* An opportunity for educators (1:27)
* Does AI mean fewer teachers, or better teachers? (5:59)
* A solution to COVID learning loss (9:31)
* The personalized educational assistant (12:31)
* The issue of cheating (17:49)
* Adoption by teachers (21:02)
Below is a lightly edited transcript of our conversation.
An opportunity for educators (1:27)
Pethokoukis: John, welcome to the podcast.
Bailey: Oh my gosh, it's so great to be with you.
We’d actually chatted a bit last summer on a panel about AI and education, and this is a fast-moving, evolving technology. People are constantly thinking of new things to do with it and gauging its strengths and weaknesses. As you think about the upsides and any downsides of AI in education, has your view changed since last summer? Are you more or less enthusiastic? How would you gauge your evolving views?
I think I grow more excited and enthusiastic by the day, and I say that with a little humility because I do think the education space, especially for the last 20 years or so, has been riddled with a lot of promises around personalized learning and how technology was going to change or revolutionize education and teaching and learning, and it rarely did. It over-promised and under-delivered. This, though, feels like it might be one of the first times we're underestimating some of AI's capabilities, and I think I'm excited for a couple of different reasons.
I just see, as this develops, its potential to provide tutoring and just-in-time professional development for teachers, and to be an assistant that makes teaching more joyful again and removes some of the drudgery. I think that's an untapped area, and it seems to be coming alive more and more every day. But then, also, I'm very excited about some of the ways these new tools are analyzing data. You just think about school leaders, principals and superintendents and state policymakers, and the ability to just have conversations with data, not running pivot tables or Excel formulas, and to look for patterns and help understand trends. I think the bar for that has just been dramatically lowered, and that's great. That's great for decision-making and it's great for having a more informed conversation.
You're right. You talked about the promise of technology, and I know that when my kids were in high school, for certain classes that were supposedly more tech-adept, they would bring out a cart with iPads. And I think, as parents, we were supposed to be like, “Wow, every kid's going to have an iPad, that's going to be absolutely amazing!” And I'm not sure if that made the teachers more productive, and I'm not sure, in the end, if the kids learned any better.
This technology, as you just said, could be different. And the one area I want to focus on first is this: It would be awesome if we had a top-10-percent teacher in every classroom. And I know that at least some of the early studies (not education studies, but studies of using generative AI in, say, customer service) found the effect of kind of raising the lower-performing group and having them do better. And so I immediately think about the ability to raise… boy, if we could just have the lowest-performing teachers do as well as the middle-performing teachers, that would seem to be an amazing improvement.
I totally agree with you. Yeah, I think that was the BCG study that found when consultants used gen AI (I think, in that case, it was ChatGPT), everyone improved, but the folks that had the most dramatic improvement were the lowest performers in the consulting world. And here you could imagine something very similar for teachers that are teaching out of field, which happens a lot in science and mathematics, and for new teachers, and the ability to help them perform better… also the ability, I think, to combine what they know with what science and research say is best practice. That's been very difficult.
One of the examples I give is the Department of Ed has these guides called the What Works Clearinghouse Practice Guides, and this is what the research, studies, and evaluations have to say: “This is the best way of teaching math, or the best way of teaching reading.” But these are dense documents; they're like 137 PDF pages. If you're asking a new teacher teaching out of field to read 137 pages of a PDF and apply it to their lesson that day, that's incredibly difficult. But it can happen in a matter of seconds now with an AI assistant that can read that practice guide, read your lesson, and make sure that you're getting just-in-time professional development, and you're getting an assistant with your worksheets, with your class activities, and everything. And so I totally agree with you. I think this is a way of helping to make sure that teachers are able to perform better, and to really be an assistant to teachers no matter where they are in terms of their skill level.
Does AI mean fewer teachers, or better teachers? (5:59)
I recall a story, and I forget which tech CEO was talking to a bunch of teachers, but he said, “The good news: In the future, all teachers will make a million dollars a year… the bad news is we're only going to need like 10 percent of you,” because each teacher would be so empowered by technology (this was pre-AI) that they would just be so much more productive.
The future you're talking about isn't necessarily a future of fewer teachers, it's just sort of the good part of it, which is more productive teachers, and any field where there's a huge human element is always tough to make more productive. Is that right? It's not necessarily fewer teachers, it's just more productive teachers?
I think that's exactly right. I don't think this is about technology replacing teachers, I think it's about complementing them. We see numerous studies that ask teachers how they spend their time, and, on average, teachers are spending less than half of their time on instruction. A lot of it is on planning, a lot of it is on paperwork. I mean, even if we had AI that could take away some of that drudgery and free up teachers' time so they could be more thoughtful about their planning or spend more time with students, that would be a gift.
But also, I think the best analog on this is a little bit in the healthcare space. If you think of teachers like doctors: Doctors are your most precious commodity in a healthcare system, and you want to maximize their time. What you're seeing is that now, especially because of technology and because of some tools, you can push a lot of decisions to be more subclinical. Initially that was with nurses and nurse practitioners, so that could free up doctors' time. Now you're seeing a whole new category, too, where AI can help provide some initial feedback or responses, and then if you need more help and assistance, you go up to that nurse practitioner, and if you need more help and assistance, then you go and you get the doctor. And I bet we're going to see a bunch of subclinical tools and assistants come out in education, too. In some cases it's going to be an AI tutor, but then kids are going to need a human tutor. That's great. And in some cases they're going to need more time with their teacher, and that's great, too. I think this is about maximizing time and giving kids exactly what they need when they need it.
This just sort of popped into my head when you mentioned the medical example. Might we see a future where you have a real job with a career path called “teacher assistant,” where you might have a teacher in charge, like a doctor, of maybe multiple classes, and an AI-empowered teaching assistant as a new middle-worker, much like a nurse or a physician's assistant?
I think you could. I mean, already we're seeing teacher assistants, especially in higher education, but I think we're going to see more of those in K-12. We have some K-12 systems that have master teachers and then less-skilled or newer teachers that are learning on the job. You have paraprofessionals, folks that don't necessarily have a certification, who are helping. This can make a paraprofessional much more effective. We see this in tutoring: Not every single tutor is a licensed teacher, so how do you make sure a tutor is getting just-in-time help and support to make them even more effective?
So I agree with you, I think we're going to see a whole category of new professions emerge here, all in service, by the way, of student learning, but also of really supporting that teacher who has gone through their licensure, has years of experience, and has gone through some higher education as well. So I think it's complementary; I don't think it's replacing.
A solution to COVID learning loss (9:31)
You know, we're talking about tutoring, and the thing that popped into my head was, with the pandemic and schools being hybrid or shut down, and kids having to learn online and maybe not having great internet connections and all that, there's this learning-loss issue, which seems to be reflected in various national testing, and people are wondering, “Well great, maybe we could just catch these kids up through tutoring.” Of course, we don't have a nationwide tutoring plan to make up for that learning loss, and I'm wondering, have people talked about this as a solution to try to catch up all these kids who fell behind?
I know you and I, I think, share a similar philosophy here. In DC right now, so much of the philosophy around AI is doomerism. It's that this is a thing to contain and to minimize the harms of, instead of focusing on how we maximize the benefits. And if there's ever been a time when we need federal policymakers and state policymakers to call on these AI titans to help tackle a national crisis, the learning crisis coming out of the pandemic is definitely one of those. And I think there's a way to do tutoring differently here than we have in the past. In the past, a lot of tech-based tutoring was rule-based. You would ask a question it was programmed for and, like Siri giving a response, it would give a pre-programmed answer in return. It was not very warm. And I think what we're finding is, first of all, there have been two studies, one published in JAMA, another one with Microsoft and Google, that found that in the healthcare space these AI systems could be not just technically accurate, but their answers, when compared to human doctors', were rated as more empathetic. And I think that's amazing to think about: When empathy becomes something you can program and maximize, what does it mean to have an empathetic tutor that's available for every kid and can encourage them?
And for me, the moment I realized that this is fundamentally different was about a year ago. I wanted to just see: Could ChatGPT create an adaptive tutor? And the prompt was just so simple. You just tell it, “I want you to be an adaptive tutor. I want you to teach a student in any subject, at any grade, in any language, and I want you to take that lesson and connect it to any interest a student has, and then I want you to give a short quiz. If they get it right, move on. If they get it wrong, just explain it using simpler language.” That literally is the prompt. If you type in, “John. Sixth grade. Fractions. Star Wars,” every example is based on Star Wars. If you say, “Taylor Swift,” every example is on Taylor Swift. If you say, “football,” every example is on football.
There's no product on the market right now, and no human tutor, that can take every lesson and connect it to whatever interest a student has, and that is amazing for engagement. It also helps take these abstract concepts that so often trip up kids and connects them to something they're interested in, so you increase engagement and you increase understanding, and that's all with just three paragraphs of human language. And if that's what I can do, I'd love to see our policymakers challenge these AI companies to build something better to help tackle the learning loss.
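For readers who want to experiment with the adaptive-tutor idea John describes, here is a minimal illustrative sketch that wires a prompt like his into a script. It assumes the OpenAI Python SDK's chat-completions interface and a placeholder model name; the exact wording, function names, and model are assumptions for illustration, not the prompt from the conversation.

```python
# A minimal sketch (assumed, not John's exact prompt) of an adaptive tutor.
# Requires the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Paraphrase of the three-paragraph instruction described above.
TUTOR_PROMPT = (
    "You are an adaptive tutor. Teach a student any subject, at any grade, in any "
    "language, and connect every lesson to the interest the student names. After "
    "each lesson, give a short quiz. If they get it right, move on; if they get it "
    "wrong, explain it again using simpler language."
)

def tutor_reply(student_message: str, history: list | None = None) -> str:
    """Send one turn of the tutoring conversation and return the model's reply."""
    messages = [{"role": "system", "content": TUTOR_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Mirrors the "John. Sixth grade. Fractions. Star Wars." example from the conversation.
print(tutor_reply("I'm John, sixth grade. Teach me fractions. I love Star Wars."))
```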
The personalized educational assistant (12:31)
And that's three paragraphs that you asked of an AI tutor where that AI is as bad as it's ever going to be. Oftentimes, when people talk about the promise of AI in education, they'll say, “In the future,” which may be in six months, “kids will have AI companions from a young age with which they will be interacting.” So by the time they get to school, they will have a companion who knows them very well, knows their interests, knows how they learn, all these things. Is that kind of information something that you can see schools using at some point to better teach kids on a more individualized basis? Has there been any thought about that? Because right now, a kid gets to school and all the teacher knows is maybe how the kid did in kindergarten or preschool, and their age and their face, but now, theoretically, you could have a tremendous amount of information about that kid's strengths and weaknesses.
Oh my gosh, yeah, I think you're right. Some of what we talked about is still in the future; that was a prompt I constructed, I think for ChatGPT-4, last March, which feels like eons ago in AI time. And I think you're right. I think once these AI systems have memory and can learn more about someone, in this case a student, that's amazing: to just think that there could be an AI assistant that literally grows up with the child and learns about their interests and how they're struggling in class or where they're thriving. It can be encouraging when it needs to be encouraging, it can help explain something when the child needs something explained, it could do a deeper dive in a tutoring session. Again, that sounds like science fiction, but I think that's two, three years away. I don't think that's too far.
Speaking of science fiction, because I know you're a science fiction fan, a lot of what we're describing now feels like the 1995 sci-fi novel The Diamond Age. That talked about Nell, a young girl who came into possession of a highly advanced book called the Young Lady’s Illustrated Primer, and it would help with tutoring, with social codes, and with a lot of different support and encouragement. At the time when Neal Stephenson wrote that in ’95, it felt like science fiction, and it really feels like we've come to the moment now: You have tablet computers, you have phones that can access these super-intelligent AI systems that are empathetic, and if we could get them to be slightly more technically accurate and grounded in science and practice and rigorous research, I don’t know, that feels really powerful. It feels like something we should be leaning into more than leaning away from.
John, that reference made this podcast an early candidate for Top Podcasts of 2024. Wonderful. That was really playing to your host. Again, as you're saying that, it occurs to me that one area where this could be super helpful is career advice, when kids are wondering, “What should I do? Should I go to college?” and boy, to have a career counselor's advice supplemented by a lifetime of an AI interacting with this kid… Counselors will always say, “Well, I'm sure your parents know you better than I do.” Well, I'll tell you, a career counselor plus a lifetime AI may know that kid pretty well.
Let's just take instruction off the table. Let's say we don't want AI to help teach kids, we don't want AI to replace teachers. AI as navigators, I think, is another untapped area, and that could be navigators as parents are trying to navigate a school choice system or an education savings account. It could be as kids and high school students are navigating what their postsecondary plan should be. These systems are really good at that.
I remember I played with a prompt a couple of months ago where I said, “My name is John. I play football. Here's my GPA. I want to go to school in Colorado, and here's my SAT score. What college might work well for me?” And it did an amazing job, with even that rudimentary prompt, of giving me a couple of different suggestions and why those might fit. And I think if we were more sophisticated there, we might be able to open up more pathways for students or prevent them from going down some dead ends that just might not be the right path for them.
There's a medical example of this that was really powerfully illustrative for me. I had a friend who, quite sadly, was diagnosed with breast cancer a couple of months ago. And this is an unfolding diagnosis: You get the initial diagnosis, then there are scans and biopsies and reports, and then second and third and fourth opinions; it's very confusing. And what most patients need there isn't a doctor, they need a navigator. They need someone who can just make sense of the reports, who can explain the techno-Latin of the medical jargon, and someone to just say what the next questions are to ask as they find their path on this journey.
And so I built her a GPT that had her reports, and all she could do was ask it questions. The first question she asked was, “Summarize my doctor notes, and identify where they agree and where they disagree.” Then, the way I constructed the prompt is that after every response, it should give her three questions to ask the doctor, and all of a sudden she felt empowered in a situation where she had felt very disempowered, navigating a very complex and, in that case, life-threatening journey. Here, why can't we use that to take all of a student's work, their assessments, their hobbies, and start helping them be empowered to figure out whether they should be pursuing a job, or college, or some other post-secondary pathway?
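As an illustration of the navigator pattern Bailey describes (grounding an assistant in someone's own documents and requiring three follow-up questions after every answer), here is a hypothetical sketch. The wording, function names, and model are assumptions for illustration; this is not the GPT he actually built.

```python
# A hypothetical sketch of the "navigator" pattern described above.
# Requires the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def navigator_prompt(report_text: str) -> str:
    # Ground the assistant in the user's own reports and require three
    # follow-up questions after every answer.
    return (
        "You are a patient navigator. Answer questions in plain language using only "
        "the reports below. After every response, list three questions the patient "
        "should ask their doctor next.\n\n--- REPORTS ---\n" + report_text
    )

def ask_navigator(report_text: str, question: str) -> str:
    """Ask one question about the reports and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": navigator_prompt(report_text)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Example: ask_navigator(reports, "Summarize my doctor notes; where do they agree and disagree?")
```

The same shape could, as Bailey suggests, be pointed at a student's work and assessments instead of medical reports.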
The issue of cheating (17:49)
You know I have a big family, a lot of kids, and I've certainly had conversations with, say, my daughters about career, and I'll get something like, “Ugh, you just don't understand.” And I'll say, “Well, help me, make me understand.” She's like, “Oh, you just don't understand.” Now I'm like, “Hey, AI, help me understand, what does she want to do? Can you give me some insights into her career?”
But we've talked about some of the upsides here, and, as we briefly mentioned, this technology immediately attracted criticism. People worried about a whole host of things, from bias in the technology to kids using it to cheat. There was this initial wave of concerns. Now that we're maybe 15 months or so out from people becoming aware of this technology, which of the concerns do you find to be the persistent ones that you think a lot about? Are you as worried, perhaps, about issues of kids cheating, of having an AI write the paper for them, which was an early concern? What are the concerns that have stuck with you that you feel really need to be addressed?
The issue of cheating is present with every new technology, and this was true when the internet came out, it was true when Wikipedia came out, it was true when the iPhone came out. You found iPhone bans: If you go back and look at the news cycle in 2009, 2010, schools were banning iPhones, and then they figured out a way to manage it. I think we're going to figure out a way to manage the cheating and the plagiarism.
I think what worries me is a couple of different things. One is, the education community talks often about bias, and when they talk about bias, in this case, they're usually talking about racial bias in these systems. It's very important to address that head on. But we also need to tackle political bias. I think we just saw recently with Gemini that sometimes these systems can surface a little bit of a center-left perspective and thinking on different types of subjects. How do we fine-tune that so you're getting something a little bit more neutral? Then also, in the education setting, there's pedagogical bias: When you're asking it to do a lesson plan or a tutoring session, what's the pedagogy that's actually informing the output? Those are all going to be very important, I think, to solve.
The best-case scenario is that AI gets used to free up teacher time, and teachers can spend more time applying their judgment to their lesson plans and their worksheets, and more time with kids. There's also a scenario where some teachers may fall asleep at the wheel a little bit. It's like what you're seeing with self-driving cars: You're supposed to keep your hands on the wheel and at least be actively supervising it, but it is so tempting to just sort of trust it and tune out. And I can imagine there's a group of teachers that will just take the first output from these AI systems and run with it, and so it's not actually developing more intellectual muscle, it's atrophying it a little bit.
Then lastly, I think, what I worry about with kids, and this is a little bit on the horizon, this is the downside to the empathy: What happens when kids just want to keep talking to their friendly, empathetic AI companion and assistant, and do that at the expense of talking with their friends? I think we're already seeing this with the crisis of loneliness in the country as kids are on their phones and on social media. This could exaggerate that a lot more unless we're very intentional now about how to make sure kids aren't spending all their time with their AI assistant, but are also out in the real world with their friends.
Adoption by teachers (21:02)
Will teachers be excited about this? Are there teachers' groups, teachers' unions who are… I am sure they've expressed concerns, but will this tool be well accepted in our classrooms?
I think the unions have been cautiously supportive of this so far. I hear a lot of excitement from teachers, because I think what teachers see is that this isn't just one more thing; this is a tool they can use in their job that provides immediate, tangible benefits. And if you're doing something that, again, removes some of the drudgery of the administrative tasks or helps you figure out that one worksheet that's going to resonate with that one kid, that's just powerful. I think the more software and systems that come out that tap that and make it even more accessible for teachers, the more excitement there is going to be. So I'm bullish on this. I think teachers are going to find this is a help and not a threat. The initial concern around plagiarism was totally understandable, but I think there's going to be a lot of other tools that make teachers' lives better.