S2E1 Amanda Bickerstaff
===
Amanda Bickerstaff: [00:00:00] These tools are so deeply non-intuitive and require an incredibly different approach than any technology you’ve ever used before.
Rebecca Bultsma: Sometimes, and the research says this over and over, the best learning, the most transformative learning, happens in that messy in-between. And that’s not just for kids, that’s for adults too.
Brett Roer: It’s really important. We wouldn’t say, like, try 10 things tomorrow in your classroom. Even in the absence of AI, it would be like, try one thing. Let’s get really good at one skill or practice.
Brett Roer: Welcome everyone to the AmpED to 11 podcast. My name is Brett Roer, CEO of Amplify and Elevate Innovation. I am joined by my amazing co-host, Rebecca Bultsma, and today we have the incredible Amanda Bickerstaff joining us. Welcome to the podcast, Amanda.
Amanda Bickerstaff: Hi, everyone. Excited to be here.
Brett Roer: We are equally excited. You are our first guest of Season two, and without further ado, we are gonna kick off some of our questions to get this audience AmpED to [00:01:00] 11.
Brett Roer: You ready?
Amanda Bickerstaff: I’m ready. Let’s do it.
Brett Roer: Alright Amanda, so I’ve had the pleasure and the privilege of watching you and your organization grow. You’ve become one of the leading faces in AI, especially for education. I’ve seen you keynote and present, you do incredible work, and you are always AmpED to 11. In fact, we were talking offline, and we said, how was your summer break?
Brett Roer: And you said you worked with 31 orgs this summer and your team’s growing. So you have so much passion for this work. Could you just take a moment and share with our audience your journey, your why, your pursuit and passion in education, and how did you see AI growing and shaping the field today?
Amanda Bickerstaff: The first thing is that one of the remarkable parts of our growth is this.
Amanda Bickerstaff: When I started posting about AI and education, well, first of all, I’d never really been on LinkedIn and hadn’t even posted in a year. But when I used ChatGPT for the first time, I realized [00:02:00] two things, and I’ve spoken about this before. One is that the promise of ed tech was really exposed during COVID.
Amanda Bickerstaff: We thought, oh, personalization of learning, we’ll be able to do blended learning or online learning. And then what happened is, education technology was like, let’s take kids from desks and rows and put them in Zoom boxes, and then hope they come on video and are wearing real clothes.
Amanda Bickerstaff: It was so limited. In fact, the largest ed tech platform became Zoom, which is not education technology. And so when I used ChatGPT for the first time, it became so clear that this was something that could start to have a real outcome, that could make a difference in schools. Teachers could control lesson planning, instructional planning, and content creation in ways they never had before.
Amanda Bickerstaff: Students with AI literacy and the right tools could have a real opportunity for on-demand feedback at any time. [00:03:00] So that was one thing. But also, and this is something I was literally in conversation with a researcher about yesterday, these tools are so deeply non-intuitive and require an incredibly different approach than any technology you’ve ever used before.
Amanda Bickerstaff: And so that’s how I started. I just started talking, and it was like April of 2023. I built a website in a weekend with a prompt library, and prompting is the way you ask generative AI questions, and then started posting. The thing I think is so remarkable about our growth is that even from that first moment, we, and when I say we, I mean me at that time, were in schools within six weeks.
Amanda Bickerstaff: I wasn’t talking about generative AI in education as a pundit. I was in front of hundreds of teachers showing me in real time their concerns, their fears, their opportunities even. I mean, I think we have trained more humans on generative AI, specifically ChatGPT, than probably anyone in the world. We have [00:04:00] seen faces struggle, or find epiphanies, or understand and light bulbs go off.
Amanda Bickerstaff: And so I think when we talk about the passion component, or the why, it’s that, at this point, I was in a board meeting last week for a board of trustees where I got 25 questions that ranged from IP and copyright to artificial general intelligence, all the things that had nothing to do with generative AI.
Amanda Bickerstaff: And probably I and our team could just do work around generative AI adoption. But the cool thing is that we’re all educators, and we are educator, practitioner, student first. And there is not a moment in our work anymore where we are longer than a couple of days or hours away from doing this work directly within schools and institutions, which means that we actually see what’s happening and can tailor what we do to what is needed at the time.
Amanda Bickerstaff: So I think that that’s our why. [00:05:00] We’re very impact focused. It’s not for the faint of heart, everybody. I’ve never been a consultant, or had a consultancy before, where we’re in person the majority of the time, but when you walk out of every room, you see change happen in front of you, hopefully for the better.
Amanda Bickerstaff: It makes it, I mean, just remarkable.
Brett Roer: I can just attest, again for our listeners, I’ve seen that exact transformation that Amanda’s speaking of, as well as her amazing team at AI for Education. So thank you, and thanks for sharing that journey. I think our listeners now see the passion that’s gonna be coming outta today’s podcast.
Brett Roer: So thank you so much for that.
Rebecca Bultsma: Amanda, I’m curious. As someone who is deep into AI ethics, I’m sure you hear a lot of questions around AI ethics, and I’m sure you think a lot about them. I’ve had to, for myself, to make it less overwhelming, break it into buckets: ethical ways that [00:06:00] individuals should behave with these tools, ethical things that organizations and schools should be thinking about, and then some of those bigger ethical questions for society.
Rebecca Bultsma: So if you were to distill everything you’ve learned into the top things you’re seeing in each of those buckets, what would they be? I can give you a minute to think about it, ’cause I know it’s loaded, but that’s the only way I’ve been able to make sense of it. ’Cause otherwise you have individuals thinking about things they can’t control at this moment.
Amanda Bickerstaff: Well, I think the tension here is that we tend to think about adoption of technology in schools as teacher led or school led, but we very rarely think of it as tech led, led by the companies themselves. And so with the larger ethical considerations, there’s never been a time in the history of the world where a handful of companies have so much control over the ways in which we use technology.
Amanda Bickerstaff: And the reason why I say that is that I think people don’t realize that if they use an application, let’s say in education a MagicSchool, a SchoolAI, a Brisk, [00:07:00] they’re using ChatGPT models, they’re using Claude models, they’re using Gemini models or Meta’s Llama open source models, and that
Amanda Bickerstaff: no matter how great the application is, the level of control over how these tools are ethically designed, or not, is incredibly impactful and in some cases impossible to overcome. And so if you think about a large ethical consideration we hear a lot, there’s this idea of climate. And one of the things that’s very interesting is that when generative AI was first launched, we started, you know, Brett’s an early adopter type, creating amazing things
Amanda Bickerstaff: that had a significant environmental impact compared to other technology at that time. But as the technology has advanced, costs have dropped, and the actual energy consumption of individual use, outside of enormous super users, has really gone down to a [00:08:00] similar level as, say, mid-2000s, 2008-era Google searches, for example.
Amanda Bickerstaff: But the larger risk piece is that the data centers being opened, the enormous water consumption, is actually happening at the large language model development level, and is almost less about GPT-5 today than about what GPT-6 is tomorrow. And so the nuance is recognizing that where the impact really lies, around some of these deeper ethical places, is at the macro tech level.
Amanda Bickerstaff: So when we think about ways to make this more meaningful for individual schools, teachers, leaders, students, there’s our AI literacy framework, which still has not been launched widely, just because there are so many out there and we don’t wanna add to the noise. But everything that we do, whether it’s a presentation, a course, or our free student course coming out soon, is based on what we call the SEE framework, which is the idea that we build AI literacy in a way in which people can have the [00:09:00] skills, mindsets, and knowledge to use AI in safe, ethical, and effective ways.
Amanda Bickerstaff: So what can we control? We might not be able to control the ethics of the creation of the model, but you might choose to use Anthropic versus OpenAI because, for example, Anthropic is endorsing regulation for safety in California right now. So you can make decisions about your use, almost like the idea of voting with your feet. But on the other end, you can control what data you give to systems,
Amanda Bickerstaff: whether you allow them to train on your data, the ways in which you use it and how you use it, your own transparency of use in terms of your academic and or professional integrity, as well as the more strategic, effective use; the better the use, the better the outcome. So for example, even something like climate again: if you don’t have a lot of AI literacy and you’re asking hundreds of questions, or you’re generating hundreds of images, versus strategically identifying [00:10:00] areas in which AI can really help you, you can start to practice safe, ethical, and effective AI literacy all the time.
Amanda Bickerstaff: And so that’s how we think about it, Rebecca. There’s no easy way, but I will say that we don’t keep a separate ethics piece. Everything is ethics, everything is safety, everything is effective use, because that’s the only way we can figure out how to make this discrete and palatable and pragmatic for people that are working with us around this work.
Rebecca Bultsma: And I think that’s where people get confused. Nobody understands what AI ethics means, because of things like, everything’s ethics, and it’s built into everything, and you can’t control it. So if you were to distill what AI ethics is for you or for your organization down into one clear idea, what would you tell people?
Amanda Bickerstaff: I think it is, I’m gonna say deep, but I do think it’s necessary, knowledge of what these systems are, how they have been trained, and how they are deployed. That’s one. I think you have to recognize that AI systems [00:11:00] are primarily trained on human data that carries enormous biases, but also that the people creating the systems have their own biases.
Amanda Bickerstaff: And so there’s not an AI system in the world that does not have bias; that’s just impossible at this stage. So that would be what I would consider the knowledge of how these systems are created, trained, and deployed. The second thing is the decision making, the code of ethics by which you decide to use these tools.
Amanda Bickerstaff: Will you be using them to further ethical behavior, or are you going to be using them to essentially further unethical behavior? Are you hiding use as an adult or a young person? Are you going to be creating deepfakes or non-consensual nudes? Are you using Grok’s spicy mode when you know that it’s going to lead to outcomes that are not going to be safe and ethical, but you’ve chosen to? And you could say, oh, but it’s the model, but you decided to hit that button and decided to use that [00:12:00] version.
Amanda Bickerstaff: So I think it’s really those two places. One is knowledge, and we cannot be uninformed. It’s impossible for you to be uninformed today and be ethical about AI use. And the second is your own code of ethics: what you believe is appropriate use, what tools you’ll use, how you use them, and even how transparently you’ll use them as well.
Rebecca Bultsma: Perfect. I love that idea of responsibility being a big piece. That’s exactly how I feel too. So, away from the murky stuff, I’m curious: what’s the coolest AI tool you’re using lately? What’s the latest, coolest thing?
Amanda Bickerstaff: We’re boring, everybody. We focus on foundation models. To the point I made earlier, if you’re gonna go to the source, go to the source, because that, with AI literacy skills and a paid model, is going to give you the most. While specialized tools can be very helpful for specialized actions, what we find is that if you learn how to use the foundation models, and by foundation models I mean your [00:13:00]
Amanda Bickerstaff: ChatGPTs, your Claudes, your Geminis. So for us, we actually are relatively simple in our own business. Everyone has a paid account that we pay for, whether it’s Claude or ChatGPT. We don’t require one in particular; we ask, can you pick which one you like the best? I think that GPT-5 is a slide back in capability and ease of use.
Amanda Bickerstaff: So I’ve always been a Claude stan. In fact, I think we were one of the first organizations training people on Claude, and I have found Claude to continuously improve. So I’m a Claude stan, but I know some of my team loves ChatGPT. Some of my team, we’re not really Gemini people. Sorry, Google.
Amanda Bickerstaff: But I will say that we believe NotebookLM, part of the Google suite, is probably the best application layer for education, meaning even more than the specialized tools. No offense to the Brisks and MagicSchools and SchoolAIs, but NotebookLM is a unique application.
Amanda Bickerstaff: [00:14:00] It accesses the capabilities of the underlying Gemini model and is thoughtfully implementing real tools for research support, studying support, knowledge gaining. That, to us, is probably the application we would point most educators to, plus Canva AI with their interactive elements that use Claude. That’s where we would say we spend most of our time training people, but also using internally.
Rebecca Bultsma: I’ve been using the new NotebookLM features this past week just for my research. You can give it a couple of research papers and it will have the hosts debate and argue about it, and you can have ’em do that at every level. Sometimes I’ll have them discuss it like celebrity hot gossip,
Rebecca Bultsma: spilling the tea, and explain it to me like the Kardashians or something like that. That’s very, very interesting. And some of the flashcard features. As somebody who [00:15:00] researches and educates, it’s an amazing tool
Amanda Bickerstaff: and free. I mean, this is what’s nice: there is more access to free technology that is at the cusp, if not the best, then very close.
Amanda Bickerstaff: And NotebookLM is a free core service for any Google education account, but also for individuals. And when you talk about ethics, Rebecca, we don’t ever train on anything paid. There has never been a training we have done where we have used features behind a paywall, ever. And that is a commitment.
Amanda Bickerstaff: That doesn’t mean that you can’t get more. We do suggest, as you build your AI literacy and your needs, that you move up tiers and invest in that technology. But there has never been a time where we have chosen anything that is behind a paywall.
Rebecca Bultsma: And I think that speaks to the ethical deployment as well, at your level.
Rebecca Bultsma: It’s a way to model that.
Brett Roer: I wanna just highlight, we’ve been using [00:16:00] NotebookLM, and I love the bells and whistles of it, because it has so many things I found very unique that you weren’t getting on other free services, that really helped educators see beyond some of the initial functions. But two members of the Amplify and Elevate Innovation team, Desai and Valentina, are both current college students.
Brett Roer: So when we had a check-in earlier this month and they were starting classes, I said, how are you all using NotebookLM? And they were both like, what is that? And they work and spend a lot of time thinking and talking about AI. And we just talked about all the use cases as a college student and how you can upload your sources.
Brett Roer: And they were blown away, and they have already started making a notebook for each of their courses. So I’m so excited to follow their growth. And I hope students that are listening, I’m sure there are so many, or educators, please encourage your students to do that, because it’s amazing how it can store everything for you.
Brett Roer: It’s like a scrapbook. If you’re kind of disorganized and messy, or your Google Drive’s a mess, you dump it in there for each course and you have everything there, and you can select it when you need it and take it off [00:17:00] when you don’t. So that’s an amazing free tool that I’m glad you highlighted today.
Brett Roer: Thanks, Amanda.
Amanda Bickerstaff: Yeah, we like it. I mean, this is why we keep it simple, so to speak. It’s foundation models and maybe one or two tools that are freely available, because the tools out there today are not particularly evidence-based. They’re not particularly fit for purpose for the education field, even those that are being designed for it.
Amanda Bickerstaff: And so if you find a couple of things that are readily available that have true use cases for you, and that means the applications can be valuable right now, then you kind of can close off the noise. You don’t have to use every tool generator. So for example, Google put out a bunch of stuff, a bunch: they have these tool generators and all this stuff, even a study mode, and they’re okay. They’re okay.
Amanda Bickerstaff: But then NotebookLM is [00:18:00] potentially transformational. There’s a lot of noise, and this is why partnering with, and following, people that you trust in the space, whether it’s us or others like Brett and Rebecca, matters. It’s being able to build and try and experiment, and really be very mercenary about it.
Amanda Bickerstaff: You don’t have to know everything. Our keynote actually has this image, as a joke, of a little girl drinking from a fire hose, and it’s a real little girl, because we’ve never found an AI-generated image that could really capture that. But people relate so much to us saying no one needs to leave any of our trainings as an AI expert or an AI-in-education expert. Instead, leave that to people like us. This is all I am, generative AI and education, all the time. I am boring, everybody.
Amanda Bickerstaff: But leave that to us, to help you navigate. And then you focus on those one or two things, or three or four as [00:19:00] you build capacity. And that’s enough right now, because it allows you to find true impact and meaning, instead of feeling like you’re not spending enough time, you’re going too slow, you don’t know enough. That doesn’t matter.
Amanda Bickerstaff: It’s: where can you find impact today?
Brett Roer: Yeah. You know, Amanda, you say two things there that really stand out. One, I got an opportunity this morning to present to 50 New York City principals, and I shared insights about AI, with a focus on attendance and chronic absenteeism and student engagement, et cetera.
Brett Roer: And I was just sharing some relevant use cases. And afterwards, one of the members of the superintendent’s team said, you know, there’s still so much fear; some of them just won’t start. I said, yeah, but you know, right now, they’re used to having some background or grasp of pedagogical skills.
Brett Roer: This is a new thing that’s been thrown into their world that they really weren’t prepping for. It wasn’t like other things [00:20:00] where, oh, these requirements or standards are coming down the pike. But what you just said is so important, because as educators, it’s really important. We wouldn’t say, try 10 things tomorrow in your classroom.
Brett Roer: Even in the absence of AI, it would be: try one thing. Let’s get really good at one skill or practice, or, you know, theory or philosophy, not try 10 tomorrow, ’cause that’s not gonna work. So I love your idea of finding one thing that you’re really gonna hone in on, or two or three. Just good advice that I think people need, because we know this exists in education: what good theory and policy looks like.
Amanda Bickerstaff: But we also know, if you go to ISTE, or you go to a conference, or you even go into a session, it’s top X tools for Y. It’s five tools for lesson planning. It’s 10 tools for images. That is where I think people feel comfortable talking right now, because of their own discomfort around AI literacy and tool knowledge.
Amanda Bickerstaff: And it’s much easier to throw a bunch of tools at you. And most people don’t [00:21:00] know (a) that the tools they’re using are generative AI, or (b) how they work or why they’re meaningful. And so we know through ed tech over the last 20 years that no one needs more tools. No one. What we need is intention, we need quality, and we need training around the tools we use.
Amanda Bickerstaff: ’Cause even an okay tool with good training has a higher ROI, or return on investment, than the best tool in the world that people don’t know how to use. The barrier to entry is too high. An okay tool with good training and accessibility is gonna be more impactful. And so I think this is one of those times where we’re over-relying on tools, there’s so many, and all that does is create more uncertainty, discomfort, and aversion.
Amanda Bickerstaff: Like, I just don’t have time to figure out these five tools.
Rebecca Bultsma: It just made me think about a bunch of stuff I’m reading lately that talks about how AI is a really good opportunity for educators to sit back in the discomfort and the messiness and remember what that feels like for learners: to be sitting there being like, I don’t understand,
Rebecca Bultsma: I’m overwhelmed, I don’t know what to do next. And remembering that our students are experiencing that every day. To experience what that feels like in this generative AI age reminds us, gives us more empathy. And sometimes, and the research says this over and over, the best learning, the most transformative learning, happens in that messy in-between.
Rebecca Bultsma: And that’s not just for kids, that’s for adults too.
Amanda Bickerstaff: Absolutely. We lean into the idea of failure, of trying things. Our keynote is: learn, experiment, innovate. Learn is about building AI literacy for you and your community, but experiment is, try things out, be willing to fail. Be weird.
Amanda Bickerstaff: It’s weird, everybody. Generative AI is so weird. It is so fascinating and weird and unexpected and silly. I mean, we ask ChatGPT to explain itself as a pirate, as a pony club. The [00:23:00] best one that’s ever happened was, we were in Wisconsin for Atkinson a couple weeks ago, and it did it as cheese, and it nailed a cheese explanation,
Amanda Bickerstaff: everybody. It was probably the best job ChatGPT has ever done at a weird explanation of how it works. But that kind of openness and play and creativity only happens when you create safe spaces where it can exist. Because I think, Rebecca, the difference with young people, especially when we have a really strong culture around inquiry and learning, is that adult PD and learning does not often create a safe space for experimentation, failure, risk taking.
Amanda Bickerstaff: And because we’re so outcome based all the time that like a lot of, it’s actually pretty interesting. We have some new facilitators and. I’ve gotten pretty, like if you know, like I’m pretty much, let’s go in, let’s do it. And we have our, our flagship and maybe like 75% of the people engaged in the first [00:24:00] prompting exercise.
Amanda Bickerstaff: And then there’ll be like 85% in the second prompting exercise. But by the end, about 95% have engaged. And I don’t really, I don’t push people to gauge first. ’cause what I wanna do is have them watch, see, build comfort, find their entry point. But if we can walk out of a PD room across every context where 95% of people are actively engaging what we’re doing, it’s remark again, is remarkable.
Amanda Bickerstaff: But like that doesn’t happen. But it happens because we’re not there going, like, Brett, why aren’t you like Brett open it or no? We’re like, okay, like everyone, we’re trying it out. Watch me. I’m gonna model it. Look at your person. You start hearing people. But I think that that is missing and the ways in which we teach adults and like the, one of our, our real things that we’ve seen work is creating those spaces in every room we walk into.
Brett Roer: That’s crucial. Again, you’re taking really sound pedagogical advice and theory and professional development and making sure it feels that way with a new tool, essentially, but also a mental [00:25:00] model for people. So well done there, Amanda.
Amanda Bickerstaff: I’ve been a teacher, Brett, and I’ve been in really bad PD.
Amanda Bickerstaff: First of all, I also didn’t know I would be doing PD, everybody. I didn’t start AI for Education being like, you know what I wanna do with my life is teach people how to use generative AI. It wasn’t what I was thinking, ’cause I had been an ed tech CEO, and I’m a builder. So when I started, Brett, I couldn’t do it.
Amanda Bickerstaff: I could not create something where people were having a bad time, or the worst kind of PD, where everyone’s like, oh yeah, Amanda, and then I leave and they’re like, that was terrible and I’m never doing anything. And they gave you that pretty face, ’cause we’re educators, we can be pretty nice.
Amanda Bickerstaff: And then it’s like, never again. We just can’t do this this much and not make it work, you know? And it doesn’t mean it works every time, or that we haven’t had to learn the process, but man, it makes it a lot easier to walk into a school room or classroom. Even on the hardest days,
Amanda Bickerstaff: it’ll be the day before Thanksgiving, or it’ll be the first day of school, and that’s our audience, right? And if we couldn’t nail that, I think we’d all be a mess.
Brett Roer: Yeah. And again, I’ve seen both of you really leave audiences just wanting more, staying after, lingering for questions.
Brett Roer: So you know it’s working. Well done. I’d love to get your insights on this; this is something I keep formulating. When people ask me about the work I’m leading, a lot of it is serving districts and doing AI executive leadership coaching, working with Rebecca on policy. And when people ask, oh, do you do other trainings?
Brett Roer: Yes, we do. But I’ve become a really firm believer that AI really has the potential to either reduce inequities and gaps or widen them. And I really find it’s the leaders who are embracing AI and recognizing what they have, leveraging things that a teacher or a student can’t, because of all the technical requirements and components.
Brett Roer: So first I’d love to hear [00:27:00] your thoughts on that, whether you have a different vantage point or the same. And then, how can we ensure it is closing gaps and building relationships in schools instead of eroding them?
Amanda Bickerstaff: Well, I guess my one pushback is that those conversations require an enormous amount of, you guys are gonna be so bored with me.
Amanda Bickerstaff: You should do a drinking game where it’s like, how many times does Amanda say AI literacy? But everyone would be very drunk by the end of it. But I do think, though, that we know
Brett Roer: Kids at home, don’t do that game.
Amanda Bickerstaff: Don’t do that game. I am not, unless it’s like coconut water. But the reason why I bring this up is that, to have that level of nuanced conversation, Brett,
Amanda Bickerstaff: you’ve got to understand what these tools are and aren’t, their accessibility, how they’re designed, what they mean, and then have a real commitment to understanding what is happening. So we have a five-questions-to-ask-students-and-teachers resource, very simple, on our website, where you literally just click on a button.
Amanda Bickerstaff: You don’t wanna [00:28:00] create a survey? One already exists; you click on a button and it creates a force copy. But once you have that AI literacy, you need to genuinely know who is using the tools, how, and why. And until you understand that, the conversations about equity are really the same conversations we tend to have in education, where they are anecdotal at best, meaning that they are not evidence-based.
Amanda Bickerstaff: We are like, and it’s like the loudest people, oh, every kid’s using it because I heard it from this, this teacher. Or we had more submissions that were AI like, like flagged as ai, whatever it may be. But until you take, we have to push past this like. You know, lack of evidence, building and understanding.
Amanda Bickerstaff: And so you build the AI literacy, start understanding what is, start understanding what’s happening within your organization, and then you can start having questions about equity. And when you talk about questions about equity, there are a couple different ways, right? There’s a digital equity divide that already exist.
Amanda Bickerstaff: These tools do require, at this stage, bandwidth for the most part, although you can very [00:29:00] tech savvy adults and young people can download a local, you know, a local version and work without bandwidth. But for the most part, primarily primary use is through consumer apps that require bandwidth. The, the second thing is understanding the equity gap on unlike comfort levels.
Amanda Bickerstaff: And so there has been an example of gender gaps in, in some fields, based on short-cutting, feeling like you’re short-cutting or you don’t have enough information. There are going to be equity gaps around how like, like I would say like good use versus, okay use versus bad use. Okay. To bad use means people stop using it.
Amanda Bickerstaff: Or, but they’re using it. They get caught and like they might get caught where they use a citation and they avoid it forever. So there’s even an equity gap around knowledge. And then the last one is going to be like really down to classroom to classroom school to school. Who is allowing for risk taking AI literacy development access.
Amanda Bickerstaff: Because the thing is, within one school district, [00:30:00] I promise you there is a huge range. Brett, in our course on adoption and policy we have 20 different district leaders and higher education institutions, everywhere from California to New Jersey, and in every room we're in, you have teachers that are
Amanda Bickerstaff: AI-first in classrooms, even without necessarily knowing what that means, all the way to pen-and-paper-first. And students are moving from classroom to classroom, even school to school, where it is acceptable in one space and not in another. And that means there's inequity. Let's say, Rebecca, just by random accident of scheduling you're in the high school track that got all the AI-first people, and then Brett,
Amanda Bickerstaff: you got one AI-first, three never-evers, and one "I don't know." Could you imagine the different tracks you two would be on? That's not equity outside the school; that's literally [00:31:00] classroom to classroom. The reason I'm teasing this out so much is that it's so important to do foundational knowledge-building and understand what's really happening within your school to even be able to acknowledge and identify where the inequities are.
Amanda Bickerstaff: Because it's so easy just to say, well, we'll give everybody access to ChatGPT. That will not solve it. Access does not equal equity. It does not equal literacy. It does not even equal opportunity. We thought giving kids bandwidth would solve the equity gap during COVID,
Amanda Bickerstaff: and we know that did not happen. That's the same lesson we have to take today, and why these kinds of nuanced conversations need to happen.
Brett Roer: I got nothing. Facts. You just took my idea and actually made it what needs to happen, after my one
Brett Roer: brief sentence about it. Everyone, do it, and work with people like Amanda to make sure that's [00:32:00] actually happening at scale.
Rebecca Bultsma: And leave Brett speechless, which never happens.
Brett Roer: I've seen it. And you know, you were a New York City educator in the Bronx, as was I, and I've said to people, the Bronx sometimes breaks up these large schools.
Brett Roer: I’m gonna give a shout out to David Lou, even though he hates attention. You go to the fourth floor of the Stevenson campus in the Bronx, you’re seeing some of the most innovative ways where students have really understood their role in using AI tools in the classroom. You can ask them what they’re making, what tools they’re using, and why they’re doing it.
Brett Roer: And you can go to other parts of that neighborhood and you'll have teachers or leaders who have no idea how to do that. So that's where I'm starting to see it. That was one of my big light bulbs, and you've named it: the amount of training that gentleman's done, and how forward-thinking he is. How do you scale that once you have your early adopters?
Brett Roer: So I think we're saying the same thing, but you said it much better. And now, because you've done such a good job, we're gonna move up one of our favorite segments and flip the script on you, Amanda. We did not read you the questions, 'cause here we go. At the end of season one we decided [00:33:00] we're gonna let things flow and turn over the reins to our guests.
Brett Roer: So for this next question, Amanda Bickerstaff, you are now the host of the AmpED to 11 podcast. You get to ask me and Rebecca one question. What would it be, right now, about AI and/or education?
Amanda Bickerstaff: It's a good question, but first of all, I do not want a podcast. I'm just going to say that very strongly to anybody watching.
Amanda Bickerstaff: I'll do a webinar, people, we have a bunch of webinars coming up, but I don't wanna be a podcast host. I think, since we're coming from different lenses: what do you think is the timeline for when AI literacy and planning adoption roadmaps will become the priority? Meaning, right now maybe 15 to 20, 25 percent of districts are starting to move to organization-wide approaches.
Amanda Bickerstaff: We are not yet at the [00:34:00] apex where most people feel pressured to do this. What year do you think it will become as big a priority as the science of reading?
Rebecca Bultsma: I work a lot in the private sector as well, and so my answer to this is always: when there are the right carrots and sticks, right?
Rebecca Bultsma: As soon as accountability mechanisms get built, or the carrots of, hey, schools who do this get this much funding or an extra teacher. There have to be incentives. When will that come? I think it's coming. We're seeing a lot of, like,
Amanda Bickerstaff: I need a date. I'm gonna be the mean podcast host.
Rebecca Bultsma: I am going to say piloting by next fall for sure, with funding, and mainstream by 2030. Sounds skeptical, but I think formalized by 2030 in K-12.
Brett Roer: Yeah, great question. First of all, for a first-time host, you really knocked it out of the park. So the first [00:35:00] thing is, there are states that are moving in this direction.
Brett Roer: As Rebecca said, you need carrots, you need sticks. Ohio has now adopted a requirement that every district should have an AI policy in place by the end of this year. So there's a deadline and a benchmark; if that's done well in Ohio, I think it's gonna scale much quicker. As we just brought up: does funding follow it?
Brett Roer: How much guidance are you given? How individualized can you actually make it, or is it gonna be somewhat of a template and boilerplate? And I don't know any of these things about Ohio; I'm just naming how you roll out initiatives with these really audacious goals. Super important. Is it a priority in Ohio?
Brett Roer: Yes. How do they now get to where it's really meaningful, where it allows leaders and districts to drive that work in their community? That's my biggest fear. And I still haven't given you a year. I was gonna say about 2030 is when I feel like most states will have said you have to have X, Y, and Z in place, hopefully earlier than that.
Brett Roer: And then that will allow, by [00:36:00] approximately 2030, for schools to feel like, okay, we have enough guidance, hopefully there's been funding allocated, and there's enough good PD and training where people can actually move forward with it. So I'm gonna go with 2030 as well.
Amanda Bickerstaff: I will say, the reason I ask is that it's a thing I have been so wrong about this whole time.
Amanda Bickerstaff: I am way too optimistic around timelines, and I have stopped giving an answer. Although we did do a futurist piece that we're hoping to have out in October, around the possible futures of AI in education, similar to the AI 2027 piece. You heard it here first, everybody; we just had a meeting about it. Apparently
Amanda Bickerstaff: you need to be a creative writer to write something like this, so we are taking longer than we thought. But the thing about Ohio, and Tennessee actually, both of which have this kind of mandate, is: how many of those actions will be simple compliance actions?
Amanda Bickerstaff: How many will adopt a [00:37:00] very simple set of policies that are, if anything, going to be checkbox exercises, versus those that will do the work? The fact that it's policy over guidelines is a pretty strong signal in Ohio that this is not going to be deep transformational work, because policies never are. Policies are the stick; policies are compliance; policies are punitive and protective.
Amanda Bickerstaff: And so I do wonder about something like California focusing instead on AI literacy as a requirement, which is more about knowledge-building, mindsets, and actions. The tipping point for me is if more states and organizations get behind that. But I will tell you, we work a lot in this space and we have not seen significant movement on funding for AI literacy work.
Amanda Bickerstaff: We just have not, whether it's from foundations or from federal and [00:38:00] state-level grants. It is happening, but not at any level comparable to something like the science of reading, where enormous parts of budgets are freed up to do the work. Until that happens, I don't think we will have really hit that moment.
Rebecca Bultsma: And I don't know if it'll benefit education, but OpenAI just announced, I think, a $50 million grant on their website for nonprofits. It's a drop in the bucket, really, and it's not specifically geared at education, and I don't think many people know about it. But you hope there are sparks, things that get earmarked for both.
Rebecca Bultsma: 'Cause I think we need both, the policy too, right? You almost need those guardrails in place so that people feel safe, like you mentioned, to start experimenting, and so they understand the don'ts and where the boundaries are, because there are some very real safety concerns with these tools.
Rebecca Bultsma: So those policies need to be there, maybe not standalone, but you're right, we need both, the literacy and the policy. For sure.
Amanda Bickerstaff: If you think about the more [00:39:00] global piece: you had the moratorium on state-level AI regulation that nearly went through Congress; it was even tacked onto the budget.
Amanda Bickerstaff: It was compromised down to five years, and then it got removed; we were part of the coalition that pushed back. You now have the act in California that we spoke about earlier, around safety and protocols. That is going to reinvigorate the conversation in Congress about moratoriums on AI regulation at the state level.
Amanda Bickerstaff: So we're seeing this play out in real time, where education is a battleground. And things like the executive order, even the presidential challenge, are not supported by actual funding, by actual work. Even the commitments by Code.org and others to train people, it's still not the work that we know matters.
Amanda Bickerstaff: Let's talk about it, Rebecca and Brett; I'm now the podcast host by [00:40:00] accident. But we know from everything that, no matter what happens, if you are not funding work at the district level, or at the ESCs and BOCES, the actual organizing bodies, change doesn't happen. Change does not happen at the federal level or the state level,
Amanda Bickerstaff: really. It happens at the individual district; that is the microcosm in the US. That's the place, that's the quantum. It's different in other places, but in the US, because of the fragmentation of the ways school systems work, it really is that level. And since there's so little at that level right now, there's still a huge disconnect to action.
Rebecca Bultsma: I have a question for you, Amanda. This is off script, but I think a lot about people who are just sitting around waiting for their schools and their districts and their jobs to train them on AI, acting helpless, like they have no control over this. What would you say to those people who are waiting, who say they [00:41:00] can't understand AI until they get funding from their district to do AI literacy?
Rebecca Bultsma: What responsibility do we have on an individual level, versus what the school can give us that we can't gain on our own, with the number of free resources available today?
Amanda Bickerstaff: I'm gonna take this somewhere slightly less deficit-mindset. I think the reason more people aren't doing this on their own is the noise.
Amanda Bickerstaff: It's a hype cycle, it's gonna go away. We have trained teachers not to believe us. We have trained teachers to expect science of reading today and something else tomorrow. We have created a system already where, if you told educators to do it on their own, they'd ask: is it really gonna be here? Do I need to put in the time? There's all this noise around it.
Amanda Bickerstaff: So I think that's one part. The other part is, it is very noisy, Rebecca. There are adults avoiding it because it feels like cheating. We were in a district where half of the district leaders were like, it feels like cheating and I've [00:42:00] avoided it.
Amanda Bickerstaff: And you're like, you are the leaders. You are the ones that need to be the sponsors. So my take is that until the noise abates some, until we build better systems within schools, it's very hard for individual teachers to really know this is important. I do think, though, to your point on the other end, there are lots of free resources out there, but I'm gonna tell you right now, there's a lot of crappy free stuff out there too.
Amanda Bickerstaff: There's no quality bar. We have a free course; it's had very good results, but who even knows it's there, right? Because we're not being funded by Google or OpenAI to do this work. But the complexity of the moment, the speed at which it's happening, the noise and the fear, and how badly we've trained teachers not to trust us around new technology: it's a perfect storm where individual agency feels [00:43:00] not possible, or not for me, or I don't know how to get started.
Amanda Bickerstaff: That is really contributing to a lack of individual pushes toward AI literacy.
Brett Roer: Yeah. What you mentioned, I think, is the example that has frustrated me. I recently read somewhere, probably on LinkedIn or in a group text, this idea of teacher complacency.
Brett Roer: And I was a strong advocate against it. This summer I was at a birthday party for my daughter, and two of the parents were teachers. They're like, yeah, I took an AI training with this specific tool. They're getting their CTLE credits; this is all what you're supposed to do, and that's great.
Brett Roer: And they're asking me questions, and I was like, well, does your school have an AI policy? Does your school have a contract with that organization? They're like, we don't know. So I was watching them be very hopeful, and then realizing the first day of school they're gonna come back to their administrators,
Brett Roer: and if there's not a plan, a policy, a partnership, is it approved, is it safe? People are seeking this out and they're not being [00:44:00] rewarded for it. And that's where I keep coming back to: it's being safeguarded or blocked by leaders who still think it's cheating, those mindsets. So that's one piece of who knows where to start and how to overcome it.
Brett Roer: But the way you're approaching it, and many others doing this work the right way: it's obviously multi-tiered, but at the end of the day you do need everyone moving in the same positive direction.
Amanda Bickerstaff: Yeah. It's a change management process. It is something that is going to take time; it's gonna take failure;
Amanda Bickerstaff: it's trying things out. But one of the reasons we've changed our mission, our mission is no longer AI literacy for a million educators, it's educators and students. To be honest, we didn't think we needed to, because, Brett, you've worked with aiEDU and others that have been focused on student AI literacy for a long time.
Amanda Bickerstaff: We're a small organization, but what we've seen is that there is not a commitment to AI literacy for young people in schools. And I say "at all" not to paint with too broad a brush, but [00:45:00] there are very, very few organizations putting actual time, money, and effort behind student AI literacy today.
Amanda Bickerstaff: Especially at the quantum of change we talked about at the district level. And the way we're trying to do it is how we did it with teacher AI literacy: I wrote a free course in August of 2023 and we put it into the world when, honestly, people knew who we were by then, but not nearly as much as now.
Amanda Bickerstaff: So we're hopeful we'll put something out there much faster; there's not a lot of high-quality, certificate-based, safety-ethics-and-effectiveness-framed material. We might not get to a million students, but what we can do, if we show success, is put pressure on other organizations and decision makers, because kids are finding this on their own.
Amanda Bickerstaff: What's fascinating is that when kids do our trainings, we do pretty much the same training for kids as for [00:46:00] adults, they are just desperate for this information, in a way that I don't think has ever really happened before in terms of how quickly this became a priority. And when you start opening things up, kids will be like, I know this.
Amanda Bickerstaff: And then you tell them one thing and they're like, what? And then it's this cascade. But we're hopeful that if we can help other organizations doing great work, put these things into the world and show that they work, then organizations with a much bigger reach will do the same.
Amanda Bickerstaff: 'Cause every single large tech company building models is making models for young people, and yet none of them are really thinking about the impact on young people in terms of creating spaces in which they actually learn how to use these tools, instead of just being handed the tool. Nothing is going to change. But if we can get everyone,
Amanda Bickerstaff: to your point, Brett, rowing in the same direction, I think we could do this pretty quickly. AI literacy doesn't take a decade. We can prove to you that the foundation takes 90 minutes to [00:47:00] half a day, 90 minutes to three hours. Give us that time. It won't be perfect,
Amanda Bickerstaff: but foundational AI literacy, we can do.
Brett Roer: What you just said, regarding, we can't get it perfect, you need a little bit of time. But with students, you get them hooked a little, right? You find the one thing that turns them onto it. I call it the zero-to-one on a scale to a hundred.
Brett Roer: All you gotta get 'em to is one. What are some things you could tell our audience, whether they're leaders or teachers or in the ed tech space? What are some of those cool things you have done where you've seen that light bulb?
Amanda Bickerstaff: I would like to show something, and if you're listening to this, maybe you can watch the clip later.
Amanda Bickerstaff: So I'm just gonna pull up one of the things we do, and while I do, I'll talk about one that was really great. I'm such a nerd, like, let me show you a thing we did with young people. [00:48:00] One of the activities we do is around source verification, or hallucinations.
Amanda Bickerstaff: A great activity for foundational AI literacy around research is to ask kids to use something like Perplexity, which is a generative search tool, and then color-code the sources red, yellow, and green. Red is, it's bad: it's Reddit, for example. I don't go to Reddit for my research.
Amanda Bickerstaff: Yellow is, I'm not sure, I'm gonna dig more. And green is, I totally believe this, straight from a generative search output. The kids get to do that. We were talking about this in a session of our instruction and assessment course,
Amanda Bickerstaff: and one of the teachers had just done it about six weeks earlier. What happened is every single source came out yellow, and the kids were like, oh my god, I don't know at all, and they had to do extra research. But this was a [00:49:00] seminal moment of learning, because four weeks later they're like, oh, remember, you have to double-check.
Amanda Bickerstaff: Remember that red-light-green-light, we didn't know anything. That is going to sit with those kids literally forever. We love those kinds of things, and it's so great to see, 'cause this is an organization we've worked with a bunch. But that one's really great. Now I'll show you the one we do around bias; I'm gonna share my screen.
Amanda Bickerstaff: If you've been in our keynotes, in every single flagship we do a piece around limitations. We talk about limited knowledge bases, hallucinations or inaccuracies, and then bias in training data. So we ask everyone, okay, we're going to an image generator and asking for an image of a conventional CEO leading a board meeting, and we ask what you think that looks like. Everyone will be like, oh, white male, big table, graph.
Amanda Bickerstaff: And this is our image, a real image from ChatGPT-4. You'll see the boardroom, a slightly younger [00:50:00] male than expected. I will say there's body-type bias in this, but also, Rebecca, I do like to wear my highest stilettos when I go to board meetings.
Rebecca Bultsma: Absolutely.
Amanda Bickerstaff: Footwear. Two of the figures in the back are not even men and women;
Amanda Bickerstaff: it's just men. So you see there's all this bias built in. We used to do this live, Brett, when we first started, and we got some random stuff, but this one was done by Corey. Same thing, same chat, same person: an unconventional CEO leading a board meeting.
Amanda Bickerstaff: And so you think, okay, maybe it's a minority, a woman. It turns out it's that same guy, now ten feet tall, wearing Austin Powers pants, standing over a woman. There is at least more diversity in the room, ethnic diversity, not body diversity. And there are beanbag chairs, everybody, 'cause everyone wants a beanbag chair and it's fun.
Amanda Bickerstaff: But the reason I bring this up, Brett, to your point: we were doing guidelines and policy with Wilmette, a K-8 district outside of Chicago, and the eighth graders were in the [00:51:00] room. We show them that, we give them time for tool exploration with Canva, and one of the eighth-grade girls goes, Amanda, I have to show you something.
Amanda Bickerstaff: She says, I tried to create an unsexist CEO picture, and this is what I got. So literally, if you're not watching this: it is a white male CEO with two women behind him looking at him, and at the top it says "super unsexist CEO," literally written out. And then there are some hallucinated scrawls about feminism and equality, while the image itself is so deeply not that. Could you imagine, Brett and Rebecca, just the sheer amount of learning in that moment?
Amanda Bickerstaff: Number one: it's very hard to get around bias in image generators, and I don't actually have that much control. Number two: image generators are very, very biased, like all AI systems. But also think about the critical thinking she did, 'cause she tried all these different things. That one was hysterical.
Amanda Bickerstaff: I cannot [00:52:00] believe it, but this is what happened; welcome to fascinating moments in this work. That young person is never going to forget this lesson. When someone asks her to use AI, she has a foundational moment, an anchor point: she will never trust AI blindly, ever. And if you talk about ethics, Rebecca, the most important thing about AI and ethics is that you do not trust these tools blindly, ever.
Amanda Bickerstaff: You never take the first output. You never skip critically evaluating. You can never, ever, at this stage, trust an AI output fully. There's literally an AI safety paper that came out this week with multiple citations that were made up by AI, and it was a paper about AI safety.
Rebecca Bultsma: I actually just read a research report that OpenAI put out, I don't know [00:53:00] if you saw it, last week. It talks about why these models hallucinate: when they were trained, they were scored in a way where a wrong answer gets zero,
Rebecca Bultsma: but a guess has a chance at a point. And so the model is always saying, well, maybe there's a chance; I'm just always gonna guess.
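The grading incentive Rebecca describes can be sketched as a tiny expected-value calculation. This is an illustrative sketch, not code or numbers from the OpenAI paper: under 0/1 grading, a wrong answer and "I don't know" both score zero, so any nonzero chance of a lucky guess makes guessing the better strategy than abstaining.

```python
# Illustrative sketch of the incentive problem behind hallucinations:
# binary grading gives 1 point for a correct answer and 0 for both a
# wrong answer and "I don't know", so guessing always dominates.

def expected_score(p_correct: float, abstains: bool) -> float:
    """Expected points per question under 0/1 grading."""
    if abstains:
        return 0.0       # saying "I don't know" earns nothing
    return p_correct     # a guess is right with probability p_correct

# Even a 10%-confident guess beats honestly abstaining:
p = 0.10
print(expected_score(p, abstains=False))  # 0.1
print(expected_score(p, abstains=True))   # 0.0
```

A scoring rule that penalized confident wrong answers more than abstentions would flip this incentive, which is the fix the paper's argument points toward.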
Amanda Bickerstaff: Well, they're always hallucinating; models always approximate and predict. We call it hallucination; it's inaccuracy. But these tools, and this is something we've known for a very long time: if you ever get asked which output you like better, that's you training the system.
Amanda Bickerstaff: It doesn't ask you if it's accurate; it asks you if it's pleasing. And what ChatGPT doesn't tell you is that its system prompt says, don't say "I don't know."
Rebecca Bultsma: And optimize for engagement.
Amanda Bickerstaff: Yep, they have trained the system, but they casually don't mention that their system prompts also lead to hallucinations.
Amanda Bickerstaff: 'Cause every generative AI model has a system prompt telling it what to do: be polite, answer fully, don't say "I don't know." But you know what [00:54:00] OpenAI, like any of these big tech companies, is gonna share with you in their research:
Rebecca Bultsma: what they want,
Amanda Bickerstaff: the things they probably already know, and in their best possible light.
Rebecca Bultsma: Yeah, well, we could probably talk all day; I feel like I could do this forever. So if you were to distill everything you know, which I know is a lot, down to one key piece of advice: instead of thinking big picture and all the problems we've been talking about, what's the next right thing people should do after they listen to this podcast?
Amanda Bickerstaff: I think the first thing is, it's okay wherever you are. It does not matter if you are a super user or just getting started; there's time for you to get started, to keep moving, to learn more. It's not too late, and it is important. This is not a hype cycle; this thing is gonna last.
Amanda Bickerstaff: You are living through an inflection [00:55:00] point, one that is already changing the ways we communicate, learn, show learning, work, and even our social-emotional and companionship-type opportunities. So don't wait, but it's not too late. The second thing is building capacity and expertise, and, for those of you taking a shot of coconut water, 'cause we are a safe space for all things: AI literacy.
Amanda Bickerstaff: Start to learn how to use these tools in safe, ethical, and effective ways. Don't just learn about AI; learn with AI. Be willing to ask for help, to look for resources, to create working groups, whatever you need, and ask for support from organizations like ours or AmpED or others. It is such an important moment in time, but you do need to start.
Amanda Bickerstaff: We have been very open about that. But the next year to two years is going to be when things speed up. [00:56:00] Right now you're just seeing AI overviews that might be annoying, or that you trust too much on Google, or this starting to infuse into classrooms, but two to three years max is when we're gonna start to see real disruption in work that will require us as educators to make a change.
Brett Roer: Can you say that again for everyone? That was a great quote: "It's not too late, but don't wait." That might be the slogan of AI for the 2025-26 school year.
Amanda Bickerstaff: I think it has to be, right? Brett, we talk to people and we're like, you're not too late,
Amanda Bickerstaff: you're not too late. But this year our rhetoric has changed: you aren't too late, but you will be if you hold off much longer. This school year has got to be the year where we start really understanding the changes that are going to be necessary, and start building the foundational AI literacy to innovate, to take risks, to change.
Amanda Bickerstaff: 'Cause we're going to have to.
Brett Roer: Amanda, we are gonna close out today's podcast with our signature question. I know you [00:57:00] listen to every episode, so you know it's coming, right? You know the name of the podcast, AmpED to 11, so we have some 11-themed questions, and one of them is: we've all seen the hit movie Ocean's 11.
Brett Roer: You assemble a dream team to complete the impossible. We've talked about how we all gotta row in the same direction if we're gonna make AI literacy and AI policy impactful in education. So you're assembling a dream team to help you tackle the AI challenges in education. You can take a breather; you can think this through.
Brett Roer: You don't have to come up with 11 people, but who are some of the people and organizations, in education, ed tech, nonprofit, for-profit, on your dream team, helping you tackle the biggest challenges in education that AI is trying to address?
Amanda Bickerstaff: students and not just for computer science students, like students.
Amanda Bickerstaff: I would include parents and community, especially those that are, are trying [00:58:00] to support their young people. So thinking about equity is really important. I would want an education researcher that is going to be. Practical. So praxis over, uh, over the very, like ivory tower approaches. If you really want change, um, OpenAI, Google and Anthropic need to come to the table and really talk and like not just build things for young people, but actually understand what this means.
Amanda Bickerstaff: Leadership that has the ability to make decisions and put money behind things. And to that point, leadership could be at the state or federal level. Foundations need to get off the sidelines around this work in more meaningful ways. A lot of what’s happening right now is for some research, but also for, like, more tools.
Amanda Bickerstaff: I’m not interested in foundations building more tools, although I think that’s important. I’m more interested in foundations helping to fund, in this budget-deficit world, the real work of transformation happening at [00:59:00] schools. Um, I think industry, like local industry, is incredibly important.
Amanda Bickerstaff: Whether you are in a tech corridor or you’re in a very agrarian part of the world, whatever it may be, having local industry come in and actually talk about what the changes will be at their level is really important. And I think the last thing is that we have to have practitioners.
Amanda Bickerstaff: None of this will make sense if you don’t have teachers at the table, and I’d end there. That’s actually Ocean’s Eight, or maybe Nine. But I end there because so much of what we do in education carries, to your point, Rebecca, the mindset of, why aren’t teachers just doing this on their own?
Amanda Bickerstaff: So much of it is, teachers should figure this out, but they can’t. We’ve already given them so much, and the structures themselves have to change so significantly. In fact, I’ll add one more: testing companies. Testing companies and higher education institutions that are not allowing for systemic change because of [01:00:00] entrenchment.
Amanda Bickerstaff: Two of the largest nonprofits in the world in education are testing companies, the College Board and ETS. But okay, broadsides away. The idea is that we can’t expect teachers to do this on their own. We need to set them up for success. They are so integral to this, but we can’t let this live and die on a teacher in one classroom figuring this out.
Amanda Bickerstaff: It has to be a whole-scale approach. It requires everyone involved and engaged, and people have to make decisions that lead to friction, or lost revenue at the start, or money being put in, or letting change happen. Maybe it’s time for big systems to fall apart, because you know what?
Amanda Bickerstaff: These big systems have not been supporting young people for a long time, much longer than generative AI has exposed.
Brett Roer: Whew. That went a direction it has not gone before, but really insightful and powerful. I think you made a lot of friends and a lot of enemies in that one, Amanda, as usual. [01:01:00]
Amanda Bickerstaff: As usual.
Amanda Bickerstaff: I mean, we’re definitely not a group that doesn’t push people, but I think it’s important, right? And it’s something where we’re lucky. One of the reasons why we don’t build technology and we don’t really take outside money is because it allows us to hopefully be a space of reflection and pushing, and no change can happen unless change is allowed to happen.
Amanda Bickerstaff: The cracks were already there. AI did not start the cracks. But what it has done is push over the walls that were already cracked, and this is something that we can no longer ignore: that school does not work for most kids, and it didn’t work for most kids for as long as I have been a person, you know, as long as I’ve been an educator.
Amanda Bickerstaff: But this could be the moment where we can make real change, and I believe that. And that’s why AI for Education exists. It doesn’t exist to be an AI literacy company. [01:02:00] It exists to help us build enough knowledge and resources and skills to be able to change the system.
Amanda Bickerstaff: And I think that’s not what we’re doing right now, really. That’s where we want to be in a couple of years.
Brett Roer: As Rebecca said earlier, we could talk about this forever with you. I hope this is the first of many podcasts you join, and that you keep giving us your insights, ’cause the future of AI is gonna keep progressing and you’re gonna keep being one of the leaders at the forefront.
Brett Roer: So thank you so much for joining us today. I wanna thank our listeners; keep reaching out and learning about AI and innovation from amazing people like Amanda on the AmpED to 11 podcast. Have a wonderful day.