Erin Mote

March 27, 2026

AI Literacy Is About Critical Thinking, Not Tool Proficiency

Erin Mote, founder of National AI Literacy Day and CEO of InnovateEDU, discusses community AI playbooks, shared responsibility, and how dialect differences affect AI grading.

What if the data that matters most isn’t grades or test scores, but the messy, beautiful process of how students actually learn?

Erin Mote, CEO of InnovateEDU, joins Brett and Rebecca for a timely conversation on National AI Literacy Day 2026. Erin founded the EdSafe AI Alliance and leads a network of educational partnerships touching thousands of school districts nationwide. Her background spans enterprise architecture, personalized learning platforms, and global education technology initiatives.

This episode tackles one of education’s most pressing questions: who owns the learning process data that AI systems are quietly collecting? Erin introduces the “ground lease on the family farm” metaphor—describing how foundational models are capturing the intellectual property of teachers and students to fuel AGI development. The conversation moves from policy to practice, exploring the White House’s new AI framework (released the day of recording), EdTech accountability gaps where safety features become paid add-ons, and emerging research on AI bias in grading. Punya Mishra’s work at ASU reveals how student dialect and cultural references can lower AI-assigned scores, raising urgent questions about fairness and trust.

What You’ll Learn:

  • AI literacy as discernment — Why the Blueprint for AI Literacy focuses on critical thinking grounded in the science of learning and development, not just tool proficiency
  • Learning process data vs. PII — How students think, correct mistakes, and sequence productive struggle—and why this data is foundational for AGI
  • EdTech accountability tensions — The pattern of features pushed open by default while safety becomes a paid upgrade, and what shared responsibility really means
  • AI bias in grading — Research showing how dialect differences like “y’all” and cultural preferences (rap vs. classical music) affect AI scoring
  • National AI Literacy Day evolution — From its founding three years ago to 140+ supporting organizations in 2026, with statewide events, year-round curriculum, and student town halls

The episode features two rounds of The Rithm Project’s AI Effect game, exploring AI-generated apologies and the ethics of using AI to grade 140 essays, plus Ocean’s 11 recommendations from Erin’s dream team of education innovators.

Brett and Rebecca bring fresh perspectives from recent work: Brett shares insights from presenting with Utah’s Matt Winters on six panels exploring humanity in AI policy, while Rebecca reflects on governance research at Edinburgh Futures Institute as she completes her master’s in AI ethics.

Tune in, subscribe, and share if you’re ready to turn up the volume on what’s possible in education.

Erin Mote: [00:00:00] Who has access to these tools? Who has opportunity to these tools? And as kids get older, are we using them not just to help them understand what they know, but what can they create with what they know?

Rebecca Bultsma: I think what bothers me the most is companies who have safety features that are add-ons and paid features.

Rebecca Bultsma: It makes me crazy like the fact that if you want to try and protect students, you have to pay extra for that.

Erin Mote: Between Congress and the White House here, between states, the White House and Congress, we’re gonna have to see how this shakes out. But the thing I am really delighted about is how child privacy and data protection is table stakes in this conversation.

Brett Roer: We are live. Welcome everyone to the AmpED to 11 podcast. We are joined today by the incredible Erin Mote from InnovateEDU. How are you doing today, Erin?

Erin Mote: I’m great. How are you guys? [00:01:00]

Rebecca Bultsma: Living the dream.

Brett Roer: Before we dig in: Rebecca, my amazing cohost, is obviously joining us today. Rebecca, what is one thing you need our listeners to know about what’s going on in your world of AI?

Rebecca Bultsma: Oh, there’s always so many interesting things. I just got back from the UK, where I spent a couple of weeks, and there’s great work happening all over the place. Interesting research. I was at the Edinburgh Futures Institute, and on the nerdy side of things there’s lots of great work happening in the realm of worldwide governance, and interesting papers being written that nobody who’s listening to this podcast cares about.

Rebecca Bultsma: But there is good work being done, and I’m excited to talk to Erin about the work she’s doing a little closer to home today.

Brett Roer: Amazing way to really excite the crowd. So we’re recording this on Friday, March 20th. That is important because this is going to be our special AI Literacy Day episode release, and Erin is one of the pioneers and leaders of AI Literacy Day here in the United States and beyond.

Brett Roer: So, [00:02:00] Erin, without further ado, I’d love it if you can explain to our millions of listeners around the world who you are, what you do, and how you got where you are today. Take it away, Erin Mote.

Erin Mote: Hi, everybody. Well, that’s a lot of questions all in one. So I’m Erin Mote. I’m the CEO of InnovateEDU.

Erin Mote: And I’d like to say we’re a house of brands, not a branded house, because at InnovateEDU we house a number of large-scale multi-stakeholder alliances. That means we touch two in three school districts in the United States through those alliances, through partners who work side by side on a set of issues that’s really about radical system transformation.

Erin Mote: So whether that’s Project Unicorn, our oldest alliance, and also, let’s be clear, the best named, which focuses on data interoperability and how we have safe, secure, interoperable data in K-12 education and data workforce ecosystems, to the [00:03:00] EdSafe AI Alliance, which is our newest alliance, founded in 2020, really focused on the safe, accountable, fair, transparent, and efficacious use of AI in education.

Erin Mote: And we have some other alliances that are focused on what I would consider more practice-based work. So, Brett, you know well the Educating All Learners Alliance, really this uncommon alliance of over 175 partners that are focused on new models and ways to come together to serve students with disabilities across the ecosystem.

Erin Mote: From college transitions to K-12 service, to work with families, communities, and partners. Or our Partnership for Student Success, which is over 250 organizations all thinking about everything from academic tutoring to post-secondary transition coaching, and really thinking about the intersection of policy, technology, and practice, and how do we have folks row together [00:04:00] to think through systemic transformation in the K-12 ecosystem.

Erin Mote: So, lots of different topics that our incredible team at InnovateEDU touches. I have the most incredible team of mission-oriented, really thoughtful advocates for change, who think about how we put our partners at the forefront in driving these conversations. How do we center the expertise, wisdom, and knowledge of practitioners from the field?

Erin Mote: And most of all, how do we create sort of timely, relevant tools, some of which utilize AI, to really guide the field towards evidence-based, research-backed practices. So, many, many hands in this work, which I think is always the most important thing to say: I’m just one person as part of an incredible set of partners,

Erin Mote: system leaders, and really my team, who [00:05:00] is driving this work forward day to day. So it’s hard to keep up with where they’re at sometimes, all over the United States and globally, as they really drive this change.

Brett Roer: That was a very succinct, well-done version. Oftentimes, if people interact with Erin Mote or all those organizations, they might have seen only bits and pieces of it.

Brett Roer: And thank you for explaining how they all come together and support this incredible work. As we said, we are dropping this on National AI Literacy Day. I’d love it if you can let our listeners at home know the origin of this, the trajectory and the roadmap, and what are some things that maybe today they should know about that might have been different from one year ago for National AI Literacy Day.

Erin Mote: So National AI Literacy Day was created three years ago. Even though it feels like AI literacy is the topic du jour, this is something that’s existed for three years and has been an annual nationwide day of action for three years, really focused on building [00:06:00] the capacity of educators, communities, parents, and even students to answer foundational questions about AI.

Erin Mote: What is it, and how should we think about this technology in our communities?

Erin Mote: It’s about discernment and critical consumerism, but also the ability to have really important conversations around skills and capacity, and to begin to dive into what this technology does. It’s important for us to name that when we started this work, it was really driven by superintendents and leaders in the field who were focused on how do they start

Erin Mote: to have conversations about AI in their communities and with their educators, and where could they go for professional development at all levels, to really think through: how do I take this on? And so we started with [00:07:00] Common Sense Media, aiEDU, the Tech Interactive, and AI for Education, which was one of our original partners in this work.

Erin Mote: So I wanna give Amanda Bickerstaff and her team a shout-out there, really helping us originally kind of conceive this day and what it would mean to have a rally point around AI literacy. And how it’s evolved over three years is, I think, it’s really become a movement for AI literacy. The website itself, and how we think about the day, has really evolved and changed.

Erin Mote: So we work in partnership with Hour of AI. We work in partnership with Day of AI. We work in partnership with All4Ed and the work that they’re doing on Digital Learning Day to blend digital learning and media literacy with AI literacy. And so even when you go to the website now, the materials are year-round.

Erin Mote: Curriculum is year-round. There’s curriculum that has nothing to do with technology; you could be doing an AI literacy [00:08:00] activity completely analog. But I think what’s stayed consistent is this idea of bridging conversations in communities, with parents and students and educators, around this work, and this idea that this is not something that a single district or state can solve by themselves.

Erin Mote: That we need to come together and really build this national movement. And so I’m really excited to see the growth in supporting partners. As I’m talking to you, we have over 140 supporting organizations who are doing events and activities and curriculum on that day. We have statewide events happening in Florida, North Carolina, Colorado, and Ohio that are really focused on how they can be in service to districts, communities, and families.

Erin Mote: We have a student town hall, which might be my favorite part of National AI Literacy Day, when we talk directly to kids about their perceptions of AI, how they’re thinking about AI, and how they’re using AI. And I think it’s really a day [00:09:00] that we try to use to call in conversations, not around tech enthusiasm or AI acceleration, but really a balanced set of conversations about the promise and peril of this technology today.

Brett Roer: Amazing. I’m gonna turn it over to Rebecca in just a moment, because a lot of things have changed even in the past week on a national scale here in the United States when it comes to AI literacy and frameworks. One thing I just wanted to share about all these events: where can folks go, right? This is gonna drop the morning of National AI Literacy Day.

Brett Roer: Where’s the website where they can go and find all of these and engage with them on that day?

Erin Mote: Absolutely, it’s ailiteracyday.org. You can find a map there to find an event in your local community. There are also events happening throughout the weekend and the week before, so you’ll also see recordings of webinars from the build-up the week before, including, I think, a really important webinar across K-12 and workforce with the Department of [00:10:00] Labor and some great folks who are doing work on the ground, like Nishant Shah in Maryland and Matt Winters in Utah, who are really trying to bridge that gap between federal and state work.

Erin Mote: And one of the things that I think is really important about how we’re thinking about National AI Literacy Day moving forward is that all of the curriculum, all of the PD, all the live sessions that happen that day will be recorded. And so folks will have access to gain insights anytime, anywhere over the next year.

Erin Mote: And we’ll keep adding to that. An educator who has time for a 15-minute lesson on National AI Literacy Day? That’s awesome. And then come back for a 45-minute lesson a month later, and really begin to build our fluency with AI, again in a way that’s developmentally appropriate by grade band and grade level.

Brett Roer: Perfect. Thank you for giving that context. I wanna shout out: we are fortunate, Amplify and Elevate Innovation, in partnership [00:11:00] with the Rithm Project, we are doing a one-hour event as a partner of National AI Literacy Day, and we’re gonna be playing their amazing game, the AI Effect, which is a free engagement tool, and their Sparks Toolkit is completely free.

Brett Roer: So for those that are out there, we’re doing that, and you can find it on the website to register and join us. And you named another amazing partner in this, Matt Winters. Last week I got to present on six panels with him, including the state of national artificial intelligence.

Brett Roer: And so I wanna just give a shout-out to the state of Utah, which I’d never been to before and which I’m in love with. But I wanna share one quick anecdote, and then I would love for you to talk about the national framework with Rebecca, who’s much better equipped to talk about the implications of it.

Brett Roer: Their state superintendent gave the day-two keynote, and she’s talking about a lot of big initiatives and great things. But she closed, and I’m gonna paraphrase her, she’s like, you know, hydrogen is the most common element in the ecosystem, I’m gonna say something like that. And she said, but in education, the most important H is [00:12:00] humanity.

Brett Roer: And she gave this amazing closing speech about the importance of humanity in the world of AI and education, which is obviously something that resonates with all of us. And she said this line that was so powerful, there were about a thousand people, and I just started clapping and no one else did.

Brett Roer: And I went up afterwards and said, like, that was me awkwardly clapping. But it was great. The reason I wanna shout her out is, before I said that to her, she was talking to a teacher. And she’s on a pedestal, literally, like, on the stage. But she got down on one knee and was eye to eye, looking this person in the eye.

Brett Roer: And I was just like, that’s awesome. And then the same with me: she sits down on the stage so she could be eye to eye with me and have, like, a deep conversation, even though she’s in the heart of legislative season. And she’s very self-deprecating; she’s like, that wasn’t even a good speech.

Brett Roer: So I just wanna say again, that is the epitome to me of humanity in the age of AI: a national or statewide leader just getting eye to eye with people and learning what’s going on in their communities. So a big shout-out to Utah. I’m gonna turn it over to Rebecca to discuss this amazing national [00:13:00] framework and the implications of it at the national level on AI.

Rebecca Bultsma: And I do wanna talk about that. I just have a quick question for you first. It’s something I’ve been thinking about, and it has to do with something you actually already brought up: how you talked about grade-appropriate bands and ways to be looking at this. And I think about that a lot, because I feel like at some point it morphs from general digital literacy, for the younger kids, into more AI literacy.

Rebecca Bultsma: Tell me a little bit about where you see that overlap. Do you see it as an overlap, or are we straight AI literacy now instead of digital literacy? And when do we start talking to kids about AI specifically?

Erin Mote: Well, we released a Blueprint for AI Literacy that took this on from a grade-level-band perspective, and I’d encourage people to dig into that and take some time to read it.

Erin Mote: We were really excited to anchor that paper in the science of learning and development, which I think really helps inform this question, Rebecca, of what is the [00:14:00] developmentally appropriate way to engage with this technology. So we still need digital literacy, right? We still need folks to know how to connect their computer to wifi, how to turn on and off different types of things like VPNs; there’s a whole set of digital literacy skills we still need. But when I think about AI and the types of AI literacy that we’re going to need to evolve to, it’s really about how we think about cultivating discernment.

Erin Mote: So discernment, I think, the ability to engage in discourse and dialogue, the ability to interrogate the inputs and the outputs of this technology, is incredibly important to build that critical consumerism that I think is gonna be essential, not just for kids, but frankly even for seniors right now who might be getting different types of messages or phone calls, or seeing things online.

Erin Mote: How do you discern what is real and what is [00:15:00] fake? How do you interrogate the technology? And so when I think about the types of things that are developmentally appropriate in terms of grade-level bands, I lean a lot on the science of learning and development to guide those conversations, Rebecca, about when we start to build in and think through, frankly,

Erin Mote: productive struggle with the technology, right? We are very focused at EdSafe AI on orienting the building of AI in education towards the learning sciences. We’ll be releasing a benchmark in September with an internationally recognized group of experts, folks like Rose Luckin and Ryan Pam, to

Erin Mote: engineer the yardstick for what we want these tools to look like from a pedagogical basis. But for me, when I think about the types of things that educators should be thinking about when they’re taking on these tools, it’s not [00:16:00] so different from the types of appropriate grade-level bands that we would be using around media literacy.

Erin Mote: How do we classify things? What things are alike or different when you’re in kindergarten, first, and second grade? And then I’ll just say the thing I’m most focused on right now from an access and opportunity perspective, and this is why I’m really excited about the Department of Labor’s AI literacy framework, which talks about broadband as a prerequisite for AI literacy, is: who has access to these tools?

Erin Mote: Who has opportunity to these tools? And as kids get older, are we using them not just to help them understand what they know, but what can they create with what they know? And so that’s the tool use that I want us to be thinking about as we move up that developmental grade band: are we helping young people understand how to augment their team [00:17:00] with AI?

Erin Mote: How do they still gain the skills to work in a multifunctional team, where one of those teammates might be AI? And how do you manage that teammate? How do you manage the inputs and the outputs? How do you manage that in your process flow? As an enterprise architect, I’m digging into this all the time.

Erin Mote: AI is so much my teammate on lots of different things. You know, I really push some AI tools, I think, to the absolute limit when it comes to coding and full-stack development.

Rebecca Bultsma: I think what you said just made me think about why the AI literacy piece is so important, because I think foundationally we lack a common language and understanding.

Rebecca Bultsma: I probably understand the definition of digital literacy differently than you do. It sounds like, to me, digital literacy has been taught in schools forever, but it is that discernment piece, to deal with fake news, that you may be calling media literacy. And so I think foundationally, with this AI literacy piece, [00:18:00] before we do anything else, we need to start defining terms and what they mean, so that we’re all speaking the same language before we start.

Rebecca Bultsma: Because, you’re right, there are all of these different terms that we’re all understanding differently, I think, and places where they overlap. And I think that common language is gonna be a huge part of getting everybody on the same page: seniors, parents, adults, leaders, and students. Because I’m seeing that disconnect happening in a lot of places.

Rebecca Bultsma: Even the idea of AI, right? It being such a broad term, and then AI literacy, what does it even mean when AI means a hundred different things? So I think a huge part of this conversation is going to be developing a common language that we can all speak and understand.

Erin Mote: I totally agree with you, and I think you make an excellent point there about AI.

Erin Mote: I think when people think AI, they think it’s generative AI, but there are, you know, 16 other different types of AI besides generative AI. And while that’s the most ubiquitous consumer tool, actually the thing that started the EdSafe AI Alliance, and the things we were most [00:19:00] concerned about when we started in 2020, was machine learning, synthesis AI, and algorithmic bias.

Erin Mote: And, like, generative AI and its consumer breakthrough even took me by surprise in 2022. The origins of EdSafe were really around, again, synthesis AI, that type of AI, predictive analytics: was it tracking kids? Was it restricting the type of learning materials? And we were getting increasingly concerned about seeing the use of those tools in education technology, which is the origin of the EdSafe AI Alliance.

Rebecca Bultsma: And that’s so interesting. That is so important. There’s so much work being done about that, especially in the UK, right, where they have really, really strict guidelines around EdTech, data privacy, security, GDPR, that we lack in North America. And so, yeah, I think we’ve touched on something here to address, which is the lack of shared understanding and definitions of what we’re actually talking about, because [00:20:00] regular, common people who are just encountering ChatGPT for the first time maybe don’t understand the nuance and the risks.

Rebecca Bultsma: Brett, go ahead.

Brett Roer: I could share some wisdom from something that happened yesterday. We’re working with 10 districts across Ohio, building AI community playbooks, and the biggest unlock has been, we say: mandatory silence until someone has hit record, with consent, for wisdom collection. And one of the practices we do is we give them, you know, a glossary of, let’s say, 50 of the most common, ubiquitous terms we find across frameworks.

Brett Roer: And we have them sit down and say, like: do you understand this as it’s written? If not, what seems unclear to you? And we had them do this with a group of eight to ten district members, including teachers and all the way up to superintendents. And it eliminates the idea of having to wordsmith it.

Brett Roer: Like when you maybe wanted to define rigor in the past, you know, that would take a small team a long time. But we now have them going out to their community and asking: do you understand this term as it’s written? What’s unclear about it? And then it creates [00:21:00] common language that’s literally based on those responses.

Brett Roer: And then you can obviously scaffold it up or down, but get to the heart of what you’re trying to make sure people understand. That way, when you use that term throughout the community playbook, or then go backwards and change your instructional philosophies and frameworks, you’ve actually understood what your community was trying to ascertain there.

Brett Roer: And like, it’s just a different way of thinking about it, but then everyone actually has common language in a common playbook. So just highlighting: it doesn’t mean five people sitting down and trying to rearrange a sentence, which is how I feel like we used to do that in education. It’s just: listen, present things, and get reactions, and then AI is a great tool to hopefully gather all that wisdom and mirror it back to your community.

Brett Roer: Just sharing that, because the efficiency allows you to do that, mirror it back, and get immediate revision. It’s just something I’ve noticed out in these districts: they’re not used to doing it that way, and they just can’t believe you can do that in hours instead of never, or months. So just highlighting that.

Erin Mote: I love that. I’ll call out two districts in [00:22:00] particular who have open-source resources that I think are really great for community conversations. One is El Segundo, one of our original cohort-one policy lab districts. Their entire charrette framework, their community activation protocol, is online and available for folks to take as a starter dough and modify.

Erin Mote: And that was a brave thing to do two years ago, to go out to the community and prioritize the community first in these conversations. The other district that I’ve seen do this really well is Santa Ana, also out in California, who oriented those conversations around their portrait of a graduate, updating their portrait of a graduate to consider an AI-infused world.

Erin Mote: And so they went from something, Brett, that was like a similar shared vocabulary and language, and said: okay, what needs to shift or change, or does it need to shift or change? And it was a beautiful process. And the other thing that’s so beautiful about both of [00:23:00] those districts is they’re committed to open science and have made all of those materials available through the EdSafe AI policy labs, so folks can go grab them.

Erin Mote: And again, use a starter dough to begin to have those conversations in community. So: shared orientation and that calling in of communities. I really appreciate that wisdom.

Brett Roer: Erin, if you could really share with everyone what was just released, I believe you said this week, from the US government, and what that might entail.

Brett Roer: This morning, on March 20th, this was dropped. So hot news. If you could share with everyone what it is, maybe where they could access it, and any of the roles that you, or the organizations that you’re in partnership with, played in it. And then what does that mean for next week, and obviously beyond?

Erin Mote: Yeah, well, this morning the White House dropped their national policy framework for the US, and it really comes out of an executive order released in December that directed the Office of Science and Technology Policy and the White House Special [00:24:00] Advisor for AI and Crypto to work together to craft a policy framework that they could send over to Congress, to begin to work on a comprehensive framework for AI policy within this country.

Erin Mote: And it’s only four pages, that’s the good news. It’s on the White House website; we’ll, I’m sure, put it in the show notes here. And you know, I will say, I’m not gonna comment on the content other than there’s a set of domains, and it’s really important to know where they start: the first domain is child safety and child privacy.

Erin Mote: That’s table stakes, and frankly I think table stakes for communities across this country, table stakes for the First Lady in particular, around making sure that we’re emphasizing child safeguarding when it comes to digital ecosystems and AI technology. There’s gonna be a lot of work over the next couple of weeks with a lot of different players in Congress, whether that’s Senator Blackburn and the work that she’s put forward,

Erin Mote: Senator Cruz and the work he’s putting [00:25:00] forward, or Representative Guthrie and the work he’s put forward on child privacy and safety. And while I have a crystal ball on my desk, it doesn’t work, so I can’t tell you how it’s ultimately gonna net out. But we’re gonna have some really important conversations about child safety and privacy, and about duty of care, which is: what is the role of technology companies in taking responsibility for the tools that they’re developing?

Erin Mote: We’re gonna have some really hard conversations, I think, about the tension between federal and state law, and preemption. We’re gonna have conversations about issues related to the infrastructure build-out. The parts of the framework that we’ve worked most closely on in our policy papers are, obviously, child safety, data privacy, anything that has to do with the work of AI in education, and the work that’s called out [00:26:00] on workforce and education in the framework.

Erin Mote: And then probably the thing that people don’t think is most sexy, but I think is the sexiest part of it all, which is AI infrastructure for equitable global development; research and development; testing and evaluation, like sandboxes, calling out Utah again for their extraordinary work on that testbed infrastructure.

Erin Mote: And then the ability to have data sets, so that we’re creating the type of public infrastructure, Rebecca talked about the UK, they’ve done incredible work here in creating public infrastructure that allows everyone from startups to big tech companies to really test their models, test their data, against nationally representative data sets.

Erin Mote: So there’s going to be fault lines between Congress and the White House here, between states, the White House, and Congress. We’re gonna have to see how this shakes out. But the thing I am really delighted about is how child privacy and data protection is table [00:27:00] stakes in this conversation.

Rebecca Bultsma: I agree with you a hundred percent, a thousand percent.

Rebecca Bultsma: That’s something that’s been on my mind for a long time, especially around EdTech. It’s part of the reason I sometimes struggle to work with EdTech companies. It’s not necessarily their fault, but the way the system is set up, the responsibility that’s put on schools and superintendents maybe doesn’t fully belong there, right? That should be more of a shared responsibility.

Erin Mote: It’s a huge burden on our schools and our districts and our superintendents and our tech directors and our educators. And it creates, I think, a situation where they’re just struggling to keep their heads above water when it comes to the changing regulatory undergirding.

Erin Mote: But also, you know, I will just say that I am really frustrated by both consumer tech and EdTech tools who push AI as features, not as, like, [00:28:00] product developments or updates, and are trying to circumvent some of the review processes. So let me say the thing aloud, lemme say the hard thing: it’s impossible when you’re a tech director, because so many of these features are pushed open by default, and so, you know, you’re kind of running around like somebody who has a chicken coop and all the chickens got out.

Erin Mote: You’re like frantically trying to get the chickens back in the chicken coop before the fox gets in the hen house. And so just really, really, I hope that we will have a clarion call here around what is the shared responsibility in this space.

Rebecca Bultsma: A hundred percent. And I, I think what bothers me the most is companies who have safety features that are add-ons and paid features.

Rebecca Bultsma: It makes me crazy like the fact that if you want to try and protect students, you have to pay extra for that.

Erin Mote: I totally agree with you. And the other thing that really puts a bee in my bonnet, Rebecca, [00:29:00] is folks who charge for safety and security training for teachers, for professional development. Like, y’all, this is table stakes.

Erin Mote: If you wanna do business in schools, you have an enhanced set of responsibility to be a steward of young people. That’s not just the director of tech.

Rebecca Bultsma: And you’ve named the thing that bothers me most, which is business in schools, the idea that there is this much for-profit business in schools, sucking money out of the school systems and then also harvesting the data of students, uh, unbeknownst sometimes to the school leadership, just based on the fact that they have so much going on, plus the level of burnout it’s contributing to for, uh, our leaders and our staff trying to manage these escaped chickens, as you’ve mentioned.

Rebecca Bultsma: So thank you for that. Ah, we could talk about this all day, but I wanna turn to something a little more fun, because it sounds like, um, like me, you are a huge kind of nerd when it comes to experimenting and playing with [00:30:00] AI and doing cool things with the tools. I know I spend, like, my whole life, I spend hours every day being like, what was released overnight?

Rebecca Bultsma: Oh, great, now I can control my Claude Code from my phone. How am I gonna use this? Right? Like, all of these latest features. So I wanna know a little bit about your stack, your favorite tools right now, what you use, what you’re using ’em for. Nerd out with me for a sec.

Erin Mote: Yeah, I mean, I think I’m pushing Claude Opus, in a way, to, like, its absolute limit.

Erin Mote: I have, like, you know, I think like many people, I just went and got a Mac Mini because, like, I can’t have it on my regular computer right now. And so, you know, I use Opus. It’s really, like, my force multiplier. I don’t just use it for snippets. I am really doing some architecture of, like, functional prototypes.

Erin Mote: Sometimes I build stuff for district superintendents we work with. Like, I just built one for a superintendent that they could use to input only publicly available data, like labor and birth rate trends, population, zip [00:31:00] codes, that type of thing, um, to actually make some decisions about school closures. A really hard thing that often, like, touches our emotional, sort of metacognitive, you know, triggers.

Erin Mote: And so it’s really hard to think about closing schools. Those are real students, real communities, real families. And so how do we sort of build tools that allow you to at least have data as a neutral actor and synthesize a lot of different types of data, maybe not even educational data. Again, population data, workforce data, to be able to make decisions that are best for your communities.

Erin Mote: But I use, um, tools to build custom dashboards to look at some of the data interoperability work we’re monitoring across states and districts. I’m sort of constantly thinking about how I can take some of the massive, really messy data sets that we have access to and hone them so that they’re understandable, [00:32:00] uh, to folks who need to action something through visualizations.

Erin Mote: And I think that’s really the power of some of these tools, to take the invisible and make it visible. And that’s the type of stuff that I am really, really enthusiastic about. And then of course, I use some tools to just monitor my ongoing pipelines, uh, so I know when something breaks, because it breaks all the time.

Rebecca Bultsma: Amazing. Love it. Love it. I’m here for all of that. Brett, we’ll let you talk for a second, because we could easily... I feel like Erin and I are best friends now. We bonded over chickens and

Erin Mote: bonnets.

Rebecca Bultsma: Yep.

Brett Roer: So when you said the bee in the bonnet, I’m glad I was on mute, because I was cackling over here. But I shared this with Rebecca again.

Brett Roer: I was very fortunate: I’m on a panel with, like, Chris Agnew from Stanford, and we’re in Utah, and then this gentleman was there in the audience, and then he stayed for the next session, which was in the same room. And then we had some EdTech leaders talking; the topic was, for districts, uh, especially chief technology officers, [00:33:00] build it, or buy it, or co-design it.

Brett Roer: Right. So with all these things you’re all just talking about, like, what’s the right one? They’re all different; when to choose which, I guess, would be the best way to say it. We had some EdTech leaders on there, and myself and Matt Winters on there. And there was a gentleman in the audience who was passionate.

Brett Roer: I loved it, and I wound up feeling like I was moderating a little with Matt to make sure this stayed productive. But afterwards I talked to this gentleman; turns out he is the technology director of the largest school district in Utah. He’s the head of their state technology team. So he’s basically the gatekeeper, and Utah has this great DPA that applies across all districts.

Brett Roer: And we started talking about, I said, you know, and again, not knowing who he was, just trying to like keep the peace and just make sure this is a productive use of everyone’s time. I said, you know, sir, like you’re so passionate and informed about this. I made sure with Matt, I said, Matt, can, can districts like add addendums to the statewide DPA?

Brett Roer: And uh, he was like, of course. So I said, so sir, like what would be some of those that you wanna make sure districts at least know they could be adding that would [00:34:00] eliminate or make ed tech companies have to acknowledge these things? And he said, yeah, I have that. I’ll share it with everyone. Again, I’ve shared it with everyone, the whole state.

Brett Roer: I was like, great, but I turned it back to you all because I would love to hear like if you, not knowing their DPA, but in general. What are some exact addendums rules or ideas that you wanna make sure EdTech directors out there, or chief technology officers out there, I’m sorry, are like, definitely make sure you’ve asked this exact question or this type of question and see if you can get that in writing from them.

Brett Roer: Uh, and, you know, let’s move this, let’s make this informed anger productive for people out there. So open it up to both of you, you’re both leaders in this field. What are some things you might do if you were in their seats to keep EdTech companies more honest and protect your students’ data?

Rebecca Bultsma: Well, I have not read what’s already there, so I’m sure some of this is already covered.

Rebecca Bultsma: I would want to know exactly how they’re planning to use the student data. I wanna know where it’s stored. Um, I want to know who their partners are, because [00:35:00] it came out, uh, last year, maybe the year before, that Canvas was sharing student data with 500 or more partners. And there’s nothing really, like, preventing them from doing that.

Rebecca Bultsma: Uh, often the sharing is actually monetized, right? So they’re charging the district and making money, and then they’re selling some of the data or sharing it, whatever arrangement they have. And student data is ending up in a lot of hands. And we know, as part of the research that I’m doing, uh, one of the biggest risks of AI for students and children is that data collected about them now will follow them for the rest of their lives and be used to make decisions about them, but also to, um, manipulate them and to, uh, tailor their algorithms and to tailor ways that will impact them in the future.

Rebecca Bultsma: And so I wanna know about where that student data is going and who it’s being shared with. And then I wanna know what happens if the company goes under or gets sold. Uh, what happens to the data if we cancel our contract? What happens to that student data? [00:36:00] I wanna know how it’s used.

Rebecca Bultsma: We can say all we want that AI is neutral or it’s just making the invisible visible, but the truth is, like, we don’t know what it’s doing. We don’t know how it’s working or how that data’s being used to make decisions about students. We’ve seen this all the time. It happened in the UK. People were using AI to grade ’cause they thought it was nice and neutral, and it ended up causing massive problems with university entrance.

Rebecca Bultsma: It graded students from lower-income public schools lower than those from private schools. And so we just kind of need to know how that data is being used to make any sort of decisions about kids anywhere. I just, I would have so many questions, um, because kids can’t consent. Like, we’re making decisions about student data now that they can’t consent to.

Rebecca Bultsma: Parents aren’t even really aware of what’s going on at all because the school has to make decisions on behalf of the parents, about the student data. So I just think it’s something we just need to be talking about more in general and what actually needs to be [00:37:00] collected versus what is being collected.

Rebecca Bultsma: And then what happens to it and where it goes. I have, I could talk forever about this though.

Erin Mote: Yeah. So we released a set of resources that I would encourage folks to look at: one for superintendents, and then another set for CIOs and CTOs, around what are the 12 to 15 questions you should be asking before signing a deal.

Erin Mote: So it’s everything from, like, uh, to Rebecca’s point, is this data being used as training data? Uh, what do your third-party subcontractors look like? Where’s the data being stored? Is it being offshored? This is a really important thing to think about: as the cost of tokens and processing goes up, it’s more expensive to store data domestically than it is overseas.

Erin Mote: So lots of really important technical questions, but in the superintendent’s checklist there’s also some alignment questions, uh, [00:38:00] for legal counsel, so on and so forth.

Erin Mote: It’s really important that we’re stewarding this. But I also, you know, I get to look at term sheets of, uh, deals from foundational models and from EdTech tools for some of our districts and states, because we’re in a relationship with them, um, as policy labs. And so one of the things I’ve been most alarmed by is what’s missing: they are clear about how and what to ask for around safeguarding PII, because I think we as a sector have really hit on that over the last, like, five to seven years.

Erin Mote: There’s been lots of training and resources, and we still have work to do, but I think folks are attentive to some of the questions that Rebecca’s raising about profiling and tracking and that type of thing. The thing that I don’t see us safeguarding is the intellectual property of teachers and students

Erin Mote: [00:39:00] being used in these models. And the second thing is something called learning process data. And so everybody says to me, like, why do all these foundational models go at higher ed institutions and K-12 schools? And it’s all about the type of learning process data that they are going to need to shape the next generation of artificial intelligence, which is AGI.

Erin Mote: And what do you need to know to shape AGI? You need to know how people correct mistakes, what mistakes people are learning from, how people sort of sequence the productive struggle of learning. If you look at the things that are building and developing with Google DeepMind and others, they’re thinking about the learning process for AGI.

Erin Mote: And we need to be safeguarding our learning process data, because you might not realize that you just gave the ground lease on the family farm away, ’cause you had your eye on the PII, but you did not have your eye on your intellectual property and your learning process data.[00:40:00]

Rebecca Bultsma: Erin, I love you. Like, I love being able to finally have these conversations with people, because it’s something nobody’s talking about or thinking about, and you are 100% right. That’s part of the reason I have a huge problem with AI detectors in school. I have big problems with AI detectors in general, but what we’re not talking about is you’re taking student intellectual property and using it in models that are unapproved.

Rebecca Bultsma: I think the most fascinating thing you said, because it’s a hundred percent right, is we’ve also seen it in other directions from these foundational models. I think specifically last Christmas, ChatGPT launched, um, or OpenAI launched, the shopping feature. And it wasn’t about shopping. It’s about seeing how people make decisions and what persuades them to take an action and what it takes to help someone make a very specific decision.

Rebecca Bultsma: They don’t care that you’re buying something. They wanna know your decision making process and what ultimately convinces you to do something. And the same thing is exactly what’s happening with learning and those partnerships. Thank you for naming [00:41:00] it. It’s a hundred percent accurate and it’s not being talked about.

Erin Mote: And as an enterprise architect, if I was building these tools, I’d think about like what’s the biggest sociocognitive modeling experiment that I could get at? And for me, frankly, that would be schools and higher ed institutions, because that’s where I know that there are millions of data points of interaction in the learning process.

Erin Mote: And so it’s not an accident that we see the deployment of free tools, um, in order to capture this data. There’s a whole set of business pressures here that are driving why AI is so attracted to education

Rebecca Bultsma: And it’s money. At the core of it, it’s money, right? All of this, money. It’s money. So yeah. Anyway, go ahead, Brett.

Rebecca Bultsma: Like I said, I feel like Erin and I are BFFs now. We could probably do this forever. You’re just gonna have to cut us off at some point.

Brett Roer: Deal. Um, well, I’m actually not gonna cut you off, but empower you by going in a different direction, because as you just heard, there’s a new professed love [00:42:00] here on the M211 podcast, and let’s capture that and do something with it, because as we know, we all share this ethos that we wanna keep humanity at the center of education in the age of AI.

Brett Roer: We know one of the organizations that we’re all big fans of, and we try to use on the show, is the Rhythm Project’s AI Effect game. Because as we know, there’s no right or wrong answer to these scenarios. They’re difficult. There are gray areas, there are tensions. So Erin, we’re gonna play one round of the AI Effect game.

Brett Roer: So I’m gonna share my screen in a moment, and because you’re our esteemed guest, you are gonna pick the scenario that most resonates with you that you’d like to explore. And then we’ll go over the rules one more time and then we’ll have some fun on it. So I’m just pulling up a PDF of the cards that when you play the AI effect game, it’s the game to uncover how AI can strengthen human connection or where it might pull us apart.

Brett Roer: So, Erin, you see these are the three choices you have. It is obvious that these are all gonna be “it depends.” So you have to start your sentence with either “support” or [00:43:00] “erode,” and then you can give your nuance after that. Do you understand and agree to these terms?

Erin Mote: Yes.

Brett Roer: Great. I am gonna scroll very slowly.

Brett Roer: If you see one that just catches your eye that you would be excited to do, just just say stop. And we will do that one. Sound good?

Erin Mote: Great. Okay. You’re gonna test my eyesight. How about we talk about generating an AI created message to apologize to a friend after a fight?

Brett Roer: How about we do that? Let’s do it.

Erin Mote: Do that.

Brett Roer: Okay. So I’m gonna say it again. And then Erin, since you are esteemed guest, you’ll get to go first. So the question is, using AI to generate a message to apologize to a friend after a fight. Take it away, Erin and then Rebecca. And then I’ll close.

Erin Mote: So I’m gonna say support because I do this all the time.

Erin Mote: I am somebody who’s radically candid, um, with folks. I’m very direct. And so I think that, [00:44:00] uh, I use AI to support human connection and to help me engage in how I can make something maybe softer or, um, how I can, um, make something more direct even sometimes. And so I think as long as you’re making that apology in person, um, and you are not just sending a note, but you’re picking up the phone and calling someone or doing that in-person apology so that people can engage face-to-face.

Erin Mote: To your point, Brett, that human connection that drives understanding and relationship, I think AI can be a tool to help us, um, speak to be heard rather than just speak.

Rebecca Bultsma: I love that. Yes. And, uh, I’m not a fan of ever copying and pasting anything from AI, but I do use it a lot, because I’m a lot like you, Erin.

Rebecca Bultsma: Um, like I have strong opinions and, and sometimes I don’t [00:45:00] come across, you know, the proper way. And so sometimes I use it to say, help me see the other side of the situation. Help me gain perspective or empathy here that I might be missing. Help me identify my blind spots, um, to just kind of help me process through and see somebody else’s point of view, even if I don’t share it or necessarily understand it.

Rebecca Bultsma: I’d like to have that vantage point. So I’d say support, um, you know, as part of the process of figuring out what you’re apologizing for, why you were wrong and identifying something that you haven’t yet seen. But then I’m a huge advocate of make it your own. Uh, don’t copy and paste. Use it as a guide. And, uh, yeah.

Rebecca Bultsma: So I, I would say it depends and support because, you know, I’m gray. I live in gray.

Brett Roer: I would also say this is a great example of making sure, before you use an AI-powered tool, you’re providing as much context and personalization as possible. So, you know, to echo what I just heard, especially Rebecca, what you just said: like, you know, hopefully you know your friends [00:46:00] and, like, what triggers them or what, you know, reconciles disagreements.

Brett Roer: So, like, often I’ll be like, okay, here’s what happened. Here’s what I know about this person that obviously offended them or made this disagreement occur. Here’s what I know they need to hear that I authentically believe, and I need help presenting it to them, making sure I don’t start with the thing that I think is most important.

Brett Roer: Starting with what they think is most important. And then obviously refining that draft, but, like, getting the thought process out of your brain, and knowing, like, you can compartmentalize it and take your emotion out, and then have someone help make it more logical in the way that that person receives it, is how

Brett Roer: that strengthens human connection. Obviously there’s a way to use this to erode human connection by just copying and pasting it, or not giving any context, or leaving the prompt in the text message or note you send. That is what I keep hearing from you. Yes. So thank you, Erin, for giving us that.

Brett Roer: I wanted to know if I could push your thinking, actually, ’cause we have two amazing people on here, and then me. So, with consent, can I show you something we did yesterday [00:47:00] that actually was one of the things you all brought up, a scenario we had 10 districts in Ohio do yesterday? Here’s what we did: over 10 hours, we’ve now had districts build their own AI community playbooks, and day one was really teaching them how to capture wisdom.

Brett Roer: As we’ve said, between day one and day two, about a month apart, they had to reach out to their community and gather that wisdom, like, model what we did. Now on day two, here’s what we had them do. And this was created in a Marriott hotel from about 11:00 PM to 4:00 AM two days ago. And here’s what we did, and I’m gonna use the one that we kind of just touched on, and I’m gonna give a preface as to how we got there.

Brett Roer: Um, and what made it very interesting to me was, during the break right before we did this, a gentleman walked up to me from his district in Ohio, and he said, have you ever heard of this tool? And I said, no. He said, well, here’s why I love this tool, and my students are engaged with it. So this gentleman said [00:48:00] that my kids love this tool, and here’s why I love it.

Brett Roer: I create the stations, and it has primary sources from history, and then it asks them questions. And what he loves about it is it immediately gives you a rubric score, what your grade is, and one sentence of feedback. And I was like, you know what’s interesting, sir, is we’re about to play a scenario like this.

Brett Roer: So I shared it with the group and said, on paper, like, you know, a kid said, oh, we’re playing, we’re using that tool today, I love it. And the teacher was like, this is great, ’cause they get instant feedback and they keep trying to get the high score on the rubric. So, like, you know, from a bird’s-eye view, perfect, right?

Brett Roer: High engagement, the rigor’s there, the standards are there, and kids are persevering, and there’s a productive struggle. And so I said, you know, so I’m not judging what you’re doing. I said, is there any policy at your school on whether you should be using AI to grade papers, et cetera? And he’s like, I don’t know.

Brett Roer: And I said to the superintendent, do you know? And he’s like, no. I said, so then I’m matching you two up to go through this scenario. So I’m just gonna read it to our audience [00:49:00] and then I’m gonna ask you all, like, what would you do if you were the two people having this conversation? Sound good?

Erin Mote: Yeah.

Brett Roer: Great. Okay. So the teacher in this scenario is Mr. Thompson. He has 140 students across five sections. On Friday, every one of those kids wrote a full essay during a period. It’s Sunday night. Mr. Thompson has his own family, his own life, and a full week of teaching ahead, you know, facing him Monday morning.

Brett Roer: So he opens up a safe, district-approved tool, and it’s gonna score all 140 essays on a one-to-four rubric. It’s gonna give one sentence of personalized feedback for each student. It already passed all the DPA agreements; it meets all the data privacy requirements. He reviews all the outputs. Two of the 140 scores felt off.

Brett Roer: He then enters all 140 grades into the grade book. Kids have access to the material and the feedback. He goes to bed; they get their grades Monday morning. They didn’t know the feedback was AI-generated. Mr. Thompson comes in Monday morning. He rested over the weekend. He practiced self-care. He’s energized and ready to teach.

Brett Roer: So here’s my question for y’all. You [00:50:00] can see them here. He used AI to generate rubric scores. Is that different from using AI to generate feedback comments? Should your district treat these two things the same way or not? And, uh, he reviewed the output and only adjusted two scores. Does that feel like enough human

Brett Roer: oversight? And what would your district expect a teacher to do before submitting AI-assisted grades? Let’s just start there.

Rebecca Bultsma: I just can’t side-eye this hard enough, honestly. Like, I hate everything about it. But then again, as an ethicist, like, one of the core founding principles you will find in every single ethical AI framework is accountability, right?

Rebecca Bultsma: If I’m a parent and I’m mad about this, and I come in and say, why did my kid get this grade? The teacher can’t tell me. There’s no transparency, there’s no accountability. And knowing what I know, and reading the case studies I read about embedded bias in models and things that go wrong with AI... my favorite example, that I’ve probably given on this podcast: when they were training computer vision, they were trying to teach it [00:51:00] the difference between a wolf and a dog.

Rebecca Bultsma: They showed it thousands of pictures of wolves and dogs. And then they realized, when they were testing it, the computer had only decided that the picture was a wolf if there was snow in the picture. That was something that the testers never even thought about. They didn’t even see the fact that in every picture where they showed a wolf, there was snow in the picture, but that was what the AI picked up on. Which is why we don’t really know what the AI is making decisions on in this grading rubric.

Rebecca Bultsma: Maybe there’s some indicator in there that the kid is from a different socioeconomic class. We don’t know what it’s picking up on, because there’s no transparency, no explainability, and no accountability in this scenario. So I do think it can be tweaked, and I do recognize there are benefits to helping teachers with their workload.

Rebecca Bultsma: I just don’t think this is the place for it or in this way.

Erin Mote: Yeah, I mean, I think this, in my space, gets to a consequential decision. Like, a grade is a consequential decision, where you, one, need to disclose you’re using AI. [00:52:00] So one of the things about this scenario that, you know, jumped out for me is the teacher not even disclosing to students that he was using AI.

Erin Mote: Like, we owe reciprocal trust in our education system. That’s the foundation of human relationships. And so if you aren’t disclosing you’re using AI to your students when you’re using it, you’re breaking reciprocal trust. And I don’t think that’s the type of thing that I would want between my educator and my students.

Erin Mote: The other thing I’ll say is, you know, for folks who wanna dig into the bias in tools, there’s a wonderful researcher, Punya Mishra at Arizona State University, who I think has made some of the most consumable, understandable research available about this in the education profession. He sits at the Mary Lou Fulton Teachers College.

Erin Mote: So that’s a good fit. He’s thinking like a teacher, he’s thinking like an educator. He’s thinking [00:53:00] about these use cases. But one of the things that’s so fascinating about that research, um, is actually these hidden context clues that Rebecca talked about. So I’ll just give you one example from his research.

Erin Mote: Well, I’ll give you two. One I talked about in Congress, actually, last April, when talking about bias, because I think when we think about bias, we think about things like race, or zip code, or those types of things. But there are all these contextual clues that AI is actually using to rank students lower.

Erin Mote: So, I say “y’all” all the time. Many members of Congress, even during my hearing, said “y’all.” If AI was grading that transcript, it would’ve knocked them down in terms of a grade. Dialectical difference words, like “y’all,” have an outsized impact on young people and whether or not they’re scored higher or lower.

Erin Mote: The other thing that Punya looked at was when, um, students were given the [00:54:00] ability to do a freer response about the types of music they listen to. Some students wrote in classical music, some students wrote in rap music. My, you know, 11-year-old likes rap. And so the AI system then prescribed lower content, lower pedagogy scaffolds to the students who described rap music as a preference, versus classical music.

Erin Mote: It’s just repeating our embedded human bias. And so this is not a place where I think we should be using AI. And in fact, we are really big advocates of helping people understand how to keep a human in the loop from a policy perspective, particularly on consequential decisions. Because I don’t think we want AI making decisions about what grade a kid gets, or whether they get into college, or what college loans they get access to, or whether or not you get a mortgage.

Erin Mote: And right now we’re in [00:55:00] a place where that’s happening in our systems, it’s happening in our structures, and we need to call it out.

Brett Roer: Perfect. And so I wanna share... first of all, thank you, because what’s great about these scenarios is not everyone has your vantage points and the level of expertise that’s needed.

Brett Roer: So for example, when we talk about humans in the loop, which is the next section, it is very clear like what should students be told? What should parents be informed? If there’s a tool that’s used in grading, is there a difference between using it to plan a lesson, using it to evaluate, using it to give feedback?

Brett Roer: And do you have any current guidance on when and how teachers must disclose it? And then how do you communicate it? But then the last part, after you talk through all that, is we’ve been using red, yellow, green, because it’s just an easy framework for people to get a sense of things. So, like, at this point, Rebecca, would you be picking the color red?

Brett Roer: I know that there’s much more nuance to this, but where would you fall on this, on this color scale right now?

Rebecca Bultsma: I’d pick blue and flag it as needs more training. [00:56:00]

Brett Roer: Well done. Always, always sticking to the script. Love it. Take that algorithm. So anyway, I wanna thank you first, both of you for like going through that experiment.

Brett Roer: And this is exactly the kind of thing that educators are grappling with: like, how do you use a tool that’s supposed to help you in an authentic way? And, like, one of the questions is, what would you train it on if you had access? So what are the state standards? What should the teacher be putting at the top?

Brett Roer: Like, to make sure it is aligned to what students should be learning at that age, and ways it can support it. But I really wanna highlight and applaud both of you for realizing that if this is not done with fidelity, and if it’s done in a way that could be punitive to students, it shouldn’t be acceptable.

Brett Roer: But I wanna also play the other side. Did I hear correct, Erin and/or Rebecca? Is there a way to use a tool like this to help with guiding students in their writing process before they do a formative assessment, or in an evaluative [00:57:00] assessment?

Rebecca Bultsma: Yeah, a hundred percent. You could have students use it themselves as they kind of, uh, work through their process, if it’s an approved tool, and take that feedback that maybe the teacher generated, and assigned a grade to, to help refine their submission potentially. Or maybe the assignment itself isn’t necessarily about the output, it’s how they used the feedback, and what they chose to implement and not, and where they felt it was lacking, and to teach.

Rebecca Bultsma: Exactly. Circling back to what Erin talked about as those AI literacy skills: how they decided to interpret the feedback from the AI, whether or not they decided to use it, how valuable it was, and where it fell short.

Erin Mote: Yeah, and I think like all the learning happens in the drafting. Like that is actually where we know that learning happens, is in refinement and drafting and the productive struggle and taking inputs and outputs and making decisions about what is the way you wanna structure that argument.

Erin Mote: What’s the best type of structure you wanna use to make your [00:58:00] point? And so I think there are tools, where we are starting to get some evidence and a research base that can support that productive learning, that are about Socratic thinking, not sycophancy. And so I think, you know, some of the things I’m most excited about are, um, some of the research that’s coming outta Stanford right now, where it’s thinking about the learning experience as being an active learning experience with a teacher, with an instructor, with a professor, but then taking the syllabus, the materials, and so on, creating a small language model, a closed corpus of training materials, and allowing students to engage with that chatbot

Erin Mote: tutor around a closed, small corpus, so that they can stay on pace and grade level, and engaged in the content, in the active learning experience, and be supported for some of the core knowledge, um, development, some of the [00:59:00] questioning, some of the probing, in an AI tool. And I think what’s exciting about that is actually it didn’t move the needle at all on core knowledge development, but what it did move the needle on was motivation and engagement and persistence for students.

Erin Mote: And those are the types of things that we need to be thinking about. Again, when we’re anchoring in the learning sciences and developing purpose-built educational tools, we’re not just thinking about, you know, having a tool that is about rote memorization and knowledge acquisition, but it’s about like, what are the scaffolds that keep you supported in that, in that learning experience, in that Socratic thinking, in that pushing and that discernment.

Erin Mote: So those are the types of tools I think that I am really excited about right now.

Brett Roer: Thank you, thank you both for engaging in that. And as mentioned, if you were to do this with the right stakeholders across your community, you can then draft something that incorporates [01:00:00] all of that wisdom, find what is still uncomfortable or in tension, and then have a first draft to mirror back to people and say, how do we feel about this?

Brett Roer: Now that we’ve actually asked people these questions, and then framed it against pedagogy, data safety, privacy, all the frameworks that you’ve referenced, we might actually move somewhere and address those points that Erin just made. That’s what we really want kids to leave with. It’s not really about the number grade, it’s about all the skills and values that they can lead your community with.

Brett Roer: So thanks, thanks both of you, for bringing your expertise and personal beliefs to both those scenarios. That was excellent. I know we could probably do this for another, what do you want, maybe another three, four hours here, long form. Or maybe we could,

Rebecca Bultsma: if Erin wasn’t so busy, I’d love it, but I know how busy she is.

Rebecca Bultsma: So I say we skip to her Ocean’s 11 list.

Brett Roer: Yeah. So Erin, just to close, you have a few moments left. Just kind of share, if there’s anyone that you haven’t had the opportunity to share with our larger community, people that folks should know about, whether that’s [01:01:00] organizations, initiatives, individuals. You know, please use this time to let people out there know other folks they should be looking out for.

Erin Mote: Well, I mean, there’s so many great people doing great work. I am always sort of elated and inspired by the educators and the leaders in districts who are doing this work day to day, fighting the good fight, being in conversations with communities and parents. I’ll call out, you know, a couple things that I think are really important.

Erin Mote: First, I’m gonna call out a podcast that I, um, just can’t stop listening to. I want everyone to listen to your podcast, but I also want them to go listen to The Last Invention, which is a long-form podcast about the history of AI. I think it’s a fascinating way, in eight episodes, to understand this technology, how it has existed since World War II, and why it feels so disruptive right now as an arrival technology.

Erin Mote: And [01:02:00] so, let me shout out the last invention as a podcast. I want everyone to go listen to do it on the beach, do it. I listen to it with my kids. So it’s, it’s really highly approachable and I think it’s, I think it’s really, really important when I think about, you know, sort of the dream team that we’re sort of putting together at Ed Safe.

Erin Mote: I would say that, you know, I’m gonna talk in broad categories. Like, I think we are inviting industry to the table. So, um, I know that can feel a little hard for folks. I know that there is some friction in that conversation, but for me, I think we need the systems architecture that bends the arc to shape these tools for what we want, rather than what companies wanna get outta them.

Erin Mote: And so this idea of how we have industry come to the table is, as Rebecca said, really important. The most influential ethicist for me throughout my career has [01:03:00] been a woman named Alondra Nelson. If you don’t know her, she served in the Biden-Harris administration, but she’s got a long history of work in ethics that also reaches into the social sector, and I think she has a lot of really important things to say about the considerations that we should be holding, and about helping folks really discern between the promise of this technology and the peril.

Erin Mote: And how do you hold both at the same time in these conversations? You already called out Michelle Culver. She’s my girl from The Rithm Project, and I love the conversations she’s pushing. Rebecca Winthrop is another person who I think isn’t afraid to say the hard thing. Isabelle Hau. It’s funny how I’m naming all women.

Erin Mote: I’m sorry about that.

Rebecca Bultsma: Don’t be sorry. Never

Erin Mote: be sorry. But these are folks who are really helping evolve my thinking about what’s going on and where we need to be, not just looking now, but looking around the corner. Folks like, you know, Ronit from [01:04:00] Google. She’s doing some really important work around literacy, and I think asking hard questions about the accessibility of AI literacy in our space.

Erin Mote: And then, listen, probably the people I talk to the most, and enjoy the conversations the most with, are teachers who are actually in classrooms. I, you know, believe that we need to honor the voices of educators, who have a lot to say and a lot of wisdom around what it means to be moving from a schooling system to a learning system.

Erin Mote: And when I close my eyes and imagine where we get with this arrival technology, I hope that the experiences that we have for Robert and Claire, my kids, are less about schooling and rote memorization and assessments, and more about the types of learning experiences that we can create. And we can’t do that unless we center the voices of educators, students, and families in this conversation.

Erin Mote: So hopefully that answers your question, but I do think we need a [01:05:00] wide calling in across our space to shape the future that we want. And it, it’s up to us to demand something different.

Brett Roer: Thank you, Erin Mote. That was very eloquent and impassioned, and a wealth of amazing people that folks should be looking out for, especially the amazing field of female leaders in this space.

Brett Roer: So I wanna say thank you again. Thank you to all our listeners. This is dropping on AI National Literacy Day. We hope you engage with all of the amazing organizations, solutions, and people that has shared with you today. And I, everyone listen alongside of us. Thanks again, everyone enjoy the podcast and have a wonderful day.