Ken Shelton on AmpED to 11

Ken Shelton

March 31, 2025

AI, Equity, and Why Ethics Matter More Than Ever

with Ken Shelton

AI promises to revolutionize education—but are we ready to face the tough ethical questions it raises?

In this unmissable episode of AmpED to 11, Brett Roer and Rebecca Bultsma sit down with international keynote speaker and education visionary Ken Shelton for a real, raw conversation about equity, ethics, and AI in education.

Ken pulls back the curtain on hidden biases in AI systems, explains why culturally responsive teaching isn’t optional, and shows how AI can amplify student empowerment—if we use it wisely.

Buckle up as Ken reveals uncomfortable truths and transformative strategies to ensure AI serves everyone, not just a privileged few.

===

Brett Roer: [00:00:00] Welcome everyone to today’s episode of the AmpED to 11 podcast. My name’s Brett Roer, CEO and founder of Amplify and Elevate Innovation, and I am joined by the amazing Rebecca Bultsma.

Rebecca Bultsma: Hi everybody. Great to see you. We are so excited about today’s episode with Ken Shelton. You weren’t here for it, but we had a great pre-session conversation, and we’re just gonna lead right into it. I think it’s gonna be one of our best episodes ever.

Brett Roer: Oh, I cannot wait. Ken, first of all, thank you so much for taking time outta your busy day and your amazingly intense travel schedule to find some time to connect with us today. Welcome to the AmpED to 11 podcast.

Ken Shelton: Yeah, thank you for having me.

Brett Roer: Ken, for the listeners out there, I was just sharing when we started: I cannot believe my good fortune, right?

Brett Roer: One of the main missions of the AmpED to 11 podcast is to have listeners feel like they get to experience what we do when we go to these conferences. Getting to hear keynote speakers, not only on [00:01:00] stage sharing their wisdom, but off to the side, sharing even more insights about the work they lead.

Brett Roer: So Ken, just take a moment, please share with our listeners, you know, who you are, a little bit about yourself and the incredible work you’re leading in this space of AI and education.

Ken Shelton: Ah, thank you. So I am based in Los Angeles. I taught in the Los Angeles Unified School District for about 20 years.

Ken Shelton: Most of my career was at the middle school level. As far as subject matter, if you count substitute teaching, I’ve literally taught every subject. My favorite usually surprises people; it’s one of my educator-history Easter eggs when I share my background. So it won’t be a secret to you all and your audience.

Ken Shelton: My favorite subject was actually art. I love teaching art. I love art in general, but especially teaching it. My big pitch with the students was always: everybody is an artist. You just don’t realize it until you’re afforded the opportunity to explore creative expression, and then you will find out which [00:02:00] components of it resonate with you the most.

Ken Shelton: Artwork is also used as a mechanism for what I always call social commentary. And so I would always talk about artistic expression through a social justice lens as well. You know, one of the things I used to say to students, even back then: when school districts or schools are deemed as quote unquote underperforming, why do you think the very first thing they take away is the arts?

Ken Shelton: I taught other subjects as well, primarily social studies and also technology. And then there’s when I left the classroom, which is a whole other episode for us to record, perhaps on educator wellness and recognizing as early as possible the signs of demoralization and burnout to avoid them.

Ken Shelton: Nevertheless, I left the classroom literally for my own survival, ’cause a lot of times people ask me, well, why’d you leave the classroom? Because if I didn’t, I’d be dead. That’s a fact. So I didn’t [00:03:00] leave by choice. I left because I had no choice. But since then I’ve worked at a couple of companies, and I’ve turned that and public speaking into being an independent consultant, advisor, and speaker.

Ken Shelton: I’ve been very privileged in my work. I actually have a couple of upcoming keynotes outside of the country, one in Hong Kong and another in Kuala Lumpur, Malaysia. Those will be countries number 50 and 51 that I’ve worked in, which is awesome. And I love to travel.

Ken Shelton: I’ve been to 73 different countries, and of those, 51 will be countries I’ve also been able to work in. And my work in technology continues to emerge and evolve as technology emerges and evolves. Earlier this century, my work focused on getting technology into the hands of teachers and kids, because I recognized the transformative power of technology.

Ken Shelton: And then it also involved things like [00:04:00] a heightened importance around information literacy, because back then was also when Google search became available to everybody. And so as the technology evolves and emerges, my work continues to do the same with the marketplace. That has led to a lot of the work that I do around artificial intelligence.

Ken Shelton: I do think it’s important to include in this recording that my work around technology, and then artificial intelligence technologies, did not start in November of 2022. My work around this goes back a minimum of 15 to 20 years; it has just evolved as the technology has evolved. And so my advocacy around AI, as an overarching term, if you will, has definitely been around: how does it present opportunities to dismantle historical barriers, but also what are the concerns, questions, and ethical [00:05:00] considerations that we should have with respect to the development of AI systems and the deployment and implementation of those systems, particularly within education, but not limited to education.

Rebecca Bultsma: That was awesome. I think it’s interesting that you mentioned how much you like art, because if someone’s watching this, they can see a copy of your book behind you. And the first thing I noticed when I ordered a copy was how cool the art on the cover is. Tell us more about that, and tell us more about your book.

Ken Shelton: So the cover art. If you’re listening or viewing, definitely, you know, support my brother from another mother, Dee, and me. The cover art is an interesting story. We wrote a story about it in the book, because part of our belief in an ethical approach to AI is being transparent about when you use AI.

Ken Shelton: So Dee and I had a concept, and we wrote out what we wanted the cover art to be. We kind of talked [00:06:00] it out, and I said, now, I used to teach art and I consider myself a decently skilled digital artist, but I also know that one of the values of an educator is to operate from a degree of humility and recognize where you may have limitations.

Ken Shelton: For me, it was in turning concept into visual. And so we went into a couple of different image generators. We used Copilot and DALL-E (it was DALL-E at the time). We were typing in different prompts, and we literally have a whole slide deck of here’s the prompt we used and here’s what it did.

Ken Shelton: And we worked back and forth with each other, using the imagery to refine, refine, refine. But to be honest with you and your audience, we never got exactly what we wanted. We got close enough that we said, okay, now we’re ready to take our prompt, our concept, and what it generated, and then we handed it over and paid an artist to [00:07:00] do what was version one.

Ken Shelton: And what you see now is actually version two. So it still involved our intellectual process. It still involved using AI to at least dismantle some of our creative barriers. But in the end, it still required the touch of a human hand to get the finished product. And so what it is, is it serves as a double juxtaposition.

Ken Shelton: It is the balance between technology and our humanity, but also a recognition, from an Afrofuturism standpoint, of how in many cases, if you are of a historically excluded and marginalized background, the technology can begin to take over your humanity whether you want it to or not.

Rebecca Bultsma: And you bring up such a good point. We haven’t even started on our 11 questions yet, but I think this is really, really important stuff that we don’t get to cover very often: these ethical aspects, and that idea of all the bias and [00:08:00] discrimination and stereotypes that are baked into image generators especially.

Rebecca Bultsma: I actually just read something about this a few days ago, about kind of how that happened. The long story short is there was something called ImageNet, I dunno if you’ve heard of it, in about 2010, where researchers decided they would get a huge collection of images, and then they paid people something like $5 an hour through Amazon Mechanical Turk to label images for this image database: chair, table, cup.

Rebecca Bultsma: The problem is that when people started labeling human faces and people with specific labels, that data set became so problematic, and then every image generator in the world was trained on it. And it just gets continually amplified, combined with the way language is used on the internet and the ways people have historically been discriminated against. It’s something we need to be talking about a little bit more.

Rebecca Bultsma: So I’m really glad you brought that up.

Ken Shelton: I bring it up all the time. There was a keynote I saw back in, I want to say, oh [00:09:00] seven, and it was profound enough that I remember it even today. And it’s exactly what you’re saying. You mentioned ImageNet, because I remember this, where Amazon paid people to essentially index images.

Ken Shelton: You know, when you capture images, how do you make ’em searchable? You gotta index them, and there are certain keywords, and hence the bias, among other things. The other mechanism or approach to that, which Google used, was they recognized people’s desire to play games online. They had the data on how many hours, I don’t remember the specific number, the average person plays solitaire on a computer.

Ken Shelton: ’Cause this is going that far back. And so they took the desire for games and gaming, and the fact that we love having what I would call digital-based distractions (by the way, this is how social media is also built). Okay, so we love digital-based distractions, right? And then we love to play games.

Ken Shelton: And so what they did back then was they worked with a guy at Carnegie Mellon, I don’t [00:10:00] remember if he’s still there, and it was called the ESP Game. The way that game worked is you would log into your account and get randomly paired with somebody, anywhere in the world, and you don’t know who they are.

Ken Shelton: And then the way they would index their images is an image would pop up, and you would start typing in words that you associate with that image. The faster you matched a word with the person you were paired with, the more points you got, and that word became part of the index of the image.

Ken Shelton: And then they would keep a global ranking of who got the most points. And then, to add to the depth of the indexing, once a word was flagged, it became a word that you couldn’t match anymore. So then you had to try a whole bunch of different words. And so, for everyone who’s listening and watching, what you want to keep in mind about both what Rebecca just shared with ImageNet and what I just shared is that both of those systems were still built on the perspective, experience, and knowledge of the [00:11:00] humans, which means that they did not have the guardrails or the safeguards in place for the bias that those humans would bring.

Ken Shelton: And that became part of the terminology that was used to index the images.
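For readers who want a concrete feel for the indexing mechanic Ken describes, here is a minimal sketch in Python. All the names are hypothetical, and the real ESP Game also had timers and more elaborate scoring; this only shows the agree-on-a-word and taboo-word loop:

```python
# Minimal sketch of ESP-Game-style image labeling (hypothetical names).
# Two paired players type words for the same image; the first word they
# agree on becomes a label. Words agreed on in earlier rounds become
# "taboo" and can no longer be matched, forcing deeper tags.

def play_round(guesses_a, guesses_b, taboo):
    """Return the first word both players typed that isn't taboo."""
    seen_b = {w.lower() for w in guesses_b}
    for word in guesses_a:
        w = word.lower()
        if w in seen_b and w not in taboo:
            return w
    return None

def index_image(rounds):
    """Accumulate labels for one image across successive player pairs."""
    labels, taboo = [], set()
    for guesses_a, guesses_b in rounds:
        match = play_round(guesses_a, guesses_b, taboo)
        if match:
            labels.append(match)
            taboo.add(match)  # future pairs must find a *different* word
    return labels

rounds = [
    (["dog", "puppy", "grass"], ["Dog", "park"]),   # pair 1 agrees on "dog"
    (["dog", "puppy", "brown"], ["puppy", "dog"]),  # "dog" is taboo now
]
print(index_image(rounds))  # ['dog', 'puppy']
```

Even in this toy version, Ken's point stands out: every label is whatever two humans happened to agree on, biases included.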

Brett Roer: Yeah. We’re throwing out the 11 questions for this one.

Ken Shelton: Yeah. To borrow from the MCU, from Captain America: we could do this all day.

Rebecca Bultsma: Also to borrow from Captain America: I use that analogy all the time, the idea that AI is this super serum, right?

Rebecca Bultsma: It can make the good really good and the bad really, really bad. And people aren’t talking enough about the really, really bad. There’s so much good potential. But yeah, it’s interesting you bring up Captain America. I actually wrote something about that recently, so we’re aligned for sure.

Brett Roer: You know, Ken, a topic I was fortunate enough to hear you speak about, and I hope you can speak to how it intersects with what you wrote about in your recent book: you’re a huge proponent of using the term equity. Again, I was so fortunate to get to hear you share this with [00:12:00] educators out in Ohio last year.

Brett Roer: Could you share with our audience a little bit about this idea, how you’re really trying to bring equity to the forefront, and how that intersects with some of the findings in your most recent book?

Ken Shelton: Perfect. Good question, and thank you. So I’ll connect the book, and we’ll kind of do it in reverse order.

Ken Shelton: So with the book that Dee and I wrote, there are a couple of things I think are important as a little bit of background. As I shared before, I’ve been around EdTech this whole century, and you rarely ever see voices like mine amplified when it comes to talking about technology, period. That led to me reaching out to the co-author of the book, my brother from another mother, Dee.

Ken Shelton: And I said to him, look, you and I have talked about digital equity. And, you know, I’ll mention equity and I’ll share with everybody how I define it as well. I said, this is one of those situations where we can’t just sit it out. I go, I know you get [00:13:00] stages and I get stages, but at this point we have to put our perspective and our thinking on wax, and I go, I’m gonna hold you to it.

Ken Shelton: And so at first he was like, ah, I’m good, I’m good, I’m good. I’m like, all right, well, I’ll proceed on my own, maybe do some blog posts. But then he started noticing the same things that I was noticing around which voices were being amplified, which voices were being associated with experts. And none of them were representational of a majority of the students in our public school systems, really globally. They were not representational of the global majority.

Ken Shelton: Okay. And so then he reached out to me and was like, okay, I’m in. And so then we just started writing the book, and I shared with him that the most important thing for us to have in this book, the thing that will differentiate ours from all the others being written, is that it is not a tips-and-tricks book. It is not a recipe book of how-tos.

Ken Shelton: It is not a recipe book of how to, it is a philosophical and thematic approach along with ethics and equity as [00:14:00] a component to it around our approach to ai. It is evergreen in its content and it is from a perspective that I get, I hear from folks, even friends who read it and they read the personal stories that we have in there, and then they’ll reach out to me like, I didn’t know this.

Ken Shelton: And I’m like, that’s exactly why we wrote the book, and thank you for taking the time to read it. And there are a couple of Easter eggs that I’ll share ahead of time around the book. One, again, is the story about the cover art. The other two are the following. The typeface that we used in the book is the most accessible typeface that exists in the world.

Ken Shelton: And I share that because it’s important for people to understand and recognize: if you truly have a universal design approach, then it extends to something like that as well, even including the typeface. And I know it matters because I have friends who grew up and were never appropriately diagnosed as dyslexic, which affected their schooling experience.

Ken Shelton: I hand ’em the [00:15:00] book and I’m like, lemme know if this is easy for you to read, boo. My own mom, who is visually impaired, was like, are you all gonna come out with a large-print format? And I said, well, I can make sure I get one for you, but you should take a look at it first. As soon as she opened it, she was like, oh my God, I can read this so easily.

Ken Shelton: And then she started crying, and she was like, why aren’t all books written like this? And I was like, I don’t have an answer for that. But so everyone knows, the typeface is Lexend, L-E-X-E-N-D. The other thing, and this connects to the equity question, is the following.

Ken Shelton: The other thing we did is we have extended margins in the book. So Rebecca, when you get yours, you’re gonna notice when you open it (we talk about this in there) that the margins are extended. That serves a duality of purpose. One: have you all ever noticed that sometimes when you’re reading a book, you write on sticky notes and put them in the book, and your book is filled with them? Which is fine, but our whole point was that the margins are there for you to write in.

Ken Shelton: And also, considering the fact that it is an AI book written by two Black [00:16:00] male educators, our hope is that by sharing your thoughts and writing in that space, what are you doing? You’re filling in the margins, so your thoughts are no longer marginalized. Now, to your point, Brett: yes, I do use the word equity.

Ken Shelton: I’ve been using that word for more than a decade, and there was a group of friends who brought me into a conversation, on the artist formerly known as Twitter, around how do we combine our understanding of technology and the need for equity, not limited to digital equity, to come up with a term. Because if you think in terms of effective marketing, you can distill things down to a simple word or a short phrase.

Ken Shelton: We ultimately came up with tech-equity. Now, my use of that term and my definition have evolved over time. I define tech-equity as follows: merging the effective use of educational technology in a culturally responsive and culturally relevant [00:17:00] learning environment to support students’ learning of essential skills, in alignment with a pedagogical approach of higher-order thinking.

Ken Shelton: That’s how I define it. So you see that whole idea, and it fits easily within AI: it’s not just, for example, putting technology in the hands of a learner. It’s what are you doing with that technology that dismantles historical barriers, makes the inaccessible accessible, and also supports higher-order thinking.

Ken Shelton: So in other words, and this happens in AI a lot, people are like, oh, I use it all the time. What do you use it for? To generate worksheets. Okay. So that implies that worksheets are an effective pedagogy, okay? And what that is, is high-volume tech, low-order thinking.

Ken Shelton: It is not in alignment with learner agency. Oftentimes in education you hear about the importance of learner engagement. I always follow that up with: well, what kind of engagement are we talking about? Is it passive or [00:18:00] active? For example, worksheets are passive engagement.

Ken Shelton: But if I am creating something, if I’m collaborating with a classmate, the teacher, or, yes, a large language model (it is possible to collaborate with an LLM; I hear people try to argue with me about that, but it is possible), if I’m doing collaboration that augments my learning to make me think more than what I thought before and to produce or create something that I didn’t think I was capable of doing,

Ken Shelton: that is in alignment with the type of active engagement I like to see. If I have further questions, I always frame them in these terms: the goal shouldn’t be engagement. The goal actually should be ownership of learning. And you have to work your way to that point by showing and helping learners identify what is possible, and then helping guide them down that path.

Ken Shelton: AI is just another road marker, if you will, down that path.

Rebecca Bultsma: Do you find, and I’m finding this in a ton of my conversations [00:19:00] and presentations right now, that we’re trying to give that to teachers: to teach them to think beyond that first-order thinking of, I can use AI to do this, toward, how do I combine this and this to elevate how I use it to better help students?

Rebecca Bultsma: I feel like we’re still at the point where teachers need to understand that level of thinking before we can effectively pass it on to kids. So I’m just wondering what advice you’d give to teachers who are at that place where they’re generating worksheets and things like that, but maybe don’t have the tools or the skill set they need to help their students develop that second-order level of thinking you talked about.

Ken Shelton: Yeah. You know, I’ve seen that, as I shared earlier, because I’ve been around EdTech this whole century. I’ve seen that pattern and process. So let me start off with this: I take no pleasure, nor do I have any interest, in dunking on teachers for what they’re doing, because whenever I see that situation, my follow-up question is always,

Ken Shelton: what PD have you been provided, and what was the PD plan? Okay? ’Cause we cannot have our expectations [00:20:00] exceed the resources we have or the plans that we implement. So with that being said, oftentimes I equate that type of usage to a couple of things. One, there’s a reason why one of the biggest, number-one marketing pitches of a lot of AI companies is time savings.

Ken Shelton: And I reject that right away. I always say, well, before we get into using AI to save time, let’s look at what we are doing with the time that we have, because we might be able to regain some of that before we even get into using AI. There are plenty of good and practical uses, yes. But before we buy into the marketing allure of time savings, let’s take a step back and look at what we are doing with the time that we have. And unfortunately, a lot of classroom educators’ time is commandeered to do things that are not in alignment with the reasons why they chose to [00:21:00] be an educator, or with performing the essential functions of what they do.

Ken Shelton: So that right there is the disease; the AI is just focused on a symptom. Okay. But what I also say to the teachers, when they’re like, well, you know, there’s a lot out there: yes, and don’t even waste your time trying to keep up. Slow and steady wins the race.

Ken Shelton: And the funny metaphor I use is: the early bird gets the worm, but the second mouse gets the cheese. So you don’t have to try to keep up. First of all, you’re not gonna be able to keep up, so let’s accept that. I mean, Moore’s Law, for those that don’t know, is a law about processing power: it says that processing power will double every 18 months.

Ken Shelton: Moore’s Law has been blown outta the water by AI. And so there’s no point in even trying to keep up. And so for me, there’s a [00:22:00] question I always pose before my workshops. We have it in our companion guide for our book, and I use it all the time when I consult with schools and educational systems, not just in the US.
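As a quick back-of-the-envelope on the 18-month doubling Ken cites (Moore’s original observation was about transistor counts, roughly every two years, but the arithmetic is the same):

```python
# Compound growth under a fixed doubling period.
def growth_factor(months, doubling_period=18):
    """How many times capacity multiplies after `months` months."""
    return 2 ** (months / doubling_period)

print(round(growth_factor(60)))   # 5 years  -> about 10x
print(round(growth_factor(180)))  # 15 years -> 2**10 = 1024x
```

The point of the metaphor: even at a "mere" 18-month doubling, fifteen years means three orders of magnitude, so chasing every release is a losing game for a classroom teacher.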

Ken Shelton: And they’re like, well, we wanna get into AI, we wanna do this, we wanna do this. Great, I get it. But I need you to answer the following question first before we get into that: what is the simplest solution to the problem you are trying to solve, the problem that the humans cannot solve themselves, such that you need AI to do it for you?

Ken Shelton: Because that answer is going to be the driver for what AI systems you look at, what AI systems you acquire, the plan you create for not only the ethical implementation but the PD, and then, of course, based on all of that, how it is in alignment with the educational experiences and goals that you have.

Rebecca Bultsma: I think where I see that kind of tension happening right now is that a lot of the teachers I talk to say their biggest thing is the marking, right? The marking, the marking. But [00:23:00] we’re seeing AI systems come out right now, like in Arizona, that are doing the marking, and that has its own ethical problems and implications in itself.

Rebecca Bultsma: So I think, for everyone, the bottom line is it’s messy. We’re in the messy middle. We’re figuring it out as we go. And you’re right: second mouse gets the cheese, slow and steady wins the race, and people are feeling overwhelmed. But I like what you recommend as a starting point. Brett, do you wanna ask an 11-themed question?

Rebecca Bultsma: I feel like Ken and I are...

Brett Roer: Hey, you know, I think the way we started out (again, listeners, I hope you’re seeing this) is the energy we had from the word go. But I also wanna make sure, Ken, you said something that I’d love for you to go deeper on if you can. And, you know, I don’t wanna take too much outta your book;

Brett Roer: I want everyone to make sure they purchase it as well. But two things you just said there. One: I just presented today, and we actually did the same thought experiment with a bunch of New York City public school district and building leaders, in an AI PD about being a transformational leader.

Brett Roer: We said, let’s throw out AI. What is the thing [00:24:00] right now that is causing bottlenecks in your systems? What are the things you need more time to go deeper on and solve for your community? Obviously everyone has a slightly different answer to that, but you’re absolutely right: first identify what your needs are as a community, and then build in the solution.

Brett Roer: So I just wanna make sure that’s really emphasized, ’cause that’s brilliant. But I also wanna make sure our audience gets an exemplar; many of them probably tracked everything you just shared. Tell us about a district or a school or a community or a teacher that is bringing their students to the point where you’re like, yes, whether it’s AI or just meeting the community’s needs, they’re taking them to that level where there’s that ownership of student learning.

Ken Shelton: So I’ll start off with teachers. One of the school districts I work with, I worked with the science department, a high school science department, and my question to them, kind of back to what you [00:25:00] just shared, Brett, was: what do you find are some of your biggest challenges in reaching your learners, particularly in your department, in your content area?

Ken Shelton: Because, as I shared earlier, I taught science as well, and I had a feeling I knew what it was, but I’m not in their shoes, so I didn’t wanna presume what it might be. And of course, in a lot of cases it was something the teachers could not directly solve.

Ken Shelton: The challenge was essentially being culturally relevant and culturally responsive within the context of a science-based curriculum: biology, chemistry, physics, and so forth. I go, so that is a problem that you all have not been able to solve yourselves. Remember that question I asked?

Ken Shelton: I said, okay, so we’re gonna use two different large language models to solve this problem. And first, do you all know why we’re gonna use two different ones? They go, well, why? And I go, well, the clue is in what I just said: they’re different. [00:26:00] They may both be large language models, but you have to understand that just because they have the same name, they are not the same.

Ken Shelton: That’s like me saying, I’m gonna go drive an SUV. They’re different. They may be in the same car class, but they’re still designed differently. LLMs are the same way. Not all of ’em are the same. They are not coded the same, the algorithms are not the same, and even if they have the same data sets, you take a prompt and plug it in a day later, you’re gonna get a different result.

Ken Shelton: Okay? So I said, that’s what we’re gonna do. And the reason why we’re gonna do that is, if you go back to what I shared, and I shared this with them, an information literacy approach: I want to get as much of the picture as I possibly can to do a comparative analysis, and then I will go from there. So ultimately I asked them all the following questions, and then some. I said, tell me about your learners.

Ken Shelton: I don’t wanna know about classes. I wanna know about individual learners. What are their likes? What are their dislikes? What are their hobbies? And so they share some, and [00:27:00] I’m literally tabulating all this on my little computer: they like to do the following things. I said, what kinda sports do they like?

Ken Shelton: Well, they like football, they like basketball, they like soccer, some of ’em like baseball. Okay, great. You know, do they like music? Yes. What kind of music? Oh, they like hip-hop, they like EDM, many of ’em are big Taylor Swift fans, blah, blah, blah. I’m like, okay. So I’m building this whole list of things they’re sharing with me about their students, which for your listeners and viewers is critical, because I ask: tell me about your learners. Don’t tell me about your classes. I want to know about the individuals.

Ken Shelton: Okay, so I get this whole inventory, and I’m like, great. Now I have all of the following: I have the inventory of the learners, and I have a PDF of the standards for the high school curriculum across all the science areas.

Ken Shelton: Okay. I have the graduate profile of a graduate for this, ~uh,~ this particular school district. And so then I go into an LLM, I attach the standards. I attach a PDF [00:28:00] copy of the graduate profile. I attach a PDF of all of the things that the kids like. And then I begin my prompt and I say, take on the persona of a high school English teacher.

Ken Shelton: One of the struggles that you’ve noticed over the last several years is developing a culturally responsive pedagogical approach that supports the students in directly connecting the attached standards to their lived experiences. Also attached are all the hobbies, interests, and things that the students like to do.

Ken Shelton: Your goal is to support the students in drawing a direct correlation between the standards and what they like to do. You also want to directly correlate that with the thematic goals of the profile of a graduate. So in this context, please provide an interesting hook that can be conducted [00:29:00] for five minutes every day for the next 30 days that will support the students in drawing a direct connection between their hobbies and interests and the standards, and have it be aligned with the profile of a graduate.

Ken Shelton: Provide a rationale for each hook. Provide a direct connection to the profile of a graduate. You see what I did there? So for anyone listening or watching: you notice all the stuff that I did long before I went into the LLM? That was for the teachers. And so then I did that in both, and I took both and I said, let’s take a look at both.

Ken Shelton: And I go, let’s mark down the ones that we like and that we could potentially use. Let’s purge the ones that we don’t like. Once we had that, I took it, I put it back in, and I said: regenerate the list based on what’s here. Provide a further rationale as to why it’s in alignment with the goals of being culturally responsive, culturally relevant, and fitting within a profile of a [00:30:00] graduate.
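For listeners who want to try Ken’s sequence themselves, the shape of it — gather the learner inventory first, then hand the LLM the standards, the profile of a graduate, and the interests alongside a persona and an explicit task — can be sketched as a simple prompt builder. This is an illustrative sketch, not Ken’s actual prompt; the function name and the sample inputs below are hypothetical.

```python
def build_hook_prompt(standards_text, profile_text, interests):
    """Assemble a context-loaded prompt: persona first, then the
    human-gathered context, then the explicit task and output format."""
    interests_list = "\n".join(f"- {item}" for item in interests)
    return (
        "Take on the persona of a high school English teacher. One struggle "
        "you have noticed is developing a culturally responsive pedagogical "
        "approach that connects the attached standards to students' lived "
        "experiences.\n\n"
        f"Standards:\n{standards_text}\n\n"
        f"Profile of a graduate:\n{profile_text}\n\n"
        f"Student hobbies and interests:\n{interests_list}\n\n"
        "Provide an interesting hook that can be conducted for five minutes "
        "every day for the next 30 days, connecting the students' interests "
        "to the standards and aligned with the profile of a graduate. "
        "Provide a rationale for each hook."
    )

# Hypothetical sample inputs; in practice these come from the PDFs and
# the learner inventory gathered from the teachers first.
prompt = build_hook_prompt(
    standards_text="Determine a theme of a text and analyze its development.",
    profile_text="Graduates are critical thinkers and clear communicators.",
    interests=["hip hop", "EDM", "soccer", "Taylor Swift"],
)
```

The point of the sketch is the ordering: the human-gathered context fills most of the prompt before the model is asked for anything.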

Ken Shelton: And feel free to expand upon your reasoning. If you get it, you know why I’m saying some of these things to the LLM. And then they did that. And you know what? It was great. But you notice the human element had to precede the tech element, no matter what. And if they couldn’t answer the questions about the kids, then guess what?

Ken Shelton: We’ll wait until we get those answers, and then we’ll still do all the other stuff. So that was with the teachers. Now, with students: I love working with student focus groups, and this is one of the stories I love sharing in one of my keynotes. So I was working with a student focus group on using artificial intelligence, and I started off by saying, okay, y’all should understand that I’ve been a teacher.

Ken Shelton: I was also a student. And yes, I was that student who would ask the teacher all the tough questions. So I’m just gonna put it out there. I’m gonna put it on blast for y’all to know right now: you can’t hide it from me. I know for a fact each and every one of you has either thought of, or has done, something where you’ve had AI do your work for [00:31:00] you.

Ken Shelton: I go, so I know I’m right. So you’re not in trouble, but check this out. We’re gonna do something. The first question I asked the students was: I need to know from all of you, are you okay with somebody else speaking for you? That was my first question. And of course, every time I do student focus groups, which I love doing, almost a hundred percent of the time there’s a few that play games and are like, well, I’m okay with that.

Ken Shelton: I’m like, all right, I’m gonna remember that one. Anyway, they’re usually like, no. And I’m like, right, why would you need me to speak for you? Y’all get talked at by adults all the time, and they tell you what you think and how you should feel. Right? And I go, wait for the special one.

Ken Shelton: You’re living your best lives as a teenager. And they go, oh, all the time. I’m like, yeah, no you’re not, ’cause you have no agency. You will later. But this is why education is important. I go, so I’m gonna carry that with me. I’m gonna keep that one on the shelf. So you’re not okay with someone else speaking for you. Now, how often do you get [00:32:00] assignments where you feel like you are being schooled rather than learning?

Ken Shelton: And of course they’re like, oh my God, all the time. And I’m like, yeah, that’s what I describe as learner disenfranchisement. I go, you know, but I get it. And so I said, I wanna do something. I said, you all got an assignment not too long ago, ’cause you read the book The Great Gatsby, right? And they go, yeah.

Ken Shelton: So I said, okay, well watch this, check this out. So I went into an LLM and I said: generate a five-paragraph summary of The Great Gatsby with a particular focus on the theme of meritocracy. And I said, okay, so I just did that. I go, so I’m gonna ask you all a couple of ethical questions. Let’s say that we were in class together and I did exactly what you just saw me do.

Ken Shelton: How would you feel about that? And they’re like, oh, that’s not cool. I’m like, right, ’cause you would’ve spent hours working on it, and here I just did it in less than one minute. Right? That’s not cool. And I go, so that’s one of the ethical questions I always pose: how would I feel if this were done to me?

Ken Shelton: Okay, so that’s the [00:33:00] second one. The first one is: does it violate the law, or what I would say are explicit district policies? The second one is: how would I feel if this were done to me? Okay. Then I said, okay, now the next thing. If I do this, here’s a question for you all to ask, and it’s something that I need you to be self-aware of.

Ken Shelton: What long-term benefits am I sacrificing for this short-term gain? If I just have AI do my writing for me, what’s the long-term benefit that I’m sacrificing? And we talked it out: well, you might not learn how to write properly. The one response I’ve been waiting for, that I do get, but not as frequently as I want, is that I might become overreliant upon it.

Ken Shelton: Okay. And I’ve seen it. And Rebecca, you may have seen this too; it’s called cognitive offloading.

Rebecca Bultsma: Yep. And we’re seeing it when kids go to do their university interviews in person. There was a story of a kid who was like, I actually am struggling to answer these questions ’cause I’m so used to running things by ChatGPT [00:34:00] first and always having it at my fingertips.

Rebecca Bultsma: And that’s where this concept has started being talked about a little bit more.

Ken Shelton: That’s one of the overreliance things we need to be aware of: that cognitive offloading. So I had them, and I said, you need to tell me all the things I’m sacrificing. And then, of course, ultimately, to kind of shorten the story, after that I just started laughing.

Ken Shelton: And so a couple of kids were like, why are you laughing? And I said, because you all got this assignment, didn’t you? And they go, yes. And I go, you want to know something that’s funny? I got this exact same assignment in the fall of 1987. I go, so there’s good news and bad news. The bad news is you’ve already gotten this assignment.

Ken Shelton: The good news is, one, you all recognize: I don’t want AI doing my writing for me. And then I took them through what I call an idea-generation activity, where it’s like, what you don’t want is a flashing cursor. So if I have to write, if I gotta do something about this and I kind of have an idea, there are ways in which I can use an LLM [00:35:00] to, quote unquote, collaborate with it to help me broaden and expand on my ideas.

Ken Shelton: But what you don’t want is for it to do your writing for you. Okay. And then I even said, now here’s the other thing. Take a look at what it actually wrote and let me know what you think of that. And then we did, and they looked at it. Almost every single time, the kids are like, well, one, I don’t sound like that.

Ken Shelton: And two, I don’t know that I would even turn that in. And I’m like, exactly.

Rebecca Bultsma: I talk about something like that. I call it the AI Oreo, but it’s this idea that you use AI to help get you started, and then the white part is all you. You make it good. You know, I’m thinking of those mega-stuffed Oreos.

Rebecca Bultsma: It’s about 20% AI at the beginning to help you do thought generation, think critically, think deeply; write your own stuff in the middle. And then at the end, if you want, use 20% AI again to say: challenge any critical arguments in this, where do you think I could develop something better? Use it as a thought partner.

Rebecca Bultsma: Act as if you are, you know, this particular audience or this particular teacher or an expert in this [00:36:00] particular thing, and use it at the beginning and the end. But make sure the best part, that white stuff in the middle that is everybody’s favorite, is 60% you and your personality.

Ken Shelton: So I think, to build upon what Rebecca just shared:

Ken Shelton: This is what I did with the students. I literally said, you’re gonna go through a process called question refinement. And so then I have them go into an AI platform and I said, you know, again: take on the persona of a high school English teacher who’s supporting a student to expand upon their ideas and develop their writing. Your objectives are the following.

Ken Shelton: And then we listed the objectives for their writing. And I go, what I want you to do is ask me a series of questions, no more than two questions at a time, to help me build upon my ideas until I meet those objectives, so that I can then take that and begin the process of doing my writing.

Ken Shelton: You see what I did there? And that’s what I did with the students. And I said, well, now you do that, and I wanna know how y’all feel about that. And their teacher is a good friend of mine. And [00:37:00] again, dismantling of barriers: the barrier oftentimes with writing is, I don’t know where to start, or I don’t know what to say.

Ken Shelton: Well, then that’s where we can use it. What you don’t want is for it to do that for you.
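Ken’s question-refinement setup can be sketched the same way: a system prompt that tells the model to interview the student, two questions at a time, rather than write for them. The function name and the sample objectives below are hypothetical illustrations, not Ken’s exact wording.

```python
def question_refinement_prompt(objectives):
    """Build a system prompt that makes the model interview the writer,
    no more than two questions at a time, instead of writing for them."""
    goals = "\n".join(f"{n}. {obj}" for n, obj in enumerate(objectives, 1))
    return (
        "Take on the persona of a high school English teacher supporting a "
        "student in expanding their ideas and developing their writing. "
        "Your objectives are the following:\n"
        f"{goals}\n"
        "Ask me a series of questions, no more than two questions at a "
        "time, to help me build upon my ideas until I meet those "
        "objectives. Do not write any part of the essay for me."
    )

# Hypothetical objectives for the Great Gatsby assignment discussed above.
system_prompt = question_refinement_prompt([
    "Identify a clear thesis about meritocracy in The Great Gatsby",
    "Support the thesis with two pieces of textual evidence",
])
```

The guardrail lives in the prompt itself: the model is told what it may do (ask questions toward the listed objectives) and what it may not (draft the writing).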

Brett Roer: What I truly appreciated there, you know, for listeners that kept up: oftentimes people talk theoretically about how to move forward in education using AI. But each question you gave, you explained why you’re doing it.

Brett Roer: You really walked through how teachers can get to that next step, and how students can. One thing I really appreciate you saying, ’cause I actually experienced this today. You know, I’m speaking to New York City leaders, and I show a lot about how you can use voice memos to really move your work forward and get to that first starting point.

Brett Roer: And afterwards, one of them said, well, if you do that for an observation, like, how would a teacher feel if you used AI to write their observation? I said, well, let me ask you a question. When I push a principal that I’m coaching, I say: what are all your low-inference notes? Gimme the [00:38:00] lesson plan. Gimme the resources the students had in front of them.

Brett Roer: Gimme an exemplar of student work. Now talk to me about what you were thinking and what you think the teacher needs to improve. Now look at the first draft. In real time, say it out loud and let’s refine it together and then, you know, upload your school’s instructional handbook, your portrait of a graduate, your last superintendent rating.

Brett Roer: Now, does it look like the exemplar that I asked you to give me, the one you handwrote last year? And now, is this ready? And so I said to them, if you did all of that, and then afterwards, after as many reviews as you want with an AI tool, you now sign your name on it? Yes, I think that is a perfect marriage of what you’ve believed, what you’ve built as a community, what your own personal philosophies are, and tools like AI to put it all together.

Brett Roer: And then I said to them, it’s the same way if a teacher wants to use an AI tool: they’re putting their name on that lesson. That’s up to them what they choose to present to students and, like, hold themselves accountable to if we’re gonna observe them. But what I really appreciate you doing right there is walking people through the rationale of all that front-loaded thought [00:39:00] process, which is just good instruction.

Brett Roer: All those questions you ask teachers before they put it in an AI tool. I hope that people were doing that before AI, but we know they weren’t. So this is great, that they now have a tool to put that all together. So again, I just wanna praise you for really making that narrative so clear for how you can push student and educator thinking.

Brett Roer: So thank you, Ken, for that.

Ken Shelton: Yeah, thank you. And I think for the listeners and viewers, it is important to understand why what you shared should be foundational to anything you do with artificial intelligence. And the whole idea that I always tell people is: your golden rule of thumb with any generative AI system you use is more context equals better output, period.

Ken Shelton: The more context you provide, the more you increase the likelihood of getting a better output. The less context, the more you want the AI to fill in the blanks. Well, guess what? I got news for everybody who’s watching and listening: the bias will for sure end up embedded in your results. I know it, ’cause I know how [00:40:00] these systems work, and I can expose that bias.

Ken Shelton: I mean, we can talk about it if y’all want, but I bring it up all the time. There was one where an educator was talking with me, and I’m not gonna name the platform ’cause I actually like the platform, but she was like, they have AI and they have a whole page dedicated to data protection and safety and blah, blah, blah.

Ken Shelton: And I was like, that may be fine and dandy, but I bet you I can expose the bias even in that platform in less than 15 seconds, ’cause they have the image generator and I know what they source their image data from. And she was like, really? And I’m like, watch this. And so my prompt was: generate an image of a teenager wearing an ankle monitor.

Rebecca Bultsma: Wow. And how do we ever fix problems like that? Those are the million-dollar questions that we’re trying to solve right now. But solving them starts with people being aware of them and talking about them and bringing them to the forefront, and putting pressure on governance and regulations and [00:41:00] transparency around how these are built.

Ken Shelton: Well, my approach is always twofold. One, there should be a higher degree of scrutiny, responsibility, and accountability on the developers. But again, as I’ve shared, I’ve been around ed tech a long time; that ain’t gonna happen.

Ken Shelton: It’s just not gonna happen. Should and does are two different things; there’s a massive delta there. So for me, it serves two purposes, especially with education. One, I bring this up to leaders all the time: don’t write a check to a platform that’s not being transparent or responsible in what they do. You don’t have to buy what they’re selling.

Ken Shelton: You don’t have to do it. And if all of us collectively say we’re only willing to buy it as long as it does the following things, well, guess what? They have to be responsive to the market’s needs. Okay, that’s just Business 101. Number two: that is a foundational component of the ways in which I define and emphasize AI literacy.

Ken Shelton: I know it’s there. So now that I know that, what are the things that I can do to reduce [00:42:00] the likelihood of something like that occurring? And again, not to oversimplify it, but to simplify it: your golden rule is more context equals better output. Some people laugh when they look at some of my prompts that are literally like a whole story, and I’m like, no, that’s for a reason.

Brett Roer: Two points I wanna make sure of, because again, I think we started with this, and I think Rebecca alluded to it. You know, if you’re a teacher or an educator or even an education leader, like, you’re in it, right? It’s so hard to do some of that vetting, you said. But we have a platform here, and you are doing that work.

Brett Roer: Are there any tools or solutions or anything right now that you feel comfortable saying are either doing it the right way or aspiring to do it the right way, that we should keep supporting and pushing along? And if there aren’t, that’s okay too, to be like, no, I don’t have anything that is really at the level I expect.

Ken Shelton: As much as I want to answer your question, I don’t want to influence viewers through confirmation bias. I prefer to empower people, and so I will put it out this way. First of all, two [00:43:00] things. One, because I just saw this at a recent conference: if somebody’s telling you that they built a platform from scratch and it’s new, they’re lying.

Ken Shelton: Okay? They did not do that. Nobody in their right mind is going to build an AI platform from scratch now; there’s too much data that has already been collected, so it’s not gonna happen. So that’s number one. Number two, again, I go back to those questions: what are the educational goals you have?

Ken Shelton: What is the problem you’re trying to solve that you yourself cannot solve? And then you map that solution to whichever AI system or platform best meets those needs. I do think that educators need to be equipped with the right questions to ask the vendors, starting off with: what are your data governance practices?

Ken Shelton: Meaning, how do they collect the data, and what mechanism do they have to cleanse problematic data? Because it’s there. For me, being a Black male educator, one of the very first things I always do is go look at their team, and unfortunately I don’t see [00:44:00] myself represented in any of their teams at all.

Ken Shelton: So that leads me to additional questions around things like: how is your platform designed to meet the needs of our historically excluded and marginalized student populations, when you yourself don’t even have someone on your decision-making team to provide that perspective? What are the ways in which the AI platform fits within your existing IT infrastructure?

Ken Shelton: See, that’s another thing to consider. Does it require a separate sign-on? Does it fit within my single sign-on? What do I need in order to sign on? Do you use an email address? Do you collect the email addresses? You see, I can be compliant with prevailing data privacy laws, both in and outside of the US, and still collect your user data.

Ken Shelton: And that’s another question: does my use of your platform also train your model? And then, if I write a check for your platform, what are the steps for effective and ubiquitous deployment [00:45:00] within my school or my school system? I have certain platforms that I personally like to use more frequently than others, but just for full transparency, it still depends on the experiential goal I have for the workshop or the PD or the talk.

Ken Shelton: What is in alignment with that experience? That AI allows me to do something that I cannot do myself, or that augments, or I would even say accelerates, something that I can do myself. For example, I design chatbots. I don’t use templates; I usually design ’em from scratch ’cause I have an idea, and I sometimes show people the prompts that I write that set the guardrails for the chatbot.

Ken Shelton: And so, for example, if I’m giving a talk and I want to gauge where the entire audience is with respect to their understanding, comfort level, and usage of artificial intelligence, this is a prime example. Yes, I would love to go speak to every individual, but if I’m in a room of, you know, 300, 500, a thousand, 5,000, I can’t do that.

Ken Shelton: But an AI chatbot allows me to do [00:46:00] that, because I set the guardrails for the conversation and connect them to the chatbot. I see their whole chat, and I get a spreadsheet of all of their conversations so that I can then further synthesize that data. See, I know what the experience is that I want them to have.

Ken Shelton: And again, I map that to whatever digital resource is available that allows me to do just that. And by the way, that is in alignment with equity, because I’m being responsive, I’m being relevant, and I’m using it in a way that still supports higher-order thinking. I’m not digitizing a worksheet.
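The mechanics Ken describes here — a guardrailed audience chatbot, then a spreadsheet of every conversation for later synthesis — come down to flattening per-participant transcripts into rows. A minimal sketch, assuming a hypothetical export shape of participant id mapped to (role, message) turns:

```python
import csv
import io

def transcripts_to_csv(transcripts):
    """Flatten chatbot conversations into spreadsheet rows:
    one row per message, tagged by participant and turn number."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["participant", "turn", "role", "message"])
    for participant, turns in transcripts.items():
        for turn, (role, message) in enumerate(turns, 1):
            writer.writerow([participant, turn, role, message])
    return buf.getvalue()

# Hypothetical sample conversation from one attendee.
csv_text = transcripts_to_csv({
    "attendee_001": [
        ("bot", "How comfortable are you with AI right now?"),
        ("user", "I have tried ChatGPT a few times for lesson ideas."),
    ],
})
```

One row per message (rather than one per participant) keeps the data easy to filter and tally in any spreadsheet tool afterward.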

Brett Roer: Listen, Ken, first, thank you for not taking the easy way out. You truly just empowered myself, Rebecca, and all our listeners.

Brett Roer: You just gave us the exact internal questions you should be asking if you’re in a decision-making position, or, if you’re an educator, what tools to use. This is the kind of research you should be doing. So thank you for giving us that roadmap. I know that I’m fortunate; again, I’m sitting here with two international keynote speakers who are both, you know, doing all this [00:47:00] research, both for their passions and knowing how important this work is, and leading it forward ethically.

Brett Roer: So I really do wanna speak about some of the ethical dilemmas that are out there, and let you two kind of hold court here, Rebecca and Ken. We’re in this very unique time. Rebecca is gonna share where she’s doing her degree, and they had, you know, extreme weather that canceled classes in a part of the world that doesn’t typically get it.

Brett Roer: Ken is located in, you know, the greater Los Angeles region. What is going on, and how is AI gonna play a role in the future of weather and the ethics that are around it? And I’m gonna turn that over to both of y’all. Hold court here.

Rebecca Bultsma: You kick it off, Ken. I’ll jump in.

Ken Shelton: I mean, yeah, like I said, I guess we don’t have three hours, but…

Ken Shelton: I do think that we have to maintain a conscious awareness and diligence around the use of these systems and the environmental and economic impacts that they have. [00:48:00] So that I’m not speaking in abstract terms to the audience here, let’s talk about the economic impact.

Ken Shelton: You know, if you use ChatGPT, you need to understand a couple of things. One, there is a free version, and then there is a paid version,

Rebecca Bultsma: and then a paid, paid version for 200.

Ken Shelton: Oh, yeah. Well, you said paid twice; there’s almost a paid, paid, paid version. So the question always to consider is: what am I giving up for the free version that I’m not gonna give up for the paid version?

Ken Shelton: So finances are now a barrier, especially within education. And I used to always say this to folks, which I don’t blame ’em for to some degree, and there’s easier ways to analyze ’em now, but I always ask: did you read the terms of service? And of course the answer is usually no.

Ken Shelton: And I’m like, well, I used to have, and I wrote about this in the book, a protocol that I personally would do around the terms of service. But now, because of the ability of a large language model to synthesize large volumes of data, you can [00:49:00] leverage those large language models to analyze that terms of service.
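As a sketch of what Ken means, the terms-of-service text can be dropped into a prompt built around the vendor questions he raises. The function name and the question list below are hypothetical illustrations, not his protocol, and the model’s answers would still need human review against the actual document.

```python
def tos_review_prompt(tos_text):
    """Prompt an LLM to audit a terms-of-service document against
    specific data-governance questions."""
    questions = [
        "What user data is collected, and is it used to train the model?",
        "Can user data be deleted on request?",
        "Is user data shared with or sold to third parties?",
    ]
    bullet_list = "\n".join(f"- {q}" for q in questions)
    return (
        "Analyze the following terms of service. For each question, quote "
        "the relevant clause, or state that the document does not address "
        "it.\n\n"
        f"Questions:\n{bullet_list}\n\n"
        f"Terms of service:\n{tos_text}"
    )

# Hypothetical one-line excerpt; a real review would paste the full document.
review = tos_review_prompt("We may share usage data with partners.")
```

Asking for quoted clauses, rather than a summary, makes it easier to verify the model’s reading against the source text.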

Ken Shelton: Okay. So there’s the economic aspect. What I always just say is: if the platform is free, then you are the product, because you are giving up a lot more for that free access. So free is not always better. Okay? So that’s number one, the economic. Now, the environmental component to consider.

Ken Shelton: I remember talking about this briefly with the proliferation of Google search, where my friends in the state of Oregon used to tell me about a massive server farm that is east of Portland. It was built literally right next to the Columbia River.

Ken Shelton: The Columbia River. Okay. And back then, of course, my curiosity: why would they build an electronic facility next to moving water? I’m like, water and tech don’t mix. And then of course, a friend of mine who’s a big-time cybersecurity guy was like, what do you think they [00:50:00] use to keep the rooms cool with all those servers operating 24/7?

Ken Shelton: I’m like, oh, okay. Well, guess what? Now magnify that with AI. Not just environmental impact: how much energy is required to run these systems? What materials are needed to build these systems? Where are those natural resources located? See, these are things, and it’s harder for us, you know, farther down what I would say is the line from acquisition of resource, to design, to production, to usage.

Ken Shelton: But it is something that we should also be consciously aware of, so that when we are in the right situation, at minimum we can pose the questions. Because there seems to be a degree of comfort hiding behind, before I say ignorance, I would say hiding behind the lack of transparency.

Rebecca Bultsma: I think you’re absolutely right. I [00:51:00] just actually listened to an interview with Dr. Sasha, and I’ll probably butcher her name, Luccioni. She’s an artificial intelligence researcher and a climate lead, and she was a guest on the Hard Fork podcast, which is a tech podcast put on by the New York Times, and did a really great segment on this.

Rebecca Bultsma: It talked a lot about how we have these stats, like, oh yeah, it uses a fresh bottle of water every five or ten times you interact with it. But the truth is, we just don’t know, because there’s no transparency around how much energy and water this is actually using. So everything’s an estimate.

Rebecca Bultsma: They’re not gonna tell us if we’re right or wrong, and we’re probably grossly underestimating it. And where I live in Alberta, one of the biggest data centers in the world is slated to go in right away here, right where I live. And we’re missing out a lot on that communications aspect: the engagement, informing people about the why, and the impact it has on neighboring communities and, you know, the climate and the environment, which we were just talking about. And it’s a major ethical [00:52:00] issue that’s not getting enough attention connected to AI. And I will say, I got this question in a session yesterday: well, should I just start using it less? Like, should I use it or not use it? And I have mixed feelings about that, because if I just start using it less, will it make a huge difference overall?

Rebecca Bultsma: It’s one of those things where we need to, you know, advocate for broader change. Like you mentioned, Ken, it’s really hard to have something like that change and happen, especially when, this week at the White House, there are giant announcements about brand new data farms and infrastructure connected to AI.

Rebecca Bultsma: So it’s just part of this really, really complex, messy ethical picture; we have to think about all these possible futures of where we go with this.

Brett Roer: Again, when you have two leading experts like yourselves here, I wanna make sure our listeners got a chance to hear that. If you had FOMO from earlier, that was one of the many things we discussed before cameras started rolling.

Brett Roer: So I [00:53:00] just wanted to make sure that y’all had a chance to hear where that kind of started out. We didn’t cover any of the questions we were gonna talk about, and I love it, because this was such a rich conversation. I just wanna make sure I give you flowers, because there are very few people that we could just, you know, give free rein to really tell people things that they have to be hearing from a leader in the space like yourself.

Brett Roer: As Rebecca said, that just means we gotta do a part two. But we are gonna end with what we always end with, which are our signature questions. So I just wanna make sure I say, for our listeners, and Ken, more importantly: you’re so driven by doing right in the world of education and the world writ large, and addressing, you know, inequity that’s historic.

Brett Roer: So instead of asking you, you know, we’re putting on a heist, it’s Ocean’s 11, who are the 11 people you’re bringing with you? I really want you to take a moment; it could be one person, it could be as many as you want. Keep [00:54:00] it rolling. Who are people right now that our listeners need to know are doing good things for the right reasons?

Brett Roer: Trying to move education and society forward, that you just wanna make sure, when you get a chance to say their names in podcasts and rooms like this, that they’re getting their praise, and why?

Ken Shelton: You know, I do have what I call in-group preference bias, and those are my peeps. So obviously my co-author of my book, and my friends at All4ED that are doing right by education.

Ken Shelton: You know, the thing that we have to be mindful of is that even in the context of what I would say are organizations that are engaged in attempting to do good, in some cases albeit performative, there are always individuals that see things for what they are. And they do their best to, you know, institute influence and change within their locus of control.

Ken Shelton: And so, you know, I don’t really [00:55:00] have a laundry list of names. I will say that those are the people that I keep close company with. They are my thought partners, but they’re also my accountability partners, because I’m also human, which means I am no less susceptible to mistakes than the next person.

Ken Shelton: But I also always bring up the fact that, you know, for me, the mantra that I carry with me is that if it doesn’t bring me joy and isn’t aligned with my purpose, then I let it go. You gotta be true to yourself. You gotta be true to what you believe in, recognizing that there is no such thing as collective acceptance.

Ken Shelton: So you might as well just be you, and you will find your community, you will find your peeps. And then you’ll also find that some people that you may have broken bread with or may have collaborated with before are not necessarily the people that you want to, you know, devote your time and energy to in support of.

Ken Shelton: But there are quite a few folks in the ed tech space and the tech space and the AI space, not [00:56:00] limited to education, that are doing right by raising awareness around this. Rebecca just brought up one: Dr. Sasha, I’m gonna assume that’s an Italian last name, so I’ll say Sasha Luccioni. But you know, I will say some of my favorites are also Dr.

Ken Shelton: Ruha Benjamin is one of my favorites, who does a lot of work around educational, or excuse me, technology research. One of the books that we actually cite in our book is Unmasking AI, by Dr. Joy Buolamwini, and her whole group called the Algorithmic Justice League; I roll with a lot of the stuff that they’re doing.

Ken Shelton: Dr. Alondra Nelson is another favorite of mine. And my guess is many of your listeners probably have not heard of these names before. And I'm drawing a blank on the professor's name, but there's even a professor at the University of Colorado in Boulder who does a lot of work around AI technologies, and especially the impact they have on vulnerable communities.

Ken Shelton: And I'm drawing a blank on her name now, so I [00:57:00] apologize. But my main thing is that there's always going to be a group of folks who tend to gravitate towards, to borrow a phrase from a close friend of mine, the candy for dinner. It tastes good, it's easy to get, you feel good because it's sweet, but what you don't realize is that longer term, it's actually detrimental to your health.

Ken Shelton: And that's the urgency I stress to educators: try not to allow yourself to be lured by and mesmerized by the tips and tricks and cool things, what my co-author and I call the throwing up of confetti and the blowing of kazoos. Because what it does is it can cloud your critical thinking judgment.

Ken Shelton: I get it, we want the quick and easy, but with the quick and easy, it's like you wanna microwave this work. And think about that: it's not good, it's not healthy, it's not effective. So as far as the [00:58:00] names go, I think what's most important is for your viewers and listeners to start to ask the question: am I consuming the exact same messaging that confirms what I already think?

Ken Shelton: Or am I looking to open my aperture and expand my understanding? As I've hopefully articulated in our recording here, I'm not opposed to AI, but what I am opposed to is oversimplifying it, on the promise side or on the fear-mongering, perilous side. That's precisely why the title of our book is the promises and the perils, like the whole picture.

Ken Shelton: So I do think it's important to recognize who you're hearing from, and I'm immensely grateful for our time, because I'd be remiss if I didn't stress the fact that outside of my co-author and myself, I don't see very many Black educators, period, either talking about it or having their voices amplified with any degree of impact or visibility at all.

Ken Shelton: And that's why I won't [00:59:00] stay silent on a lot of the stuff we share. And I recognize that I lose out on some stages, but I also gain some other stages. So, again, at the end of the day, I have to be able to look in the mirror and be proud of who I am and validate what I've done each and every day.

Ken Shelton: And if I can do that, then I feel that I've spoken truth to power and done the right thing for the right reasons.

Brett Roer: I've got no extra thoughts on that, Ken, other than just, like, thank you for being yourself. Thank you for bringing all your energy. Listeners, you wouldn't believe it, but when we started this, Ken was like, I'm a little under the weather. But I didn't feel any of that.

Brett Roer: 'cause you've just brought so much passion and truth to this. So I wanna just thank you so much for all your wisdom, all your passion, and all your purpose, and the work you're doing in education and society in general. Thank you so much again, Ken.

Ken Shelton: Well, being on this call with you all? This was fuel for my soul, that's for sure.

Rebecca Bultsma: Ours too. Wonderful.

Brett Roer: Thank you everyone. And, listeners, I hope you enjoyed this incredible conversation with the international speaker, author, [01:00:00] educator, and just amazing human being, Ken Shelton. And, Rebecca, let's listen and learn a little bit more about what we've got cooking in the news today.

Rebecca Bultsma: For today's tool tip, I just wanted to highlight a new feature within one of my favorite AI tools, Google's NotebookLM, which just last week announced a new Mind Map feature. If you're a student or a learner or someone who likes making connections between ideas, you're gonna love this feature. Basically, it will take anything that you upload to NotebookLM, so your PDFs, your website links, your YouTube links, your copied-and-pasted text.

Rebecca Bultsma: It will make a mind map connecting all the different concepts together in a visual way that will help you understand them better. Currently in NotebookLM, you can go in and create things like [01:01:00] audio overviews (the podcasts), study notes, and FAQs. But now, with this Mind Map feature, which is interactive by the way, you can take all of your notes and have these dynamic diagrams that will evolve as you add more materials and show you where the connections are and how all those ideas connect.

Rebecca Bultsma: It will do this automatically; you don't have to drag or manually make the boxes. Although there is something to be said for physically mind mapping as you're reading and studying, there's research backing up how that helps you learn, and cognitive offloading is an issue that's been in the news a little bit lately.

Rebecca Bultsma: So if you decide to use the Mind Map feature, be aware that it probably won't be as effective as creating your own mind map, per se, but it is a great tool. Here's where you're gonna find this feature: your uploads in NotebookLM will all be on the left-hand side, and then there's a chat window in the middle where you can ask the chatbot, which is powered by Gemini, questions about [01:02:00] your content.

Rebecca Bultsma: On the bottom right is where you're gonna see a button that says Mind Map, and it is a super handy tool. It rolled out March 19th, and it should be hitting all accounts for everybody probably right now, so check your account and see if it's there.

Rebecca Bultsma: The best part is, you can instantly download your mind map as an image file for free and then incorporate it into notes, presentations, or lesson plans, however you'd like. Good luck, and I hope you enjoy it!