Think UDL Podcast Logo


Humans in the AI Loop with Eric Moore and Kevin Mallary

Welcome to Episode 158 of the Think UDL podcast: Humans in the AI Loop with Eric Moore and Kevin Mallary. Dr. Eric Moore is the Director of Learning Design and Technology and Kevin Mallary is an Instructional Design Specialist at the Kennedy Krieger Institute. Both Eric and Kevin are Assistant Professors by courtesy at the Johns Hopkins University School of Education. Eric and Kevin have been doing some great work at the intersection of UDL and AI and have some sage advice on creating safeguards and guardrails as you approach using AI in adult education. In this conversation, we discuss the need for always centering the human perspective and keeping the humans in the AI loop at multiple intervals, and how to do that through PLCs, or Professional Learning Communities. You’ll find more information in the resource section just before the transcript on this episode’s webpage at ThinkUDL.org.

Resources

Find Eric Moore’s profile, Kevin Mallary’s profile, and Miriam Larson’s profile on LinkedIn. Their article, Artificial Intelligence and Universal Design for Learning: Transforming Teaching and Learning in Adult and Continuing Higher Education, is available on the Wiley Online Library.

Transcript

50:05

SUMMARY KEYWORDS

Universal Design for Learning, AI in education, human perspective, professional learning communities, generative AI, learner variability, ethical use, accessibility, interactive simulations, real-time captioning, instructional design, critical thinking, educational technology, future of AI, human-in-the-loop.

SPEAKERS

Lillian Nave, Kevin Mallary, Eric Moore

Lillian Nave  00:02

Welcome to Think UDL, the Universal Design for Learning podcast, where we hear from the people who are designing and implementing strategies with learner variability in mind. I’m your host, Lillian Nave, and I’m interested in not just what you’re teaching, learning, guiding and facilitating, but how you design and implement it, and why it even matters. Welcome to Episode 158 of the Think UDL podcast, Humans in the AI Loop with Eric Moore and Kevin Mallary. Dr. Eric Moore is the Director of Learning Design and Technology, and Kevin Mallary is an Instructional Design Specialist at the Kennedy Krieger Institute. Both Eric and Kevin are Assistant Professors by courtesy at the Johns Hopkins University Graduate School of Education. And Eric and Kevin have been doing some great work at the intersection of UDL and AI, and have some sage advice on creating safeguards and guardrails as you approach using AI in adult education. In this conversation, we discuss the need for always centering the human perspective and keeping the humans in the AI loop at multiple intervals, and how to do that through PLCs, or professional learning communities. You’ll find more information in the resource section just before the transcript on this episode’s webpage at ThinkUDL.org. As always, thank you for listening to the Think UDL podcast. All right, I’d like to welcome my two guests. A repeat podcast guest for me is Dr. Eric Moore; this will be the third time that he’s been on the podcast. So welcome back to the podcast, Eric. Glad to have you.

Eric Moore  02:16

Thank you. It’s great to be back.

Lillian Nave  02:18

And also, Kevin Mallary, you are a first-time guest, but I’m very excited to talk to you about what you both are collaborating on at the Kennedy Krieger Institute. So welcome to the podcast, Kevin.

Kevin Mallary  02:31

I’m glad to be here. Thank you for inviting me.

Lillian Nave  02:32

Great. So I’ve already asked Eric this question before, but I haven’t asked Kevin, and that is: what makes you a different kind of learner?

Kevin Mallary  02:44

Well, I am profoundly deaf and legally blind, and recently, in 2024, I was diagnosed with ADHD. And so, as you can imagine, from early on, learning was never something I could do passively. I couldn’t rely on overhearing discussions, skimming slides or picking things up by accident, so I had to slow my learning down and engage very deliberately. I learned to do things like ask for the materials in advance, or to focus on the structure of the content and the meaning, rather than emphasize speed and trying to get things done too quickly. And I noticed right away when access or my attention would break down, and so it was very important that, over time, I was intentional with how I learned. I think that intentionality has turned me into somebody who doesn’t just simply learn the content, but learns how learning itself works for me, and I’ve become an active partner in the process, rather than simply being a passive recipient of information. And I think that being an intentional learner is why I’ve become such a firm advocate for UDL.

Lillian Nave  04:19

Yes, yes, we want those expert learners, or really that learner agency, which is the new terminology in our guidelines. And you make me think of something that has really been a mantra in the class that I teach now, called How’d You Learn That? And what I’m thinking of is: the one who’s doing the work is the one who’s learning. And it sounds like every bit of your learning, as you said, was not passive. It was, in essence, hard fought to get to those concepts. Yeah. Thank you. And Eric, I’ve asked you a version of this question at least twice before. And so, gosh, the first one was in 2018, which is six, seven years ago, if I did the math right. Has anything changed if I were to ask: what makes you a different kind of learner?

Eric Moore  05:14

Yeah, so a lot has changed. I think, you know, at this stage in my life, I’m older, I’m further along in my career. Now for me, it has been a constant evolution of the way that I approach learning. At this point, having achieved, you know, some degree of success in my career and my life, being again, being older, and so forth, the challenge now is to not continue to rest on my successes, you know, to not say this is how I did things, and it got me here, so I’m going to keep doing things that way. I really have to now push through, to intentionally unlearn, to intentionally challenge my understandings such that I can break through into new ways of doing things. I found this to be especially important since I’m leading a team, where it’s so easy to just let success, you know, carry us forward to our oblivion. As circumstances change, as systems change, as the world changes, we have to be adaptable, and so learning has to be active, as does unlearning. As others in the UDL sphere, like Allison Posey, have reminded us, it is so important to make unlearning a critical part of learning. And I’ve just found that to be more and more true.

Lillian Nave  06:35

I love it. Yes, we do. We change. We learn how to learn differently. And of course, it’s the environment. We talk a lot about that in our UDL world: as the environment changes, it changes how we learn, and so we can shape that environment, too. Thank you. But today we’re going to talk specifically about generative AI. I came across a little article you guys wrote, and I was really interested in learning more about it. So I’m going to start off with you, Eric, to ask about how you’re using generative AI in adult and higher education. And actually, let’s start off with: what are some concerns and advantages of using Gen AI, which is such a hot topic right now?

Eric Moore  07:20

Right? Speaking about changes in the world and in systems and having to adapt, right? Well, generative AI, of course, has taken all of us by storm, you know, as the latest major cultural and technological shift that has rocked education and industry. And as you’re right to point out, there’s been a lot of consternation, a lot of concerns, as well as a lot of optimism, or even propaganda, about the potential of generative AI. And so in the face of these disparate voices, you know, doomsaying and promising the world, we’re sort of left wondering what’s real. You know, should I be for this or against this? And rather than just listening to contemporary voices, I’ve often found in my life that truth is truth across time. There are principles that stand the test of time, that weather cultural changes and technological changes. And one of the world’s most famous voices that has stood the test of time is Plato, actually, and Plato has a lot to say about generative AI, believe it or not.

Lillian Nave  08:34

Really? Tell me about that.

Eric Moore  08:37

Not directly. But in one of his books, called the Phaedo, Plato’s Phaedo, I love to say that, he talked about how everything has equal potential for good and evil. And in his, you know, his method, the Socratic method, that he sort of uses in the dialog, he gives examples, you know, of a dog. What’s the worst thing that a dog can do? It can kill somebody, right? And the best thing that a dog can do is rescue somebody’s life, right? They’re equal and opposite. He goes on to say, a child can do more good and more evil, an adult more good and more evil. And in Plato’s mind, the super intelligent adult was the one who had the greatest potential for good and evil. We’ve seen this sort of play out through history, where a lot of extremely capable people have used their capacity for wonderful things or terrible things. We’ve also seen it play out in technological ways that Plato could not have possibly imagined, right? We see the potential of social media to bring people together or drive people apart; we see the potential of nuclear technology to help cure cancer or decimate populations in a heartbeat. You know, the potential is always equal and opposite. And so the question isn’t whether generative AI is good or bad. The question is, what is the potential of generative AI? And it is very, very high, from what we can see at this point, which means that it can, and by Murphy’s Law will, be used both for great good and great harm. And there’s an opportunity, a responsibility, for people who are in positions of leadership, positions of power, whether that’s political or in industry or educators, to be part of the solution in driving people to ethically and sensibly use this to further its good potential and mitigate its negative potential whenever possible, as with other technologies that have come before.

Lillian Nave  10:47

Excellent. So what are we to do? How can we, I guess, explore that positive side? And I guess the flip side is, but aren’t there a lot of negatives, like environmental impact and that sort of thing? What do you say about those?

Eric Moore  11:05

Yeah, absolutely. So let me take that one step at a time. There are a lot of things that I think we can be doing to mitigate the negative and promote the positive. Probably the number one biggest thing, though, is to keep the human in the loop. “Human in the loop” is a phrase you sometimes hear in generative AI conversations, which is to say, you know, one of the great fears, and we’ll come back to this a little bit later too, is that education is going to devolve down to computers giving assignments, computers responding, and computers grading, yeah, you know, and we totally eliminate the humanity that drives education. Human in the loop means that any time we touch generative AI, for any reason, humans are involved in the design of that use case and in shaping the output of that use case. It can never just be copy, paste, you know, drop it in. That’s really where we see things getting ugly and, you know, the negative aspects coming about. So in the case of education, for example, it’s tempting for teachers who are extremely busy, and I say this from experience, to just say, can you make this assignment for me so I can give it to my students? Yeah, and we have to resist that temptation and think about, what am I trying to accomplish here? Does generative AI have a role in helping me accomplish this better? And then looking at the output, shaping that to better fit the original purpose I had in mind, using generative AI as a tool, not as a replacement, and teaching our students to do the same. You know, I think early on, because of the concerns, there was a heavy-handed reaction in some places, like, shut it all down, no generative AI in here, which, of course, was going to fail from the outset. And it missed the opportunity to teach people how to use it correctly, you know. So it’s not telling our students don’t use it at all. It’s saying don’t use it to write your papers, you know, but write your paper and then maybe ask for feedback, you know.
Maybe ask, how might I construct this sentence better, and analyze the results that it puts out? So you can use it to help improve your own skill or writing or thought process, using it as a tutor, but not exclusively, you know, double-checking key points, things like that, just always keeping the human engaged in the process, rather than being a passive recipient of whatever the computer is outputting. You also mentioned things like environmental impact. It’s a fact that generative AI has a significant impact on the energy grid, on water usage, and potentially on air pollution and so forth. I want to recognize that these are real concerns that warrant attention and response. So people need to attune to their own ethics, and at a minimum, we should be aware and we should be transparent. When I say we, I mean industry, governments, you know, schools and individuals all have a role to play in thinking about these things and not shying away from that reality. But we also need to be in an ongoing process of finding solutions. When we think back to other technologies, it’s been similar: cars and cell phones, even primitive tools like fire or the wheel. You know, everything, again, has potential for good and evil, and so we have to mitigate those drawbacks as much as we can, while we still try to preserve how these things can be used to make systems, humanity, even future environmental efforts better.

Lillian Nave  14:43

Nice, yes, I really appreciate that. It’s a really great way to get into this topic, thinking about the great potential for positives, but also knowing we’ve got to keep figuring out how to do this well. We’ve kind of gone gangbusters and, like, whoa, this is using a lot of water and resources, and we’ve just got to figure out how to do that well. But there’s so much good potential that can come out of it, and what you two have done has already been really good. And so that’s what I wanted to talk to you both about. And so Kevin, I’m going to toss this next question to you about Gen AI adoption: what are some examples that you have in adult and higher education that you have been able to work on?

Kevin Mallary  15:41

So that’s a great question. Gen AI has already impacted continuing healthcare education, for example, in numerous ways, as Eric and I have quickly discovered. In fact, we regularly use Gen AI in our work at the Kennedy Krieger Institute. What we do is we use Gen AI to support the development of more engaging and accessible learning experiences. One way we do this is by creating interactive, scenario-based simulations for healthcare professionals, whether doctors, nurses, or other providers. Rather than passively consuming content, learners are placed in realistic clinical situations where they make decisions, they’re able to then see the outcomes of those decisions, and then reflect on what they’ve learned and what they might do going forward. So the experience adapts in real time based on the learner’s actions, which really does help support the idea of personalized learning, and it helps to reinforce key concepts right off the bat. We’re also using Gen AI to focus on improving the accessibility of our courses, and that’s something that we are staunch advocates of. So for example, what we’ve been doing is implementing real-time captioning and translation for our instructional videos, which has really had a significant impact on many of our learners across the world who are multilingual, and even learners who are deaf or hard of hearing or just simply prefer to have that text along with the video. What we found especially promising is that these tools can actually be trained. These large language models can be trained on medical terminology, so that over time, the captions and translations become more accurate and more context-aware. So by doing this, we’re making access a built-in feature, rather than simply an add-on. And that really is part of our work at Kennedy Krieger, but I also wanted to give a shout out to our other work.
Eric and I have actually worked closely with Dr. Miriam Larson, who also works with us at Kennedy Krieger, and we have worked together with the Johns Hopkins University School of Education to develop several graduate-level courses in learning experience design. Miriam, in fact, is teaching her course this term, Introduction to Learning Experience Design, and Eric and I are going to be teaching ours this summer and into the fall. And the neat thing about these courses is that students are introduced to Gen AI tools and strategies, such as how to craft effective prompts for getting the results that are most aligned with the information that you need. And we’re encouraging students to check their reading comprehension. For example, they use tools like ChatGPT or Copilot or Gemini to generate textual summaries or even graphical charts that highlight what the main ideas of the readings are. And what they’re doing then is critically comparing these summaries with the original text, the original material, to make sure that what they’re understanding is indeed what they should be learning. Another thing we’re doing is showing our students how to use Gen AI to deepen their understanding of important learning theory concepts and even design methods by helping them ask for explanations, examples, and additional resources. And so many of our students, through these experiences, have found that these tools are very supportive thus far in their writing process, helping them to draft outlines, experiment with alternative phrasing, and even study examples of effective academic and professional writing. What this is all allowing our students to do is to build essential literacy skills in generative AI so that they’re able to tackle challenges in their future jobs and in their lives as Gen AI becomes more and more pervasive.

Lillian Nave  20:35

Wow, right now I’m thinking that’s the human in the loop: you’re teaching those students how to be that human in the loop. It’s modeling for them that, yeah, they’re learning how to do the prompts, and they’re also saying, I’m going to check your work, Gen AI, and see where you were hallucinating. Were you really getting this? And then that’s using a different kind of analysis, or critical thinking. So it’s kind of multiple critical thinking skills for our students: not just, let’s say, analyze the work, but then say what’s different, how somebody else might see it, in essence, how that Gen AI is reading it. That’s fascinating.

Eric Moore  21:14

Yeah, I feel like another part of this is to get them, especially if they’re in professional, you know, learning programs, when they’re working through master’s degrees, or in, you know, upper-division courses as undergrads, to get them thinking about how generative AI will be used in the industries that they’re in or going into, right? Again, trying to make sure that we’re shaping ethical, constructive use of this tool. So, as we’ve spoken about before, there’s the difference between expert students and expert learners: the skills of using any tools or resources or approaches that help you get As in classes basically reduce in value to zero on the day you graduate, you know. And so if they learn how to use generative AI to complete papers or, you know, projects for class, but they don’t see how that translates into the field they’re moving into, the value of that instruction or that learning goes down. So I feel like we’re also looking at what it means to be an instructional designer using generative AI in an ethical, productive way, you know, and incorporating that into project work and so on and so forth. We’ve also been seeing the effective use of generative AI agents. You know, they go by different names depending on the platform, so custom GPTs, or agents in Copilot, and so forth, where these are custom-built bots, if you will, that achieve a specific purpose. That might be a tutor for a specific thing, or it might be a case study that you encounter in real time where your choices have effects on the next step, you know, things like that, where we can build in these sorts of interactive experiences using generative AI that previously would be time- or cost- or resource-prohibitive. Now we’re able to really expand that opportunity for everybody.

Kevin Mallary  23:07

Yeah, and I think something that really resonates with me is we are helping learners to become researchers. And what I mean by that is we’re helping our learners to become critical consumers and producers of information, and I think that Gen AI is really allowing our students to build critical literacy skills, information literacy, digital literacy, and to engage in problem solving in ways that they otherwise might not have before. And so that’s something that I think is the really remarkable thing about the advent and the continued growth of Gen AI.

Lillian Nave  24:02

Yeah, this is wonderful to be exploring all of these super positives, right, when we could go one way or the other. I love it. One of the things that generative AI, I believe, is supposed to do is make learning more equitable. We’ve already talked about how it might help with accessibility. You know, for any students with disabilities, having those captions is going to be great. But what can we as educators and as institutions, as policymakers, do to be mindful of equity gaps when using generative AI? And this is going to be for Eric, I’m going to put this towards you, because we want this to be equitable. We want learning to be equitable for all students. But can it also backfire when we rely on Gen AI without, let’s say, paying attention? What do you think?

Eric Moore  25:02

Yeah, again, this is going to go all the way back to Plato, you know. Generative AI can be used as one of the great levelers of technology in my lifetime, and it can also result in the rich getting richer and the poor getting poorer, from an academic perspective. So it’s really about how we make use of it. There are some things that I think institutes of higher education are doing that are really good in this realm right now, and other things that can still be improved upon. One of the things that I’m seeing as a trend is a lot of institutes of higher education are building on the infrastructure of, for example, ChatGPT, but then they’re creating their own proprietary skin on it, if you will. So for example, UT Knoxville has UT Verse; at Johns Hopkins, they have their own version of a generative AI platform. And these are designed to have safeguards, among other things. They protect privacy a lot more than the public versions of ChatGPT do, you know. They help maintain institutional safeguards in terms of the types of prompts that can be used, you know, and so forth. All of these things really help put guardrails on, you know. And guardrails, to me, are not something that are used to control, but to teach, to guide, and that helps with equity. It helps with some of the ethical things that we’ve been talking about. The other thing, though, is I feel that there needs to be ongoing, transparent, open coaching and support. We’ve seen this for a long time, where older generations sort of have this misunderstanding that the younger generations must be excellent technologists because they grew up in the age of computers and technology and tools, right? And it’s just not necessarily true. You know, there are a lot of people who are Gen Z who are great at scrolling through Instagram, but that doesn’t mean that they know how to use technology to learn, right?
So just because some people were young at the advent of generative AI, you know, hitting the public, doesn’t mean that they’re skilled with it, right? What I’m finding is that you can keep getting exponentially better with generative AI if you start on the right footing. And if you never find your footing, you know, you might just be getting left behind. And we’re getting right back to that: some people are accelerating at an exponential pace, and other people are kind of stuck in the doldrums. As part of education in the contemporary world, we don’t just teach our subject matter, you know, whether that’s literature, philosophy, music, whatever. We also need to be teaching cultural understandings and technological resources for being a citizen and an employee in the contemporary world. We really need to drive forth the promise of UDL to develop expert learners, or, I can’t remember what the new language on that is now, but to

Lillian Nave  28:03

Learner agency.

Eric Moore  28:05

Agency, thank you. And the more public that is, the more ubiquitous those conversations are, the more we’re going to catch everybody and kind of see all boats begin to rise. An example of what generative AI can do to level the playing field now is the way that a well-designed agent, as it were, could be used as a tutor for a given class, you know. So if I’m teaching, let’s say, calculus, and a lot of students struggle with calculus, I might front-load it with information: these are the common, frequently asked questions or points of confusion that my students have brought to me, this is how I speak to that, this is what I want you to reference, these are the sources I want you to draw from. Don’t just go searching the internet, you know. I really fine-tune a very specific knowledge set for this and then allow the students to interact with it. I might even say, don’t give them the answer, but instead prompt them with, you know, formative questions, scaffolded questions, and give them feedback. You know, if they’re wrong, tell them why they’re wrong, that kind of thing. So we can create a custom tutor for my calculus class that doesn’t cost a penny for the students who use it, who can use it at three in the morning, if that happens to be when they’re doing their work, you know, whatever. And it really means that students who, either because of time or money or embarrassment, would never use a tutor, now can, right? And it really expands the ability for people to benefit from that sort of immediate feedback and guidance that has been exclusive in the past. That’s just one example of many.

Lillian Nave  29:53

Yeah, that’s an amazing example too, because of that idea about who can pay for a tutor, right? And so then we have colleges that have all of their tutoring hours for math and usually the STEM courses, but still, there’s an equity question: do you go to a college that has all those? Do you have the ability to participate in that program? Does the institution have the money to do all of those things? So I can see how that’s a really good leveler of the playing field. And I’m also interested, because you’ve talked about this and written about it, Kevin, this is going to be a question for you, which is about professional learning communities, PLCs, and that has also been a really great, effective thing in applying Gen AI, generative AI, in adult and continuing education. So how has that worked? Do you have some examples or success stories about that sort of way of using generative AI?

Kevin Mallary  31:01

I do, actually. For me, professional learning communities have really been one of the most effective ways to use generative AI responsibly in adult and continuing education, with the keyword being responsibly. What PLCs do is they move the work of designing instruction from individual experimentation to shared reflection. So the faculty come together and they ask, what is it that supports learning, access and equity? I actually saw this clearly in a fully asynchronous graduate program in library and information science, where I was part of a PLC, and our conversations helped us to agree on what tools are accessible, what the boundaries for Gen AI use are, and what things we can do to really help support our students. And one of the primary ways at that time was the use of transparent rubrics for scoring assessments. I think that has clearly evolved as Gen AI tools have advanced as well, and so this consistency allows students to use AI to really check their understanding and to unpack complex theories without being confused about the expectations for their assignments. I think that’s really important: Gen AI has helped to avoid confusion as much as possible. I actually had a similar experience teaching undergraduate communication and information science courses, and so as part of a PLC in that institution, we shifted assignments to emphasize the process. So we encouraged students to use AI for outlining or exploring alternative phrasing and reflecting on the choices that they make. And we came together routinely to share what we observed with our students, and that really allowed us to refine our teaching practices on the fly as we went throughout each semester. And what’s been really meaningful for me is how PLCs center responsibility. As someone who is, as I said before, profoundly deaf, legally blind, ADHD, you name it, I’m always attentive to where access breaks down.
And what PLCs do is they create space to evaluate tools such as captioning and transcription services, and the PLC really talks to the students and figures out how the tools are working. Are they effective? Are they not? With that said, PLCs are only effective if they are reflective, when we ask questions about whether Gen AI is deepening learning or improving access or aligning with our institutional or programmatic values. You know, it’s very important that we have these conversations, because if we don’t, then our students really cannot benefit from using Gen AI responsibly and effectively. And in fact, Eric, I’m going to turn it over to you. Eric has a very helpful framework for how PLCs can indeed support Gen AI adoption in adult and continuing ed. Eric, would you mind sharing that framework with us?

Eric Moore  35:17

Yeah, framework might be a little bit strong for what this is. It’s really just a little sketch, but it’s a way of sort of understanding the evolution, the fears, the potential, and then what PLCs add. So if we think about, you know, pre-high-tech learning, back in the day, before there were even computers or anything else, learning was purely human, you know: humans were teaching humans who got feedback from humans. You know, it was just completely human-mediated. The fear, you know, with computers, and now especially with generative AI, is that, again, we’re going to get to a place where computers are communicating with computers, and humans, all they need is Ctrl+C and Ctrl+V, you know, that kind of thing, right? And that is, again, a legitimate concern that we need to guard against. The potential that we talked about is to keep humans in the loop, you know, where humans are actively involved on both sides of the use of generative AI. So I’m designing if and then how and when and why I would use generative AI. Then I use generative AI for a specific purpose, and then I’m back involved again, integrating that into whatever task it is that I’m completing, right? That’s the potential, where we maintain human agency through the process. What PLCs do is elevate the humanity and the human connections before and outside of the computer’s involvement, again, potentially before and after the use of generative AI. So, for example, in my PLC, I might be talking through this opportunity or this goal or this problem, and just working through different ideas or solutions with people. And then, you know, part of the conversation with my PLC might be: would generative AI be helpful in unpacking this? And if so, how might I use it, so that design doesn’t just have to come through my own head, but also borrows from the ideas and the support of my colleagues.
And I might find that I don't need generative AI, that the ideas we generated together are sufficient to meet my needs. Or the conversation might give me guidance on how I might use generative AI. And then again, I'm taking the output and I'm processing it, but now I also get to process the output with my PLC, share the outcomes of that, unpack it, and refine it further. Woodrow Wilson once said, "I not only use all the brains I have, but all the brains I can borrow." To me, that's what generative AI is: it's borrowing a whole lot of brains in a heartbeat. But it still doesn't replace the social component that PLCs provide, of utilizing brains in real time with actual humans.

Lillian Nave  38:05

Wow. I have seen that a lot of universities are forming generative AI professional learning communities. And I'll do a shout-out to Appalachian State: we have AI4AI, that's AI, the number four, AI. That's it, five characters, which stands for Appalachian Instructors for Artificial Intelligence. Clever, yes. A shout-out to Derek Eggers, one of our educational developers at Appalachian State, who's running that group, or convening, I should say, the humans in the group. They've really done an amazing job in sharing what they've come up with and in working on how AI can be useful in academic settings, in their courses and everything. And I love that you're just adding more and more humans every time. So in that loop with the PLC, it's doubling the humans, or quadrupling the humans, right, as it goes through that loop with a computer. And I will put your little framework, or idea, about that in our resources so people can see it: pre-AI learning, what the humans-teaching-humans piece is, and then how that works through AI and professional learning communities. Because the more we can put humans in the loop, the more we're putting those safeguards on, as you mentioned, and the more we're, I think, broadening the cone for positives and narrowing the cone for negatives, as you mentioned with Plato's Phaedrus from the very beginning, right? Well, this is actually a very positive discussion about AI, when there are a lot of negative discussions out there. So what are the things that we can look forward to? And Eric, I'll throw that out to you again.

Eric Moore  39:55

Yeah. Well, before we get to that, I just wanted to suggest one further thing for the conversation. While this is a positive dialog, we're also not shying away from the reality of the potential for negatives. One of the biggest things about human-in-the-loop, whether it's an individual interacting with the AI or a PLC working with the AI, is to not go in with the assumption that the generative AI has a role at all. Abraham Maslow famously said that if all you have is a hammer, every problem begins to resemble a nail. Sometimes people who are early implementers, and I count myself among them, like to say, "Ooh, shiny new tool! How can I use it? How can I use it in this situation? How about this situation?" And we're just playing with it, just coming to learn it. That's one thing. But when that really becomes integrated into our workflow, like everything I do: before I write that email, I'm going to run it through AI; before I finish my paper, I'm going to drop it in. I think we need to, as part of the human loop, individually or collectively, be willing to challenge that. Going back to it, the first question that we ask is: does generative AI have a role in this? What is the value added that it can bring? And sometimes the answer is none, right? Or this is not an appropriate time to use it. That's going to be an important part of maintaining the human loop, maintaining the design approach to the use of generative AI in education, in higher education, and in professional and community life outside of school.
So, to your question about what we can look forward to: there's a technophile approach to this where, depending on who you listen to, there are a lot of tech bros out there trying to tell us about the wonderful Jetsons world we're looking forward to with generative AI. We'll see. Again, equal potential for good and evil, and it's really going to come down to how well it's regulated and how well it's used by people and systems. I think we can expect growth. Looking historically, technology has never stopped growing; it's always moving faster and faster. Moore's law, if you're familiar with that, which describes the doubling of computing power over time, seems to be getting outdated as the exponential growth of technology continues. At the same time, we've seen that some of these futuristic expectations of generative AI, that we would hit general intelligence by 2030 is one that I've heard, don't seem to be panning out. Part of this is really interesting from a philosophical perspective. When we talk about intelligence in AI, what even is intelligence? It's hard for us to define what intelligence is; it always seems to outstrip our definitions. The metaphor that we've used for a long time, of the human brain as a computer, is, like any other model, useful but limited; the brain is not really like a computer in some ways, right? It's hard to pin that down. But the expected growth models for generative AI, for getting toward general intelligence, were very much grounded in that metaphor: the brain is like a computer, so intelligence just means more processing speed, better hardware. And so as they've built bigger and bigger data centers, they expected an equivalent growth in the power of generative AI.
And that hasn't necessarily panned out to the degree that they were expecting. That said, I think we will continue to learn, and it will continue to grow. There's going to be further specialization of generative AI tools. We've already seen this: where ChatGPT was really just a generalist, we're now seeing generative AI that's specific to medical documentation, to instructional design, to graphic design, whatever. We're going to see more focused tools that are able to help people accomplish things in specific sectors, for specific purposes. The other thing: "what do we have to look forward to," depending on how we frame that, implies a passivity, like we're just going to see what the technologists bring to us. And I think, really driving home this message of human in the loop, we need to get to a place where we're not just passively waiting and seeing, but we're active consumers, conscientious consumers who are driving demand, driving responsible use and non-use, to preserve and enhance humanity and human learning. I think we need to get to a place where there's less hand-wringing and more educators willing to roll up their sleeves and get involved in shaping the future within their spheres of influence. When we show industry, "this is what we want more of; we don't want this, play that down," we're able to help guide the future of generative AI, instead of just waiting for it to happen to us.

Lillian Nave  45:14

Absolutely, we need to add those humans in the loop. I think that's going to be the title of this episode for sure. And I'm really appreciative of what you both have accomplished already and are thinking through; I think this is a really good conversation for folks to hear. So I wanted to thank you both very much for taking the time to speak with me. And Kevin, I wanted to ask you if you wanted to add anything to what Eric had said.

Kevin Mallary  45:46

First of all, keeping the human in the loop is definitely the theme of today's conversation, for sure. And it goes to show you: I remember attending faculty town halls where professors were saying, "Oh my goodness, gen AI! Oh no, our jobs and our job security are going away," and that's just not the case. That's just not the case, at least for now. But seriously, though, we really do need to keep people involved. At the heart of technology is the designer, the developer. So I think we are definitely in a prime position to, as Eric talked about, really shape the future by taking an active role and not sitting back and just letting the technology do the work, right? But overall, it's really been an honor being here with you both today, and I'm so thankful that you invited me on to talk about such an important topic.

Lillian Nave  47:05

It is. It is so important. And I must say, I was one of those kinds of dinosaurs in the beginning. I said, "I don't want to use this. I am not going to." And that's actually pretty bad for me as a UDL person, who you'd expect to want to try these new things and figure them out. But I was really reluctant.

Kevin Mallary  47:27

Yeah, I was a skeptic, too. Eric has really influenced me in many ways over the course of my career, and one of the ways he's influenced me is by saying, "Hey, it's okay to try this technology sooner rather than later." I was one of those who kind of lagged behind, but I finally got to a place where I can really appreciate the value of the technology while, at the same time, understanding that while it certainly allows us to be more creative, there are constraints and challenges that we need to recognize and really design and develop for.

Lillian Nave  48:12

Yes, absolutely. And I, too, count myself among the acolytes of Eric Moore; he's pushed me to think in different ways, and I think that's what's great: we push each other. And I'm very glad, yeah, we've got to keep these humans right here, these humans in the loop, for sure. Absolutely awesome. Well, thank you both so much. It's been a really great conversation, and I will make sure that your article is linked so people can find out more about what you're doing; they can find that in the resources of the podcast. So thanks very much. Thank you. Thank you for listening to this episode of the Think UDL podcast. New episodes are posted on social media: on LinkedIn, Facebook, X, and Bluesky. You can find transcripts and resources pertaining to each episode on our website, ThinkUDL.org. The music in each episode is created by the Oddyssey Quartet (Oddyssey is spelled with two Ds, by the way), comprised of Rex Shepard, David Pate, Bill Folwell, and Jose Cochez. I'm your host, Lillian Nave, and I want to thank Appalachian State University for helping to support this podcast. And if you call it "Appellation," I'll throw an apple at you. Thank you for joining. I'm your host, Lillian Nave. Thanks for listening to the Think UDL podcast.
