The AI Chalkboard Issue #6: A Whirlwind Trip (1)
Lessons from the Road: Insights, Tools, and Ethical Questions in the AI-Powered Classroom
Thirteen days.
Seven flights.
Five cities.
Five workshops/presentations.
One Baby-Moon.
The past two weeks have been a whirlwind—but one I wouldn’t trade for anything. Here are a few snapshots from the road.
With Co-Presenters Nneka Mcgee, Ashlee Russell, and Tina Garrison at FETC
Big Thanks
A huge thank you to the incredible organizations and people who hosted me:
And, of course, to my wife, who joined me in Miami for our last couples' trip before our first child—a baby boy!—arrives this spring.
Top Five Takeaways
After connecting with principals, technologists, division heads, professors, and K-12 educators, here are the top topics that surfaced at every stop:
5. How Can AI Help Me in My Work?
One of the most frequent questions I heard: How can AI make my job easier?
Educators are curious about tools to streamline grading, lesson planning, and administrative tasks. While my presentations often focus on fostering AI literacy, I also like to highlight practical tools. Here are a few favorites:
BoodleBox.AI: An all-in-one assistant for educators.
Napkin.AI: Simplifies visual graphic creation.
Beautiful.AI and Gamma.app: Quick, AI-driven presentation creators.
Perplexity.AI: Excellent for research.
Brisk Teaching: All-in-one teacher productivity tool with grading features.
If you’re new to AI, start with foundational tools like ChatGPT or Claude, then explore other education-specific platforms like Magic School, Flint, or SchoolAI.
4. Which Tool Should We Use?
This is the million-dollar question. With so many options, it’s tough to know where to start. My advice:
Begin with foundational models (e.g., ChatGPT or Claude; see below).
Practice prompting to understand how AI works.
Explore specialized EdTech tools once you’ve mastered the basics.
Each tool has strengths, but finding the right fit depends on your unique needs.
As you evaluate tools, consider whether a department-by-department approach might be most efficient. Set up free trials for specific users in each department, then circulate findings about the efficacy of various tools through teacher-led AI committees.
Below are the foundational tools to get started. Here is a list of approved tools for the SAU16 School District in New Hampshire, for reference.
(H/T to Kristina Peterson for providing this document!)
3. Data Privacy: Can We Trust AI Systems?
Even with safeguards, educators are rightfully cautious about where their data is going. Some steps to take:
Use FERPA-compliant tools like SchoolAI and Brisk.
Create data privacy agreements with EdTech vendors.
For more resources, I recommend the Student Data Privacy Consortium. Share it with your school’s CTO—it’s a goldmine of guidance.
2. Are AI Tools Appropriate for Students?
This one’s tricky. So much depends on context and situation.
For example, most elementary educators I heard from felt that AI should not be brought into K-6/8 classrooms. But then I met a second grade teacher who puts character bots up on her classroom screen and facilitates teacher-guided, class-wide discussions with the bot about a given topic or story to engage students and deepen their learning.
The teacher guided the interaction and managed the flow of content. It was a clever use of AI in elementary education that I had not considered.
What are your thoughts? Reply to this email and we’ll share your feedback in the next newsletter.
1. Ethical AI Use: What Does It Even Mean?
This was the most pressing question at every stop:
How can we teach ethical AI use to kids?
What does "ethical use" even look like when it’s so specific to context and situation?
I understand why this is the most burning question of all. Educators are in the business of shaping young hearts, minds, and souls into moral, ethical, thoughtful, caring, and compassionate adults.
So more than anything, we'd like to understand how to guide them in a way that protects and fosters that kind of living in this world. The problem right now is that AI use is subjective, circumstantial, and even personal. There are simply too many variables at play to define ethical usage practices that cover every use case, every person, every situation, and every tool.
Still, I believe we can make progress. I’ll be releasing a research exhibit soon in the WAC Repository to help start these conversations. It’s a long road ahead, but every journey starts with a single step.
Going Forward
I’ve still got some traveling to do before the baby comes. If you’ll be at DLAC, ITC, or the SAIS Curriculum Symposium, let me know—I’d love to connect.
I’ll also be in Boston (Feb. 12–14) working with Massachusetts Bay Community College. If you’re in the area, holler at me!
Weekly Prompt:
Try this with ChatGPT, Claude, or another foundational model to learn about a new concept, idea, or skill:
I want to learn about __________ [insert topic or skill] __________, but I’m a novice. Generate a concise primer with definitions and examples. Create three real-world analogies and connect them to __________ [insert a personal hobby or expertise] __________ for better understanding.
After receiving your response, keep chatting! Ask the AI to “expand” on one part of its answer or provide more detail about a specific idea.
If you try it out, let me know how it goes. I’d love to feature your experience in the next newsletter!
Until next time,
Mike
P.S. Here’s one last photo with the folks from The Human Intelligence Movement. If you’re not aware of their work, check them out!
With the folks from The Human Intelligence Movement: Philip Alcock, Michelle Ament, and Dr. Michael Harvey