Day 18 of 20 · AI for Teachers
Using AI Responsibly in the Classroom
⏱ 7 min
Beginner
This lesson is different from everything else in the course. So far, you've been learning to use AI as a teacher – for planning, marking, resources, and communication. Today we flip the perspective. How do you teach your students about AI? How do you set policies? And how do you handle the fact that many of your students are already using it – whether you've noticed or not?
This is the conversation every school needs to have. And you're about to be the teacher who leads it.
Your students are already using AI
Let's start with the uncomfortable truth. A significant proportion of your students – particularly in secondary – are already using ChatGPT, Gemini, or similar tools for their homework. Some are using it well. Many are using it to bypass learning entirely.
Pretending this isn't happening doesn't work. Banning it without explanation doesn't work either – it just pushes the use underground. The students who are using AI irresponsibly aren't going to stop because you added "AI-generated work is not permitted" to your homework policy.
What does work is education. When students understand what AI actually is, what it's good at, what it's terrible at, and why learning still matters even when a chatbot can write your essay – their behaviour changes. Not because of a rule, but because of understanding.
Knowledge Check
Why is simply banning AI use in homework unlikely to be effective?
A
Because students will use it anyway – banning without education just pushes the behaviour underground and removes the opportunity for honest conversation
B
Because teachers use AI too, so it would be hypocritical
C
Because AI is required by the national curriculum
D
Because banning things in schools is always ineffective
A ban without education creates a culture of secrecy rather than responsibility. Students who understand AI – its strengths, its limitations, and why learning matters regardless – make better choices than students who are simply told "don't use it." Education beats prohibition every time.
Teaching students what AI actually is
Most students think AI is magic. They type a question and get an answer. They have no idea that the answer is generated by predicting the most likely next word based on patterns in training data – not by "knowing" anything.
This matters. When students understand that AI doesn't know things, they start questioning its output instead of blindly trusting it. Here's a simple lesson framework:
The prediction game: Ask students to complete the sentence "The capital of France is ___." Easy – Paris. Now ask: "The best football team in the world is ___." They'll disagree. Explain that AI works like the first example – it predicts the most likely answer based on what it's seen before. It's confident even when it's wrong.
The hallucination demo: Ask ChatGPT (live, in front of the class) to write a biography of a fictional person. Use a made-up name. Watch it confidently generate a completely fabricated life story – dates, achievements, publications. Then reveal: this person doesn't exist. AI just made it all up. This is the moment students stop trusting AI blindly.
The "AI vs. You" challenge: Give students a question. Have AI answer it. Have them answer it. Compare. Where is AI better? Where are they better? This builds critical evaluation skills rather than dependence.
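If you want to show, rather than just tell, how next-word prediction works, here is a minimal sketch in Python. It builds a toy "bigram" model that predicts the next word purely from counts in a tiny made-up corpus. Real systems are enormously larger and more sophisticated, but the underlying idea – pattern-matching over training text, with no knowledge behind it – is the same. The corpus and function names are illustrative, not any real system's internals.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus" for our toy model.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is paris ."
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict("is"))      # "paris" - it appeared after "is" more often than "madrid"
print(predict("france"))  # "is" - pure pattern-matching, not geography knowledge
```

Notice that the model answers "paris" only because that word followed "is" most often in its training text – exactly the point of the prediction game: confident output, no understanding.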
Setting AI policies for homework
This is where most schools struggle. The policy needs to be clear, fair, and realistic. Here's a framework that works:
Traffic light system:
Red – Not allowed: AI writes your work for you and you submit it as your own. This is academic dishonesty, full stop.
Amber – Allowed with acknowledgement: AI helps you brainstorm, check your work, or explain a concept you're stuck on. You must state how you used it. "I used ChatGPT to help me understand photosynthesis, then wrote my answer in my own words."
Green – Encouraged: Using AI to research, explore ideas, get feedback on drafts, or learn something new. The student is doing the thinking. AI is a tool, not a shortcut.
The traffic light system works because it's nuanced. It doesn't pretend AI doesn't exist. It teaches students to use it responsibly – which is a skill they'll need for the rest of their lives.
Knowledge Check
In the traffic light system, what does the "amber" category allow students to do?
A
Use AI only if the teacher is present
B
Use AI as a learning aid – brainstorming, checking work, understanding concepts – as long as they acknowledge how they used it
C
Submit AI-generated work with a small disclaimer
D
Use AI only during exam preparation
The amber category is where most legitimate AI use falls. Students are still doing the learning – AI is helping them understand, brainstorm, or check their work. The acknowledgement requirement builds honesty and helps teachers understand how students are engaging with the tool.
Detecting AI-generated work (and why it's harder than you think)
Let's be honest about detection. AI detection tools – GPTZero, Turnitin's AI detector, and others – are unreliable. They produce false positives (flagging human-written work as AI) and false negatives (missing AI-generated work) at rates that make them unsuitable as the sole basis for an accusation.
Here's what actually works better than detection tools:
Know your students' writing. If a student who typically writes in short, simple sentences suddenly submits a perfectly structured essay with sophisticated vocabulary and flawless grammar – that's a red flag. Not proof, but a reason to have a conversation.
Ask students to explain their work. "Talk me through your argument here." "Why did you choose this example?" A student who wrote their own work can explain their thinking. A student who submitted AI-generated text often can't.
Process over product. If you can see the drafts, the notes, the planning – the work is almost certainly genuine. Build process checkpoints into longer assignments: submit your plan, submit your first draft, submit your final piece. AI-generated work typically appears fully formed with no process behind it.
Design AI-resistant tasks. Personal reflection, local case studies, classroom-specific references, opinion pieces with lived experience – these are hard to outsource to AI because they require knowledge that AI doesn't have.
Detection tools are unreliable. These four strategies work better – and they teach critical thinking as a bonus.
Knowledge Check
Why are AI detection tools unreliable as the sole method of identifying AI-generated student work?
A
Because they cost too much money
B
Because they only work on essays, not other types of work
C
Because students can easily trick them with a VPN
D
They produce both false positives (flagging genuine student work) and false negatives (missing AI-generated work) at rates too high to trust
AI detection is a statistical guess, not a definitive test. Studies have shown detection tools misidentify human writing as AI-generated at alarming rates – particularly for non-native English speakers. Relying on them for accusations risks punishing innocent students and eroding trust. Use them as one data point, not as evidence.
Age-appropriate conversations
How you talk about AI depends on who you're talking to.
Lower primary (KS1): Keep it simple and curious. "AI is a computer programme that can write stories, answer questions, and make pictures. But it doesn't actually think – it copies patterns from things it's already seen. Let's see what happens when we ask it something silly." Focus on wonder, not worry. Let them explore with you and discover the limitations together.
Upper primary (KS2): Introduce the idea of checking AI's work. "AI is like a very fast but sometimes careless helper. It's great at giving you ideas, but you always need to check whether what it says is actually true." The homework policy conversation can start here in simple terms.
Secondary (KS3): The full conversation – what AI is, how it works, why it gets things wrong, what academic honesty means, and how to use it as a tool rather than a crutch. Students at this age can understand the traffic light system and engage in genuine discussion about fairness and responsibility.
Secondary (KS4-5): Add nuance about AI in their future careers and the ethical implications. These students are about to enter a world where AI literacy is a professional skill. Frame it that way – "Learning to use AI well is a skill that will make you more employable, not less."
Across every age group, the core message is the same: AI is a tool, not a shortcut. A calculator doesn't mean you skip learning multiplication. Spell-check doesn't mean you skip learning to spell. AI doesn't mean you skip learning to think, write, argue, and create. Frame it that way and students get it. They're not stupid. They know the difference between using a tool and cheating. They just need an adult to have an honest conversation about where the line is.
I need a lesson plan for a Year 9 PSHE or tutor time session on responsible AI use. The lesson should last about 50 minutes. I want students to understand what AI is, try it themselves, discover its limitations, and leave with a clear understanding of what responsible use looks like. Make it interactive – Year 9s won't sit and listen for 50 minutes.
Here's your lesson plan:
---
**Lesson: AI – Tool, Not Shortcut**
*Year 9 | 50 minutes | PSHE / Tutor Time*
**Learning Objectives:**
- Understand what AI is and how it generates responses
- Identify AI's strengths and limitations through hands-on testing
- Distinguish between responsible and irresponsible AI use
**Starter β The Prediction Game (8 min)**
Show students these sentence starters on the board. They predict the ending. Reveal that this is exactly how AI works – it predicts the most likely next word. Key message: AI doesn't "know" things. It predicts.
Sentences:
- "The capital of France is ___" (easy – Paris)
- "The best song ever made is ___" (disagreement – AI guesses the most popular answer)
- "In 2065, the Prime Minister will be ___" (impossible – AI will still try)
**Activity 1 β The Hallucination Hunt (12 min)**
In pairs, students ask ChatGPT factual questions and fact-check the answers using a reliable source. Give them 5 questions (mix of easy and obscure). Tally: how many did AI get right? How many did it get wrong or make up?
Discussion: Why did it get some wrong? (Because it generates probable text, not verified facts.)
**Activity 2 β AI vs. You (10 min)**
Give students a short writing task: "Write three sentences explaining why friendship matters to teenagers." They write their own version first. Then they ask AI the same question. Compare in pairs.
Discussion: Which is better? Which feels more real? Which one could only YOU have written?
**Mini-Lecture β The Traffic Light (8 min)**
Present the Red/Amber/Green framework:
- Red: AI writes it, you submit it = dishonest
- Amber: AI helps you learn, you acknowledge it = fine
- Green: AI helps you explore and think = encouraged
Give three scenarios. Students hold up red, amber, or green cards (or use fingers: 1, 2, 3).
**Activity 3 β Write the Policy (10 min)**
In groups of 4, students draft their own class AI policy using the traffic light framework. What should be red, amber, and green? Groups share their top rule with the class.
**Plenary (2 min)**
Exit ticket: "Write one sentence – how will you use AI responsibly this year?"
---
This keeps Year 9s active every 8–12 minutes and gives them ownership of the policy. Want me to create the student worksheet and the five fact-checking questions?
That's excellent. Create the five fact-checking questions for the Hallucination Hunt β make sure at least two are questions AI is likely to get wrong.
Here are your five questions:
**Hallucination Hunt β Fact-Check Challenge**
1. **What year was the NHS founded?** (AI will likely get this right – 1948)
2. **Who scored the winning goal in the 2014 FIFA World Cup final?** (AI should get this – Mario Götze)
3. **Name three books written by the author Helen Rutherford.** (Trick question – there is no famous author by this name. AI will almost certainly invent books that don't exist. This is a hallucination.)
4. **What is the population of Hartlepool to the nearest thousand?** (AI may give a confident but outdated or incorrect figure. Students should check the latest ONS data.)
5. **What were the main causes of the Pendle Witch Trials of 1612?** (AI will likely give a reasonable answer but may include inaccurate details or mix up specific names and dates. Good test of whether students actually verify.)
**Teacher note:** Question 3 is the guaranteed hallucination. Question 4 tests whether AI admits uncertainty (it usually doesn't). Question 5 tests accuracy on specific historical detail. Debrief by asking: "How did you know which answers to trust?"
Final Check
What is the core message students need to understand about AI and learning?
A
Students should use AI for everything to prepare for the workforce
B
AI will replace the need for learning in the future
C
AI should never be used by students under any circumstances
D
AI is a tool that enhances learning when used responsibly – but the thinking is the learning, and skipping it means you're only cheating yourself
The students who understand that learning happens through struggle, thought, and practice – not through having answers handed to them – will use AI wisely. The tool analogy works because students already understand that a calculator helps but doesn't replace understanding. AI is the same principle at a larger scale.
Day 18 Complete
"The schools that talk honestly about AI will produce students who use it responsibly. The schools that pretend it doesn't exist will produce students who use it secretly."
Tomorrow – Day 19
Professional Development with AI
Tomorrow you'll use AI for your own career β performance reviews, interview prep, and professional development planning.