Dear Techies,

What a week!

This week we launched the AI for Teachers series and the response has been something we didn't quite expect.

Over 500 views across six videos in a single week, a lesson planning video that clearly resonated far beyond just educators, and over ten new faces joining this newsletter in the last seven days. Whether you're a teacher, a professional, a student, or simply someone curious about AI — welcome. You found us at a good time.

To everyone across the community who watched, shared, commented, and replied — thank you. This series grew directly out of the feedback we received from the Demystifying AI course. So many of you asked how AI applies to specific parts of your work that building practical, audience-specific series became the obvious next step. Teachers were first. More communities are coming.

If you missed any of this week's videos or you know a teacher who would benefit, every one of them is still live, free, and linked at the bottom of this email.

This week's insight: AI will lie to you with complete confidence

Last week we gave you the lesson planning prompt — how to go from blank page to full draft in under a minute. That applies whether you're a teacher, a business owner, a marketer, or anyone who drafts content regularly.

This week we want to give you something just as important. Not a productivity tip but a warning.

AI hallucinates.

That's the technical term for what happens when an AI tool produces information that is completely incorrect and presents it with the same tone, confidence, and authority as everything else it generates. No asterisk. No disclaimer. No hesitation. Just wrong.

It doesn't happen every time. But it happens often enough that it needs to be part of how you think about every AI output you use — in any context, in any profession.

Here's what that looks like in practice: Someone asks an AI tool to generate five facts about a topic for a document they're preparing. The AI produces five fluent, well-formatted sentences. Four are accurate. One contains a statistic that is entirely fabricated. There is no way to tell which one from reading the output alone.

That document goes to a client. A classroom. A board. A parent.

This is not a hypothetical. It is happening every day across every industry where people are using AI tools without a fact-checking habit in place.

The habit that fixes it

The good news is the fix is simple. It just needs to become a non-negotiable habit.

Check every factual claim AI generates before you use it.

Not a deep research exercise. Just a quick verification of anything specific — dates, statistics, quotes, names, scientific explanations. You will usually catch errors immediately because you know your subject.

But the habit of looking needs to be automatic, not occasional.

Think of AI as a very fast first drafter who occasionally makes things up. Your job isn't to distrust everything it produces. Your job is to be the editor, and a good editor always checks the facts.

What this means for how you use AI this week

Before you use any AI-generated content professionally, ask yourself three questions:

  1. Are there any specific facts, dates, or statistics in this output?

  2. Have I verified each one against a source I trust?

  3. Would I be comfortable if someone asked me where this information came from?

If there are no specific facts to check, or you can answer yes to all three, you're good. If not, check before you use it.

In case you missed any of this week's videos

The full series is live, free, and all in one place — six short videos covering everything from lesson planning to what AI gets wrong.

👉 Watch the full series: YouTube Playlist

If you only have time for one this weekend, make it Video 6 — What to Watch Out For. It covers everything in this newsletter and more in under six minutes.

What's coming next week

Next week we're stepping into something a lot of you have been asking about since the AI course launched.

Prompt Engineering.

How to talk to AI in a way that actually gets you what you need. Because most people use AI like a search engine, and that's exactly why they get mediocre results.

Prompt engineering isn't technical. It's a communication skill. And once you understand the basic principles, every AI tool you use gets dramatically more useful overnight.

Like the teacher series, this video came directly out of feedback from the Demystifying AI course. So many of you asked "okay, but how do I actually get better results?" that it became the obvious next step.

It drops next week. Subscribe to the channel so you don't miss it.

Before you go

If you know a teacher who's started using AI tools this week — forward this email. And if you know anyone else using AI in their work, the hallucination warning applies just as much to them.

Stay savvy,
The Tech Savvy Starts Here Team

P.S. The lesson planning video was this week's most watched — and the prompt works just as well for anyone who drafts structured content regularly, not just teachers. If you haven't tried it yet, this weekend is a good time. Open ChatGPT, describe what you need, and see how quickly a draft comes together. Then check the facts before you use it.

Enjoyed this edition?
Forward it to a friend or colleague who will enjoy it as well.
Missed something? Catch up in the newsletter archive.

🧠 Keep learning. | 💬 Keep questioning. | 💥 Keep growing.
