Module 2: The Danger Zone -- When AI Goes Wrong
AI is powerful, but it fails in predictable ways. Learn to spot hallucinations, recognize bias, protect sensitive data, and build the critical thinking habits that separate responsible AI users from reckless ones.
Course Content
The Skills That Keep You Safe
Module 1 introduced you to what AI is. This module teaches you what AI gets wrong -- and how to protect yourself, your business, and your clients from those failures. This is not fear-mongering. This is professional competence. A mechanic who does not understand how brakes fail is a danger to everyone on the road.
Hallucinations: The Confident Liar
AI hallucination is when a model generates information that sounds authoritative but is completely fabricated. This is not a bug that will be fixed -- it is a fundamental characteristic of how these systems work. They predict likely text, and sometimes the most "likely" text is something that sounds right but is not.
Real examples of AI hallucinations:
- A lawyer used ChatGPT to prepare a legal brief. The AI cited six court cases that did not exist. The lawyer submitted them to the court without verification. He was sanctioned and nearly lost his license.
- AI-generated medical advice has recommended drug dosages that would be lethal, delivered in the same confident tone as accurate guidance.
- Financial models built on AI-generated market analysis have incorporated fabricated data points, leading to real investment losses.
How to catch hallucinations:
- Never trust, always verify. If the AI gives you a fact, statistic, date, name, or citation -- verify it independently.
- Ask for sources. If the AI cites something, look it up. If it cannot provide a verifiable source, treat the information as unconfirmed.
- Cross-reference. Ask the same question in a different way or use a different tool. If the answers diverge significantly, dig deeper.
- Watch for over-specificity. AI loves to invent precise-sounding numbers. "Studies show a 47.3% improvement" is a red flag if no study is cited.
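The last habit can even be partly automated. Below is a minimal sketch, not a hallucination detector: it only flags decimal-point percentages in AI output so you remember to ask "where does that number come from?" The pattern, function name, and threshold for suspicion are all illustrative choices, not part of any real tool.

```python
import re

# Illustrative only: surface suspiciously precise statistics in AI output
# so a human remembers to verify them before reuse. A regex cannot tell
# real numbers from invented ones -- that judgment stays with you.
PRECISE_STAT = re.compile(r"\b\d+\.\d+\s*%")

def flag_precise_stats(text: str) -> list[str]:
    """Return any decimal-point percentages found in the text."""
    return PRECISE_STAT.findall(text)

claims = flag_precise_stats("Studies show a 47.3% improvement in retention.")
for claim in claims:
    print(f"Verify before use: {claim}")
```

A flagged number is not necessarily wrong; it is simply unverified until you trace it to a source.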
Bias: The Invisible Amplifier
AI models learn from human-created data. Human-created data contains human biases -- racial, gender, cultural, economic, and more. When AI learns from biased data, it does not correct the bias. It scales it.
This shows up in practical ways:
- Resume screening tools that consistently rank male candidates higher than equally qualified female candidates
- Loan approval systems that disadvantage applicants from certain zip codes
- Image generation that defaults to narrow representations of professions (all doctors are male, all nurses are female)
- Language models that associate certain ethnic names with negative sentiments
Your responsibility as an AI user:
- Review AI output for bias before using it in any decision-making context
- Be especially cautious with AI-assisted decisions that affect people (hiring, lending, grading, evaluating)
- If you are using AI to generate content, review it for stereotypes and assumptions
- Diversify your prompts -- explicitly ask for multiple perspectives
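For decisions that affect people, bias review can be made concrete. One established yardstick from US employment-discrimination practice is the "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, the process deserves scrutiny. The sketch below applies that rule to a hypothetical audit of an AI resume screener; the group names and numbers are invented for illustration.

```python
# Minimal sketch of a four-fifths-rule check on selection outcomes.
# outcomes maps group -> (selected, total applicants). All data invented.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> list[str]:
    """Return groups whose selection rate is under 80% of the best rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return [g for g, r in rates.items() if r < 0.8 * top]

# Hypothetical pass-through decisions from an AI resume screener:
flagged = four_fifths_check({"group_a": (45, 100), "group_b": (25, 100)})
print(flagged)  # group_b's rate (0.25) is below 80% of group_a's (0.45)
```

A flag here does not prove bias, and a clean result does not prove fairness -- but running a check like this is far better than never looking at the numbers at all.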
Data Privacy: What Happens to What You Type
When you type something into an AI tool, where does that information go? The answer depends entirely on which tool you are using and how it is configured.
Key questions to ask before using any AI tool with sensitive data:
- Does this tool use my input data to train future models?
- Is my data stored? For how long? Where?
- Who at the AI company can access my conversations?
- Is there an enterprise version with stronger data protections?
- Does this comply with my industry's regulations (HIPAA, GDPR, SOX, etc.)?
Never put these into a consumer AI tool:
- Customer personal information (names, emails, SSNs, account numbers)
- Proprietary business strategies or financial data
- Source code for commercial products
- Legal documents under privilege
- Medical records or patient information
- Employee performance reviews or HR matters
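If you must paste text into a consumer tool, a pre-flight scrub catches the most obvious identifiers before they leave your machine. This is a rough sketch, not a substitute for policy: it covers only two easy patterns (emails and US SSNs), and real redaction requires far more than a couple of regexes.

```python
import re

# A rough pre-flight scrub for prompts headed to a consumer AI tool.
# Only catches the most obvious identifiers; names, addresses, and
# proprietary details still need a human read-through.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft an apology to jane.doe@example.com re: claim 123-45-6789."
print(redact(prompt))
# Draft an apology to [EMAIL REDACTED] re: claim [SSN REDACTED].
```

The safer default is still the list above: if the data is sensitive, it does not go into the tool at all.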
The Human in the Loop
Every responsible AI implementation has a human making the final call. This is not because AI is useless -- it is because AI lacks judgment. It cannot weigh competing values, understand context it was not given, or take responsibility for its outputs.
Your role is not to be replaced by AI. Your role is to be the quality control, the judgment layer, and the ethical compass that AI cannot be. The people who understand this will thrive. The people who either reject AI entirely or trust it blindly will struggle.
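The human-in-the-loop pattern can be sketched in a few lines: the AI drafts, a person decides, and nothing ships without that decision. Everything here is a stand-in -- `ai_draft` represents whatever model call you use, and the lambda represents a real human reviewer.

```python
from typing import Callable, Optional

def ai_draft(request: str) -> str:
    """Stand-in for any AI call; produces a draft, never a final answer."""
    return f"[AI draft for: {request}]"

def publish(request: str, approve: Callable[[str], bool]) -> Optional[str]:
    """Nothing ships unless the human reviewer explicitly approves it."""
    draft = ai_draft(request)
    return draft if approve(draft) else None

# The reviewer is a person; a lambda stands in for their decision here.
print(publish("refund policy email", approve=lambda draft: False))  # None
```

The design choice that matters is structural: approval is a required argument, so there is no code path where AI output reaches the world without a human in between.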
Your Deliverable: The Risk Assessment
- Take the 5 AI opportunities you identified in Module 1.
- For each one, identify the risks: What could go wrong if AI hallucinated? What bias might creep in? What data would you need to share?
- Create a verification plan for each: How would you check the AI's work before using it?
- Use the Sandbox to deliberately try to make the AI hallucinate. Ask it about something obscure or invented. Document what you learn about its failure patterns.
