Learning Objectives
- Understand AI bias and its sources
- Practice responsible AI use
- Protect your privacy
- Develop verification habits
Understanding AI Bias
AI systems can perpetuate and amplify biases. Understanding this helps you use AI more critically.
Where Bias Comes From
AI learns from data created by humans - and humans have biases. If training data contains stereotypes or underrepresents certain groups, the AI will reflect this.
Examples of AI Bias
| Domain | Bias Example |
|---|---|
| Hiring | Resume screening favoring male-coded names |
| Images | "Professional" prompts defaulting to certain demographics |
| Medical | Training data underrepresenting certain populations |
| Language | Associating occupations with specific genders |
How to Recognize Bias
Watch for:
- Default assumptions about demographics
- Stereotypical associations
- Missing perspectives or representations
- Consistent patterns that seem unfair
What You Can Do
- Be specific: Instead of "a doctor," specify "a female doctor" if that's what you want
- Check outputs: Look for stereotypical patterns (a simple skew check is sketched after this list)
- Request diversity: Ask for multiple perspectives or representations
- Report issues: Many AI companies want to know about bias problems
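To make the "check outputs" habit concrete, here is a minimal Python sketch, assuming you have collected several responses to the same prompt. It simply tallies gendered terms; the `responses` list and `GENDERED_TERMS` sets are illustrative placeholders, not a rigorous bias metric.

```python
from collections import Counter
import re

# Toy bias check: tally gendered terms across several AI responses to the
# same prompt (e.g. "describe a typical nurse"). Replace `responses` with
# real outputs from your AI tool of choice.
responses = [
    "She has years of experience on the hospital ward.",
    "He manages the night shift and mentors new staff.",
    "She is known for her patience with patients.",
]

GENDERED_TERMS = {
    "feminine": {"she", "her", "hers", "woman", "female"},
    "masculine": {"he", "him", "his", "man", "male"},
}

counts = Counter()
for text in responses:
    words = re.findall(r"[a-z']+", text.lower())
    for label, terms in GENDERED_TERMS.items():
        counts[label] += sum(1 for w in words if w in terms)

print(counts)  # Counter({'feminine': 3, 'masculine': 1})
```

Running the same prompt many times and comparing the tallies against what you would reasonably expect is a quick, informal way to spot a skew worth reporting.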
Privacy Considerations
What you share with AI may be stored, analyzed, and used for training. Protect yourself.
Never Share with AI
| Category | Examples |
|---|---|
| Financial | Bank accounts, credit cards, SSN |
| Medical | Detailed health conditions, prescriptions |
| Personal | Passwords, private conversations |
| Work | Confidential documents, trade secrets |
| Identity | Full address, passport numbers |
Understanding Data Usage
Different AI services have different policies:
ChatGPT (OpenAI)
- Free tier: Conversations may be used to train models
- Plus: You can opt out of training in settings
- API: Covered by separate data-usage terms
Claude (Anthropic)
- Does NOT train on conversations by default
- Conversations are deleted after 90 days
Enterprise/Business Tiers
- Usually offer stronger privacy protections
- Data is typically not used for training
Best Practices
- Read privacy policies (at least the summary)
- Use anonymous examples instead of real data (see the redaction sketch after this list)
- Check privacy settings in your AI tools
- Use company-provided AI tools for work tasks when they're available
- Assume nothing is private when using free tiers
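As a rough sketch of the "anonymous examples" habit, the Python snippet below scrubs a few obvious PII patterns before text is pasted into an AI tool. The regexes and placeholder labels are simplistic assumptions, not a complete PII filter; names and other identifiers still need manual review.

```python
import re

# Scrub a few obvious PII patterns before sharing text with an AI tool.
# These patterns are illustrative examples only, not an exhaustive filter.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII patterns with placeholder labels."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

message = "Contact Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."
print(redact(message))
# Contact Jane at [EMAIL] or [PHONE], SSN [SSN].
```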
Verification Best Practices
A "trust, but verify" approach keeps you safe while still letting you take advantage of AI's power.
The Verification Hierarchy
| Risk Level | Verification Needed |
|---|---|
| Low (casual use) | Quick sanity check |
| Medium (important decisions) | Cross-reference 1-2 sources |
| High (legal, medical, financial) | Professional verification |
Verification Strategies
For Facts and Claims
- Ask AI for its sources
- Search for those sources independently
- Cross-reference with authoritative sources
- Check dates (AI knowledge has cutoffs)
For Citations
- Search the exact citation
- Check if authors/journals exist
- Use Google Scholar for academic papers
- Assume citations might be fabricated until you've found them yourself (a lookup sketch follows this list)
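One way to act on these steps, sketched here as a suggestion rather than a prescribed workflow, is to query Crossref's public works API with the citation text the AI gave you and see whether a real record matches. The helper name `crossref_lookup` and the example citation string are placeholders, and this only covers works registered with a DOI.

```python
import requests  # third-party: pip install requests

def crossref_lookup(citation_text: str, rows: int = 3):
    """Search Crossref's public API for records matching a citation string.

    An empty or mismatched result doesn't prove the citation is fake,
    but it's a strong signal to dig further before relying on it.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            "title": (item.get("title") or ["<no title>"])[0],
            "doi": item.get("DOI"),
            "journal": (item.get("container-title") or [""])[0],
        }
        for item in items
    ]

# Paste in the citation exactly as the AI gave it to you (placeholder shown):
for hit in crossref_lookup("Smith 2021 Neural networks for protein folding Nature"):
    print(hit)
```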
For Code
- Test thoroughly
- Review for security issues
- Check against documentation
- Don't trust without testing (see the unit-test sketch after this list)
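To make "don't trust without testing" concrete, here is a minimal sketch using Python's built-in unittest module. The `slugify` function stands in for whatever code the AI produced (it's a hypothetical example); the tests pin down the behaviour you actually need before you accept the code.

```python
import unittest

# Suppose an AI assistant produced this helper (hypothetical example):
def slugify(title: str) -> str:
    """Convert a title to a lowercase, hyphen-separated URL slug."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in title)
    return "-".join(cleaned.lower().split())

class TestSlugify(unittest.TestCase):
    # Don't just eyeball the code: encode the behaviour you need as tests.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_and_spacing(self):
        self.assertEqual(slugify("  AI: Trust, but Verify!  "), "ai-trust-but-verify")

    def test_empty(self):
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()
```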
For Medical/Legal
- AI is for preliminary research ONLY
- Always consult professionals
- Never self-diagnose or self-treat based on AI
Red Flags
Be extra skeptical when AI provides:
- Very specific statistics
- Recent events (post-training cutoff)
- Information about obscure topics
- Medical or legal advice
- Financial recommendations
Ethical Use Guidelines
Using AI responsibly benefits everyone.
Give Credit
When AI substantially helps your work:
- Disclose AI assistance when appropriate
- Don't claim AI-generated content as purely your own
- Follow your organization's AI disclosure policies
Don't Deceive
Never use AI to:
- Create fake reviews or testimonials
- Impersonate real people
- Generate misleading content
- Spread misinformation
Academic Integrity
| OK | Not OK |
|---|---|
| Research assistance | Submitting AI work as your own |
| Brainstorming | Bypassing learning objectives |
| Editing/proofreading | Cheating on exams |
| Explaining concepts | Plagiarism |
Always check your institution's AI policy.
Professional Context
- Follow your company's AI policy
- Don't input confidential information
- Verify before sending AI content externally
- Disclose AI assistance when required
The Human-AI Partnership
AI is a powerful tool, but you're still in charge.
AI Amplifies, Doesn't Replace
Think of AI as:
- A brilliant but unreliable assistant
- A first-draft generator
- A brainstorming partner
- A research accelerator
NOT as:
- A replacement for expertise
- An infallible oracle
- A substitute for human judgment
- The final word on anything important
When to Trust AI vs. Seek Humans
| Trust AI For | Seek Humans For |
|---|---|
| First drafts | Final decisions |
| Brainstorming | Emotional support |
| Research starting points | Professional advice |
| Routine tasks | Complex judgment calls |
| Learning concepts | Nuanced situations |
Building Complementary Skills
As AI handles routine tasks, focus on developing:
- Critical thinking
- Emotional intelligence
- Creative vision
- Ethical judgment
- Leadership
- Complex problem-solving
These human skills become MORE valuable, not less, in an AI world.
Your AI Usage Guidelines
Create your own rules for how you'll use AI. Consider:
- What tasks will you use AI for?
- What will you never use AI for?
- How will you verify AI outputs?
- When will you disclose AI assistance?
- What privacy boundaries will you maintain?
Exercises
1. Write your personal AI usage guidelines (5-10 rules you'll follow)
2. Find an example of AI bias in an image generator
3. Practice verifying AI claims by fact-checking 3 statements