People United Foundation

Ethical AI Training for Leadership, Workforce, and Community Impact

Lesson 3 · Leadership

Bias, Equity, and Harm

AI systems can reinforce historical inequities if they are used without scrutiny. Community organizations are often closest to the people most affected, which puts them in a strong position to spot and prevent harm early.

Estimated time: 20 minutes · Ethics-first · Practical

Key concepts

Practice exercise

Identify one population you serve, one way biased data could disadvantage them, and one safeguard you would require before using AI.

Template (copy/paste)

ROLE: You are my AI assistant.
GOAL: Help me assess bias risks for an AI use case.
INPUTS: Population served, data sources, decision impact.
OUTPUT: 5 risks + 5 mitigations + a 'do not deploy' threshold.
CONSTRAINTS: Prioritize equity and explain tradeoffs plainly.

Ethics & accuracy: verify important facts, avoid sharing sensitive personal data, and be transparent when AI helped draft content.
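
Example (hypothetical): one way the template might be filled in. The program, population, data sources, and decision described here are invented for illustration only; substitute your own.

ROLE: You are my AI assistant.
GOAL: Help me assess bias risks for an AI use case.
INPUTS: Population served: recently resettled refugee families. Data sources: past intake forms and caseworker notes. Decision impact: who is prioritized for housing assistance.
OUTPUT: 5 risks + 5 mitigations + a 'do not deploy' threshold.
CONSTRAINTS: Prioritize equity and explain tradeoffs plainly.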

Recommended next

Use these links to keep momentum and turn learning into artifacts.

Download the kit

Use the kit prompts and templates offline.

Use the checklist

A short action list to apply this course in the next 7 days.

Certificate

Print a self-attested certificate for your personal documentation.

Go further (optional)

Join a cohort, sponsor a seat, or use the Impact Pack for funders.