Lesson Plan: Teaching Media Literacy with the X Deepfake Story and Bluesky’s Growth
Turn the 2025–26 X deepfake scandal and Bluesky surge into a ready-to-teach media-literacy unit with data worksheets, cheat-sheets, and assessments.
Hook: Fast answers teachers need — how a platform crisis becomes a media-literacy lab
Students and teachers struggle to find one place with clear, reliable examples of how platform events change user behavior. When X’s Grok-generated nonconsensual sexualized images exploded into public view in late 2025 and early 2026, the story created a living case study: rapid platform harm, regulatory response, and migration of users to alternatives like Bluesky. This lesson plan turns that real-world drama into a structured, classroom-ready unit that builds media literacy, critical thinking, and data-analysis skills.
Why teach this now (2026 context and trends)
In 2026, teachers must cover not just misinformation but the broader ecosystem of AI-generated content and platform governance. Two trends make this lesson timely:
- AI misuse and moderation gaps: AI image and video generation tools integrated into social platforms created high-profile harms in late 2025 — including the X (formerly Twitter) Grok controversy that led to investigations and public debate about nonconsensual sexualized deepfakes.
- Platform switching and network effects: After the controversy, Bluesky saw a surge in installs (Appfigures reported ~50% uplift), and the company quickly added features like cashtags and LIVE badges to convert new users into active participants.
These dynamics — harm, response, migration — are the backbone of a media-literacy case study that teaches students to analyze how platform events shift user behavior.
Learning objectives (what students will be able to do)
- Explain how a single platform event can trigger user migration and product changes.
- Evaluate primary and secondary sources for credibility, bias, and completeness.
- Interpret basic download/install and moderation data and visualize trends.
- Create evidence-based recommendations for platform policy or user behavior.
- Practice ethical communication and digital citizenship when discussing sensitive content.
Standards alignment & classroom level
This unit aligns with digital literacy and civics standards for high school and introductory college courses. It is adaptable for grades 10–12 and first-year undergraduates studying media studies, journalism, computer science ethics, or social studies.
Materials and prep
- Case packet (teacher provides): curated excerpts from credible reporting (TechCrunch, The Guardian), Appfigures data summary, and public statements from X and Bluesky.
- Student handouts: cheat-sheet on source evaluation, worksheet for trend analysis, rubric for group presentations.
- Devices for internet access, spreadsheet software (Google Sheets / Excel), and slide or poster materials.
- Optional: short screencasts showing Bluesky features (cashtags, LIVE badges) and archived screenshots of Grok outputs (content must be handled sensitively and with consent guidelines).
Timeframe
Designed for two 50–75 minute class sessions (or one 3-hour block). Extensions can add a third session for deeper data work or policy proposals.
Lesson sequence — ready to teach
Day 1 — Hook, background, and source evaluation (50–75 minutes)
- 5 min — Hook:
Show two headlines (neutral): one about X’s Grok controversy and one about Bluesky’s surge and new features. Ask: "What might connect these stories?" Collect quick responses on a shared doc.
- 10 min — Mini-lecture (teacher):
Explain the timeline: late 2025 reporting identified Grok producing sexualized imagery without consent; early 2026 saw regulatory inquiries (e.g., California AG) and scrutiny. In the days after, Bluesky installs reportedly rose ~50% and Bluesky added product features to absorb new users. Emphasize that this is a contemporary, verifiable event chain (cite TechCrunch, The Guardian, Appfigures).
- 15 min — Source evaluation workshop:
Distribute the Source Evaluation Cheat-sheet (see below). In pairs, students evaluate three documents: a TechCrunch article summary, a Guardian piece on Grok, and an Appfigures chart summary. Prompt them to identify author, date, evidence, conflicts of interest, and missing context.
- 20 min — Group breakout: credibility scorecard:
Each pair fills a 4-point credibility scorecard and posts one strong critique and one remaining question. Class discusses: Which sources are primary vs secondary? What data would change our understanding?
Day 2 — Data analysis, behavior mapping, and recommendations (50–75 minutes)
- 10 min — Quick recap and ethical framing:
Remind students to avoid sharing explicit content. Emphasize ethics when analyzing harmful material. Provide language for discussing sensitive topics respectfully.
- 25 min — Trend analysis (worked example + activity):
Teacher projects a simplified download chart (based on Appfigures summary) showing Bluesky daily installs before and after the Grok story. Walk through a worked example:
- Calculate percentage change: (post - pre) / pre * 100.
- Compute 7-day moving average to smooth noise.
- Annotate the timeline with events: press coverage date, regulatory announcement, Bluesky feature release.
Students then replicate the steps on their own sample data (worksheet provided). Ask: Does the install surge indicate sustained migration or a short-lived spike?
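For teachers who want to demonstrate the math outside a spreadsheet, the percent-change and moving-average steps above can be sketched in a few lines of Python. The install numbers below are invented sample data for the worksheet, not real Appfigures figures.

```python
# Hypothetical 14-day install series: 7 days before and 7 days after the event.
daily_installs = [4000, 4100, 3900, 4050, 3950, 4000, 4000,   # pre-event
                  5000, 6200, 6000, 5800, 5600, 5400, 5200]   # post-event

pre, post = daily_installs[:7], daily_installs[7:]
pre_avg = sum(pre) / len(pre)
post_avg = sum(post) / len(post)

# Percentage change: (post - pre) / pre * 100
pct_change = (post_avg - pre_avg) / pre_avg * 100

def moving_average(series, window=7):
    """Trailing moving average; smooths day-to-day noise in the chart."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

smoothed = moving_average(daily_installs, window=7)
print(f"pre avg: {pre_avg:.0f}, post avg: {post_avg:.0f}, "
      f"change: {pct_change:+.1f}%")
# → pre avg: 4000, post avg: 5600, change: +40.0%
```

Students can run the same logic in Google Sheets with AVERAGE over the two windows; the point of the script is to make the formula, not the tool, the object of discussion.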
- 20 min — Behavior mapping and policy recommendations:
Teams map actions taken by three stakeholders: users, X (platform), and Bluesky. For each stakeholder, teams provide 2 evidence-based recommendations: one product change, one communication strategy. Each team posts a 90-second elevator pitch summarizing their plan.
Activities and assessments — detailed templates
Activity A — Source Evaluation Cheat-sheet (handout)
Use this one-page cheat-sheet to guide student judgment.
- Author & expertise: Who wrote it? What are their credentials?
- Publication: Is it a primary report, a data provider, or opinion?
- Date & timeliness: When was this published? Does it reflect new developments (late 2025 / early 2026)?
- Evidence: Are claims backed by data, quotes, or links to documents?
- Bias & purpose: Who benefits if this narrative is accepted?
- Missing info: What questions are unanswered?
Activity B — Trend Analysis Worksheet (worked example)
Provide a short dataset: daily installs for 21 days before and after the media event. Students compute:
- Pre-event average installs
- Post-event average installs
- Percent change (show formula)
- 7-day moving average and simple line chart
- Annotate chart with at least two external events (news article, regulation, product update)
Worked example (teacher demonstrates): If pre-event installs averaged 4,000/day and the 7-day post average is 5,900/day, percent change = (5900 - 4000)/4000 * 100 = 47.5%.
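The worked example's arithmetic can be checked with a two-line calculation, which is also a useful way to show students how to verify a claim before repeating it:

```python
# Verifying the worked example from Activity B.
pre_avg = 4000    # average daily installs before the event
post_avg = 5900   # 7-day average after the event

pct_change = (post_avg - pre_avg) / pre_avg * 100
print(f"percent change: {pct_change:.1f}%")  # → percent change: 47.5%
```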
Activity C — Roleplay: Platform Response Council
Assign roles: X moderation lead, Bluesky product manager, California AG investigator, civil liberties NGO rep, and platform users. Each prepares a 3-point position and negotiates a 5-point joint statement on content moderation, transparency, and user safety. Debrief with a focus on trade-offs.
Rubric and assessment
Assess along these criteria:
- Source literacy (30%): Accuracy of credibility evaluations and identification of gaps.
- Data reasoning (30%): Correct calculations, clear charts, and valid inferences about behavior change.
- Communication (20%): Clarity in pitches and policy recommendations.
- Ethical awareness (20%): Sensitivity in handling deepfake content and respect for privacy.
Worked exam-style prompts (test prep)
Provide sample short-answer and analytical questions for exam preparation:
- Short answer (5 pts): Explain in two sentences how a moderation failure on a platform can cause user migrations to alternatives. Use the X–Bluesky case.
- Data question (10 pts): Given a 14-day dataset of installs with a spike on day 3 after a news event, calculate the 7-day moving average and interpret whether the spike indicates sustained user migration.
- Essay (25 pts): Using presented sources, evaluate whether Bluesky’s addition of cashtags and LIVE badges is a strategic product response to capture users fleeing X. Support with evidence and policy implications.
Safety and ethical notes for teachers
Always avoid displaying explicit image examples in class unless you have a clear pedagogical reason and parental/administrative approval. Use descriptions and anonymized screenshots that remove identifying details. Emphasize consent, legality, and the harms of nonconsensual sexualized content. Provide resources for students who might feel affected, including school counselors or online safety hotlines.
Extensions and cross-curricular links
- Computer science: small project to detect manipulated images or test watermarking approaches.
- Civics: research local/regional policy responses — compare US regulatory inquiries (e.g., California AG) with other jurisdictions in 2026.
- Economics: model market effects of user migration on small networks and monetization strategies like cashtags.
Teacher-ready cheat-sheet (one-page summary)
Copy-paste or print this to keep at hand before class.
- Hook: Headlines about X/Grok misuse and Bluesky install surge.
- Key dates: Late 2025 coverage of Grok misuse; early 2026 regulatory inquiries; immediate Bluesky installs uplift.
- Key sources: TechCrunch (product and installs), The Guardian (investigative report on content), Appfigures (install data), public AG statements.
- Essential student tasks: Evaluate sources, compute percent change, map stakeholder actions, present policy recommendations.
- Assessment focus: Source literacy + data reasoning + ethics.
Sample teacher script: first 10 minutes
"Today we’ll analyze how one platform’s AI tool and a failing moderation moment produced measurable shifts in where people choose to spend their attention online. You don’t need to know anything about coding — we are practicing critical reading, basic data interpretation, and ethical reasoning. By the end of the lesson you will draft a short policy brief or product recommendation based on evidence."
Why this builds real-world skills (Experience & Expertise)
Students leave the unit not only able to define a deepfake, but also to trace how media coverage, platform policy, and product adjustments interact to shape user behavior. The lesson mirrors how journalists, regulators, and product teams worked through the Grok story in late 2025 and early 2026. It trains learners to act as informed citizens, creators, and future employees in tech and media.
"A high-profile moderation failure can ripple through platforms, affecting installs, user trust, and product roadmaps — and it becomes a teachable moment for media literacy."
Actionable takeaways for students (quick reference)
- Use the Source Evaluation Cheat-sheet for every news item — check author, evidence, and missing context.
- Quantify behavior change: compute percent change and moving averages before inferring long-term trends.
- Map stakeholders and incentives — platform, users, regulators, and third-party apps all respond differently to crises.
- Always consider ethics: avoid sharing harmful content and center consent.
Classroom-ready resources (links & citations)
Provide students with curated links (teacher to compile):
- TechCrunch report summarizing Bluesky installs and feature updates (late 2025 / early 2026).
- The Guardian investigative piece on Grok-generated nonconsensual content.
- Appfigures summary chart or dataset for installs.
- Public statements from X and Bluesky about moderation and features.
- Recent analyses of platform governance and AI risks from 2025–2026 scholarly and policy outlets.
Adaptations for remote or hybrid learning
Run the same two-session plan using breakout rooms. Use Google Sheets shared templates for the data worksheet and a collaborative Miro or Jamboard for stakeholder mapping. For students without reliable internet, provide offline PDFs and small local datasets.
Future predictions and classroom discussion prompts (2026 forward)
Invite students to debate and forecast:
- Will platform switching become a routine user response to high-profile harms, or will established networks retain users via policy fixes?
- How will regulators evolve oversight of integrated AI tools by 2027–2028?
- Will watermarking and provenance systems for AI outputs become standard by 2026–2027?
Closing — practical next steps for teachers
Download the provided worksheets, run the two-session unit next week, and tweak examples to local contexts. Use the rubric to grade and the cheat-sheet to scaffold students who need extra help with data. Capture student pitches as short videos to create a public-facing archive of classroom work (with consent).
Call to action
If you found this plan useful, join our educator community for ready-made slide decks, downloadable worksheets, and a shared dataset modeled on the Appfigures summary. Share your classroom outcomes and student artifacts to help other teachers iterate on real-world media-literacy instruction.