Social Media & Young Adults

Scroll Proof

America is done asking nicely. This week, three states, a federal standoff, and one devastating courtroom exhibit redraw the battle lines around social media and young adults.

01
The US Capitol building with digital algorithm code projected onto its marble walls

The Senate Drew a Line in the Sand. The House Erased It.

Here's the core question that's been haunting Congress since the Kids Online Safety Act first gained momentum: should social media companies be legally responsible for the foreseeable harm their products cause to young people? Senator Marsha Blackburn thinks yes. The House thinks that sounds like a lawsuit factory.

The Senate version of KOSA includes a "duty of care" provision—a legal obligation that treats platform design decisions the way product liability law treats a car manufacturer's braking system. If your algorithmic feed is designed to maximize engagement and you know it correlates with depression in minors, you're on the hook. The House stripped this language entirely, replacing it with what Blackburn publicly called a "diluted" framework that amounts to voluntary best practices.

"Any kids' online safety package sent to President Trump's desk must include a duty of care. We will not support a bill that guts the core accountability mechanism for Big Tech." — Sen. Marsha Blackburn

This isn't a procedural disagreement—it's a philosophical one. The Senate wants social media regulated like a product. The House wants it regulated like a service. That distinction determines whether Meta, TikTok, and Snap face billions in liability or a sternly worded compliance checklist. Watch this deadlock—it's the single biggest variable in determining whether federal tech regulation actually has teeth in 2026.

02
A pharmaceutical-style warning label peeling off a giant smartphone screen with New Jersey state imagery

New Jersey Wants to Slap a Surgeon General's Warning on Your Feed

Think about the last time you opened Instagram. Did it come with a warning that said "this product may harm your mental health"? Assemblywoman Andrea Katz wants to change that. Her three-bill package—A4013, A4014, and A4015—is the first state-level attempt to treat social media platforms the way we treat cigarettes: as products whose risks must be disclosed at the point of consumption.

The "Kids Code" isn't subtle. It mandates "black-box" mental health warnings on platforms that use addictive design patterns. It also requires "Privacy by Default"—the highest security settings applied automatically for minors, no opt-in needed. This flips the current paradigm where platforms ship with maximum data collection and make you hunt through 47 settings screens to dial it back.

Line chart showing explosive growth in state-level social media legislation from 12 bills in 2021 to 198 in 2025, with 2026 on pace for 356
State legislatures aren't waiting for Congress. The pace of social media regulation bills has increased 16x since 2021, with 2026 Q1 already surpassing many full-year totals.

The significance here isn't just the warning labels—it's the framing. By classifying algorithmic feeds as a public health hazard, New Jersey is creating legal precedent that could cascade. If a court upholds that social media design features are analogous to addictive substances, the liability implications for Meta and ByteDance are enormous. "As a mom of three teenagers," Katz told reporters, "I see firsthand how central social media is—but we need to prioritize mental health over maximizing engagement."

03
A surreal scene of a translucent glass robot sitting across from a teenager on a therapy couch

Pennsylvania Says Your Kid's AI Therapist Needs a License

The Pennsylvania Senate just passed a bill that acknowledges something the rest of the country is still catching up to: the AI companion your 19-year-old is confiding in at 2 AM is not a harmless chatbot. It's an unregulated pseudo-therapist with zero clinical training and a financial incentive to keep the conversation going.

The SAFECHAT Act (SB 1090) passed 49-1—the kind of bipartisan margin that tells you something genuinely alarming has happened. The bill targets AI companions on platforms like Character.AI and Replika, requiring them to detect high-risk language around self-harm and suicide and immediately surface crisis resources. It also mandates that chatbots remind users every three hours that they're talking to a machine, not a person.
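To make the mechanics concrete, here's a rough sketch of the two behaviors the bill asks for: flag high-risk language and surface crisis resources, and periodically remind the user they're talking to software. The keyword list and function names below are invented for illustration; a real system would use a trained classifier, and nothing here is the bill's actual statutory language.

```python
import time

# Illustrative phrases only; a production system would use a classifier.
HIGH_RISK_PHRASES = ("kill myself", "end it all", "hurt myself", "suicide")

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "You can call or text 988 (Suicide & Crisis Lifeline) to reach a person right now."
)
DISCLOSURE_MESSAGE = "Reminder: you are talking to an AI program, not a human."
DISCLOSURE_INTERVAL_SECONDS = 3 * 60 * 60  # the three-hour reminder cadence

def moderate_reply(user_message: str, bot_reply: str, last_disclosure: float) -> tuple[str, float]:
    """Wrap a chatbot reply with the two safety behaviors described above."""
    now = time.time()
    parts = []

    # 1. Detect high-risk language and surface crisis resources immediately.
    if any(phrase in user_message.lower() for phrase in HIGH_RISK_PHRASES):
        parts.append(CRISIS_MESSAGE)

    # 2. Remind the user at intervals that they are talking to a machine.
    if now - last_disclosure >= DISCLOSURE_INTERVAL_SECONDS:
        parts.append(DISCLOSURE_MESSAGE)
        last_disclosure = now

    parts.append(bot_reply)
    return "\n\n".join(parts), last_disclosure
```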

"Recent heartbreaking stories have come to light of minors who have used AI chatbots to cope with trauma… unfortunately, some responses contributed to incidents of self-harm." — Sen. Tracy Pennycuick

That 49-1 vote is the real signal. This isn't a partisan wedge issue—it's a "we've-all-seen-the-case-studies" moment. The regulation of AI companions is a new frontier that extends beyond traditional social media, and Pennsylvania just established the template. Every other state legislature is watching.

04
A pediatrician's stethoscope that transforms into a smartphone at its end, symbolizing digital health screening

Your Pediatrician Just Got a Better Playbook Than "Less Screen Time"

For years, the medical establishment's advice about social media and young people boiled down to a number: hours per day. Two hours was the magic threshold. Less was better. More was concerning. The American Academy of Pediatrics just retired that entire framework.

Their new clinical policy statement introduces the "5 Cs"—Child, Content, Calm, Crowding Out, and Communication. Instead of asking "how many hours?" clinicians are now trained to ask "what are you doing on there, how does it make you feel, what is it replacing, and who are you talking to about it?" This is a paradigm shift from population-level panic to individual-level screening.

The most striking recommendation: counsel young adults not to rely on social media as their primary emotional regulation tool. The AAP now explicitly warns that reaching for your phone whenever you're anxious or sad creates a dependency loop that mirrors the "social media inertia" pattern researchers identified this same week (see Section 05). Instead, clinicians should screen for "Problematic Internet Use" as a distinct clinical concern—not a moral failing, not a generational quirk, but a pattern that warrants the same clinical attention as disordered eating or substance misuse.

This changes how millions of American doctors will counsel young adults during well-visits. The 5 Cs framework finally gives clinicians a structured tool that matches the complexity of the problem.

05
An abstract visualization of a person trapped in an infinite scroll loop, dissolving into cascading social media cards spiraling downward

The Real Danger Isn't How Long You Scroll. It's Whether You Can Stop.

Two studies published this week may have finally settled the "screen time" debate—and the answer isn't what most parents expect. A University of Edinburgh team tracked young people in real time over two weeks and found that total time on social media was a weak predictor of depression. What predicted depression strongly? "Social media inertia"—the inability to close the app even when you're feeling worse.

Side-by-side scatter plots comparing screen time vs depression (weak correlation) with social media inertia vs depression (strong correlation)
Edinburgh researchers found that "social media inertia"—the inability to stop scrolling even while feeling negative—is a far stronger predictor of depression than total screen time alone.
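If you're wondering how a fuzzy idea like "inertia" becomes a number, here's a toy operationalization in Python. The session fields and the scoring rule are invented for illustration; they are not the Edinburgh team's actual measures or code.

```python
from dataclasses import dataclass

@dataclass
class Session:
    minutes: float
    mood_before: int      # self-reported check-in, e.g. 1 (low) to 5 (high)
    mood_during: int
    kept_scrolling: bool  # did the person keep using the app after feeling worse?

def inertia_score(sessions: list[Session]) -> float:
    """Fraction of worsening-mood sessions where the person kept scrolling anyway."""
    worse = [s for s in sessions if s.mood_during < s.mood_before]
    if not worse:
        return 0.0
    return sum(s.kept_scrolling for s in worse) / len(worse)

def total_hours(sessions: list[Session]) -> float:
    """The old metric: raw time on the app, regardless of how it felt."""
    return sum(s.minutes for s in sessions) / 60
```

The contrast is the point: the raw-hours metric throws away the mood signal entirely, which is exactly why it predicts so little on its own.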

A separate study by van der Wal et al. in Current Psychology reinforced this with a massive dataset: 44,000 daily diaries from young adults. Their finding? Not all platforms are equal. TikTok, Instagram, and YouTube consistently correlated with lower self-esteem and well-being, while WhatsApp and Snapchat showed neutral or positive effects. The differentiator: infinite scroll and algorithmic feeds versus direct peer-to-peer messaging.

Horizontal bar chart showing TikTok, Instagram, and YouTube with negative well-being impact scores while Snapchat and WhatsApp show positive impacts
Analysis of 44,000 daily diaries reveals that platforms built on algorithmic feeds and infinite scroll (TikTok, Instagram, YouTube) consistently lower well-being, while messaging-first platforms (WhatsApp, Snapchat) show positive or neutral effects.

This matters enormously for policy. If the problem is specific design features rather than "screens" broadly, then regulation can be surgically precise. Ban autoplay for minors. Require algorithmic feed opt-in rather than opt-out. These are tractable design changes, not Luddite fantasies. The science is pointing lawmakers toward interventions that could actually work.

06
A dramatic courtroom scene with a massive smartphone as the central exhibit displaying internal corporate emails with redaction marks

Meta Knew. The Jury Now Knows Too.

The phrase "they knew" gets thrown around a lot in tech regulation debates. This week, in a California courtroom, it stopped being rhetoric and became evidence. Internal Meta emails from 2016 were unsealed during the KGM v. Meta bellwether trial, and the language is breathtaking in its bluntness.

Executives warned that giving parents transparency into their teenager's browsing activity would "ruin the product from the start." Other documents compared the platform's algorithmic engagement features to "pushing drugs" to keep teenagers "hooked." This isn't a plaintiff's lawyer's interpretation—it's the company's own internal vocabulary.

The timing of this evidence couldn't be more significant. As legislatures debate whether to impose a duty of care (Section 01), a jury is simultaneously evaluating whether Meta already breached one. If the KGM verdict goes against Meta, it establishes that internal knowledge of harm plus failure to act equals liability. That's the legal framework that turns every other pending social media lawsuit—and there are thousands—from speculative to near-certain.

The lead counsel's statement captures the stakes: "The evidence shows these companies knew their products were defective by design and chose to prioritize growth over child safety." A multi-billion dollar verdict or settlement is now squarely on the table. Watch this case—it may accomplish through the courts what Congress has failed to do through legislation.

Infographic showing the five-pronged approach America is taking to tackle social media harm in March 2026: Federal Duty of Care, State Warning Labels, AI Companion Regulation, Clinical Framework Update, and Courtroom Accountability
The New Playbook: Five simultaneous fronts in America's March 2026 push to regulate social media's impact on young adults

The Scroll Stops Here

What's remarkable about this week isn't any single development—it's the convergence. Legislatures, clinicians, researchers, and courts are all arriving at the same conclusion simultaneously: the burden of proof has shifted. It's no longer on young people to prove social media harms them. It's on platforms to prove their products are safe. That's a fundamentally different conversation than we were having even a year ago.

The question isn't whether regulation is coming. It's whether it'll be smart enough to target the actual mechanisms of harm—the infinite scrolls, the algorithmic amplification, the parasocial AI relationships—rather than blunt instruments like screen time caps that the research says don't work. The science is clear. The evidence is unsealed. The bills are on the floor. What happens next depends on whether we're willing to be as precise in our solutions as we are in our diagnoses.
