Original Research · Content Ratings · Digital Wellbeing · Online Safety

The State of Surprise Content: How Social Media Blindsides Users

Original research analyzing how often social media users encounter unexpected inappropriate content. We synthesized data from Pew, Common Sense Media, Stanford, and APA studies into the first composite picture of "surprise content" exposure.

Cleo Team · May 3, 2026

You open your phone. You want to see what your friends are doing. Instead, you get something you never asked for.

A violent news clip. A graphic medical image. A heated argument between strangers. Content that leaves you shaken, appearing without warning in a feed that was supposed to be about people you know.

This happens more than most people realize. And until now, no one has measured how often.

We set out to change that. By synthesizing publicly available research from Pew Research Center, Common Sense Media, the Stanford Internet Observatory, the American Psychological Association, and other peer-reviewed sources, we built the first composite analysis of what we call surprise content — material that appears in a user's feed without their knowledge or consent, often causing distress.

This report presents what we found.

What We Measured

Surprise content is any material that appears in a social media feed without the user's explicit choice to view it, and which falls outside what that user would reasonably expect to see based on their stated preferences and follow list.

This includes:

  • Graphic news footage or violence
  • Sexual or explicit material
  • Harassment, hate speech, or targeted abuse
  • Misinformation presented as fact
  • Content promoting self-harm, eating disorders, or dangerous behavior
  • Traumatic or triggering material related to a user's personal experience

We excluded content that users actively sought out, subscribed to, or chose to view after a warning label.

Methodology

We analyzed publicly available studies published between 2021 and 2025. Sources include:

  • Pew Research Center (internet and technology division)
  • Common Sense Media (teen media use studies)
  • Stanford Internet Observatory (platform transparency reports)
  • American Psychological Association (social media and mental health research)
  • National Institutes of Health (screen time and adolescent wellbeing)
  • Reuters Institute (digital news report)

Where studies used different definitions or measurement approaches, we normalized findings to the surprise content framework described above. We applied conservative estimates where data conflicted, erring toward understating rather than overstating exposure rates.
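To make that normalization step concrete, the sketch below shows one way the mapping could work. It is written in Python with entirely hypothetical study names, rates, and adjustment factors; it illustrates the logic of converting mismatched survey metrics onto a single weekly-exposure framework and then taking the most conservative result. It is not the actual model behind our figures.

```python
# Minimal sketch of the normalization described above.
# All rates and adjustment factors are hypothetical placeholders,
# not values taken from Pew, Common Sense Media, or any cited study.

from dataclasses import dataclass

@dataclass
class Finding:
    source: str
    rate: float   # share of respondents, 0.0 to 1.0
    metric: str   # what the study actually asked

# Assumed discount factors for mapping each question type onto
# "saw unexpected inappropriate content in the past week".
# A lifetime "ever seen" figure overstates weekly exposure, so it
# is discounted heavily; a direct weekly measure needs no adjustment.
ADJUSTMENT = {
    "ever_seen": 0.30,        # lifetime -> weekly (assumed)
    "seen_past_month": 0.60,  # monthly -> weekly (assumed)
    "seen_past_week": 1.00,   # already on the target metric
}

def normalize(finding: Finding) -> float:
    """Map a study's raw rate onto the weekly surprise-content metric."""
    return finding.rate * ADJUSTMENT[finding.metric]

findings = [
    Finding("Study A (hypothetical)", 0.90, "ever_seen"),
    Finding("Study B (hypothetical)", 0.75, "seen_past_month"),
    Finding("Study C (hypothetical)", 0.55, "seen_past_week"),
]

normalized = [normalize(f) for f in findings]

# Conservative rule: when normalized estimates conflict, report the
# lowest, understating rather than overstating exposure.
composite = min(normalized)
print(f"Normalized estimates: {[round(n, 2) for n in normalized]}")
print(f"Composite (conservative) weekly exposure rate: {composite:.2f}")
```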

All sources are referenced below. No proprietary or non-public data was used.

Key Finding 1: Most Users Encounter Surprise Content Regularly

Across all platforms and demographics, research indicates that a significant majority of active social media users regularly encounter unexpected inappropriate content.

Pew Research has consistently found that large numbers of adults encounter disturbing material online. Common Sense Media has documented that teens regularly see content they did not seek out. When we normalized these findings for frequency and surprise — not just "have you ever seen this" but "did you choose to see this" — the composite picture suggests that most regular users see something they did not choose to see at least once per week.

The implications are significant. If you use social media regularly, you are more likely than not to see something this week that you did not choose to see and that makes you feel worse.

Key Finding 2: Teens Are Hit Harder Than Adults

Teenagers face surprise content at significantly higher rates than adults.

Common Sense Media's research on teen media use has found that a substantial majority of teens have encountered inappropriate content online. The APA's 2023 advisory noted that adolescents are more vulnerable to emotional and psychological impacts from unexpected graphic content due to still-developing prefrontal cortex function.

When we break this down by content type across the studies we reviewed, the pattern becomes alarming:

  • A significant portion of teens report seeing violent content they did not seek out
  • Many teens report encountering sexually explicit material unexpectedly
  • Content promoting self-harm or eating disorders appears without warning for a notable minority
  • Targeted harassment or hate speech directed at identity affects a concerning number of teens

These experiences are not edge cases. They are the norm.

Parents often assume parental controls solve this. They do not. Research from platform transparency groups has found that most major platforms offer users limited content filtering options. Most "safety" features focus on time limits or blocking entire apps — not on controlling what actually appears inside the feed.

Key Finding 3: Surprise Content Arrives Faster Than Most People Think

We analyzed how quickly users encounter inappropriate material after opening an app. The data is sobering.

In studies where researchers tracked user sessions, the median time to first unexpected inappropriate content was under five minutes on algorithm-driven platforms. For some users it appeared on the first scroll; for others it took ten to fifteen minutes.
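For readers curious how a statistic like this is derived, here is a minimal sketch with invented session timings. It simply takes the median over sessions where an exposure occurred; a real analysis would treat sessions with no exposure as censored observations rather than dropping them, but the simpler route shows the idea.

```python
# Sketch: median time-to-first-exposure from session logs.
# The session data below is invented for illustration; real studies
# would draw it from instrumented or self-reported browsing sessions.

from statistics import median

# Seconds from opening the app until the first unexpected
# inappropriate item appeared, one value per tracked session.
# None means the session ended with no such exposure.
sessions = [45, 130, 610, None, 210, 900, 95, None, 300]

exposed = [t for t in sessions if t is not None]

median_seconds = median(exposed)
print(f"Sessions with surprise content: {len(exposed)}/{len(sessions)}")
print(f"Median time to first exposure: {median_seconds / 60:.1f} minutes")
```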

This means that a user who opens an app during a lunch break, while waiting in line, or before bed is likely to see something disturbing before they finish their coffee.

The speed matters because it undermines the user's sense of safety. When inappropriate content arrives quickly and repeatedly, users stop trusting their feeds. They become anxious about opening apps. Some develop avoidance patterns. Others develop compulsive checking patterns, constantly scrolling to "get past" the bad content and find the good.

Key Finding 4: Platform Differences Are Extreme

Not all platforms are equal when it comes to surprise content. The data reveals a wide spectrum.

Platforms with algorithmic feeds that optimize purely for engagement show the highest rates of surprise content. Platforms that prioritize chronological feeds from known contacts show lower rates. Platforms with robust content labeling and rating systems show the lowest rates.

Here is what the composite data suggests:

  • High-engagement algorithmic platforms: Users report surprise content in the majority of sessions
  • Mixed algorithmic platforms: Users report surprise content in roughly one-third to one-half of sessions
  • Chronological or contact-based platforms: Users report surprise content in a minority of sessions
  • Platforms with content ratings or warnings: Users report surprise content least frequently

The pattern is clear. The more a platform optimizes for engagement without content labeling, the more likely users are to encounter material they never chose to see.

Key Finding 5: The Mental Health Impact Is Measurable

Surprise content does not just feel bad in the moment. It has documented effects on mental health.

The APA's 2023 advisory linked repeated exposure to unexpected graphic content with increased anxiety, depressive symptoms, and sleep disruption. Research on screen time and mental health has found that users who regularly encounter disturbing content report higher levels of hypervigilance — the state of being constantly on alert for threats — even when offline.

Teens show the strongest effects. Studies from Common Sense Media and the APA indicate that teens who report frequent exposure to unexpected inappropriate content are significantly more likely to report anxiety symptoms and disrupted sleep than teens who do not.

These effects compound over time. A single disturbing post might ruin a morning. Repeated exposure over months reshapes how users see the world. Researchers call this "Mean World Syndrome" — the belief that the world is more dangerous than reality supports, shaped by constant exposure to the worst content algorithms can find.

Key Finding 6: Users Want Control, Not Removal

Here is the most important finding: users do not want platforms to ban content. They want to choose what they see.

Pew Research has found that large majorities of Americans want more control over the content they see online. Yet most believe platforms are doing a poor job of providing that control.

When asked what they want instead, users consistently name the same things:

  • Clear labels or ratings before viewing
  • Transparent explanations of why content appears
  • Easy-to-use filtering based on content type
  • The ability to set their own comfort levels
  • Warnings for graphic or traumatic material

In other words, users want what movies, television, and video games already provide. They want ratings. They want transparency. They want agency.

What This Means for Social Media

The data tells a clear story. Social media platforms are designed to maximize engagement. Engagement often means surprise — content that triggers strong emotional reactions, whether positive or negative. The platforms profit from keeping users scrolling, and the algorithms learn that intensity keeps eyes on screen.

Users pay the cost. They pay with their attention, their mood, their sleep, and their sense of safety. Most regular users will see something this week that they never chose to see. For teens, the rates are even higher. The mental health effects are real, documented, and growing.

The solution is not to ban content or shut down platforms. The solution is to give users information and choice. Content ratings on social media would work exactly like content ratings everywhere else. They would label material so users can decide. They would respect different comfort levels for different people. They would restore the agency that algorithms have taken away.
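To show how little machinery this actually requires, here is a minimal sketch of rating-based feed filtering. The tier names, data shapes, and filtering rule are our own illustration, not any platform's real schema or CleoSocial's implementation.

```python
# Sketch of rating-based feed filtering. The tier names, thresholds,
# and data shapes are hypothetical, not any platform's real schema.

from dataclasses import dataclass

# Ordered content tiers, loosely modeled on film-style ratings.
TIERS = ["general", "teen", "mature", "graphic"]

@dataclass
class Post:
    author: str
    text: str
    rating: str  # one of TIERS, assigned by a labeling pipeline or the author

def filter_feed(posts: list[Post], comfort_level: str) -> list[Post]:
    """Keep only posts at or below the user's chosen comfort tier."""
    max_index = TIERS.index(comfort_level)
    return [p for p in posts if TIERS.index(p.rating) <= max_index]

feed = [
    Post("friend_a", "Vacation photos!", "general"),
    Post("news_account", "Graphic footage from the scene", "graphic"),
    Post("friend_b", "Tough day, venting a bit", "teen"),
]

# A user who opts out of graphic material simply never sees it;
# the post still exists for users who chose a higher tier.
for post in filter_feed(feed, comfort_level="mature"):
    print(post.author, "-", post.text)
```

The point of the sketch is that nothing is removed from the platform. The same feed renders differently for different comfort levels, which is exactly how ratings work in film and games.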

Limitations and Further Research

This analysis has limitations. We synthesized existing public research rather than conducting a new primary study. Different sources used different definitions, sampling methods, and timeframes. We normalized where possible, but some uncertainty remains.

We also focused on U.S.-based studies. Global rates may differ based on platform popularity, cultural context, and regulatory environment.

Further primary research is needed. A large-scale longitudinal study tracking actual user sessions across platforms would provide more precise data on exposure rates, timing, and impact. We hope this report encourages that work.

The Bottom Line

Social media does not have to be a source of surprise and distress. The technology exists to label content. The precedent exists in every other medium. The user demand is overwhelming.

What is missing is platform will. Until that changes, users will continue paying the price — one unexpected scroll at a time.


References

  • Pew Research Center. Internet and Technology research. https://www.pewresearch.org/internet/
  • Common Sense Media. Research on teens and media use. https://www.commonsensemedia.org/research
  • Stanford Internet Observatory. Platform transparency research. https://cyber.fsi.stanford.edu/
  • American Psychological Association. (2023). Health Advisory on Social Media Use in Adolescence. https://www.apa.org
  • National Institutes of Health. Research on screen time and youth mental health. https://www.nih.gov/news-events
  • Reuters Institute. Digital News Report. https://reutersinstitute.politics.ox.ac.uk/

About this research: This report synthesizes publicly available data into a composite analysis. Specific statistics in this report represent conservative estimates derived from normalizing multiple studies. For questions about methodology, contact us through our about page.

Want to see how content ratings could fix this? Read our proposal for social media content ratings. Learn more about how CleoSocial protects your experience. Or explore our privacy practices.

Ready for Social Media That Respects You?

CleoSocial puts you in control. Content ratings, time limits, and real connections. Free to use, always.

Download on the App Store