Plain-language definitions of 51 online safety terms every parent and educator should know.
Catfishing occurs when someone creates a fake online identity — using false photos, a made-up name, and fabricated details — to deceive another person, typically to pursue a relationship or manipulate them emotionally. Catfishers often steal photos from other people's social media accounts to make the fake profile convincing.
Cyberbullying is bullying that happens through digital devices — phones, computers, or tablets — via apps, social media, text messages, or online gaming. It includes sending mean messages, spreading rumors online, posting embarrassing photos, or excluding someone from online groups repeatedly over time.
Cyberstalking is the repeated use of technology to harass, monitor, or intimidate a specific person. It goes beyond bullying to include systematic tracking of someone's online activity, unwanted contact across multiple platforms, threats, and attempts to monitor their physical location.
Digital citizenship refers to the responsible, ethical, and safe use of technology and the internet. It encompasses understanding online rights and responsibilities, practicing respectful online behavior, protecting personal privacy, and thinking critically about digital content.
A digital footprint is the trail of data left behind whenever someone uses the internet — including posts, photos, comments, search history, location data, and account activity. Some parts are active (things you intentionally share) and others are passive (data collected by apps and websites in the background).
Doxxing (also written as 'doxing') is the act of researching and publicly exposing private personal information about someone — such as their home address, phone number, school, or workplace — without their consent, usually with the intent to harass, threaten, or harm them.
Grooming is the process by which an adult builds trust and emotional connection with a child — and often their family — in order to manipulate and exploit them. Online grooming typically involves flattery, gift-giving, shared secrets, and gradually pushing boundaries to normalize inappropriate interactions.
Media literacy is the ability to access, analyze, evaluate, and create media in a variety of forms. In a digital context, it includes recognizing misinformation, understanding how algorithms shape what we see, evaluating sources for credibility, and being aware of advertising and persuasion techniques.
Netiquette (internet etiquette) refers to the informal rules of respectful, considerate behavior online — the digital equivalent of social manners. It covers how to communicate clearly and kindly in digital spaces, how to treat others with respect, and how to navigate disagreements without escalating conflict.
Online predators are adults who use the internet to target and exploit minors, typically through social media, gaming platforms, or messaging apps. They often use deception — pretending to be peers or romantic interests — and build trust through a gradual process called grooming before attempting to exploit children.
Sextortion is a form of online blackmail in which someone threatens to share sexual images or videos of a person unless they comply with demands — typically for more explicit content, money, or other favors. It can follow a romantic deception or the theft of private images through hacking.
Online stranger danger refers to the risks posed when children interact with people they don't know in real life through social media, gaming, messaging apps, or online communities. Unlike in-person encounters, online strangers can easily conceal their true identity, age, or intentions.
Age verification refers to methods used by websites and apps to confirm that users meet the minimum age requirement — typically 13 under COPPA or 18 for adult content. Methods range from simple self-reported birthdate entry (easily circumvented) to more robust ID-based verification.
A content filter is a tool or service that blocks access to specific types of content — websites, apps, or material — based on rules or categories. Filters can operate at the app level, device level, or network level (router or DNS), and range from keyword blocking to AI-powered content classification.
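At its core, a category-based filter is a lookup: classify the destination, then check the category against a block list. The sketch below illustrates that decision with hypothetical domains and categories (real filters rely on large, professionally maintained databases and machine-learning classification):

```python
# Hypothetical block list and domain database for illustration only.
BLOCKED_CATEGORIES = {"adult", "gambling"}

DOMAIN_CATEGORIES = {
    "example-casino.test": "gambling",
    "example-news.test": "news",
}

def is_blocked(domain: str) -> bool:
    """Return True if the domain's category is on the block list."""
    category = DOMAIN_CATEGORIES.get(domain, "uncategorized")
    return category in BLOCKED_CATEGORIES

print(is_blocked("example-casino.test"))  # True: "gambling" is a blocked category
print(is_blocked("example-news.test"))    # False: "news" is allowed
```

In practice the hard part is classification — keeping the category database accurate for billions of pages — which is why commercial filters differ mainly in the quality of that data, not in the blocking logic itself.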
A content rating is a classification that indicates the appropriateness of content for different audiences based on age and maturity level. Systems like G, PG, PG-13, and R (originally developed for film) are now being applied to social media content to help families manage what young users see.
DNS filtering is a network-level content control that blocks access to websites and online services by intercepting domain name requests before they resolve. By pointing your home router to a family-safe DNS provider, you can block inappropriate content across every device on your network without installing software on each device.
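The mechanism is simple: when a device asks the resolver for a domain's address, a filtering resolver answers with a harmless "sinkhole" address instead of the real one for blocked domains. A minimal sketch of that decision, with a made-up blocklist (real family-safe DNS services maintain large, categorized databases):

```python
SINKHOLE = "0.0.0.0"  # address returned in place of the real one for blocked domains

# Hypothetical blocklist for illustration.
BLOCKLIST = {"adult-site.test", "gambling-site.test"}

def resolve(domain: str, real_address: str) -> str:
    """Return the sinkhole address for blocked domains; otherwise
    pass the real answer through unchanged."""
    if domain in BLOCKLIST:
        return SINKHOLE
    return real_address

print(resolve("adult-site.test", "203.0.113.7"))  # "0.0.0.0": request is sinkholed
print(resolve("school-site.test", "203.0.113.9")) # real address passes through
```

Because every device on the network uses the router's DNS settings, one configuration change covers phones, tablets, consoles, and smart TVs at once — though a tech-savvy teen can bypass it by manually changing a device's DNS server, so it works best alongside device-level controls.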
Family sharing refers to features offered by device ecosystems (Apple Family Sharing, Google Family Link) and individual apps that allow parents to manage their children's accounts, approve app downloads, share purchases, and monitor usage from a central parent account.
A finsta (fake Instagram) is a secondary, private Instagram account used to share more candid, unfiltered content with a smaller, trusted group of followers — as opposed to a 'rinsta' (real Instagram), which is the polished, public-facing account. The term has extended to describe similar secondary accounts on other platforms.
Parental controls are tools that allow parents to manage and restrict their child's access to content, apps, features, or screen time. They range from built-in device features (like Apple Screen Time or Google Family Link) to app-level controls offered by individual platforms.
Restricted mode (also called 'safe mode' or 'supervised experience' on various platforms) is a setting on streaming and social media platforms that filters out content that may be inappropriate for younger audiences. It's typically available in account settings and can often be locked with a passcode.
Safe search is a setting in search engines that filters out explicit content — including adult images, videos, and websites — from search results. Google, Bing, and DuckDuckGo all offer safe search settings, and some routers and DNS services can enforce safe search network-wide.
Shadowbanning (or ghost banning) occurs when a platform quietly restricts a user's content — making posts invisible or less visible to others — without explicitly notifying the user that they've been penalized. The user can still see their own posts but their reach is significantly reduced.
A supervised account is a type of online account designed for minors, linked to a parent account that allows the parent to manage settings, monitor activity, and approve content. Several platforms offer supervised experiences — including Google's supervised Google accounts and some social media family pairing features.
Two-factor authentication (2FA) is a security process that requires two forms of verification to log into an account — typically a password plus a one-time code sent via text message or generated by an authenticator app. Even if a password is compromised, 2FA makes unauthorized access much harder.
Body image issues refer to negative, distorted, or obsessive thoughts about one's physical appearance. Social media exposure to heavily filtered, idealized images of bodies has been linked to increased body dissatisfaction, disordered eating behaviors, and eating disorders — particularly among teenage girls, though boys are increasingly affected.
Doomscrolling is the compulsive consumption of negative news and distressing content online — continuing to scroll even when the content is making you feel worse. It's driven by a combination of anxiety, the algorithm's tendency to surface engaging (often negative) content, and the variable reward mechanism that makes social feeds hard to put down.
Eating disorder content refers to posts, communities, or media that promote, glorify, or provide instruction for disordered eating behaviors — including extreme restriction, purging, or other harmful practices. This content has historically circulated in communities using coded hashtags and language to avoid platform moderation.
FOMO, or Fear of Missing Out, is the anxious feeling that others are having rewarding experiences you're excluded from. Social media amplifies FOMO by providing a constant feed of other people's highlight reels — parties, trips, social gatherings — curated to appear more frequent and more fun than they actually are.
Influencer culture refers to the ecosystem of social media creators who monetize large followings by promoting products, lifestyles, and values to their audiences. It encompasses sponsored content, brand partnerships, and the aspiration to earn income through online fame — which has become a common career goal among teenagers.
A parasocial relationship is a one-sided emotional connection a viewer or follower develops with a media figure — a YouTuber, streamer, influencer, or celebrity — who is unaware of the individual's existence. While these relationships are normal and can be positive, they become concerning when they replace or crowd out real relationships.
Social comparison is the tendency to evaluate ourselves by comparing to others. On social media, this is amplified because platforms predominantly display highlight reels — the most attractive, successful, and exciting moments of other people's lives — creating an unrepresentative standard that most people feel they fall short of.
The relationship between social media use and adolescent mental health is one of the most studied topics in contemporary psychology. Research shows associations between heavy social media use and increased anxiety, depression, loneliness, and sleep disruption — particularly for teens who use it passively (scrolling) rather than actively (creating, connecting).
COPPA, the Children's Online Privacy Protection Act, is a U.S. federal law that restricts the collection of personal data from children under 13 without verifiable parental consent. It's the reason most U.S.-based apps and websites set their minimum age at 13 and why some platforms offer specific children's versions.
Data privacy is the principle that individuals should have control over their personal information — how it's collected, stored, used, and shared by companies and governments. In the context of apps and social media, it refers to what information platforms gather about users and how that data is used, typically for advertising.
Non-consensual intimate image (NCII) abuse, often referred to as revenge porn, is the distribution of private sexual images or videos without the subject's consent. This is a form of sexual abuse and is illegal in most U.S. states and many countries. It includes images that were originally shared consensually within a relationship.
Online privacy refers to the right to control what personal information is shared online, with whom, and how it's used. In practice, it involves understanding what data apps and websites collect, how to configure account privacy settings, and what information to keep off public platforms.
Oversharing refers to sharing more personal information online than is safe, appropriate, or necessary — including details that enable identity theft, location tracking, or unwanted contact. What counts as oversharing depends on the platform, the audience, and the type of information.
A digital detox is a deliberate period of time during which a person refrains from using digital devices — particularly social media and entertainment apps — to reduce stress, improve wellbeing, and reconnect with offline life. It can range from a few hours each day to extended periods of days or weeks.
An echo chamber is a social environment — online or offline — where a person only encounters information and opinions that reinforce their existing beliefs. Online echo chambers are amplified by algorithms, by the tendency to follow accounts that share your views, and by social norms within communities that discourage dissent.
A filter bubble is the intellectual isolation that can occur when algorithms show people content they already agree with or engage with — gradually limiting exposure to diverse viewpoints. Social media algorithms optimize for engagement, and content that confirms existing beliefs tends to generate more engagement than challenging content.
Nomophobia (no-mobile-phone phobia) is the fear or anxiety of being without a mobile phone or being unable to use it. It encompasses anxiety about a dead battery, no signal, or simply not having the phone within reach. While not a formal clinical diagnosis, it describes a real behavioral pattern that's increasingly common among teens.
Screen addiction (also called compulsive device use or problematic social media use) refers to a pattern of digital device use that is difficult to control, continues despite negative consequences, and interferes with daily life — including sleep, relationships, schoolwork, and physical health. It shares characteristics with behavioral addictions.
Screen time refers to the total time spent using devices with screens — smartphones, tablets, computers, TVs, and gaming consoles. Health organizations distinguish between types of screen time: passive consumption (watching), interactive (gaming, social media), and educational use.
Sleep disruption from device use occurs when smartphones, tablets, or computers interfere with sleep quality or quantity — through blue light exposure that suppresses melatonin, social media notifications that interrupt sleep, or the psychological stimulation that makes it hard to wind down.
Cancel culture refers to the practice of collectively withdrawing support from — or publicly calling out — individuals or organizations that have done or said something considered objectionable. Online, it often manifests as coordinated campaigns to damage someone's reputation, career, or social standing.
A deepfake is a piece of synthetic media — video, audio, or image — created using artificial intelligence to replace one person's likeness with another, or to fabricate realistic-seeming content that never actually happened. The technology has advanced to the point where deepfakes can be convincing even to trained observers.
Online hate speech refers to content that attacks or demeans individuals or groups based on characteristics like race, ethnicity, religion, gender, sexual orientation, or disability. It ranges from slurs and stereotyping to explicit calls for violence, and is present on virtually all major platforms to varying degrees.
Misinformation is false or inaccurate information that is shared, regardless of intent; when it is created or spread deliberately to deceive, it is called disinformation. Online, it spreads rapidly because it tends to be more emotionally engaging than accurate information, and because social media algorithms prioritize engagement over accuracy.
Online radicalization is the process through which a person is gradually exposed to increasingly extreme ideologies — political, religious, or otherwise — through online communities, content recommendations, and social influence. Algorithms that prioritize engagement can accelerate this process by recommending increasingly extreme content.
A pile-on is when many people collectively direct criticism, mockery, or harassment toward a single individual online — often triggered by a social media post being shared out of context or condemned by a prominent account. Even when a pile-on starts from a genuine grievance, it can rapidly become disproportionate and cause severe psychological harm.
Sexting refers to sending or receiving sexually explicit messages, images, or videos via electronic devices. Among teenagers, it has legal implications — in many jurisdictions, sharing explicit images of minors constitutes distribution of child sexual abuse material (CSAM) regardless of whether the minor consented to taking the image.
CleoSocial's content ratings and family controls address everything in this glossary — in one place.