You’re Not Powerless in the Face of Online Harassment

  • Viktorya Vilk

Eight steps to take.

If you or someone you know is experiencing online harassment, remember that you are not powerless. There are concrete steps you can take to defend yourself and others. First, understand what’s happening to you. If you’re being critiqued or insulted, you can choose to refute it or let it go. But if you’re being abused, naming what you’re experiencing not only signals that it’s a tangible problem, but can also help you communicate with allies, employers, and law enforcement. Next, be sure to document. If you report online abuse and succeed in getting it taken down, you could lose valuable evidence. Save emails, voicemails, and texts. Take screenshots on social media and copy direct links whenever possible. Finally, assess your safety. If you’re being made to feel physically unsafe in any way — trust your instincts. While police may not always be able to stop the abuse (and not all authorities are equally well-trained in dealing with it), at the very least you are creating a record that could be useful later.

Online abuse — from impersonation accounts to hateful slurs and death threats — began with the advent of the internet itself, but the problem is pervasive and growing. A 2017 study from the Pew Research Center found that more than 40% of Americans have experienced online abuse, and more than 60% have witnessed it. People of color and LGBTQ+ people are disproportionately targeted, and women are twice as likely as men to experience sexual harassment online.

  • Viktorya Vilk is Program Director for Digital Safety and Free Expression at PEN America, where she develops resources, including the Online Harassment Field Manual, and conducts trainings on online abuse, self-defense, and best practices for offering support.

Hate Speech on Social Media: Global Comparisons

A memorial outside Al Noor mosque in Christchurch, New Zealand.

  • Hate speech online has been linked to a global increase in violence toward minorities, including mass shootings, lynchings, and ethnic cleansing.
  • Policies used to curb hate speech risk limiting free speech and are inconsistently enforced.
  • Countries such as the United States grant social media companies broad powers in managing their content and enforcing hate speech rules. Others, including Germany, can force companies to remove posts within certain time periods.

Introduction

A mounting number of attacks on immigrants and other minorities has raised new concerns about the connection between inflammatory speech online and violent acts, as well as the role of corporations and the state in policing speech. Analysts say trends in hate crimes around the world echo changes in the political climate, and that social media can magnify discord. At their most extreme, rumors and invective disseminated online have contributed to violence ranging from lynchings to ethnic cleansing.

The response has been uneven, and the task of deciding what to censor, and how, has largely fallen to the handful of corporations that control the platforms on which much of the world now communicates. But these companies are constrained by domestic laws. In liberal democracies, these laws can serve to defuse discrimination and head off violence against minorities. But such laws can also be used to suppress minorities and dissidents.

How widespread is the problem?

Incidents have been reported on nearly every continent. Much of the world now communicates on social media, with nearly a third of the world’s population active on Facebook alone. As more and more people have moved online, experts say, individuals inclined toward racism, misogyny, or homophobia have found niches that can reinforce their views and goad them to violence. Social media platforms also offer violent actors the opportunity to publicize their acts.

[Chart: Percentage agreeing that “people should be able to make statements that are offensive to minority groups publicly,” with the United States at 67% in agreement.]

Social scientists and others have observed how social media posts, and other online speech, can inspire acts of violence:

  • In Germany, a correlation was found between anti-refugee Facebook posts by the far-right Alternative for Germany party and attacks on refugees. Scholars Karsten Müller and Carlo Schwarz observed that upticks in attacks, such as arson and assault, followed spikes in hate-mongering posts.
  • In the United States, perpetrators of recent white supremacist attacks have circulated among racist communities online and have embraced social media to publicize their acts. Prosecutors said the Charleston church shooter, who killed nine black clergy and worshippers in June 2015, engaged in a “self-learning process” online that led him to believe that the goal of white supremacy required violent action.
  • The 2018 Pittsburgh synagogue shooter was a participant in the social media network Gab, whose lax rules have attracted extremists banned by larger platforms. There, he espoused the conspiracy theory that Jews sought to bring immigrants into the United States and render whites a minority, before killing eleven worshippers at a refugee-themed Shabbat service. This “great replacement” trope, which was heard at the white supremacist rally in Charlottesville, Virginia, a year prior and originates with the French far right, expresses demographic anxieties about nonwhite immigration and birth rates.
  • The great replacement trope was in turn espoused by the perpetrator of the 2019 New Zealand mosque shootings, who killed forty-nine Muslims at prayer and sought to broadcast the attack on YouTube.
  • In Myanmar, military leaders and Buddhist nationalists used social media to slur and demonize the Rohingya Muslim minority ahead of and during a campaign of ethnic cleansing. Though Rohingya comprised perhaps 2 percent of the population, ethnonationalists claimed that Rohingya would soon supplant the Buddhist majority. The UN fact-finding mission said, “Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet” [PDF].
  • In India, lynch mobs and other perpetrators of communal violence, in many cases acting on rumors that originated in WhatsApp groups, have been on the rise since the Hindu-nationalist Bharatiya Janata Party (BJP) came to power in 2014.
  • Sri Lanka has similarly seen vigilantism inspired by rumors spread online, targeting the Tamil Muslim minority. During a spate of violence in March 2018, the government blocked access to Facebook and WhatsApp, as well as the messaging app Viber, for a week, saying that Facebook had not been sufficiently responsive during the emergency.

Does social media catalyze hate crimes?

The same technology that allows social media to galvanize democracy activists can be used by hate groups seeking to organize and recruit. It also allows fringe sites, including peddlers of conspiracies, to reach audiences far broader than their core readership. Online platforms’ business models depend on maximizing reading or viewing times. Since Facebook and similar platforms make their money by enabling advertisers to target audiences with extreme precision, it is in their interests to let people find the communities where they will spend the most time.

Users’ experiences online are mediated by algorithms designed to maximize their engagement, which often inadvertently promote extreme content. Some web watchdog groups say YouTube’s autoplay function, in which the player, at the end of one video, tees up a related one, can be especially pernicious. The algorithm drives people to videos that promote conspiracy theories or are otherwise “divisive, misleading or false,” according to a Wall Street Journal investigative report. “YouTube may be one of the most powerful radicalizing instruments of the 21st century,” writes sociologist Zeynep Tufekci.
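
To make the mechanism concrete, here is a minimal, purely illustrative sketch of engagement-driven ranking. The `Video` type and `predicted_watch_minutes` score are hypothetical stand-ins for a platform’s internal signals, not any real recommender’s API; the point is only that optimizing a single engagement objective never down-ranks divisive or misleading content.

```python
# Illustrative sketch, not any platform's actual system: a recommender
# that ranks candidates purely by predicted engagement. Nothing in this
# objective penalizes misleading or divisive content.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # hypothetical model output

def recommend_next(candidates: list[Video], k: int = 3) -> list[Video]:
    # Greedy engagement maximization: sort by predicted watch time alone.
    return sorted(candidates, key=lambda v: v.predicted_watch_minutes,
                  reverse=True)[:k]

videos = [Video("calm explainer", 4.2), Video("outrage rant", 9.8),
          Video("conspiracy deep-dive", 12.5)]
print([v.title for v in recommend_next(videos, k=2)])
# ['conspiracy deep-dive', 'outrage rant']
```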

YouTube said in June 2019 that changes to its recommendation algorithm made in January had halved views of videos deemed “borderline content” for spreading misinformation. At that time, the company also announced that it would remove neo-Nazi and white supremacist videos from its site. Yet the platform has faced criticism that its efforts to curb hate speech did not go far enough. For instance, critics note that rather than removing videos that provoked homophobic harassment of a journalist, YouTube instead cut off the offending user from sharing in advertising revenue.

How do platforms enforce their rules?

Social media platforms rely on a combination of artificial intelligence, user reporting, and staff known as content moderators to enforce their rules regarding appropriate content. Moderators, however, are burdened by the sheer volume of content and the trauma that comes from sifting through disturbing posts, and social media companies don’t evenly devote resources across the many markets they serve.
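
As a rough illustration of how these three inputs can fit together, here is a hypothetical triage policy. The thresholds and categories are invented for the example; platforms do not publish their actual decision logic.

```python
# Hypothetical moderation triage combining the three enforcement inputs
# described above: an automated classifier score, user reports, and a
# queue for human moderators. All thresholds are invented.
def triage(toxicity_score: float, user_reports: int) -> str:
    if toxicity_score >= 0.95:
        return "auto-remove"    # high-confidence automated decision
    if toxicity_score >= 0.60 or user_reports >= 3:
        return "human-review"   # ambiguous cases go to moderators
    return "keep"

assert triage(0.97, 0) == "auto-remove"
assert triage(0.70, 0) == "human-review"
assert triage(0.10, 5) == "human-review"  # user reports alone can escalate
```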

A ProPublica investigation found that Facebook’s rules are opaque to users and inconsistently applied by its thousands of contractors charged with content moderation. (Facebook says there are fifteen thousand.) In many countries and disputed territories, such as the Palestinian territories, Kashmir, and Crimea, activists and journalists have found themselves censored, as Facebook has sought to maintain access to national markets or to insulate itself from legal liability. “The company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities,” ProPublica found.

Addressing the challenges of navigating varying legal systems and standards around the world—and facing investigations by several governments—Facebook CEO Mark Zuckerberg called for global regulations to establish baseline content, electoral integrity, privacy, and data standards.

Problems also arise when platforms’ artificial intelligence is poorly adapted to local languages and companies have invested little in staff fluent in them. This was particularly acute in Myanmar, where, Reuters reported, Facebook employed just two Burmese speakers as of early 2015. After a wave of anti-Muslim violence began in 2012, experts warned of the fertile environment that ultranationalist Buddhist monks had found on Facebook for disseminating hate speech to an audience newly connected to the internet after decades under a closed autocratic system.

Facebook admitted it had done too little after seven hundred thousand Rohingya were driven to Bangladesh and a UN human rights panel singled out the company in a report saying Myanmar’s security forces should be investigated for genocidal intent. In August 2018, it banned military officials from the platform and pledged to increase the number of moderators fluent in the local language.

How do countries regulate hate speech online?

In many ways, the debates confronting courts, legislatures, and publics about how to reconcile the competing values of free expression and nondiscrimination have been around for a century or longer. Democracies have varied in their philosophical approaches to these questions, as rapidly changing communications technologies have raised technical challenges of monitoring and responding to incitement and dangerous disinformation.

United States. Social media platforms have broad latitude [PDF], each establishing its own standards for content and methods of enforcement. Their broad discretion stems from Section 230 of the Communications Decency Act, the 1996 law that exempts tech platforms from liability for actionable speech by their users. Magazines and television networks, for example, can be sued for publishing defamatory information they know to be false; social media platforms cannot be found similarly liable for content they host.

[Chart: Data points on Americans’ level of concern over online hate speech, including that 59% believe online hate and harassment make hate crimes more common.]

Recent congressional hearings have highlighted the chasm between Democrats and Republicans on the issue. House Judiciary Committee Chairman Jerry Nadler convened a hearing in the aftermath of the New Zealand attack, saying the internet has aided white nationalism’s international proliferation. “The President’s rhetoric fans the flames with language that—whether intentional or not—may motivate and embolden white supremacist movements,” he said, a charge Republicans on the panel disputed. The Senate Judiciary Committee, led by Ted Cruz, held a nearly simultaneous hearing in which he alleged that major social media companies’ rules disproportionately censor conservative speech, threatening the platforms with federal regulation. Democrats on that panel said Republicans seek to weaken policies dealing with hate speech and disinformation that instead ought to be strengthened.

European Union. The bloc’s twenty-eight members all legislate the issue of hate speech on social media differently, but they adhere to some common principles. Unlike in the United States, it is not only speech that directly incites violence that comes under scrutiny; so too does speech that incites hatred or denies or minimizes genocide and crimes against humanity. Backlash against the millions of predominantly Muslim migrants and refugees who have arrived in Europe in recent years has made this a particularly salient issue, as has an uptick in anti-Semitic incidents in countries including France, Germany, and the United Kingdom.

In a bid to preempt bloc-wide legislation, major tech companies agreed to a code of conduct with the European Union in which they pledged to review posts flagged by users and take down those that violate EU standards within twenty-four hours. In a February 2019 review, the European Commission found that social media platforms were meeting this requirement in three-quarters of cases.

The Nazi legacy has made Germany especially sensitive to hate speech. A 2018 law requires large social media platforms to take down posts that are “manifestly illegal” under criteria set out in German law within twenty-four hours. Human Rights Watch raised concerns that the threat of hefty fines would encourage the social media platforms to be “overzealous censors.”

New regulations under consideration by the bloc’s executive arm would extend a model similar to Germany’s across the EU, with the intent of “preventing the dissemination of terrorist content online.” Civil libertarians have warned against the measure for its “vague and broad” definitions of prohibited content, as well as for making private corporations, rather than public authorities, the arbiters of censorship.

India. Under new social media rules, the government can order platforms to take down posts within twenty-four hours based on a wide range of offenses, as well as to turn over the identity of the user. As social media platforms have made efforts to stanch the sort of speech that has led to vigilante violence, lawmakers from the ruling BJP have accused them of censoring content in a politically discriminatory manner, disproportionately suspending right-wing accounts, and thus undermining Indian democracy. Critics of the BJP accuse it of deflecting blame from party elites to the platforms hosting them. As of April 2018, the New Delhi–based Association for Democratic Reforms had identified fifty-eight lawmakers facing hate speech cases, including twenty-seven from the ruling BJP. The opposition has expressed unease with potential government intrusions into privacy.

Japan. Hate speech has become a subject of legislation and jurisprudence in Japan in the past decade [PDF], as anti-racism activists have challenged ultranationalist agitation against ethnic Koreans. This attention to the issue attracted a rebuke from the UN Committee on the Elimination of Racial Discrimination in 2014 and inspired a national ban on hate speech in 2016, with the government adopting a model similar to Europe’s. Rather than specify criminal penalties, however, it delegates to municipal governments the responsibility “to eliminate unjust discriminatory words and deeds against People from Outside Japan.” A handful of recent cases concerning ethnic Koreans could pose a test: in one, the Osaka government ordered a website containing videos deemed hateful taken down, and in Kanagawa and Okinawa Prefectures courts have fined individuals convicted of defaming ethnic Koreans in anonymous online posts.

What are the prospects for international prosecution?

Cases of genocide and crimes against humanity could be the next frontier of social media jurisprudence, drawing on precedents set in Nuremberg and Rwanda. The Nuremberg trials in post-Nazi Germany convicted the publisher of the newspaper Der Stürmer; the 1948 Genocide Convention subsequently included “direct and public incitement to commit genocide” as a crime. During the UN International Criminal Tribunal for Rwanda, two media executives were convicted on those grounds. As prosecutors look ahead to potential genocide and war crimes tribunals for cases such as Myanmar, social media users with mass followings could be found similarly criminally liable.

Recommended Resources

Andrew Sellars sorts through attempts to define hate speech.

Columbia University compiles relevant case law from around the world.

The U.S. Holocaust Memorial Museum lays out the legal history of incitement to genocide.

Kate Klonick describes how private platforms have come to govern public speech.

Timothy McLaughlin chronicles Facebook’s role in atrocities against Rohingya in Myanmar.

Adrian Chen reports on the psychological toll of content moderation on contract workers.

Tarleton Gillespie discusses the politics of content moderation.


Social Media Surveillance by the U.S. Government

A growing and unregulated trend of online surveillance raises concerns for civil rights and liberties.

Rachel Levinson-Waldman

Social media has become a significant source of information for U.S. law enforcement and intelligence agencies. The Department of Homeland Security, the FBI, and the State Department are among the many federal agencies that routinely monitor social platforms, for purposes ranging from conducting investigations to identifying threats to screening travelers and immigrants. This is not surprising; as the U.S. Supreme Court has said, social media platforms have become “for many . . . the principal sources for knowing current events, . . . speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge” — in other words, an essential means for participating in public life and communicating with others.

At the same time, this growing — and mostly unregulated — use of social media raises a host of civil rights and civil liberties concerns. Because social media can reveal a wealth of personal information — including about political and religious views, personal and professional connections, and health and sexuality — its use by the government is rife with risks for freedom of speech, assembly, and faith, particularly for the Black, Latino, and Muslim communities that are historically targeted by law enforcement and intelligence efforts. These risks are far from theoretical: many agencies have a track record of using these programs to target minority communities and social movements. For all that, there is little evidence that this type of monitoring advances security objectives; agencies rarely measure the usefulness of social media monitoring and DHS’s own pilot programs showed that they were not helpful in identifying threats. Nevertheless, the use of social media for a range of purposes continues to grow.

In this Q&A, we survey the ways in which federal law enforcement and intelligence agencies use social media monitoring and the risks posed by its thinly regulated and growing use in various contexts.

Which federal agencies use social media monitoring?

Many federal agencies use social media, including the Department of Homeland Security (DHS), Federal Bureau of Investigation (FBI), Department of State (State Department), Drug Enforcement Administration (DEA), Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), U.S. Postal Service (USPS), Internal Revenue Service (IRS), U.S. Marshals Service, and Social Security Administration (SSA). This document focuses primarily on the activities of DHS, the FBI, and the State Department, as the agencies that make the most extensive use of social media for monitoring, targeting, and information collection.

Why do federal agencies monitor social media?

Publicly available information shows that federal agencies use social media for four main — and sometimes overlapping — purposes. The examples below are illustrative and do not capture the full spectrum of social media surveillance by federal agencies.

Investigations: Law enforcement agencies, such as the FBI and some components of DHS, use social media monitoring to assist with criminal and civil investigations. Some of these investigations may not even require a showing of criminal activity. For example, FBI agents can open an “assessment” simply on the basis of an “authorized purpose,” such as preventing crime or terrorism, and without a factual basis. During assessments, FBI agents can carry out searches of publicly available online information. Subsequent investigative stages, which require some factual basis, open the door for more invasive surveillance tactics, such as the monitoring and recording of chats, direct messages, and other private online communications in real time.

At DHS, Homeland Security Investigations (HSI) — which is part of Immigration and Customs Enforcement (ICE) — is the Department’s “principal investigative arm.” HSI asserts in its training materials that it has the authority to enforce any federal law, and it relies on social media when conducting investigations on matters ranging from civil immigration violations to terrorism. ICE agents can look at publicly available social media content for purposes ranging from finding fugitives to gathering evidence in support of investigations to probing “potential criminal activity,” a “threat detection” function discussed below. Agents can also operate undercover online and monitor private online communications, but the circumstances under which they are permitted to do so are not publicly known.

Monitoring to detect threats: Even without opening an assessment or other investigation, FBI agents can monitor public social media postings. DHS components from ICE to its intelligence arm, the Office of Intelligence & Analysis, also monitor social media — including specific individuals — with the goal of identifying potential threats of violence or terrorism. In addition, the FBI and DHS both engage private companies to conduct online monitoring of this type on their behalf. One firm, for example, was awarded a contract with the FBI in December 2020 to scour social media to proactively identify “national security and public safety-related events” — including various unspecified threats, as well as crimes — that have not yet been reported to law enforcement.

Situational awareness: Social media may provide an “ear to the ground” to help the federal government coordinate a response to breaking events. For example, a range of DHS components — from Customs and Border Protection (CBP) to the National Operations Center (NOC) to the Federal Emergency Management Agency (FEMA) — monitor the internet, including by keeping tabs on a broad list of websites and keywords being discussed on social media platforms and tracking information from sources like news services and local government agencies. Privacy impact assessments suggest there are few limits on the content that can be reviewed — for instance, the PIAs list a sweeping range of keywords that are monitored (ranging, for example, from “attack,” “public health,” and “power outage,” to “jihad”). The purposes of such monitoring include helping keep the public, private sector, and governmental partners informed about developments during a crisis such as a natural disaster or terrorist attack; identifying people needing help during an emergency; and knowing about “threats or dangers” to DHS facilities.

“Situational awareness” and “threat detection” overlap because they both involve broad monitoring of social media, but situational awareness has a wider focus and is generally not intended to monitor or preemptively identify specific people who are thought to pose a threat.

Immigration and travel screening: Social media is used to screen and vet travelers and immigrants coming into the United States and even to monitor them while they live here. People applying for a range of immigration benefits also undergo social media checks to verify information in their application and determine whether they pose a security risk.

How can the government’s use of social media harm people?

Government monitoring of social media can work to people’s detriment in at least four ways: (1) wrongly implicating an individual or group in criminal behavior based on their activity on social media; (2) misinterpreting the meaning of social media activity, sometimes with severe consequences; (3) suppressing people’s willingness to talk or connect openly online; and (4) invading individuals’ privacy. These are explained in further detail below.

Assumed criminality: The government may use information from social media to label an individual or group as a threat, including by characterizing ordinary activity (like wearing a particular sneaker brand or making common hand signs) or social media connections as evidence of criminal or threatening behavior. This kind of assumption can have high-stakes consequences. For example, the NYPD wrongly arrested 19-year-old Jelani Henry for attempted murder, after which he was denied bail and jailed for over a year and a half, in large part because prosecutors thought his “likes” and photos on social media proved he was a member of a violent gang. In another case of guilt by association, DHS officials barred a Palestinian student arriving to study at Harvard from entering the country based on the content of his friends’ social media posts. The student had neither written nor engaged with the posts, which were critical of the U.S. government. Black, Latino, and Muslim people are especially vulnerable to being falsely labeled threats based on social media activity, given that it is used to inform government decisions that are often already tainted by bias, such as gang determinations and travel screening decisions.

Mistaken judgments: It can be difficult to accurately interpret online activity, and the repercussions can be severe. In 2020, police in Wichita, Kansas, arrested a teenager on suspicion of inciting a riot based on a mistaken interpretation of his Snapchat post, in which he was actually denouncing violence. British travelers were interrogated at Los Angeles International Airport and sent back to the U.K. due to a border agent’s misinterpretation of a joking tweet. And DHS and the FBI disseminated reports to a Maine-area intelligence-sharing hub warning of potential violence at anti-police brutality demonstrations, based on fake social media posts by right-wing provocateurs; the hub passed the reports on as a warning to local police.

Chilling effects: People are highly likely to censor themselves when they think they are being watched by the government, and this undermines everything from political speech to creativity and other forms of self-expression. The Brennan Center’s lawsuit against the State Department and DHS documents how the collection of social media identifiers on visa forms — which are then stored indefinitely and shared across the U.S. government, and sometimes with state, local, and foreign governments — led a number of international filmmakers to stop talking about politics and promoting their work on social media. They self-censored because they were concerned that what they said online could be misinterpreted or reflect controversial viewpoints, and might therefore prevent them from getting a U.S. visa or be used to retaliate against them.

Loss of privacy: A person’s social media presence — their posts, comments, photos, likes, group memberships, and so on — can collectively reveal their ethnicity, political views, religious practices, gender identity, sexual orientation, personality traits, and vices. Further, social media can reveal more about a person than they intend. Platforms’ privacy settings frequently change and can be difficult to navigate, and even when individuals keep information private, it can be disclosed through the activity or identity of their connections on social media. DHS, at least, has recognized this risk, categorizing social media handles as “sensitive personally identifiable information” that could “result in substantial harm, embarrassment, inconvenience, or unfairness to an individual.” Yet the agency has failed to place robust safeguards on social media monitoring.

Who is harmed by social media monitoring?

While all Americans may be harmed by untrammeled social media monitoring, people from historically marginalized communities and those who protest government policies typically bear the brunt of suspicionless surveillance. Social media monitoring is no different.

Echoing the transgressions of the civil rights era, there are myriad examples of the FBI and DHS using social media to surveil people speaking out on issues from racial justice to the treatment of immigrants. Both agencies have monitored Black Lives Matter activists. In 2017, the FBI created a specious terrorism threat category called “Black Identity Extremism” (BIE), which can be read to include protests against police violence. This category has been used to rationalize continued surveillance of Black activists, including monitoring of their social media activity. In 2020, DHS’s Office of Intelligence & Analysis (I&A) used social media and other tools to target and monitor racial justice protestors in Portland, Oregon, justifying this surveillance by pointing to the threat of vandalism to Confederate monuments. I&A then disseminated intelligence reports on journalists who reported on this overreach.

DHS especially has focused social media surveillance on immigration activists, including those engaged in peaceful protests against the Trump administration’s family separation policy and others characterized as “anti-Trump protests.” From 2017 through 2020, ICE kept tabs on immigrant rights groups’ social media activity, and in late 2018 and early 2019, CBP and HSI used information gleaned from social media in compiling dossiers and putting out travel alerts on advocates, journalists, and lawyers — including U.S. citizens — whom the government suspected of helping migrants south of the U.S. border.

Muslim, Arab, Middle Eastern, and South Asian communities have often been particular targets of the U.S. government’s discriminatory travel and immigration screening practices, including social media screening. The State Department’s collection of social media identifiers on visa forms, for instance, came out of President Trump’s Muslim ban, while earlier social media monitoring and collection programs focused disproportionately on people from predominantly Muslim countries and Arabic speakers.

Is social media surveillance an effective way of getting information about potential threats?

Not particularly. Broad social media monitoring for threat detection purposes untethered from suspicion of wrongdoing generates reams of useless information, crowding out information on — and resources for — real public safety concerns.

Social media conversations are difficult to interpret because they are often highly context-specific and can be riddled with slang, jokes, memes, sarcasm, and references to popular culture; heated rhetoric is also common. Government officials and assessments have repeatedly recognized that this dynamic makes it difficult to distinguish a sliver of genuine threats from the millions of everyday communications that do not warrant law enforcement attention. As the former acting chief of DHS I&A said, “actual intent to carry out violence can be difficult to discern from the angry, hyperbolic — and constitutionally protected — speech and information commonly found on social media.” Likewise, a 2021 internal review of DHS’s Office of Intelligence & Analysis noted: “[s]earching for true threats of violence before they happen is a difficult task filled with ambiguity.” The review observed that personnel trying to anticipate future threats ended up collecting information on a “broad range of general threats that did not meet the threshold of intelligence collection” and provided I&A’s law enforcement and intelligence customers with “information of limited value,” including “memes, hyperbole, statements on political organizations and other protected First Amendment speech.” Similar concerns cropped up with DHS’s pilot programs to use social media to vet refugees.

The result is a high volume of false alarms, distracting law enforcement from investigating and preparing for genuine threats: as the FBI bluntly put it, for example, I&A’s reporting practices resulted in “crap” being sent through one of its threat notification systems.

What rules govern federal agencies’ use of social media?

Some agencies, like the FBI, DHS, the State Department, and the IRS, have released information on the rules governing their use of social media in certain contexts. Other agencies — such as the ATF, DEA, Postal Service, and Social Security Administration — have not made any information public; what is known about their use of social media has emerged from media coverage, some of which has attracted congressional scrutiny. Below we describe some of what is known about the rules governing the use of social media by the FBI, DHS, and the State Department.

FBI: The main document governing the FBI’s social media surveillance practices is its Domestic Investigations and Operations Guide (DIOG), last made public in redacted form in 2016. Under the DIOG, FBI agents may review publicly available social media information prior to initiating any form of inquiry. During the lowest-level investigative stage, called an assessment (which requires an “authorized purpose,” such as stopping terrorism, but no factual basis), agents may also log public, real-time communications (such as public chat room conversations) and work with informants to gain access to private online spaces, though they may not record private communications in real time.

Beginning with “preliminary investigations” (which require that there be “information or an allegation” of wrongdoing, but not that it be credible), FBI agents may monitor and record private online communications in real time using informants and may even use false social media identities with the approval of a supervisor. While conducting full investigations (which require a reasonable indication of criminal activity), FBI agents may use all of these methods and can also get probable cause warrants to conduct wiretapping, including to collect private social media communications.

The DIOG does restrict the FBI from probing social media based solely on “an individual’s legal exercise of his or her First Amendment rights,” though such activity can be a substantial motivating factor. It also requires that the collection of online information about First Amendment–protected activity be connected to an “authorized investigative purpose” and be as minimally intrusive as reasonable under the circumstances, although it is not clear how adherence to these standards is evaluated.

DHS: DHS policies can be pieced together using a combination of legally mandated disclosures — such as privacy impact assessments and data mining reports — and publicly available policy guidelines, though the amount of information available varies. In 2012, DHS published a policy requiring that components collecting personally identifiable information from social media for “operational uses,” such as investigations (but not intelligence functions), implement basic guidelines and training for employees engaged in such uses and ensure compliance with relevant laws and privacy rules. Whether this policy has been holistically implemented for “operational uses” of social media across DHS remains unclear. However, the Brennan Center has obtained through the Freedom of Information Act a number of templates, created pursuant to the 2012 policy, describing how DHS components use social media.

In practice, DHS policies are generally permissive. The examples below illustrate the ways in which various parts of the Department use social media.

  • ICE agents monitor social media for purposes ranging from situational awareness and criminal intelligence gathering to support for investigations. In addition to engaging private companies to monitor social media, ICE agents may collect public social media data whenever they determine it is “relevant for developing a viable case” and “supports the investigative process.”
  • Parts of DHS, including the National Operations Center (NOC) (part of the Office of Operations Coordination and Planning (OPS)), the Federal Emergency Management Agency (FEMA), and Customs and Border Protection (CBP), use social media monitoring for situational awareness. The goal is generally not to “seek or collect” personally identifiable information, but DHS may do so in “in extremis situations,” such as when serious harm to a person may be imminent or there is a “credible threat[] to [DHS] facilities or systems.” NOC’s situational awareness operations are not covered by the 2012 policy; other components carrying out situational awareness monitoring fall under the broader policy but may receive an exception from it with the approval of DHS’s Chief Privacy Officer.
  • DHS’s U.S. Citizenship and Immigration Services (USCIS) uses social media to verify the accuracy of materials provided by applicants for immigration benefits (such as applications for refugee status or to become a U.S. citizen) and to identify fraud and threats to public safety. USCIS says it only looks at publicly available information, respects account holders’ privacy settings, and refrains from direct dialogue with subjects, though staff may use fictitious accounts in certain cases, including when “overt research would compromise the integrity of an investigation.”
  • DHS’s Office of Intelligence & Analysis (I&A), as a member of the Intelligence Community, is not covered by the 2012 policy. Instead it operates under a separate set of guidelines — issued by the Secretary of Homeland Security pursuant to Executive Order 12333 and approved by the Attorney General — that govern its management of information collected about U.S. persons, including via social media. The office incorporates social media into the open-source intelligence reports it produces for federal, state, and local law enforcement; these reports provide threat warnings, investigative leads, and referrals. I&A personnel may collect and retain social media information on U.S. citizens and green card holders so long as they reasonably believe that doing so supports a national or departmental mission; these missions are broadly defined to include addressing homeland security concerns. And they may disseminate the information further if they believe it would help the recipient with “lawful intelligence, counterterrorism, law enforcement, or other homeland security-related functions.”

State Department: The Department’s policies covering social media monitoring for visa vetting purposes are not publicly available. However, public disclosures shed some light on the rules consular officers are supposed to follow when vetting visa applicants using social media. For example, consular officers are not supposed to interact with applicants on social media, request their passwords, or try to get around their privacy settings — and if they create an account to view social media information, they “must abide by the contractual rules of that service or platform provider,” such as Facebook’s real name policy. Further, information gleaned from social media must not be used to deny visas based on protected characteristics (i.e., race, religion, ethnicity, national origin, political views, gender, or sexual orientation). It is supposed to be used only to confirm an applicant’s identity and visa eligibility under criteria set forth in U.S. law.

Are there constitutional limits on social media surveillance?

Yes. Social media monitoring may violate the First or Fourteenth Amendments. It is well established that public posts receive constitutional protection: as the FBI’s own investigations guide recognizes, “[o]nline information, even if publicly available, may still be protected by the First Amendment.” Surveillance is clearly unconstitutional when a person is specifically targeted for the exercise of constitutional rights protected by the First Amendment (speech, expression, association, religious practice) or on the basis of a characteristic protected by the Fourteenth Amendment (including race, ethnicity, and religion). Social media monitoring may also violate the First Amendment when it burdens constitutionally protected activity and does not contribute to a legitimate government objective. Our lawsuit against the State Department and DHS (Doc Society v. Blinken), for instance, challenges the collection, retention, and dissemination of social media identifiers from millions of people — almost none of whom have engaged in any wrongdoing — because the government has not adequately justified the screening program and it imposes a substantial burden on speech for little demonstrated value. The White House office that reviews federal regulations noted the latter point — which a DHS Inspector General report and internal reviews have also underscored — when it rejected, in April 2021, DHS’s proposal to collect social media identifiers on travel and immigration forms.

Additionally, the Fourth Amendment protects people from “unreasonable searches and seizures” by the government, including searches of data in which people have a “reasonable expectation of privacy.” Judges have generally concluded that content posted publicly online cannot be reasonably expected to be private, and that police therefore do not need a warrant to view or collect it. Courts are increasingly recognizing, however, that when the government can collect far more information — especially information revealing sensitive or intimate details — at a far lower cost than traditional surveillance, the Fourth Amendment may protect that data. The same is true of social media monitoring and the use of powerful social media monitoring tools, even if they are employed to review publicly available information.

Are there statutory limits on social media surveillance?

Yes. Most notably, the Privacy Act limits the collection, storage, and sharing of personally identifiable information about U.S. citizens and permanent residents (green card holders), including social media data. It also bars, under most circumstances, maintaining records that describe the exercise of a person’s First Amendment rights. However, the statute contains an exception for such records “within the scope of an authorized law enforcement activity,” and its coverage is limited to databases from which personal information can be retrieved by an individual identifier such as a name, Social Security number, or phone number.

Additionally, federal agencies’ collection of social media handles must be authorized by law and, in some cases, be subject to public notice and comment and justified by a reasoned explanation that accounts for contrary evidence. Doc Society v. Blinken, for example, alleges that the State Department’s collection of social media identifiers on visa forms violates the Administrative Procedure Act (APA) because it exceeds the Secretary of State’s statutory authority and did not consider that prior social media screening pilot programs had failed to demonstrate efficacy.

Is the government’s use of social media consistent with platform rules?

Not always. Companies do not bar government officials from making accounts and looking at what is happening on their platforms. However, after the ACLU exposed in 2016 that third-party social media monitoring companies were pitching their services to California law enforcement agencies as a way to monitor protestors against racial injustice, Twitter, Facebook, and Instagram changed or clarified their rules to prohibit the use of their data for surveillance (though the actual application of those rules can be murky).

Additionally, Facebook has a policy requiring users to identify themselves by their “real names,” with no exception for law enforcement. The FBI and other federal law enforcement agencies permit their agents to use false identities notwithstanding this rule, and there have been documented instances of other law enforcement departments violating this policy as well.

How do federal agencies share information collected from social media, and why is it a problem?

Federal agencies may share information they collect from social media across all levels of government and the private sector and will sometimes even disclose data to foreign governments (for instance, identifiers on travel and immigration forms). In particular, information is shared domestically with state and local law enforcement, including through fusion centers, which are post-9/11 surveillance and intelligence hubs that were intended to facilitate coordination among federal, state, and local law enforcement and private industry. Such unfettered data sharing magnifies the risks of abusive practices.

Part of the risk stems from the dissemination of data to actors with a documented history of discriminatory surveillance, such as fusion centers. A 2012 bipartisan Senate investigation concluded that fusion centers have “yielded little, if any, benefit to federal counterterrorism intelligence efforts,” instead producing reams of low-quality information while labeling Muslim Americans engaging in innocuous activities, such as voter registration, as potential threats. More recently, fusion centers have been caught monitoring racial and social justice organizers and protests and promoting fake social media posts by right-wing provocateurs as credible intelligence regarding potential violence at anti-police brutality protests. Further, many police departments that get information from social media through fusion centers (or from federal agencies like the FBI and DHS directly) have a history of targeting and surveilling minority communities and activists, but lack basic policies that govern their use of social media. Finally, existing agreements permit the U.S. government to share social media data — collected from U.S. visa applicants, for example — with repressive foreign governments that are known to retaliate against online critics.

The broad dissemination of social media data amplifies some of the harms of social media monitoring by eliminating context and safeguards. Under some circumstances, a government official who initially reviews and collects information from social media may better understand its meaning and relevance — thanks to witness interviews, notes of observations from the field, or other material obtained during an investigation, for example — than a downstream recipient lacking this background. And any safeguards the initial agency places on its monitoring and collection — use and retention limitations, data security protocols, and so on — cannot be guaranteed after it disseminates what has been gathered. Once social media information is disseminated, the originating agency has little control over how it is used, how long it is kept, whether it is misinterpreted, or how it might spur overreach.

Together, these dynamics amplify the harms to free expression and privacy that social media monitoring generates. A qualified and potentially unreliable assessment based on social media that a protest could turn violent, or that a particular person poses a threat, might easily turn into a justification for policing that protest aggressively or arresting the person, as illustrated by the examples above. Similarly, a person who has applied for a U.S. visa or been investigated by federal authorities, even if they are cleared, is likely to be wary of what they say on social media well into the future if they know that there is no endpoint to potential scrutiny or disclosure of their online activity. One branch of DHS I&A formerly had a practice of redacting publicly available U.S. person information from the open-source intelligence reports it disseminated to partners because of the “risk of civil rights and liberties issues.” That practice was cited to justify removing pre-publication oversight meant to catch such issues, but it also implies that DHS recognized that information identifying a person could be used to target them without a legitimate law enforcement reason.

What role do private companies play, and what is the harm in using them?

Both the FBI and DHS have reportedly hired private firms to help conduct social media surveillance, including to help identify threats online. This raises concerns around transparency and accountability as well as effectiveness.

Transparency and accountability: Outsourcing surveillance to private industry obscures how monitoring is being carried out; limited information is available about relationships between the federal government and social media surveillance contractors, and the contractors, unlike the government, are not subject to freedom of information laws. Outsourcing also weakens safeguards because private vendors may not be subject to the same legal or institutional constraints as public agencies.

Efficacy: The most ambitious tools use artificial intelligence with the goal of making judgments about which threats, calls for violence, or individuals pose the highest risk. But doing so reliably is beyond the capacity of both humans and existing technology, as more than 50 technologists wrote in opposing an ICE proposal aimed at predicting whether a given person would commit terrorism or crime. The more rudimentary of these tools look for specific words and then flag posts containing those words. Such flags are overinclusive, and garden-variety content will regularly be elevated. Consider how the word “extremism,” for instance, could appear in a range of news articles, be used in reference to a friend’s strict dietary standards, or arise in connection with discussion about U.S. politics. Even the best natural language processing tools, which attempt to ascertain the meaning of text, are prone to error, and fare particularly poorly on speakers of nonstandard English, who may more frequently be from minority communities, as well as speakers of languages other than English. Similar concerns apply to mechanisms used to flag images and videos, which generally lack the context necessary to differentiate a scenario in which an image is used for reporting or commentary from one where it is used by a group or person to incite violence.
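
A toy example makes the overinclusiveness concrete. The watchlist and posts below are invented; real tools are more elaborate, but the failure mode of bare keyword matching is the same.

```python
# Toy keyword flagging, illustrating the overinclusiveness described
# above. The watchlist and posts are invented for illustration.
WATCHLIST = {"extremism", "attack", "riot"}

def flag(post: str) -> bool:
    words = {w.strip(".,!?:").lower() for w in post.split()}
    return not WATCHLIST.isdisjoint(words)

posts = [
    "New op-ed on the roots of violent extremism",    # news commentary
    "My friend's dietary extremism: no sugar, ever",  # a joke
    "Heart attack awareness month starts today",      # public health
]
print([flag(p) for p in posts])  # [True, True, True]: every innocuous post flagged
```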

Capitol building in background and "Black Lives Matter" sign held up in the foreground

Records Show DC and Federal Law Enforcement Sharing Surveillance Info on Racial Justice Protests

Officers tracked social media posts about racial justice protests with no evidence of violence, threatening First Amendment rights.

DC police car

Documents Reveal How DC Police Surveil Social Media Profiles and Protest Activity

A lawsuit forced the Metropolitan Police Department to reveal how it uses social media surveillance tools to track First Amendment–protected activity.

Study Reveals Inadequacy of Police Departments’ Social Media Surveillance Policies

Ftc must investigate meta and x for complicity with government surveillance, we’re suing the nypd to uncover its online surveillance practices, senate ai hearings highlight increased need for regulation, documents reveal widespread use of fake social media accounts by dhs, informed citizens are democracy’s best defense.



Hate speech in social media: How platforms can do better

Morgan Sherburne

With all of the resources, power and influence they possess, social media platforms could and should do more to detect hate speech, says a University of Michigan researcher.


In a report from the Anti-Defamation League, Libby Hemphill, an associate research professor at U-M’s Institute for Social Research and an ADL Belfer Fellow, explores social media platforms’ shortcomings when it comes to white supremacist speech, examines how it differs from general or nonextremist speech, and recommends ways to improve automated hate speech identification methods.

“We also sought to determine whether and how white supremacists adapt their speech to avoid detection,” said Hemphill, who is also a professor at U-M’s School of Information. “We found that platforms often miss discussions of conspiracy theories about white genocide and Jewish power and malicious grievances against Jews and people of color. Platforms also let decorous but defamatory speech persist.”

How platforms can do better

White supremacist speech is readily detectable, Hemphill says, detailing the ways it is distinguishable from commonplace speech in social media, including:

  • Frequently referencing racial and ethnic groups using plural noun forms (whites, etc.)
  • Appending “white” to otherwise unmarked terms (e.g., power)
  • Using less profanity than is common in social media to elude detection based on “offensive” language
  • Being congruent on both extremist and mainstream platforms
  • Keeping complaints and messaging consistent from year to year
  • Describing Jews in racial, rather than religious, terms

“Given the identifiable linguistic markers and consistency across platforms, social media companies should be able to recognize white supremacist speech and distinguish it from general, nontoxic speech,” Hemphill said.
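
As a rough illustration of how such markers could be operationalized, simple pattern rules can score text for two of the signals listed above. This is a hypothetical sketch; the research team’s actual models used machine learning rather than hand-written rules:

```python
import re

# Hypothetical marker rules loosely based on the report's findings;
# the study itself used machine-learning models, not regexes.
MARKERS = {
    "plural_group_noun": re.compile(r"\b(whites|blacks|jews)\b", re.I),
    "white_plus_term":   re.compile(r"\bwhite\s+(power|genocide|pride)\b", re.I),
}

def marker_score(text: str) -> dict:
    """Count how often each linguistic marker appears in the text."""
    return {name: len(rx.findall(text)) for name, rx in MARKERS.items()}

print(marker_score("They talk about white power and what whites deserve."))
# {'plural_group_noun': 1, 'white_plus_term': 1}
```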

The research team used commonly available computing resources, existing algorithms from machine learning and dynamic topic modeling to conduct the study.

“We needed data from both extremist and mainstream platforms,” said Hemphill, noting that mainstream user data comes from Reddit and extremist website user data comes from Stormfront.
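
The report relied on dynamic topic modeling, a time-aware technique; as a rough stand-in, a standard LDA topic model from the open-source gensim library shows the general shape of the method. The toy corpus below is a placeholder, not the Reddit or Stormfront data:

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy corpus standing in for the study's Reddit/Stormfront data.
docs = [
    ["vaccine", "mandate", "freedom", "protest"],
    ["election", "fraud", "ballot", "audit"],
    ["vaccine", "side", "effects", "mandate"],
    ["ballot", "recount", "election", "court"],
]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# Plain LDA; the report used a dynamic (time-aware) variant to track
# how topics shift from year to year.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=20, random_state=0)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```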

What should happen next?

Even though the research team found that white supremacist speech is identifiable and consistent, social media platforms, even with more sophisticated computing capabilities and additional data, still miss a lot and struggle to distinguish nonprofane, hateful speech from profane, innocuous speech.

“Leveraging more specific training datasets and reducing their emphasis on profanity can improve platforms’ performance,” Hemphill said.

The report recommends that social media platforms: 1) enforce their own rules; 2) use data from extremist sites to create detection models; 3) look for specific linguistic markers; 4) deemphasize profanity in toxicity detection; and 5) train moderators and algorithms to recognize that white supremacists’ conversations are dangerous and hateful.

“Social media platforms can enable social support, political dialogue and productive collective action. But the companies behind them have civic responsibilities to combat abuse and prevent hateful users and groups from harming others,” Hemphill said. “We hope these findings and recommendations help platforms fulfill these responsibilities now and in the future.”

More information:

  • Report: Very Fine People: What Social Media Platforms Miss About White Supremacist Speech
  • Related: Video: ISR Insights Speaker Series: Detecting white supremacist speech on social media
  • Podcast: Data Brunch Live! Extremism in Social Media



Online harassment occurs most often on social media, but strikes in other places, too


As has been the case since at least 2014, social media sites are the most common place Americans encounter harassment online, according to a September 2020 Pew Research Center survey. But harassment often occurs in other online locations, too.

Overall, three-quarters of U.S. adults who have recently faced some kind of online harassment say it happened on social media. But notable shares say their most recent such experience happened elsewhere, including on forum or discussion sites (25%), texting or messaging apps (24%), online gaming platforms (16%), their personal email account (11%) or online dating sites or apps (10%).

Certain kinds of harassing behavior, meanwhile, are particularly likely to occur in certain locations online, according to a new analysis of the 2020 data. The analysis focuses on respondents’ most recent experience with online harassment. (See “Measuring online harassment” box for more information.)

This analysis focuses on U.S. adults’ experiences and attitudes related to online harassment and is based on a survey of 10,093 U.S. adults conducted from Sept. 8 to 13, 2020. Everyone who took part is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology. Here are the questions used for this report, along with responses, and its methodology.

Looking first at where harassing behavior occurs, several findings stand out. For one, people who most recently faced harassment over a sustained period were especially likely to have experienced it while using a texting or messaging app (47%) or on an online forum (44%) compared with the overall shares whose most recent harassment of any kind took place on these platforms.

Social media sites are the most commonly reported location for people’s most recent online harassment encounter, regardless of the type of behavior

Measuring online harassment

This study measures six distinct harassment behaviors.

We classify two of the behaviors as “less severe”:

  • Offensive name-calling
  • Purposeful embarrassment

We classify four of them as “more severe”:

  • Physical threats
  • Stalking
  • Harassment over a sustained period of time
  • Sexual harassment

In all, 41% of Americans say they have experienced at least one of those behaviors.

The people who faced any of those behaviors were asked follow-up questions about their most recent harassment episode, including the specific behaviors that were involved and where the incidents occurred. We inquired about six potential locations:

  • Social media
  • Online forums or discussion sites
  • Texting or messaging apps
  • Online gaming
  • Personal email accounts
  • Online dating sites or apps

Similarly, those who say they most recently had been stalked or sexually harassed online were more likely to have faced this while using a texting platform (54% and 46%, respectively) compared with the broader rate of harassment on those venues. In addition, people who were most recently stalked are roughly three times as likely to have experienced this harassment via email (30%) compared with the share of all whose latest harassment incident was email-based (11%).

In general, those who faced any of the more severe behaviors in their most recent incident are more likely to say the experience occurred across multiple locations online. Some 55% of those who have faced at least one of these more severe forms of online harassment in their most recent incident – such as stalking or sustained harassment – encountered it in multiple places online, compared with 41% of those who have experienced any form of harassment. Roughly six-in-ten or more adults whose most recent incident involved sustained harassment (67%), stalking (65%), physical threats (60%) or sexual harassment (58%) say their encounter took place across multiple online locations.

Those who endured multiple forms of online harassment (57%), meanwhile, are also particularly likely to say the harassment spanned multiple locations, compared with the overall share whose recent encounters occurred across multiple locations.

More severe harassment is more common in some online locations

Overall, the most common types of harassment across all six digital spaces we examined are those we classified as “less severe” – that is, offensive name-calling and purposeful embarrassment. For instance, 61% of those who were harassed on an online forum or discussion site in their most recent incident were met with offensive name-calling, while 46% of those harassed in this kind of venue faced purposeful embarrassment.

More severe forms of harassment are more likely to happen in some online venues, including online dating sites and personal email accounts

However, it is also the case that certain online platforms see higher shares of more severe harassing behaviors than are seen across all platforms. For example, recent incidents on dating sites (60%), in personal emails (57%) or on a texting or messaging app (52%) are especially likely to involve at least one more severe behavior.

Specifically, those who were most recently harassed on a dating app or site are about three times as likely to say this harassment was sexual, compared with the general prevalence of sexual harassment in recent encounters (36% vs. 13%). In addition, stalking, physical threats and sustained harassment on dating sites all occur at notably higher rates than those seen among recent incidents across all online platforms.

Among those who report that their most recent encounter occurred in just one of these six locations, 64% say they faced only less severe behaviors, while a quarter reported any more severe behaviors. Specifically, about one-in-ten or fewer people who had most recently experienced harassment in only one location reported facing each of these more severe behaviors. Similarly, those who were most recently harassed in only one location are about three times as likely to have faced just one type of harassing behavior rather than multiple behaviors (66% vs. 23%).

Note: Here are the questions used for this report, along with responses, and its methodology.


Emily A. Vogels is a former research associate focusing on internet and technology at Pew Research Center.



Social media and political violence – how to break the cycle

Richard Forno

Principal Lecturer in Computer Science and Electrical Engineering, University of Maryland, Baltimore County

Disclosure statement

Richard Forno has received research funding related to cybersecurity from the National Science Foundation (NSF) and the Department of Defense (DOD) during his academic career. He is a registered independent voter.

University of Maryland, Baltimore County provides funding as a member of The Conversation US.


The attempted assassination of Donald Trump on July 13, 2024, added more fuel to an already fiery election season. In this case, political violence was carried out against the party that is most often found espousing it. The incident shows how uncontrollable political violence can be – and how dangerous the current times are for America.

Part of the complication is the contentious and adversarial nature of American politics, of course. But technology makes it more difficult for Americans to understand sudden news developments .

Gone are the days when only a handful of media outlets reported the news to broad swaths of society after rigorous fact-checking by professional journalists.

By contrast, anyone today can “report” news online, provide what they claim is “analysis” of events, and combine fact, fiction, speculation and opinion to fit a desired narrative or political perspective.

Then that perspective is potentially made to seem legitimate by virtue of the poster’s official office, net worth, number of social media followers, or attention from mainstream news organizations seeking to fill news cycles.

And that’s before any mention of convincing deepfake audio and video clips , whose lies and misrepresentations can further sow confusion and distrust online and in society.

Today’s internet-based narratives also often involve personal attacks, either directly or through inference and suggestion – what experts call “stochastic terrorism” that can motivate people to violence. Political violence is the inevitable result – and has been for years, including attacks on U.S. Rep. Gabby Giffords; former House Speaker Nancy Pelosi’s husband, Paul; the 2017 congressional baseball practice shooting; the Jan. 6, 2021, insurrection; and now the attempted assassination of a former president running for the White House again.


When bullets and conspiracies fly

As a security and internet researcher, I believe it was entirely predictable that within minutes of the attack, right-wing social media exploded with instant-reaction narratives that assigned blame to political rivals or the media, or implied that a sinister “inside job” by the federal government was behind the incident.

But it wasn’t just average internet users or prominent business magnates fanning these flames. Several Republicans issued such statements from their official social media accounts. For instance, less than an hour after the attack, Georgia Congressman Mike Collins accused President Joe Biden of “inciting an assassination” and said Biden “sent the orders.” Ohio Senator J.D. Vance, now Trump’s nominee for vice president, also implied that Biden was responsible for the attack.

The bloodied former president stood up and delayed his Secret Service evacuation for a fist-pumping photo before leaving the rally, and his campaign issued a defiant fundraising email later that evening. This led some Trump critics to suggest the incident was a “false flag” attack staged to earn a sympathetic national spotlight. Others claimed the incident fit into Trump’s ongoing messaging to supporters that he’s the victim of persecution.

From a historical perspective, it’s worth noting that former right-wing Brazilian President Jair Bolsonaro survived an assassination attempt in 2018 and went on to become the country’s next president in 2019.


It’s long been known that internet narratives, memes and content can spread around the world like wildfire well before the actual truth becomes known. Unfortunately, those narratives, whether factual or fictional, can get picked up – and thus given a degree of perceived legitimacy and further disseminated – by traditional news organizations.

Many who see such messages, amplified by both social media and traditional news services, often believe them – and some may respond with political violence or terrorism.

Can anything help?

Several threads of research show that there are some ways regular people can help break this dangerous cycle.

In the immediate aftermath of breaking news, it’s important to remember that first reports often are wrong, incomplete or inaccurate. Rather than rushing to repost things during rapidly developing news events, it’s best to avoid retweeting, reposting or otherwise amplifying online content right away. When information has been confirmed by multiple credible sources, ideally across the political spectrum, then it’s likely safe enough to believe and share.


In the longer term, as a nation and a society, it will be useful to further understand how technology and human tendencies interact. Teaching schoolchildren more about media literacy and critical thinking can help prepare future citizens to separate fact from fiction in a complex world filled with competing information.

Another potential approach is to expand civics and history lessons in school classrooms, to give students the ability to learn from the past and – we can all hope – not repeat its mistakes.

Social media companies are part of the potential solution, too. In recent years, they have disbanded teams meant to monitor content and boost users’ trust in the information available on their platforms. Recent Supreme Court rulings make clear that these companies are free to actively police their platforms for disinformation, misinformation and conspiracy theories if they wish. But companies, and purported “free speech absolutists” such as X owner Elon Musk, who refuse to remove controversial, though technically legal, internet content from their platforms may well endanger public safety.

Traditional media organizations bear responsibility for objectively informing the public without giving voice to unverified conspiracy theories or misinformation. Ideally, qualified guests invited to news programs will add useful facts and informed opinion to the public discourse instead of speculation. And serious news hosts will avoid the rhetorical technique of “just asking questions” or engaging in “bothsiderism” as ways to move fringe theories – often from the internet – into the news cycle, where they gain traction and amplification.

The public has a role, too.

Responsible citizens could focus on electing officials and supporting political parties that refuse to embrace conspiracy theories and personal attacks as normal strategies. Voters could make clear that they will reward politicians who focus on policy accomplishments, not their media imagery and social media follower counts.

That could, over time, deliver the message that the spectacle of modern internet political narratives generally serves no useful purpose beyond sowing social discord, degrading the ability of government to function and, potentially, leading to political violence and terrorism.

Understandably, these are not instant remedies. Many of these efforts will take time – potentially even years – and money and courage to accomplish.

Until then, maybe Americans can revisit the golden rule – doing unto others what we would have them do unto us. Emphasizing facts in the news cycle, integrity in the public square, and media literacy in our schools seem like good places to start as well.



How should social media platforms combat misinformation and hate speech?

Niam Yaraghi, Nonresident Senior Fellow, Governance Studies, Center for Technology Innovation

April 9, 2019

Social media companies are under increased scrutiny for their mishandling of hateful speech and fake news on their platforms. There are two ways to think about social media platforms: On one hand, we can view them as technologies that merely enable individuals to publish and share content, a figurative blank sheet of paper on which anyone can write anything. On the other hand, one can argue that social media platforms have evolved into curators of content. I argue that these companies should take some responsibility for the content published on their platforms, and I suggest a set of strategies to help them deal with fake news and hate speech.

Artificial and Human Intelligence together

At the outset, social media companies positioned themselves as holding no accountability for the content published on their platforms. They have since set up a mix of automated and human-driven editorial processes to promote or filter certain types of content. In addition, their users increasingly treat these platforms as their primary source of news. Twitter moments, which offer a brief snapshot of the daily news, are a prime example of how Twitter is edging closer to becoming a news medium. As social media platforms practically become news media, their responsibility for the content they distribute should increase accordingly.

While I believe it is naïve to consider social media platforms merely neutral content-sharing technologies with no responsibility, I also do not believe we should hold them to the same editorial expectations as traditional news media.

The sheer volume of content shared on social media makes it impossible to establish a comprehensive editorial system. Take Twitter as an example: An estimated 500 million tweets are sent per day. Assuming that each tweet contains 20 words on average, the volume of content published on Twitter in a single day is equivalent to what the New York Times publishes in 182 years. The terminology and focus of hate speech change over time, and most fake news articles contain some element of truth. Therefore, social media companies cannot rely solely on artificial intelligence or on humans to monitor and edit their content. They should instead develop approaches that use artificial and human intelligence together.
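
The arithmetic behind that comparison is easy to check. The Times’ daily word count below is an assumption chosen for illustration, not a figure from the article:

```python
tweets_per_day = 500_000_000        # estimate cited above
words_per_tweet = 20                # assumption from the text
nyt_words_per_day = 150_000         # assumed NYT daily output (illustrative)

twitter_words_per_day = tweets_per_day * words_per_tweet  # 10 billion words
days_of_nyt_output = twitter_words_per_day / nyt_words_per_day
print(days_of_nyt_output / 365)     # ~182.6 years
```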

Finding the needle in a haystack

To overcome the editorial challenges of so much content, I suggest that the companies focus on a limited number of topics deemed important and consequential. The anti-vaccination movement and believers in flat-earth theory both spread anti-scientific, fake content. However, the consequences of believing that vaccines cause harm are far more dangerous than those of believing that the earth is flat. The former creates serious public health problems; the latter makes for a good laugh at a bar. Social media companies should convene groups of experts in various domains to constantly monitor the major topics in which fake news or hate speech may cause serious harm.

It is also important to consider how recommendation algorithms on social media platforms may inadvertently promote fake and hateful speech. At their core, these recommendation systems group users based on their shared interests and then promote the same type of content to all users within each group. If most of the users in one group have interests in, say, flat-earth theory and anti-vaccination hoaxes, then the algorithm will promote anti-vaccination content to users in the same group who may only be interested in flat-earth theory. Over time, exposure to such promoted content could persuade users who initially believed in vaccines to become skeptical about them. Once the major areas of focus for combating fake and hateful speech are determined, social media companies can tweak their recommendation systems fairly easily so that they do not nudge users toward harmful content.
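
A stripped-down sketch of that grouping logic shows how content can spill across interests within a group. This is a hypothetical illustration, not any platform’s actual recommender:

```python
from collections import Counter

# Hypothetical user-interest data; not real platform data.
users = {
    "u1": {"flat_earth"},
    "u2": {"flat_earth", "anti_vax"},
    "u3": {"anti_vax"},
}

def group_recommendations(users):
    """Recommend the group's most common interests to every member,
    including members who never expressed those interests."""
    pool = Counter(i for interests in users.values() for i in interests)
    top = {interest for interest, _ in pool.most_common(2)}
    return {user: top - interests for user, interests in users.items()}

print(group_recommendations(users))
# u1 (flat-earth only) is nudged toward anti-vax content, and vice versa.
```

Tweaking the system, as suggested above, would amount to excluding flagged harm topics from the recommendation pool.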

Once those topics are identified, social media companies should decide how to fight the spread of such content. In rare instances, the most appropriate response is to censor and ban the content without hesitation. Examples include posts that incite violence or invite others to commit crimes. The New Zealand attack, in which the shooter live-streamed his heinous crimes on Facebook, is a prime example of content that should never have been allowed to be posted and shared on the platform.

Facebook currently relies on its community of users to flag such content and then uses an army of human reviewers to assess flagged content within 24 hours and determine whether it actually violates the terms of use. Live content is monitored by humans once it reaches a certain level of popularity. While it is easier to use artificial intelligence to monitor textual content in real time, technologies for analyzing images and videos are quickly advancing. For example, Yahoo has made public its algorithms for detecting offensive and adult images. Facebook’s AI algorithms are getting smart enough to detect and flag non-consensual intimate images.

Fight misinformation with information

Currently, social media companies have adopted two approaches to fight misinformation. The first is to block such content outright. For example, Pinterest bans anti-vaccination content and Facebook bans white supremacist content. The other is to provide alternative information alongside content containing false claims so that users are exposed to accurate information. This approach, implemented by YouTube, encourages users to click on links with verified and vetted information that debunks the misguided claims made in fake or hateful content. If you search “vaccines cause autism” on YouTube, you can still view videos posted by anti-vaxxers, but you will also be presented with a link to the Wikipedia page on the MMR vaccine, which debunks such beliefs.
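
In code, the second approach amounts to attaching an authoritative link to content that matches a monitored topic. This is a hypothetical sketch of the idea, not YouTube’s implementation, and the topic-to-source mapping is invented for illustration:

```python
# Hypothetical mapping of monitored topics to vetted sources.
COUNTER_INFO = {
    "vaccines cause autism": "https://en.wikipedia.org/wiki/MMR_vaccine",
    "flat earth": "https://en.wikipedia.org/wiki/Spherical_Earth",
}

def annotate(query, results):
    """Leave search results up, but prepend a verified-information link
    when the query matches a monitored misinformation topic."""
    for topic, url in COUNTER_INFO.items():
        if topic in query.lower():
            return [f"Learn more from a vetted source: {url}"] + results
    return results

print(annotate("Vaccines cause autism", ["video1", "video2"]))
```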

While we have yet to empirically examine and compare the effectiveness of these alternative approaches, I prefer to present users with accurate information and allow them to become informed and willingly abandon their misguided beliefs through exposure to reliable sources. Even when its impact is short-lived, a diversity of ideas will ultimately move us forward by enriching our discussions. Social media companies may be able to censor content online, but they cannot control how ideas spread offline. Unless individuals are presented with counterarguments, falsehoods and hateful ideas will spread easily, as they did before social media existed.



The Evolving Free-Speech Battle Between Social Media and the Government


Earlier this month, a federal judge in Louisiana issued a ruling that restricted various government agencies from communicating with social-media companies. The plaintiffs, which include the attorneys general of Missouri and Louisiana, argued that the federal government was coercing social-media companies into limiting speech on topics such as vaccine skepticism. The judge wrote, in a preliminary injunction, “If the allegations made by plaintiffs are true, the present case arguably involves the most massive attack against free speech in United States’ history. The plaintiffs are likely to succeed on the merits in establishing that the government has used its power to silence the opposition.” The injunction prevented agencies such as the Department of Health and Human Services and the F.B.I. from communicating with Facebook, Twitter, or other platforms about removing or censoring content. (The Biden Administration appealed the injunction and, on Friday, the Fifth Circuit paused it. A three-judge panel will soon decide whether it will be reinstated as the case proceeds.) Critics have expressed concern that such orders will limit the ability of the government to fight disinformation.

To better understand the issues at stake, I recently spoke by phone with Genevieve Lakier, a professor of law at the University of Chicago Law School who focusses on issues of social media and free speech. (We spoke before Friday’s pause.) During our conversation, which has been edited for length and clarity, we discussed why the ruling was such a radical departure from the way that courts generally handle these issues, how to apply concepts like free speech to government actors, and why some of the communication between the government and social-media companies was problematic.

In a very basic sense, what does this decision actually do?

Well, in practical terms, it prevents a huge swath of the executive branch of the federal government from essentially talking to social-media platforms about what they consider to be bad or harmful speech on the platforms.

There’s an injunction and then there’s an order, and both are important. The order is the justification for the injunction, but the injunction itself is what actually has effects on the world. And the injunction is incredibly broad. It says all of these defendants—and we’re talking about the President, the Surgeon General, the White House press secretary, the State Department, the F.B.I.—may not urge, encourage, pressure, or induce in any manner the companies to do something different than what they might otherwise do about harmful speech. This is incredibly broad language. It suggests, and I think is likely to be interpreted to mean, that, basically, if you’re a member of one of the agencies or if you’re named in this injunction, you just cannot speak to the platforms about harmful speech on the platform until, or unless, the injunction ends.

But one of the puzzling things about the injunction is that there are these very significant carve-outs. For example, my favorite is that the injunction says, basically, “On the other hand, you may communicate with the platforms about threats to public safety or security of the United States.” Now, of course, the defendants in the lawsuit would say, “That’s all we’ve been doing. When we talk to you, when we talk to the platforms about election misinformation or health misinformation, we are alerting them to threats to the safety and security of the United States.”

So, read one way, the injunction chills an enormous amount of speech. Read another way, it doesn’t really change anything at all. But, of course, when you get an injunction like this from a federal court, it’s better to be safe than sorry. I imagine that all of the agencies and government officials listed in the injunction are going to think, We’d better shut up.

And the reason that specific people, jobs, and agencies are listed in the injunction is because the plaintiffs say that these entities were communicating with social-media companies, correct?

Correct. And communicating in these coercive or harmful, unconstitutional ways. The presumption of the injunction is that if they’ve been doing it in the past, they’re probably going to keep doing it in the future. And let’s stop continuing violations of the First Amendment.

As someone who’s not an expert on this issue, I find the idea that you could tell the White House press secretary that he or she cannot get up at the White House podium and say that Twitter should take down COVID misinformation—does this injunction raise issues on two fronts: freedom of speech and separation of powers?

Technically, when the press secretary is operating as the press secretary, she’s not a First Amendment-rights holder. The First Amendment limits the government, constrains the government, but protects private people. And so when she’s a private citizen, she has all her ordinary-citizen rights. Government officials technically don’t have First Amendment rights.

That said, it’s absolutely true that, when thinking about the scope of the First Amendment, courts take very seriously the important democratic and expressive interests in government speech. And so government speakers don’t have First Amendment rights, but they have a lot of interests that courts consider. A First Amendment advocate would say that this injunction constrains and has negative effects on really important government speech interests.

More colloquially, I would just say the irony of this injunction is that in the name of freedom of speech it is chilling a hell of a lot of speech. That is how complicated these issues are. Government officials using their bully pulpit can have really powerful speech-oppressive effects. They can chill a lot of important speech. But one of the problems with the way the district court approaches the analysis is that it doesn’t seem to be taking into account the interest on the other side. Just as we think that the government can go too far, we also think it’s really important for the government to be able to speak.

And what about separation-of-powers issues? Or is that not relevant here?

I think the way that the First Amendment is interpreted in this area is an attempt to protect some separation of powers. Government actors may not have First Amendment rights, but they’re doing important business, and it’s important to give them a lot of freedom to do that business, including to do things like express opinions about what private citizens are doing or not doing. Courts generally recognize that government actors, legislators, and executive-branch officials are doing important business. The courts do not want to second-guess everything that they’re doing.

So what exactly does this order say was illegal?

The lawsuit was very ambitious. It claimed that government officials in a variety of positions violated the First Amendment by inducing or encouraging or incentivizing the platforms to take down protected speech. And by coercing or threatening them into taking down protected speech. And by collaborating with them to take down protected speech. These are the three prongs that you can use in a First Amendment case to show that the decision to take down speech that looks like it’s directly from a private actor is actually the responsibility of the government. The plaintiffs claimed all three. What’s interesting about that district-court order is that it agreed with all three. It says, Yeah, there was encouragement, there was coercion, and there was joint action or collaboration.

And what sort of examples are they providing? What would be an example of the meat of what the plaintiffs argued, and what the judge found to violate the First Amendment?

A huge range of activities—some that I find troubling and some that don’t seem to be troubling. Public statements by members of the White House or the executive branch expressing dissatisfaction with what the platforms are doing. For instance, President Biden’s famous statement that the platforms are killing people. Or the Surgeon General’s warning that there is a health crisis caused by misinformation, and his urging the platforms to do something about it. That’s one bucket.

There is another bucket in which the platforms were going to agencies like the C.D.C. to ask them for information about the COVID pandemic and the vaccine—what’s true and what’s false, or what’s good and what’s bad information—and then using that to inform their content-moderation rules.

Very different and much more troubling, I think, are these e-mails that they found in discovery between White House officials and the platforms in which the officials more or less demand that the platforms take down speech. There is one e-mail from someone in the White House who asked Twitter to remove a parody account that was linked to President Biden’s granddaughter, and said that he “cannot stress the degree to which this needs to be resolved immediately”—and within forty-five minutes, Twitter takes it down. That’s a very different thing than President Biden saying, “Hey, platforms, you’re doing a bad job with COVID misinformation.”

The second bucket seems full of the normal give-and-take you’d expect between the government and private actors in a democratic society, right?

Yeah. Threats and government coercion on private platforms seem the most troubling from a First Amendment perspective. And traditionally that is the kind of behavior that these cases have been most worried about.

This is not the first case to make claims of this kind. This is actually one of dozens of cases that have been filed in federal court over the last few years alleging that the Biden Administration or members of the government had put pressure on or encouraged platforms to take down vaccine-skeptical speech and speech about election misinformation. What is unusual about this case is the way that the district court responded to these claims. Before this case, courts had, for the most part, thrown these cases out. I think this was largely because they thought that there was insufficient evidence of coercion, and coercion is what we’re mostly worried about. They have found that this kind of behavior only violates the First Amendment if there is some kind of explicit threat, such as “If you don’t do X, we will do Y,” or if the government actors have been directly involved in the decision to take down the speech.

In this case, the court rejects that and has a much broader test, where it says, basically, that government officials violate the First Amendment if they significantly encourage the platforms to act. And that may mean just putting pressure on them through rhetoric or through e-mails on multiple occasions—there’s a campaign of pressure, and that’s enough to violate the First Amendment. I cannot stress enough how significant a departure that is from the way courts have looked at the issue before.

So, in this case, you’re saying that the underlying behavior may constitute something bad that the Biden Administration did, that voters should know about it and judge them on it, but that it doesn’t rise to the level of being a First Amendment issue?

Yes. I think that this opinion goes too far. It’s insufficiently attentive to the interests on the other side. But I think the prior cases have been too stingy. They’ve been too unwilling to find a problem—they don’t want to get involved because of this concern with separation of powers.

The platforms are incredibly powerful speech regulators. We have largely handed over control of the digital public sphere to these private companies. I think there is this recognition that when the government criticizes the platforms or puts pressure on the platforms to change their policies, that’s some form of political or democratic oversight, a way to promote public welfare. And those kinds of democratic and public-welfare concerns are pretty significant. The courts have wanted to give the government a lot of room to move.

But you think that, in the past, the courts have been too willing to give the government space? How could they develop a better approach?

Yeah. So, for example, the e-mails that are identified in this complaint—I think that’s the kind of pressure that is inappropriate for government actors in a democracy to be employing against private-speech platforms. I’m not at all convinced that, if this had come up in a different court, those would have been found to be a violation of the First Amendment. But there need to be some rules of the road.

On the one hand, I was suggesting that there are important democratic interests in not having too broad a rule. But, on the other hand, I think part of what’s going on here—part of what the facts that we see in this complaint are revealing—is that, in the past, we’ve thought about this kind of government pressure on private platforms, which is sometimes called jawboning, as episodic. There’s a local sheriff or there’s an agency head who doesn’t like a particular policy, and they put pressure on the television station, or the local bookseller, to do something about it. Today, what we’re seeing is that there’s just this pervasive, increasingly bureaucratized communication between the government and the platforms. The digital public theatre has fewer gatekeepers; journalists are not playing the role of leading and determining the news that is fit to print or not fit to print. And so there’s a lot of stuff, for good or for ill, that is circulating in public. You can understand why government officials and expert agencies want to be playing a more significant role in informing, influencing, and persuading the platforms to operate one way or the other. But it does raise the possibility of abuse, and I’m worried about that.

That was a fascinating response, but you didn’t totally answer the question. How should a court step in here without going too far?

The traditional approach that courts have taken, until now, has been to say that there’s only going to be a First Amendment violation if the coercion, encouragement, or collaboration is so strong that, essentially, the platform had no choice but to act. It had no alternatives; there was no private discretion. Because then we can say, Oh, yes, it was the government actor, not the platform, that ultimately was responsible for the decision.

I think that that is too restrictive a standard. Platforms are vulnerable to pressure from the government that’s a lot less severe. They’re in the business of making money by disseminating a lot of speech. They don’t particularly care about any particular tweet or post or speech act. And their economic incentives will often mean that they want to curry favor with the government and with advertisers by being able to continue to circulate a lot of speech. If that means that they have to break some eggs, that they have to suppress particular kinds of posts or tweets, they will do that. It’s economically rational for them to do so.

The challenge for courts is to develop rules of the road for how government officials can interact with platforms. It has to be the case that some forms of communication are protected, constitutionally O.K., and even democratically good. I want expert agencies such as the C.D.C. to be able to communicate to the platforms. And I want that kind of expert information to be constitutionally unproblematic to deliver. On the other hand, I don’t think that White House officials should be writing to platforms and saying, “Hey, take this down immediately.”

I never thought about threatening companies as a free-speech issue that courts would get involved with. Let me give you an example. If you had told me four years ago that the White House press secretary had got up and said, “I have a message from President Trump. If CNN airs one more criticism of me, I am going to try and block its next merger,” I would’ve imagined that there would be a lot of outrage about that. What I could not have imagined was a judge issuing an injunction saying that people who worked for President Trump were not allowed to pass on the President’s message from the White House podium. It would be an issue for voters to decide. Or, I suppose, CNN, during the merger decision, could raise the issue and say, “See, we didn’t get fair treatment because of what President Trump said,” and courts could take that into account. But the idea of blocking the White House press secretary from saying anything seems inconceivable to me.

I’ll say two things in response. One is that there is a history of this kind of First Amendment litigation, but it’s usually about private speech. We might think that public speech has a different status because there is more political accountability. I don’t know. I find this question really tricky, because I think that the easiest cases from a First Amendment perspective, and the easiest reason for courts to get involved, is when the communication is secret, because there isn’t political accountability.

You mentioned the White House press secretary saying something in public. O.K., that’s one thing. But what about if she says it in private? We might think, Well, then the platforms are going to complain. But often regulated parties do not want to say that they have been coerced by the government into doing something against their interests, or that they were threatened. There’s often a conspiracy of silence.

In those cases, it doesn’t seem to me as if there’s democratic accountability. But, even when it is public, we’ve seen over the past year that government officials are writing letters to the platforms: public letters criticizing them, asking for information, badgering them, pestering them about their content-moderation policies. And we might think, Sure, people know that that’s happening. Maybe the government officials will face political accountability if it’s no good. But we might worry that, even then, if the behavior is sufficiently serious, if it’s repeated, it might give the officials too much power to shape the content-moderation policies of the platforms. From a First Amendment perspective, I don’t know why that’s off the table.

Now, from a practical perspective, you’re absolutely right. Courts have not wanted to get involved. But that’s really worrying. I think this desire to just let the political branches work it out has meant that, certainly with the social-media platforms, it’s been like the Wild West. There are no rules of the road. We have no idea what’s O.K. or not for someone in the White House to e-mail to a platform. One of the benefits of the order and the injunction is that it’s opening up this debate about what’s O.K. and what’s not. It might be the case that the way to establish rules of the road will not be through First Amendment-case litigation. Maybe we need Congress to step in and write the rules, or there needs to be some kind of agency self-regulation. But I think it’s all going to have to ultimately be viewed through a First Amendment lens. This order and injunction go way too far, but I think the case is at least useful in starting a debate. Because up until now we’ve been stuck in this arena where there are important free-speech values that are at stake and no one is really doing much to protect them. ♦



Online Harassment

Benjamin Wilson


Online harassment can include unwanted emails, texts and direct messages; blog posts; nonconsensual intimate photos and videos; doxxing; impersonation; and illegal hacking. Online harassment laws that are overly broad in criminalizing protected speech can run afoul of the First Amendment. However, criminal conduct that involves speech is not immunized under the First Amendment. Also, some types of speech, such as true threats, are not protected by the First Amendment.

Online or cyber harassment affects the lives of a huge variety of people, including scientists, journalists, artists, politicians and law enforcement.

It also wreaks havoc on the lives of victims of domestic and sexual abuse. The internet has produced many new methods of harassment in electronic form: unwanted emails, texts and direct messages; blog posts; nonconsensual intimate photos and videos; doxxing; impersonation; and illegal hacking.

Unlike face-to-face interactions, online harassment often involves strangers or anonymous communications, which can make identifying the perpetrator difficult. The harasser could be thousands of miles away. Or next door. In addition, the content of the harassing speech varies in severity from simple name calling to humiliation, body shaming, sexual harassment, racial epithets and threats of violence. 

A Pew Research Center report issued in January 2021 surveyed over 10,000 adults in the United States and found that 41% reported experiencing online harassment. Women are reported to be twice as likely as men to experience sexual harassment online. People of color and those in the LGBTQ+ community are also more frequently targeted than the general population.

The National Domestic Violence Hotline conducted a survey in 2022 of 960 survivors of domestic abuse and found that 100% of respondents had experienced at least one form of online harassment or abuse. In June 2022, President Joe Biden issued a Presidential Memorandum that established the White House Task Force to Address Online Harassment and Abuse . 

Defining harassment

The concept of harassment, like hate speech and obscenity, is used in different ways both legally and colloquially. It is hard to define. Merriam-Webster defines harass as “to annoy persistently” and “to create an unpleasant or hostile situation for especially by uninvited and unwelcome verbal or physical conduct.” Black’s Law Dictionary defines harassment as “[w]ords, conduct, or action (usu. repeated or persistent) that, being directed at a specific person, annoys, alarms, or causes substantial emotional distress to that person and serves no legitimate purpose; purposeful vexation.” As indicated by those definitions, harassment encompasses both speech and conduct.

Many laws provide a specific definition of harassment, while others use harass or harassment as an element or description, as in anti-stalking and anti-bullying laws. For example, a Wisconsin law makes it a misdemeanor to threaten injury or physical harm via electronic communication “with intent to … harass another person.” Wis. Stat. §§ 947.0125(a)-(b). In 2022, the Colorado Supreme Court held that part of the state’s harassment statute was unconstitutional because the phrase “intended to harass” was overbroad. The court reasoned that “people often legitimately communicate in a manner ‘intended to harass’ by persistently annoying or alarming others to emphasize an idea or prompt a desired response.”

Legal regimes of state criminal laws against harassment

Many states have specific criminal laws against harassment. New Jersey and Rhode Island even have laws against cyber-harassment (R.I. Gen. Laws § 11-52-4.2; N.J. Stat. Ann. § 2C:33-4.1 (West)). States that do not have specific harassment laws often have relevant laws against stalking or menacing. There is a federal law against cyberstalking (18 U.S.C. § 2261A). 

Cyberbullying is one type of harassment, usually involving minors in an educational setting. U.S. law also recognizes specific categories of harassment in the workplace and in educational settings, known as hostile environment harassment. While online harassment may be involved, those specific settings and harassment laws are not the subject of this article.  

Victims of harassment may also seek injunctive relief or an order of protection against a harasser. In a Massachusetts case, for example, the plaintiff obtained an abuse prevention order against her ex-boyfriend that restricted his ability to post information about her online or to encourage “hate mobs.”

A significant issue in harassment cases is the difficulty of enforcing the law against anonymous harassers and those who conceal their digital tracks. These technical challenges pose problems for law enforcement officers, who may not have the training or expertise needed to identify perpetrators or assemble electronic evidence. In one extremely disturbing instance of sexual harassment and stalking, police needed two years to gather the evidence required to prosecute the harasser.

First Amendment concerns with overly broad harassment laws

Laws aimed at combatting online harassment raise First Amendment problems when they are overbroad and restrict — or potentially restrict and chill — protected speech. Valid laws can avoid those problems by aiming at conduct rather than speech. If speech is included, such laws should aim narrowly at constitutionally unprotected speech, like true threats and speech integral to criminal conduct, and should be content neutral.

Unwanted communications can be harassing regardless of content or viewpoint. Repeated phone calls at night “would be equally intrusive whether they conveyed messages of hate or love.” Voluminous unwanted emails or comments on social media can be harassing. Valid legal restrictions, therefore, must avoid criminalizing or restricting speech based on its content or viewpoint. In 2019, a North Carolina appellate court overturned a defendant’s stalking conviction because the indictments against him were based in part on social-media posts about the victim, which the court found to be an impermissible content-based restriction.

Online harassment laws have often run afoul of the First Amendment overbreadth doctrine, as noted above with the Colorado harassment statute. In 2017, the Illinois Supreme Court held that parts of the state’s stalking and cyberstalking statutes were unconstitutionally overbroad because they sought to criminalize two or more nonconsensual communications “to or about” a person, where the speaker knew or should have known the communications would cause a reasonable person to suffer emotional distress. The court found that the statutes “criminalize[d] any number of commonplace situations in which an individual engages in expressive activity that he or she should know will cause another person to suffer emotional distress.”

Online harassment that rises to the level of a true threat, and other forms of constitutionally unprotected speech, can be restricted by anti-harassment laws. For example, a Massachusetts appellate court upheld a man’s conviction in 2022 based on blog posts about the victim. The blog posts included photos of the victim and statements such as “[name] RIP … May my beautiful and beloved [name] rest in peace.” In 2023, the U.S. Supreme Court raised the burden of proof in true threats cases.

The Supreme Court held in Counterman v. Colorado (2023) that a criminal conviction based on a true threat must include evidence that the defendant had a subjective mental state of recklessness in making the threat, meaning the defendant “consciously disregarded a substantial risk that his communications would be viewed as threatening violence.” This heightened burden of proof will arguably make convictions more difficult.

For more information about online harassment, see PEN America’s Online Harassment Field Manual.

Benjamin Wilson is the Stanton Foundation Legal Fellow in the First Amendment Clinic at Washington University School of Law. The clinic represents clients in matters advancing and defending freedom of speech, press, and assembly. 


Supreme Court’s Social Media Ruling Tilts Toward Free Speech

Robert Raskopf

The US Supreme Court this month declined to rule on whether Florida and Texas laws limiting social media platforms’ content moderation violate the First Amendment, sending the issue back to the lower courts. But in doing so, its guidance strongly suggested that modern-day social media communications—including how they are shaped by the platforms where they appear—receive full, time-honored protections of the First Amendment.

Though technically a remand for further deliberation, this decision rings loudly for the future of our communication modalities.

In Moody v. NetChoice , the Supreme Court reviewed two cases out of Florida (the federal Eleventh Circuit) and Texas (the federal Fifth Circuit). Both states enacted laws in 2021 that limited large tech companies’ ability to moderate user content.

Tech industry trade groups, whose members include Alphabet Inc., Meta Platforms Inc., and X Corp., sued both states in federal court, arguing that the laws violated the First Amendment. Both district courts agreed with the trade groups, blocking enforcement of the laws.

The US Court of Appeals for the Eleventh Circuit then affirmed in favor of the trade groups. Meanwhile, the US Court of Appeals for the Fifth Circuit concluded the opposite, ruling in favor of Texas on its similar law.

On appeal, the Supreme Court unanimously vacated both decisions. However, the underlying opinions weren’t uniform. Each justice either joined in the majority or at least concurred in the judgment, but their reasoning varied.

Writing for the court, Justice Elena Kagan said the lower courts had applied the wrong analysis. She drew a distinction between a law that’s unconstitutional on its face and one that’s unconstitutional only as applied in certain circumstances. For example, it’s possible that the laws could be constitutional as applied to platforms such as Uber or Venmo, which aren’t primarily social media platforms—even if they are unconstitutional as applied to YouTube, TikTok, X, or Meta.

Although the court punted on the merits of the laws’ constitutionality, Kagan cautioned that the lower courts’ analyses going forward “must be done consistent with the First Amendment, which does not go on leave when social media are involved.”

She warned that the law’s application as to social media platforms is “unlikely to withstand First Amendment scrutiny” given that the court has repeatedly held “that it is no job for government to decide what counts as the right balance of private expression—to ‘un-bias’ what it thinks biased, rather than to leave such judgments to speakers and their audiences.”

The heart of the issue for the majority is that moderating, curating, and editorializing content is fully protected by the First Amendment. Just as private individuals and companies generally have the First Amendment right to say whatever they wish without government interference, they also may editorialize, curate, and moderate content on their own platforms, according to the opinion of the court.

Those familiar with First Amendment law in Florida might recall the Tornillo case from a half century ago. Florida tried to require newspapers to allow a political candidate a “right of reply” to any criticisms in the newspaper. The Supreme Court voided that law, finding that media editors had the First Amendment right to choose what gets published.

Kagan indicated that cases such as Tornillo will control the fate of these laws because social media companies, like newspapers, are private companies with full First Amendment rights. This view is in stark contrast with the Fifth Circuit’s position (and perhaps a position favored by a minority of the Supreme Court) that social media companies are “common carriers” that must allow all consumers on board, so to speak.

Kagan further observed that cable companies, which carry third-party content much as newspapers and social media companies do, are protected by the First Amendment, in an apparent rejection of the “common carrier” argument.

Meanwhile, the concurring opinions all focused on the difficulties presented by facial challenges to laws. Justice Amy Coney Barrett’s concurrence explained “the dangers of bringing a facial challenge” and suggested that an as-applied challenge would allow courts to home in on specific platform and function issues that might bear on the First Amendment analysis. Relatedly, Justice Ketanji Brown Jackson’s concurrence discussed the “high bar for facial challenges” brought before the Supreme Court and recommended that the court “strive to avoid deciding more than is necessary.”

Justice Clarence Thomas’s concurrence broke down problems that facial challenges pose, including to the Supreme Court’s case-or-controversy jurisdiction and the balance of power among the three branches of federal government and federal and state governments. Considering the many problems presented, Thomas urged the majority to “discontinue the practice of facial challenges.”

Justice Samuel Alito’s concurrence aimed to provide more details regarding the Florida and Texas laws at issue and the underlying litigations. It also took issue with the majority going beyond the question presented—whether the laws are facially unconstitutional—and addressing specific provisions as applied to two social media platforms.

None of the concurring justices appeared to expressly contradict the majority’s thesis. It is uncertain whether the justices who didn’t join in the majority opinion in Moody would join in Kagan’s reasoning on the merits if these cases (or another like them) return to the high court in the proper posture. Notably, Justices Neil Gorsuch, Thomas, and Alito expressed an interest in seeing the “common carrier” argument developed further by the lower courts.

For now, both states’ laws remain blocked while the litigation resumes in the lower courts. Several years may pass before either or both cases make it back to the Supreme Court. By then, the cases should be properly teed up for a decision on the merits, and Kagan has already previewed that Florida and Texas will face an uphill battle in defending the laws.

The case is Moody v. NetChoice, LLC, U.S., No. 22-277, decided 7/1/24.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Robert Raskopf is senior counsel at Bilzin Sumberg, with a focus on intellectual property, media, sports, entertainment, and privacy issues.

Kenneth Duvall is partner and assistant general counsel at Bilzin Sumberg, focusing on commercial and financial litigation.

Megan Barney is an associate at Bilzin Sumberg, focusing on commercial litigation.


July 2, 2024

A new approach to regulating speech on social media: Treating users as workers

by Nate Luce, Vanderbilt University

Social media has proven difficult to regulate for the last 20+ years, in large part because First Amendment considerations present a significant obstacle to regulating platforms. Arguments for and against regulating speech on social media tend to view platforms as offering content and connectivity and users as consumers of a service, exacerbating First Amendment concerns.

But what if the "user-as-consumer" characterization misconstrues the business model of social media, and platform users actually behave more like workers?

" Social Network as Work ," a paper by Francesca Procaccini, Assistant Professor of Law at Vanderbilt Law School, establishes a novel paradigm for regulating speech on social media—by equating the use of social media with labor.

"Reorienting how we think about social media by framing users as workers suggests that legal frameworks from labor and employment are especially productive for governing social media," she writes.

Social media as a form of labor

User engagement—in the form of posts, scrolls, clicks, likes, etc.—generates content and data that social platforms repackage and sell to advertisers. As compensation, platforms provide social, informational, and entertainment benefits to users. While this arrangement differs from traditional workplace models, Procaccini argues that the essential characteristics of work factor directly into the platform-user relationship: "(The) defining economic and power dynamics between employers and workers are analogous to those between platforms and users."

Platforms supervise user activity and enjoy an informational advantage, all while operating in "an otherwise socially collegial environment," similar to most workplaces. Users and workers alike are potentially subject to safety hazards, discrimination, harassment, and misinformation.

"Social media users share analogous structural conditions, risks, and harms as traditional workers, and are in need of analogous statutory protections as employees," she writes.

Protections for speech in the workplace

The First Amendment permits ample regulation of speech in the workplace.

"The same words in different contexts carry different levels of First Amendment protection, largely in accordance with the varying power and information asymmetries that define the setting," Procaccini explains. In the workplace setting, speech rights of employers and workers have long been diminished, "to protect the efficacy of the employment relationship and the rights and dignity of those in it."

Many of the features that justify regulating speech in the workplace are present in social media as well. Both are confined settings that present considerable alternatives for speech. The "inherently coercive nature" of each environment creates a greater risk of harm. Importantly, speech in the workplace and social media is "inextricably bound up with commercial conduct."

"Circumscribing constitutional protection in the private workplace to account for these dynamics is quite sound under the First Amendment," Procaccini writes, "because doing so actually maximizes the freedom of speech by augmenting private citizens' capacity to speak and contribute to the marketplace of ideas."

The paper details federal and state regulations on employer and employee speech, including bans on discriminatory, abusive, false, and coercive speech, proselytizing, and undue influence on political and labor choices. Employers are in many cases required to disclose factual information like legal rights and health and safety warnings. Workers are regularly protected from employer reprisal for whistleblowing and other forms of speech. These work laws address the competing interests and rights of employers, workers, and co-workers to eliminate unjust social stratification and subjugation.

"This is exactly the type of law social media needs," Procaccini writes.

Regulating speech on social media

Procaccini uses these speech-related work laws to develop a framework for social media regulation. The paper advocates for measures such as stricter prohibitions on discriminatory, harassing, false, and coercive speech between users, stronger mechanisms to combat abuse on platforms, broader disclosure and disclaimer requirements, and prohibitions on child social networking.

While work law motivates her proposal, Procaccini notes that it should not apply in full to social media. “Social media is not work under current labor and employment law,” she writes. “But it is enough like work—and produces harms that map onto those in the workplace so tightly—that work law offers a surprisingly generative framework for regulating social media consistent with the First Amendment.”

"Social Network as Work" is forthcoming in the Cornell Law Review.

Provided by Vanderbilt University

Minutes after Trump shooting, misinformation started flying. Here are the facts

What began as a jubilant rally for Donald Trump, just days before he becomes the official Republican presidential nominee, ended in mere minutes with the former president bloodied and a suspected would-be assassin shot dead by Secret Service.


WASHINGTON (AP) — Within minutes of the gunfire, the attempted assassination of former President Donald Trump spawned a vast sea of claims — some outlandish, others contradictory — reflecting the frightening uncertainties of the moment as well as America’s fevered, polarized political climate.

The cloudburst of speculation and conjecture as Americans turned to the internet for news about the shooting is the latest sign of how social media has emerged as a dominant source of information — and misinformation — for many, and a contributor to the distrust and turbulence now driving American politics.

Mentions of Trump on social media soared to as much as 17 times the average daily volume in the hours after the shooting, according to PeakMetrics, a cyber firm that tracks online narratives. Many of those mentions were expressions of sympathy for Trump or calls for unity. But many others made unfounded, fantastical claims.

“We saw things like ‘The Chinese were behind it,’ or ‘Antifa was behind it,’ or ‘the Biden administration did it.’ We also saw a claim that the RNC was behind it,” said Paul Bartel, senior intelligence analyst at PeakMetrics. “Everyone is just speculating. No one really knows what’s going on. They go online to try to figure it out.”

Here’s a look at the claims that surfaced online following the shooting:

Claims of an inside job or false flag are unsubstantiated

Many of the more specious claims that surfaced immediately after the shooting sought to blame Trump or his Democratic opponent, President Joe Biden, for the attack.

Some voices on the left quickly proclaimed the shooting to be a false flag concocted by Trump, while some Trump supporters suggested the Secret Service intentionally failed to protect Trump on the White House’s orders.

The Secret Service on Sunday pushed back on claims circulating on social media that Trump’s campaign had asked for greater security before Saturday’s rally and was told no.


“This is absolutely false,” agency spokesman Anthony Guglielmi wrote Sunday on X. “In fact, we added protective resources & technology & capabilities as part of the increased campaign travel tempo.”

Videos of the shooting were quickly dissected in partisan echo chambers as Trump supporters and detractors looked for evidence to support their beliefs. Videos showing Secret Service agents moving audience members away from Trump before the shooting were offered as evidence that it was an inside job. Images of Trump’s defiantly raised fist were used to make the opposite claim — that the whole event was staged by Trump.

“How did the USSS allow him to stop and pose for a photo opp if there was real danger??” wrote one user, using the abbreviation for the U.S. Secret Service.

Social media bots helped amplify the false claims on platforms including Facebook, Instagram, X and TikTok, according to an analysis by the Israeli tech firm Cyabra, which found that a full 45% of the accounts using hashtags like #fakeassassination and #stagedshooting were inauthentic.

An image created using artificial intelligence — depicting a smiling Trump moments after the shooting — was also making the rounds, Cyabra found.

Moments like this are ‘cannon fodder’ for extremists

Conspiracy theories quickly emerged online that misidentified the suspected shooter, blamed other people without evidence and espoused hate speech, including virulent antisemitism.

“Moments like this are cannon fodder for extremists online, because typically they will react with great confidence to whatever has happened without any real evidence,” said Jacob Ware, a research fellow at the Council on Foreign Relations. “People will fall into spirals and will advance their own ideologies and their own conclusions.”

Before authorities identified the suspect, photos of two different people circulated widely online falsely identifying them as the shooter.

Amid all the speculation and conjecture, others were trying to exploit the event financially. On X on Sunday morning, an account named Proud Patriots urged Trump supporters to purchase its assassination-attempt themed merchandise.

“First they jail him, now they try to end him,” reads the ad for the commemorative Trump Assassination Attempt Trading Card. “Stand Strong & Show Your Support!”

Republicans cast blame on Biden

After the shooting, some Republicans blamed Biden, arguing that sustained criticism of Trump as a threat to democracy has created a toxic environment. They pointed in particular to a comment Biden made to donors on July 8, saying “it’s time to put Trump in the bullseye.”

Ware said that comment from Biden was “violent rhetoric” that is “raising the stakes,” especially when combined with Biden’s existential framing of the election. But he said it was important not to draw conclusions about the shooter’s motive until more information is known. Biden’s remarks were part of a broader effort to turn scrutiny on Trump, with no explicit call to violence.

Trump’s own incendiary words have been criticized in the past for encouraging violence. His lies about the 2020 election and his call for supporters to “fight like hell” preceded the Jan. 6, 2021, attack on the Capitol, which led to his second impeachment on charges of incitement of insurrection. Trump also mocked the hammer attack that left 80-year-old Paul Pelosi, the husband of the former House speaker, with a fractured skull.

Surveys find that Americans overwhelmingly reject violence as a way to settle political differences, but overheated rhetoric from candidates and social media can motivate a small minority of people to act, said Sean Westwood, a political scientist who directs the Polarization Research Lab at Dartmouth College.

Westwood said he worries that Saturday’s shooting could spur others to consider violence as a tactic.

“There is a real risk that this spirals,” he said. “Even if someone doesn’t personally support violence, if they think the other side does, and they witness an attempted political assassination, there is a real risk that this could lead to escalation.”

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here . The AP is solely responsible for all content.

Samantha Putterman, PolitiFact

Fact-checking JD Vance’s past statements and relationship with Trump

This fact check originally appeared on PolitiFact.

Former President Donald Trump has selected Ohio Sen. J.D. Vance as his vice presidential running mate.

“After lengthy deliberation and thought, and considering the tremendous talents of many others, I have decided that the person best suited to assume the position of Vice President of the United States is Senator J.D. Vance of the Great State of Ohio,” Trump wrote July 15 on Truth Social.

Vance, 39, won his Senate seat in 2022 with Trump’s backing. He would be one of the youngest vice presidents in U.S. history.

But before becoming one of Trump’s fiercest allies and defenders, Vance sharply criticized the former president. During the 2016 presidential election, Vance wrote that he goes “back and forth between thinking Trump is a cynical a–hole like Nixon who wouldn’t be that bad (and might even prove useful) or that he’s America’s Hitler.”

WATCH: 2024 RNC delegates react to Trump shooting

He has since sounded a different tone, including in defending Trump’s actions in the events leading up to and during the Jan. 6, 2021, attack on the U.S. Capitol. Vance was critical of Vice President Mike Pence’s handling of the 2020 election results certification and, in an interview with Kaitlan Collins on CNN, questioned whether the vice president’s life was actually endangered during the riots. Vance also vocally condemned what he sees as the tenor of political rhetoric, which he tied to the assassination attempt at Trump’s July 13 rally in Butler, Pennsylvania.

“The central premise of the Biden campaign is that President Donald Trump is an authoritarian fascist who must be stopped at all costs,” Vance posted on X shortly after the shooting. “That rhetoric led directly to President Trump’s attempted assassination.”

Who is J.D. Vance and what is his relationship with Trump?

Before winning his Senate seat in 2022, Vance worked as an investor, commentator and bestselling author.

Vance, who was born in Middletown, Ohio, served in the U.S. Marine Corps before attending Ohio State University and Yale Law School. He worked as a corporate lawyer before moving into the tech industry as a venture capitalist.

WATCH: JD Vance’s evolution from Trump critic to running mate

Vance rose to fame through his 2016 memoir “Hillbilly Elegy,” which describes his growing up in poverty and details the isolation, violence and drug addiction that often surrounds poor white communities in middle America.

When the book was released, Vance started talking to the media about issues important to people in his community — and started criticizing Trump.

Vance told ABC News in August 2016 that, although Trump successfully “diagnoses the problems” people are facing, he didn’t see Trump “offering many solutions.” In an October 2016 interview with journalist Charlie Rose, Vance said he was a “never-Trump guy.”

In another 2016 interview about his book, Vance told a reporter that, although his background would have made him a natural Trump supporter, “the reason, ultimately, that I am not … is because I think that (Trump) is the most-raw expression of a massive finger pointed at other people.”

Vance began to publicly change course when he launched his Senate campaign in 2021. He deleted tweets from 2016 that included him calling Trump “reprehensible” and an “idiot.” In another deleted tweet following the release of the “Access Hollywood” tape on which Trump said fame enabled him to grope women, Vance wrote: “Fellow Christians, everyone is watching us when we apologize for this man. Lord help us.”

He apologized about his Trump criticisms in a July 2021 Fox News interview, and asked people not to judge him based on what he had said. “I’ve been very open that I did say those critical things and I regret them, and I regret being wrong about the guy,” Vance said. “I think he was a good president, I think he made a lot of good decisions for people, and I think he took a lot of flak.”

In June, after news circulated that Vance was on Trump’s short list for vice president, Fox News host Bret Baier asked Vance about the comments. Vance said he was wrong about Trump. “He was a great president, and it’s one of the reasons why I’m working so hard to make sure he gets a second term,” he said.

Trump endorsed Vance in the Ohio GOP Senate primary, helping him win the race and the general election.

As a senator, Vance lobbied to defeat Ohio’s constitutional amendment that ensured access to abortion, calling it a “gut punch” after the measure passed.

After the hazardous East Palestine, Ohio, train derailment in 2023, which ignited a fire and led to evacuations and a controlled release of chemicals, Vance worked with Ohio’s senior senator, Democrat Sherrod Brown, to introduce rail safety legislation.

PolitiFact has fact-checked Vance 10 times. He’s received two Pants on Fire ratings, three Falses, two Mostly Falses and two Half Trues. He also received one Mostly True rating before he was a politician in 2018.

In March, Vance echoed a popular Republican talking point, saying that “100% of net job creation under the Biden administration has gone to the foreign-born.” We rated that Mostly False. Since Biden took office in early 2021, the number of foreign-born Americans who are employed has risen by about 5.6 million. But over the same time period, the number of native-born Americans employed has increased by almost 7.4 million.

We rated False Vance’s claim in February that the $95 billion Ukraine supplemental aid package included a “hidden impeachment clause against President Trump.” The measure doesn’t mention impeachment. Because it’s the president’s job to spend congressionally appropriated funds, experts said whoever is elected president next will be responsible for spending the money allocated in the law. It doesn’t target Trump; it would apply the same way to Biden, should he be reelected.

On immigration, Vance falsely claimed in 2022 that Biden’s “open border” meant that more “Democrat voters” were “pouring into this country.” Immigrants who cross the border illegally cannot vote in federal elections. The process for immigrants to become citizens and therefore gain the right to vote can take a decade or longer.

PolitiFact also addressed controversial comments Vance made in 2021, surfaced during his Senate run, about rape being “inconvenient.”

Vance didn’t directly say “rape is inconvenient.” But when he was asked in an interview whether laws should allow people to get abortions if they were victims of rape or incest, he said that society shouldn’t view a pregnancy or birth resulting from rape or incest as “inconvenient.”

In the interview, which occurred before the U.S. Supreme Court overturned Roe v. Wade in 2022, Vance was also asked whether anti-abortion laws should include rape and incest exceptions. “Two wrongs don’t make a right,” he said in response. “At the end of day, we are talking about an unborn baby. What kind of society do we want to have? A society that looks at unborn babies as inconveniences to be discarded?”

When asked again about the exceptions, Vance said: “The question portrays a certain presumption that is wrong. It’s not whether a woman should be forced to bring a child to term, it’s whether a child should be allowed to live, even though the circumstances of that child’s birth are somehow inconvenient or a problem to the society.”

Since being on Trump’s vice presidential short list, Vance has expressed a more moderate view on abortion.

On July 7 on NBC’s “Meet the Press,” for example, Vance said he supports access to the abortion pill mifepristone after the Supreme Court dismissed the case against it — echoing what Trump said days before during the presidential debate.

Addictive Behaviors Reports, vol. 17 (June 2023)

Social media use and abuse: Different profiles of users and their associations with addictive behaviours

Deon Tullett-Prado

a Victoria University, Australia

Vasileios Stavropoulos

b University of Athens, Greece

Rapson Gomez

c Federation University, Australia

Associated Data

The data are made available via a linked document.

Introduction

Social media use has become increasingly prevalent worldwide. Simultaneously, concerns surrounding social media abuse/problematic use, which resembles behavioural and substance addictions, have proliferated. This has prompted the introduction of ‘Social Media Addiction’ [SMA] as a condition requiring clarification regarding its definition, its assessment and its associations with other addictions. Thus, this study aimed to: (a) advance knowledge on the typology/structure of the SMA symptoms experienced; and (b) explore the association of these typologies with addictive behaviours related to gaming, gambling, alcohol, smoking, drug abuse, sex (including porn), shopping, internet use, and exercise.

A sample of 968 adults [mean age = 29.5 years, SD = 9.36; 622 males (64.3%), 315 females (32.5%)] was surveyed regarding their SMA experiences, using the Bergen Social Media Addiction Scale (BSMAS). Their experiences of gaming, internet, gambling, alcohol, cigarette, drug, sex, shopping and exercise addictions were additionally assessed, and latent profile analysis (LPA) was implemented.

Three distinct profiles were revealed, based on the severity of one’s SMA symptoms: ‘low’, ‘moderate’ and ‘high’ risk. Subsequent ANOVA analyses suggested that participants classified as ‘high’ risk reported significantly higher levels of behaviours related to internet, gambling, gaming, sex and, in particular, shopping addictions.

Conclusions

Results support SMA as a unitary construct, while they potentially challenge the distinction between technological and behavioural addictions. Findings also imply that the assessment of those presenting with SMA behaviours, as well as prevention and intervention targeting SMA at risk groups, should consider other comorbid addictions.

1. Introduction

Social media – a form of online communication in which users create profiles, generate and share content, while forming online social networks/communities (Obar & Wildman, 2015) – is quickly growing to become almost all-consuming in the media landscape. Currently the number of daily social media users exceeds 53% (∼4.5 billion users) of the global population, approaching 80% among more developed nations (Countrymeters, 2021, DataReportal, 2021). Due to technological advancements, the rise of ‘digital natives’ (i.e. children and adolescents raised with and familiarised with digital technology) and coronavirus-pandemic-triggered lockdowns, the frequency and duration of social media usage have been steadily increasing, as people compensate for a lack of face-to-face interaction or grow up with social media as a normal part of their lives (i.e. ∼2 h and 27 min average daily; DataReportal, 2021, Heffer et al., 2019, Zhong et al., 2020, Nguyen, 2021). Furthermore, social media is increasingly involved in various domains of life including education, economics and even politics, to the point where engagement with the economy and wider society almost necessitates its use, driving the continued proliferation of social media use (Calderaro, 2018, Nguyen, 2021, Mabić et al., 2020, Mourão and Kilgo, 2021). This societal shift towards increased social media use has had some positive benefits, serving to facilitate the creation and maintenance of social groups, increase access to opportunities for career advancement and create wide-ranging and accessible education options for many users (Calderaro, 2018, Prinstein et al., 2020, Bouchillon, 2020, Nguyen, 2021). However, for a minority of users – roughly 5–10% (Bányai et al., 2017, Luo et al., 2021, Brailovskaia et al., 2021) – social media use has become excessive, to the point where it dominates one’s life, similarly to an addictive behaviour – a state known as ‘problematic social media use’ (Sun & Zhang, 2020). For these users, social media is experienced as the single most important activity in life, compromising their other roles and obligations (e.g. family, romance, employment; Sun and Zhang, 2020, Griffiths and Kuss, 2017). This is a situation associated with low mood/depression, the compromise of one’s identity, social comparison leading to anxiety and self-esteem issues, work and academic/career difficulties, compromised sleep schedules and physical health, and even social impairment leading to isolation (Anderson et al., 2017, Sun and Zhang, 2020, Gorwa and Guilbeault, 2020).

1.1. Problematic social media engagement in the context of addictions

Problematic social media use is markedly similar to the experience of substance addiction, leading some to model it as a behavioural addiction – social media addiction (SMA; Sun and Zhang, 2020). In brief, an addiction loosely refers to a state in which an individual experiences a powerful craving to engage in a behaviour, and an inability to control their related actions, such that it begins to negatively impact their life (Starcevic, 2016). Although initially the term referred to substance addictions induced by psychotropic drugs (e.g., amphetamines), it later expanded to include behavioural addictions (Chamberlain et al., 2016). These reflect a fixation and lack of control, similar to those experienced in the abuse of substances, related to one’s excessive/problematic behaviours (Starcevic, 2016).

Indeed, behavioural addictions, such as gaming, gambling and (arguably) social media addiction (SMA), share many common features with substance-related addictions (Zarate et al., 2022). Their similarities extend beyond the core addiction manifestations of fixation, loss of control and negative life consequences (Grant et al., 2010, Bodor et al., 2016, Martinac et al., 2019, Zarate et al., 2022). For instance, it has been evidenced that common risk factors/mechanisms (e.g., low impulse control), behavioural patterns (e.g., chronic relapse; sudden “spontaneous” quitting), ages of onset (e.g., adolescence and young adulthood) and negative life consequences (e.g., financial and legal difficulties) are similar between the so-called behavioural addictions and formally diagnosed substance addictions (Grant et al., 2010). Moreover, such commonalities often accommodate the concurrent experience of addictive presentations, and/or even the substitution/flow from one addiction to the next (e.g., gambling and alcoholism; Bodor et al., 2016, Martinac et al., 2019, Grant et al., 2010).

With these features in mind, SMA has been depicted as characterized by the following six symptoms: a deep preoccupation with social media use (salience); use of social media to increase positive feelings and/or buffer negative feelings (mood modification); the requirement of progressively increasing time-engagement to get the same effect (tolerance); symptoms such as irritability and frustration when access is reduced (withdrawal); the development of tensions with other people due to under-performance across several life domains (conflict); and reduced self-regulation resulting in an inability to reduce use (relapse; Andreassen et al., 2012, Brown, 1993, Griffiths and Kuss, 2017, Sun and Zhang, 2020).

This developing model of SMA has been gaining popularity as the most widely used conceptualisation of problematic social media use, guiding the development of relevant measurement tools (Andreassen et al., 2012, Haand and Shuwang, 2020, Prinstein et al., 2020; Van den Eijnden et al., 2016). However, SMA is not currently uniformly accepted as an understanding of problematic social media use. Some critics have labelled the SMA model a premature pathologisation of ordinary social media use behaviours with low construct validity and little evidence for its existence, often inviting alternative proposed classifications derived from cognitive-behavioural or contextual models (Sun & Zhang, 2020; Panova & Carbonell, 2018; Moretta, Buodo, Demetrovics & Potenza, 2022). Furthermore, the causes, risk factors and consequences of SMA, as well as the measures employed in its assessment, have yet to be elucidated in depth, with research in the area being largely exploratory in nature (Prinstein et al., 2020, Sun and Zhang, 2020). In this context, what functional, regular and excessive social media use behaviours may involve has also been debated (Wegmann et al., 2022). Thus, there is a need for further research clarifying the nature of SMA, identifying risk factors and related negative outcomes, as well as potential methods of treatment (Prinstein et al., 2020, Sun and Zhang, 2020, Moretta et al., 2022).

Two avenues important for realizing these goals (and the focus of this study) involve: a) profiling SMA behaviours in the broader community, and b) decoding their associations with other addictions. Profiling these behaviours would involve identifying groups of people with particular patterns of use rather than simply examining trends in behaviour across the greater population. This would allow for clearer understandings of the ways in which different groups experience SMA and a more person-centred analysis (i.e., focused on finer understandings of personal experiences; Bányai et al., 2017). Moreover, when combined with analyses of association, it can allow for assertions not only about whether SMA associates with a variable, but about which components of the experience of SMA associate with a variable, allowing for more nuanced understandings. One such association with much potential for exploration is that of SMA with other addictions (i.e., how does a certain SMA type differentially relate with other addictive behaviours, such as gambling and/or substance abuse?). Such knowledge would be useful, due to the shared common features and risk factors between addictions. It would allow for a greater understanding of the likelihood of comorbid addictions, or of flow from one addiction to the next (Bodor et al., 2016, Martinac et al., 2019, Grant et al., 2010). However, the various links between different addictions are not identical, with alcoholism (for example) associating less strongly with excessive/problematic internet use than with problematic/excessive (so-called “addictive”) sex behaviours (Grant et al., 2010). In that line, some studies have suggested the consideration of different addiction subgroups (e.g., substance, behavioural and technology addictions; Marmet et al., 2019), and/or different profiles of individuals being prone to manifest some addictive behaviours more than others (Zilberman et al., 2018). Accordingly, one may assume that distinct profiles of those suffering from SMA behaviours may be more at risk for certain addictions over others, rather than for addictions in general (Zarate et al., 2022).

Understanding these varying connections could be vital for SMA treatment. Co-occurring addictions often reinforce each other through their behavioural effects. Furthermore, by targeting only a single addiction type in a treatment, other addictions an individual is vulnerable to can come to the fore (Grant et al., 2010, Miller et al., 2019). Thus, a holistic view of addictive vulnerability may require consideration (Grant et al., 2010, Miller et al., 2019). This makes the identification of individual SMA profiles, as well as any potential co-occurring addictions, pivotal for more efficient assessment, prevention and intervention of SMA behaviours.

To the best of the authors’ knowledge, four studies to date have attempted to explore SMA profiles. Three of those were conducted predominantly with European adolescent samples, and varied in terms of the type and number of profiles detected (Bányai et al., 2017, Brailovskaia et al., 2021, Luo et al., 2021, Cheng et al., 2022). The fourth was conducted with English-speaking adults from the United Kingdom and the United States (Cheng et al., 2022). Of extant studies, Bányai et al. (2017) identified three profiles varying quantitatively (i.e., in terms of their SMA symptoms’ severity) across a low, moderate and high range. In contrast, Brailovskaia et al. (2021) and Luo et al. (2021) identified four and five profiles, respectively, that varied both quantitatively and qualitatively in terms of the type of SMA symptoms reported. Brailovskaia et al. (2021) proposed the ‘low symptom’, ‘low withdrawal’ (i.e., lower overall SMA symptoms with distinctively lower withdrawal), ‘high withdrawal’ (i.e., higher overall SMA symptoms with distinctively higher withdrawal) and ‘high symptom’ profiles. Luo et al. (2021) supported the ‘casual’, ‘regular’, ‘low risk high engagement’, ‘at risk high engagement’ and ‘addicted’ user profiles, which demonstrated progressively higher SMA symptom severity alongside significant differences regarding mood modification, relapse, withdrawal and conflict symptoms, which distinguished the low and high risk ‘high engagement’ profiles. Finally, considering the occurrence of different SMA profiles in adults, Cheng and colleagues (2022) supported the occurrence of ‘no-risk’, ‘at risk’ and ‘high risk’ social media users in both US and UK populations, with the UK sample showing a lower proportion of the ‘no-risk’ profile (i.e., UK = 55% vs US = 62.2%) and a higher percentage of the ‘high risk’ profile (i.e., UK = 11.9% vs US = 9.1%). Thus, considering the number of identified profiles best describing the population of social media users, Cheng and colleagues’ (2022) findings were similar to Bányai and colleagues’ (2017) suggestions for the SMA behaviour profiles of adolescents. It should be noted that none of the four studies exploring SMA behaviour profiles to date has taken different profile parameterizations into consideration, meaning that potential differences in the heterogeneity/variability of those classified within the same profile were not considered (e.g., some profiles may be more loose/inclusive than others; Bányai et al., 2017, Brailovskaia et al., 2021, Luo et al., 2021, Cheng et al., 2022).

The lack of convergence regarding the optimum number and description of the SMA profiles occurring, as well as the age, cultural and parameterization limitations of the four available SMA profiling studies, invites further investigation. This is especially evident in light of preliminary evidence confirming that one’s SMA profile may link more to certain addictions than to others (Zarate et al., 2022). Indeed, those suffering from SMA behaviours have been shown to display heightened degrees of alcohol and drug use and a vulnerability to internet addiction in general, while presenting lower proneness towards exercise addiction and tobacco use (Grant et al., 2010, Anderson et al., 2017, Duradoni et al., 2020, Spilkova et al., 2017). In terms of gambling addiction, social media addicts display similar results on tests of value-based decision making as gambling addicts (Meshi et al., 2019). Finally, regarding shopping addiction, the proliferation of advertisements for products online, and the ease of access via social media to online stores, could be assumed to have an intensifying SMA effect (Rose & Dhandayudham, 2014). Aside from these promising, yet relatively limited, findings, the assessed connections between SMA and other addictions tend to be addressed either in isolation (e.g., SMA with gambling only and not multiple other addiction forms; Gainsbury et al., 2016a, Gainsbury et al., 2016b) or in a variable-focused (and not person-focused) manner (e.g., higher levels of SMA relate with higher levels of drug addiction; Spilkova et al., 2017), which overlooks an individual’s profile. These profiles are vitally needed, as knowing the type of individual who may experience a series of disparate addictions is paramount for identifying at-risk social media users and populations in need of more focused prevention/intervention programs (Grant et al., 2010). Hence, using person-focused methods such as latent profile analysis (LPA), which address the ways in which distinct variations/profiles in SMA behaviours may occur and how these relate with other addictions, is imperative (Lanza & Cooper, 2016).
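To make the person-centred approach concrete, the sketch below shows what a latent-profile-style analysis of the six BSMAS items might look like in practice. It is a minimal illustration, not the authors' analysis pipeline: it uses scikit-learn's GaussianMixture as a common stand-in for dedicated LPA software (e.g., Mplus or tidyLPA), and the file name and column names are assumptions.

```python
# Minimal latent-profile-style sketch (not the authors' code); assumes a CSV
# with six BSMAS item columns scored 1-5. GaussianMixture approximates LPA.
import pandas as pd
from sklearn.mixture import GaussianMixture

df = pd.read_csv("bsmas_responses.csv")        # hypothetical data file
items = [f"bsmas_{i}" for i in range(1, 7)]    # six hypothetical item columns

# Fit 1- to 6-profile solutions and compare fit via BIC,
# the usual basis for choosing the number of profiles.
models = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=10, random_state=0).fit(df[items])
    models[k] = gm
    print(f"profiles={k}  BIC={gm.bic(df[items]):.1f}")

chosen = models[3]                             # e.g., a three-profile solution
df["profile"] = chosen.predict(df[items])      # classify each respondent
print(df.groupby("profile")[items].mean())     # mean symptom level per profile
```

A fuller LPA would also compare model parameterizations (e.g., equal versus profile-specific variances), which is precisely the gap the authors note in the four prior profiling studies.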

1.2. Present study

To address this research priority, while considering SMA behaviours as being normally distributed (i.e., a minimum–maximum continuum) across the different profiles of users in the general population, the present Australian study uses a large community sample, solid psychometric measures and a sequence of latent profile analysis (LPA) models differing in their parameterizations, aiming to: (a) advance past knowledge on the typology/structure of the SMA symptoms one experiences; and (b) innovatively explore the association of these typologies with a comprehensive list of addictive behaviours related to gaming, gambling, alcohol, smoking, drug abuse, sex (including porn), shopping, internet use, and exercise.

Based on Cheng and colleagues (2022) and Bányai and colleagues (2017), it was envisaged that three profiles arrayed in terms of ascending SMA symptom severity would likely be identified. Furthermore, guided by past literature supporting closer associations of SMA with technological and behavioural addictions than with substance-related addictions, it was hypothesized that those classified in higher SMA risk profiles would report higher symptoms of other technological and behavioural addictions, such as those related to excessive gaming and gambling, than of drug addiction (Chamberlain and Grant, 2019, Zarate et al., 2022).

2.1. Participants

The current study was conducted in Australia. Responses were initially retrieved from 1097 participants. Of those, 129 were not considered for the current analyses: 84 respondents were classified as preview-only registrations and did not address any items; 5 presented with systematic response inconsistencies and were thus considered invalid; 11 were excluded as potential bots; 11 had not provided their informed consent (i.e., did not tick the digital consent box, although they later addressed the survey); and 18 were removed for not fulfilling the age conditions (i.e., being adults), in line with the ethics approval received. Therefore, responses from 968 English-speaking adults from the general community, aged 18 to 64 and familiar with gaming, were analysed [N = 968, mean age = 29.5 years, SD = 9.36; 622 males (64.3%), 315 females (32.5%), 26 trans/non-binary (2.7%), 1 queer (0.1%), 1 other (0.1%), 3 missing (0.3%)]. According to Hill (1998), random sampling error is required to lie below 4%, a requirement satisfied by the current sample’s 3% (SPH Analytics, 2021). See Table 1 for participants’ sociodemographic information.
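The ~3% figure can be reproduced with the standard margin-of-error formula. The quick check below is my illustration under the usual conservative assumption of p = 0.5 at a 95% confidence level, not a calculation taken from the paper.

```python
# Worked check of the reported ~3% random sampling error for N = 968,
# using the conservative p = 0.5 margin-of-error formula (95% confidence).
import math

n = 968
z = 1.96                       # z-score for a 95% confidence level
e = z * math.sqrt(0.25 / n)    # p * (1 - p) is maximised at p = 0.5
print(f"margin of error ≈ {e:.1%}")   # ≈ 3.1%, below the 4% threshold
```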

Socio-demographic and online use characteristics of participants.

| Variable | Category | Males n | Males % | Females n | Females % | Trans/Non-binary/Other n | Trans/Non-binary/Other % |
|---|---|---|---|---|---|---|---|
| Ethnicity | White/Caucasian | 380 | 61.1 | 193 | 61.2 | 22 | 71 |
| | Black/African American | 31 | 5 | 23 | 7.3 | 1 | 3.2 |
| | Asian | 124 | 19.9 | 59 | 18.7 | 1 | 3.2 |
| | Hispanic/Latino | 35 | 5.6 | 9 | 2.9 | 2 | 6.4 |
| | Other (Aboriginal, Indian, Pacific Islander, Middle Eastern, Mixed, other) | 52 | 8.3 | 31 | 9.8 | 5 | 16.1 |
| Sexual Orientation | Heterosexual/Straight | 529 | 85.5 | 211 | 67 | 3 | 9.7 |
| | Homosexual/Gay | 33 | 5.3 | 13 | 4.1 | 4 | 12.9 |
| | Bisexual | 48 | 7.7 | 65 | 20.6 | 11 | 35.5 |
| | Other | 12 | 1.9 | 26 | 8.3 | 12 | 38.7 |
| Employment status | Full Time | 238 | 38.3 | 86 | 27.3 | 7 | 22.6 |
| | Part Time/Casual | 73 | 12.7 | 60 | 19 | 1 | 3.2 |
| | Self Employed | 48 | 7.7 | 17 | 5.4 | 2 | 6.4 |
| | Unemployed | 125 | 20.1 | 60 | 21.2 | 7 | 22.6 |
| | Student/Other | 138 | 22.2 | 92 | 23.8 | 14 | 45.2 |
| Level of Education | Elementary/Middle school | 10 | 1.6 | 2 | 0.6 | 0 | 0 |
| | High School or equivalent | 166 | 26.7 | 74 | 23.5 | 11 | 35.5 |
| | Vocational/Technical School/TAFE | 55 | 8.8 | 26 | 8.3 | 4 | 12.9 |
| | Some Tertiary Education | 113 | 18.2 | 69 | 21.9 | 3 | 9.7 |
| | Bachelor’s Degree (3 years) | 137 | 22 | 76 | 24.1 | 5 | 16.1 |
| | Honours Degree or Equivalent (4 years) | 69 | 11.1 | 35 | 11.1 | 5 | 16.1 |
| | Masters Degree (MS) | 47 | 7.6 | 20 | 6.3 | 1 | 3.2 |
| | Doctoral Degree (PhD) | 4 | 0.6 | 4 | 1.3 | 1 | 3.2 |
| | Other/Prefer not to say | 21 | 3.3 | 9 | 2.8 | 1 | 3.2 |
| Marital/Relationship status | Single | 405 | 65.1 | 164 | 52.1 | 23 | 74.2 |
| | Partnered | 68 | 10.9 | 62 | 19.7 | 7 | 22.6 |
| | Married | 120 | 19.3 | 68 | 21.6 | 0 | 0 |
| | Separated | 15 | 2.4 | 14 | 4.4 | 0 | 0 |
| | Other/Prefer not to say | 14 | 2.2 | 7 | 2.2 | 1 | 3.2 |

Note: Percentages represent the percentage of that gender group which is represented by any one category, rather than percentages of the overall population.

2.2. Measures

Psychometric instruments targeting sociodemographics, SMA and a semi-comprehensive range of behavioural, digital and substance addictions were employed. These instruments comprised the Bergen Social Media Addiction Scale (BSMAS; Andreassen et al., 2012 ), the nine-item Internet Gaming Disorder Scale-Short Form (IGDS9-SF; Pontes & Griffiths, 2015 ), the Internet Disorder Scale-Short Form (IDS9-SF; Pontes & Griffiths, 2016 ), the Online Gambling Disorder Questionnaire (OGD-Q; González-Cabrera et al., 2020 ), the 10-item Alcohol Use Disorders Identification Test (AUDIT; Saunders et al., 1993 ), the five-item Cigarette Dependence Scale (CDS-5; Etter et al., 2003 ), the 10-item Drug Abuse Screening Test (DAST-10; Skinner, 1982 ), the Bergen-Yale Sex Addiction Scale (BYSAS; Andreassen et al., 2018), the Bergen Shopping Addiction Scale (BSAS; Andreassen et al., 2015) and the six-item Revised Exercise Addiction Inventory (EAI-R; Szabo et al., 2019 ). Precise details of these measures, including values related to assumptions, can be found in Table 2 .

Measure descriptions and internal consistency.

| Instrument | Description | Reliability in the current data (α and ω) | Distribution in the current data |
|---|---|---|---|
| The Bergen Social Media Addiction Scale (BSMAS) | Measures the severity of one’s experience of Social Media Addiction (SMA) symptoms (i.e., salience, mood modification, tolerance, withdrawal, conflict and relapse), using six questions relating to the rate at which certain behaviours/states are experienced. Items are scored from 1 (very rarely) to 5 (very often), with higher scores indicating a greater experience of SMA symptoms. | α = 0.88, ω = 0.89 | Skewness = 0.89, Kurtosis = 0.26 |
| The Internet Gaming Disorder Scale-Short Form (IGDS9-SF) | Measures the severity of one’s disordered gaming behaviour on each of the nine DSM-5 proposed criteria (e.g., “Have you deceived any of your family members, therapists or others because of the amount of your gaming activity?”). Items are addressed on a 5-point Likert scale ranging from 1 (Never) to 5 (Very often). Responses are summed into a total score ranging from 9 to 45, with higher scores indicating more disordered gaming manifestations. | α = 0.88, ω = 0.89 | Skewness = 0.94, Kurtosis = 0.69 |
| The Internet Disorder Scale-Short Form (IDS9-SF) | Measures the severity of one’s excessive internet use via nine symptom criteria/items adapted from the DSM-5 disordered gaming criteria (e.g., “Have you deceived any of your family members, therapists or other people because of the amount of time you spend online?”). The nine items are scored on a 5-point Likert scale ranging from 1 (Never) to 5 (Very often), with higher scores indicating more excessive internet use. | α = 0.90, ω = 0.90 | Skewness = 0.74, Kurtosis = 0.11 |
| The Online Gambling Disorder Questionnaire (OGD-Q) | Measures the degree to which one’s online gambling behaviours have become problematic. It consists of 11 items asking about the rate at which certain states or behaviours related to problematic online gambling were experienced in the last 12 months (e.g., “Have you felt that you prioritized gambling over other areas of your life that had been more important before?”). Responses are given on a 5-point Likert scale ranging from 0 (Never) to 4 (Every day), with a higher aggregate score indicating greater risk of gambling addiction. | α = 0.95, ω = 0.95 | Skewness = 3.45, Kurtosis = 13.90 |
| The 10-Item Alcohol Use Disorders Identification Test (AUDIT) | Screens for potential problem drinking. Comprising 10 items, the AUDIT asks about the quantity and frequency of alcohol consumed, certain problematic alcohol-related states/behaviours, and the relationship one has with alcohol (e.g., “Have you or someone else been injured as a result of your drinking?”). Items are scored on a 5-point Likert scale; due to the varying nature of the questions, the response labels vary. Higher scores indicate greater risk, with a score of 8 generally accepted as indicative of dependency. | α = 0.89, ω = 0.91 | Skewness = 2.13, Kurtosis = 4.84 |
| The Five-Item Cigarette Dependence Scale (CDS-5) | Measures the five DSM-IV and ICD-11 dependence criteria in smokers. It features 5 items enquiring into specific aspects of cigarette dependency, such as cravings or frequency of use, answered via a 5-point Likert scale (e.g., “Usually, how soon after waking up do you smoke your first cigarette?”). Response labels vary to follow the different questions’ phrasing/format (e.g., frequencies, subjective judgements, ease of quitting). | α = 0.68, ω = 0.87 | Skewness = 1.52, Kurtosis = 2.52 |
| The 10-item Drug Abuse Screening Test (DAST-10) | Screens for potentially problematic drug use. It features 10 yes/no questions regarding drug use, frequency and dependency symptoms (e.g., “Do you abuse more than one drug at a time?”). Items are scored 0 (“no”) or 1 (“yes”), with higher aggregate scores indicating a higher likelihood of drug abuse and a proposed cut-off score between 4 and 6. | α = 0.79, ω = 0.88 | Skewness = 2.49, Kurtosis = 6.00 |
| The Bergen-Yale Sex Addiction Scale (BYSAS) | Measures sex addiction on the basis of the behavioural addiction definition (Andreassen et al., 2018). It features six items enquiring about the frequency of certain actions/states (e.g., salience, mood modification), rated on a 5-point Likert scale ranging from 0 (Very rarely) to 4 (Very often). | α = 0.84, ω = 0.84 | Skewness = 0.673, Kurtosis = 0.130 |
| The Bergen Shopping Addiction Scale (BSAS) | Measures shopping addiction on the basis of seven behavioural criteria (Andreassen et al., 2015). The 7 items enquire into the respondent’s agreement with statements about the frequency of certain shopping-related actions/states (e.g., “I feel bad if I for some reason am prevented from shopping/buying things”), rated on a 5-point Likert scale ranging from 1 (Completely disagree) to 5 (Completely agree). Greater aggregate scores indicate an increased risk of shopping addiction. | α = 0.88, ω = 0.89 | Skewness = 0.889, Kurtosis = 0.260 |
| The 6-item Revised Exercise Addiction Inventory (EAI-R) | Assesses exercise addiction, also on the basis of the six behavioural addiction criteria, through an equivalent number of items. It comprises six statements about the relationship one has with exercise (e.g., “Exercise is the most important thing in my life”), rated on a 5-point Likert scale ranging from 1 (Strongly disagree) to 5 (Strongly agree), with higher aggregate scores indicating higher risk. | α = 0.84, ω = 0.84 | Skewness = 0.485, Kurtosis = −0.451 |

Note Table 2: Streiner’s (2003) guidelines are used for internal reliability, with Cronbach’s alpha scores in the range of 0.60–0.69 labelled ‘acceptable’, 0.70–0.89 ‘good’ and 0.90–1.00 ‘excellent’. Acceptable values of skewness fall between −3 and +3, and of kurtosis between −10 and +10 ( Brown, 2006 ). The OGD-Q kurtosis (13.90) and skewness (3.45) exceeded these recommended limits ( Brown, 2006 ). However, LPA does not assume data distribution linearity, normality and/or homogeneity ( Rosenberg et al., 2019 ). Considering aim B, related to detecting significant reported differences on measures of gaming, sex, shopping, exercise, gambling, alcohol, drug, cigarette and internet addiction symptoms respectively, ANOVA results were derived after bootstrapping the sample 1000 times to ensure that normality assumptions were met. Case bootstrapping computes the means of 1000 resamples of the available data and analyses these means, which are normally distributed ( Tong, Saminathan, & Chang, 2016 ).
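The reliability and distribution values reported in Table 2 can be reproduced with standard R tooling. A minimal sketch follows, assuming the six BSMAS item responses sit in a data frame `bsmas_items` (a hypothetical name):

```r
library(psych)

# Internal consistency: Cronbach's alpha and McDonald's omega
psych::alpha(bsmas_items)                 # alpha, e.g. ~ .88 for the BSMAS here
psych::omega(bsmas_items, nfactors = 1)   # omega total (a warning about
                                          # one-factor omega_h can be ignored)

# Distribution of the aggregate score: skewness and kurtosis
totals <- rowSums(bsmas_items)
psych::describe(totals)[c("skew", "kurtosis")]
```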

2.3. Procedure

Approval was received from the Victoria University Human Research Ethics Committee (HRE20-169). Data was collected from August 2019 to August 2020 via an online survey link distributed via social media (i.e., Facebook, Instagram, Twitter), digital forums (i.e., Reddit) and the Victoria University learning management system. Familiarity with gaming was preferred, so that associations with one’s online gaming patterns could be studied. The link first took potential participants to the Plain Language Information Statement (PLIS), which informed them of the study requirements, participants’ anonymity, and the right to withdraw free of penalty. Digital provision of informed consent (i.e., ticking a box) was required of participants before proceeding to the survey.

2.4. Statistical analyses

Statistical analyses were conducted via: (a) RStudio for the latent profile analyses (LPA); and (b) Jamovi for descriptive statistics and profile comparisons. Regarding aim A, LPA identifies naturally occurring homogeneous subgroups within a population ( Rosenberg et al., 2019 ). Through the tidyLPA R package (CRAN), a number of models varying in terms of their structure/parameterization and number of profiles were tested, using the six BSMAS criteria/items as indicators ( Rosenberg et al., 2019 ; see Table 3 ).
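In tidyLPA terms, the four parameterizations of Table 3 correspond to model types 1 (CIP), 2 (CVDP), 3 (CIUP) and 6 (CVUP). A minimal estimation sketch is shown below, assuming the six BSMAS items sit in a data frame `bsmas_items` (a hypothetical name):

```r
library(tidyLPA)

# Estimate 1-8 profiles under the four parameterizations of Table 3:
# model 1 = CIP, model 2 = CVDP, model 3 = CIUP, model 6 = CVUP.
fits <- estimate_profiles(bsmas_items,
                          n_profiles = 1:8,
                          models = c(1, 2, 3, 6))

get_fit(fits)  # AIC, BIC, entropy, n_min and BLRT_p for each converged model
```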

LPA model parameterization characteristics.

| Model | Means | Variances | Covariances | Interpretation |
|---|---|---|---|---|
| Class-Invariant Parameterization (CIP) | Varying | Equal | Zero | Different classes/profiles have different means on the six BSMAS symptoms; the variances of the symptoms are constrained to be equal across classes/profiles, and symptom covariances are fixed to zero. |
| Class-Varying Diagonal Parameterization (CVDP) | Varying | Varying | Zero | Different classes/profiles have different means on the six BSMAS symptoms, and the variances of the symptoms are free to differ across profiles; symptom covariances are fixed to zero. |
| Class-Invariant Unrestricted Parameterization (CIUP) | Varying | Equal | Equal | Different classes/profiles have different means on the six BSMAS symptoms; symptom variances and covariances are estimated, but are constrained to be equal across profiles. |
| Class-Varying Unrestricted Parameterization (CVUP) | Varying | Varying | Varying | Different classes/profiles have different means on the six BSMAS symptoms, and both symptom variances and covariances are free to vary across profiles. |
Subsequently, the constructed models were compared on selected fit indices (i.e., the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the bootstrapped Lo-Mendell-Rubin test (B-LMR or BLRT), entropy and N_min; Rosenberg et al., 2019 ; see Note 1). This involved: (1) dismissing any models with an N_min of 0, as each profile requires at least one participant; (2) dismissing models with entropy scores below 0.64 ( Tein et al., 2013 ); (3) dismissing models with a non-significant BLRT value; and (4) assessing the remaining models on their AIC/BIC, looking for an elbow point in the decline or the lowest values (see the sketch below).
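Applied to the fit table returned by tidyLPA, this four-step screen might look as follows (a sketch only; `fits` is the hypothetical object from the estimation sketch above):

```r
library(dplyr)

candidates <- get_fit(fits) %>%
  filter(n_min > 0,         # step 1: no empty profiles
         Entropy >= 0.64,   # step 2: adequate classification accuracy
         BLRT_p < .05) %>%  # step 3: significant bootstrapped LRT
  arrange(BIC)              # step 4: inspect AIC/BIC for an elbow point

candidates %>% select(Model, Classes, AIC, BIC, Entropy, n_min, BLRT_p)
```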

Regarding aim B of the study, ANOVAs with bootstrapping (1000 resamples) were employed to detect significant profile differences regarding one’s gaming, sex, shopping, exercise, gambling, alcohol, drug, cigarette and internet addiction symptoms respectively.
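A case-bootstrapped ANOVA of this kind can be sketched in base R as follows (illustrative only; the data frame `dat`, its `profile` factor and the `gaming_total` outcome are hypothetical names):

```r
set.seed(2023)

# Case bootstrap: resample participants with replacement 1000 times
# and recompute the one-way ANOVA F statistic on each resample.
boot_F <- replicate(1000, {
  resample <- dat[sample(nrow(dat), replace = TRUE), ]
  summary(aov(gaming_total ~ profile, data = resample))[[1]][["F value"]][1]
})

quantile(boot_F, c(.025, .975))  # bootstrap interval for the F statistic
```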

All analyses’ assumptions were met, with one exception (see Note 2): the measure of online gambling disorder experience violated guidelines for the acceptable departure from normality and homogeneity ( Kim, 2013 ). Given this violation, results regarding gambling addiction should be considered with some caution.

3.1. Aim A: LPA of BSMAS symptoms

The converged models’ fit, varying by number of profiles and parameterization, is displayed in Table 4 , with the CIP parameterization presenting as the optimum (i.e., lower AIC and BIC, with models of 1–8 profiles converging; no CVDP, CIUP or CVUP models converged, except the one-profile CVUP model). Subsequently, the CIP models were further examined via the tidyLPA Mclust function (see Table 5 ). AIC and BIC decreased as the number of profiles increased, flattening past 3 profiles (i.e., the elbow point; Rosenberg et al., 2019 ). Furthermore, past 3 profiles, N_min reached zero, indicating profiles with no participants in them and thus reduced interpretability. Lastly, the BLRT reached non-significance once the model had 4 profiles, again indicating the 3-profile model as best fitting. Therefore, alternative CIP models were rejected in favour of the 3-profile one. This model displayed a level of classification accuracy well above the suggested cut-off point of 0.76 (entropy = 0.90; Larose et al., 2016 ), suggesting over 90 % correct classification ( Larose et al., 2016 ). Regarding the profiles’ proportions, counts revealed 33.6 % of participants in profile 1, 52.4 % in profile 2 and 14 % in profile 3.

Initial model testing.

| Model | Classes | AIC | BIC |
|---|---|---|---|
| CIP | 1 | 18137.5 | 18196.0 |
| CIP | 2 | 15787.6 | 15880.2 |
| CIP | 3 | 15040.5 | 15167.3 |
| CIP | 4 | 15054.6 | 15215.4 |
| CIP | 5 | 15068.7 | 15263.7 |
| CIP | 6 | 14548.8 | 14778.0 |
| CIP | 7 | 14562.8 | 14826.1 |
| CIP | 8 | 14350.1 | 14647.5 |
| CVUP | 1 | 15218.2 | 15349.8 |

Fit indices of CIP models with 1–8 classes.

| Model | Classes | AIC | BIC | Entropy | n_min | BLRT_p |
|---|---|---|---|---|---|---|
| CIP | 1 | 18137.6 | 18196.1 | 1 | 1 | |
| CIP | 2 | 15780.5 | 15873.1 | 0.89 | 0.35 | 0.01 |
| CIP | 3 | 15025.3 | 15152.1 | 0.90 | 0.14 | 0.01 |
| CIP | 4 | 15039.4 | 15200.2 | 0.79 | 0 | 1 |
| CIP | 5 | 15053.7 | 15248.7 | 0.70 | 0 | 1 |
| CIP | 6 | 14777.7 | 15006.8 | 0.77 | 0 | 0.01 |
| CIP | 7 | 14557.6 | 14820.9 | 0.80 | 0 | 0.01 |
| CIP | 8 | 14449.9 | 14747.2 | 0.81 | 0 | 0.01 |

Table 6 and Fig. 1 present the profiles’ raw mean scores across the 6 BSMAS items, whilst Table 7 and Fig. 2 present the standardised mean scores.

Raw mean scores and standard error of the 6 BSMAS criteria across the three classes/profiles.

| Class | Salience | Tolerance | Mood Modification | Relapse | Withdrawal | Conflict |
|---|---|---|---|---|---|---|
| 1 | 2.98 | 2.87 | 2.81 | 2.16 | 1.74 | 1.79 |
| 2 | 1.36 | 1.25 | 1.36 | 1.25 | 1.08 | 1.08 |
| 3 | 3.8 | 3.95 | 3.88 | 3.46 | 3.58 | 3.02 |
| SE (equal across classes) | 0.07 | 0.07 | 0.08 | 0.08 | 0.09 | 0.08 |

Fig. 1. Raw symptom experience of the three classes.

Standardised mean scores of the 6 BSMAS criteria across the three classes/profiles.

| Class | Salience | Tolerance | Mood Modification | Relapse | Withdrawal | Conflict |
|---|---|---|---|---|---|---|
| 1 | 0.58 | 0.56 | 0.48 | 0.26 | 0.08 | 0.21 |
| 2 | −0.71 | −0.74 | −0.65 | −0.53 | −0.56 | −0.53 |
| 3 | 1.26 | 1.42 | 1.30 | 1.38 | 1.88 | 1.48 |

Note: For standard errors, see Table 6 .

Fig. 2. Standardized symptom experience of the three classes.

Profile 1 scores varied from 1.74 to 2.98 raw and between 0.08 and 0.58 standard deviations above the sample mean symptom experience. In terms of plateaus and declines, profile 1 displayed a raw-score plateau across symptoms 1–3 (salience, tolerance, mood modification), a decline at symptom 4 (relapse), and another plateau across symptoms 5–6 (withdrawal and conflict). It further displayed a standardized-score plateau around the level of 0.5 standard deviations across symptoms 1–3 and a decline across symptoms 4–6. Profile 2 varied consistently between raw mean scores of 1.08 and 1.36 across the six SMA symptoms, and between −0.74 and −0.53 standard deviations from the sample mean, with general standardized-score plateaus across symptoms 1–3 and 4–6. Finally, profile 3 mean scores varied between 3.02 and 3.95 raw and 1.26 to 1.88 standardized. Plateaus were witnessed in the raw scores across symptoms 1–3 (salience, tolerance, mood modification), with a decline at symptom 4 (relapse), a relative peak at symptom 5 (withdrawal), and a further decline at symptom 6 (conflict). However, the standardized scores for profile 3 were relatively constant across the first four symptoms, before sharply peaking at symptom 5 and then declining once more. Accordingly, the three profiles were identified as ‘Low’ (profile 2), ‘Moderate’ (profile 1) and ‘High’ (profile 3) SMA risk severity profiles. Table 8 and Table 9 provide the profile means and standard deviations, as well as their pairwise comparisons, across the series of other addictive behaviors assessed.
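For reference, the standardised means in Table 7 are the raw profile means of Table 6 re-expressed as z-scores against the whole-sample item means and standard deviations. A brief sketch, assuming `raw_means` holds the 3 × 6 matrix of Table 6 and `bsmas_items` the item-level responses (both hypothetical names):

```r
grand_m  <- colMeans(bsmas_items, na.rm = TRUE)
grand_sd <- apply(bsmas_items, 2, sd, na.rm = TRUE)

# z = (profile mean - sample mean) / sample SD, per symptom
std_means <- sweep(sweep(raw_means, 2, grand_m, "-"), 2, grand_sd, "/")
round(std_means, 2)  # comparable to Table 7
```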

Post hoc descriptives across a semi-comprehensive list of addictions.

| Measure | Class | Mean | Standard Deviation | N |
|---|---|---|---|---|
| Gaming (IGDS9-SF) | Low | 16.216 | 6.353 | 501 |
| | Moderate | 19.186 | 6.655 | 322 |
| | High | 22.216 | 8.124 | 134 |
| Alcohol (AUDIT) | Low | 3.877 | 5.175 | 503 |
| | Moderate | 4.491 | 6.034 | 324 |
| | High | 6.610 | 8.018 | 136 |
| Cigarette (CDS-5) | Low | 9.264 | 4.134 | 507 |
| | Moderate | 9.028 | 3.725 | 325 |
| | High | 9.551 | 3.955 | 136 |
| Drug (DAST-10) | Low | 1.561 | 1.513 | 506 |
| | Moderate | 1.754 | 1.787 | 325 |
| | High | 2.044 | 1.881 | 136 |
| Sex (BYSAS) | Low | 5.568 | 4.640 | 505 |
| | Moderate | 7.115 | 4.898 | 323 |
| | High | 9.687 | 5.769 | 134 |
| Shopping (BSAS) | Low | 11.565 | 4.829 | 503 |
| | Moderate | 14.804 | 5.173 | 321 |
| | High | 17.993 | 7.222 | 134 |
| Exercise (EAI-R) | Low | 13.812 | 6.467 | 500 |
| | Moderate | 14.646 | 6.009 | 322 |
| | High | 15.793 | 7.470 | 135 |
| Gambling (OGD-Q) | Low | 12.261 | 3.178 | 502 |
| | Moderate | 14.270 | 6.190 | 315 |
| | High | 16.948 | 9.836 | 135 |
| Internet (IDS9-SF) | Low | 17.022 | 7.216 | 501 |
| | Moderate | 21.165 | 6.554 | 321 |
| | High | 27.971 | 7.340 | 136 |

Post hoc comparisons of the SMA profiles across the addictive behaviors measured.

| Measure | Comparison | Mean Difference | SE | t | p |
|---|---|---|---|---|---|
| Gaming (IGDS9-SF) | Low vs Moderate | −2.971 | 0.481 | −6.183 | < 0.001 |
| | Low vs High | −6.650 | 0.654 | −10.164 | < 0.001 |
| | Moderate vs High | −3.679 | 0.692 | −5.320 | < 0.001 |
| Alcohol (AUDIT) | Low vs Moderate | −0.614 | 0.423 | −1.451 | 0.315 |
| | Low vs High | −2.734 | 0.574 | −4.761 | < 0.001 |
| | Moderate vs High | −2.120 | 0.607 | −3.492 | 0.001 |
| Cigarette (CDS-5) | Low vs Moderate | 0.237 | 0.283 | 0.837 | 0.680 |
| | Low vs High | −0.287 | 0.384 | −0.748 | 0.735 |
| | Moderate vs High | −0.524 | 0.406 | −1.290 | 0.401 |
| Drug (DAST-10) | Low vs Moderate | −0.193 | 0.118 | −1.628 | 0.234 |
| | Low vs High | −0.483 | 0.161 | −3.005 | 0.008 |
| | Moderate vs High | −0.290 | 0.170 | −1.708 | 0.203 |
| Sex (BYSAS) | Low vs Moderate | −1.546 | 0.349 | −4.431 | < 0.001 |
| | Low vs High | −4.118 | 0.476 | −8.653 | < 0.001 |
| | Moderate vs High | −2.572 | 0.503 | −5.111 | < 0.001 |
| Shopping (BSAS) | Low vs Moderate | −3.239 | 0.381 | −8.495 | < 0.001 |
| | Low vs High | −6.428 | 0.519 | −12.387 | < 0.001 |
| | Moderate vs High | −3.189 | 0.549 | −5.809 | < 0.001 |
| Exercise (EAI-R) | Low vs Moderate | −0.834 | 0.462 | −1.804 | 0.169 |
| | Low vs High | −1.981 | 0.628 | −3.156 | 0.005 |
| | Moderate vs High | −1.147 | 0.663 | −1.728 | 0.195 |
| Gambling (OGD-Q) | Low vs Moderate | −2.009 | 0.405 | −4.966 | < 0.001 |
| | Low vs High | −4.687 | 0.546 | −8.591 | < 0.001 |
| | Moderate vs High | −2.678 | 0.579 | −4.626 | < 0.001 |
| Internet (IDS9-SF) | Low vs Moderate | −4.143 | 0.502 | −8.256 | < 0.001 |
| | Low vs High | −10.949 | 0.679 | −16.131 | < 0.001 |
| | Moderate vs High | −6.805 | 0.718 | −9.476 | < 0.001 |

3.2. Aim B: BSMAS profiles and addiction risk

Table 8 and Table 9 display the Jamovi outputs for the BSMAS profiles: their means and standard deviations, as well as their pairwise comparisons across the series of other addictive behaviors assessed using ANOVA. Cohen’s (1988) benchmarks were used for eta squared values, with values above 0.01 indicating small, above 0.059 medium and above 0.138 large effects. As described in Section 2.4, ANOVA results were derived after case bootstrapping the sample 1000 times to ensure that normality assumptions were met ( Tong et al., 2016 ). SMA profiles significantly differed across the range of behavioral addiction forms examined, with more severe SMA profiles presenting consistently higher scores: medium effect sizes emerged for gaming ( F  = 57.5, p  <.001, η² = 0.108), sex ( F  = 39.53, p  <.001, η² = 0.076) and gambling ( F  = 40.332, p  <.001, η² = 0.078), and large effect sizes for shopping ( F  = 90.06, p  <.001, η² = 0.159) and general internet addiction symptoms ( F  = 137.17, p  <.001, η² = 0.223). Only relationships of medium size or greater were considered further in this analysis, though small effects were found for alcohol use ( F  = 11.34, p  <.001, η² = 0.023), substance abuse ( F  = 4.83, p  =.008, η² = 0.01) and exercise addiction ( F  = 5.415, p  =.005, η² = 0.011). For these medium-to-large associations, pairwise comparisons consistently confirmed that the ‘low’ SMA profile scored significantly lower than the ‘moderate’ and ‘high’ SMA profiles, and the ‘moderate’ profile significantly lower than the ‘high’ profile (see Table 8 , Table 9 ).
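Eta squared for each one-way ANOVA can be recovered directly from the sums of squares. A minimal base-R sketch follows (the data frame `dat` and the variables `shopping_total` and `profile` are hypothetical names, not the study’s actual objects):

```r
fit <- aov(shopping_total ~ profile, data = dat)
ss  <- summary(fit)[[1]][["Sum Sq"]]

# eta^2 = SS_between / SS_total;
# Cohen (1988): > .01 small, > .059 medium, > .138 large
eta_sq <- ss[1] / sum(ss)
eta_sq
```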

4. Discussion

The present study examined the occurrence of distinct SMA profiles and their associations with a range of other addictive behaviors. It did so by uniquely combining a large community sample with measures of established psychometric properties, addressing both SMA and an extensive range of other proposed substance and behavioral addictions, to calculate the best fitting model in terms of parameterization and profile number. A model of the CIP parameterization with three profiles was supported by the data. The three identified SMA profiles ranged in terms of severity and were labeled as ‘low’ (52.4 %), ‘moderate’ (33.6 %) and ‘high’ (14 %) SMA risk. Membership of the ‘high’ SMA risk profile was shown to link strongly with higher reported experiences of internet and shopping addictive behaviours, and moderately with higher levels of addictive symptoms related to gaming, sex and gambling.

4.1. Number and variations of SMA profiles

Three SMA profiles, entailing ‘low’ (52.4 %), ‘moderate’ (33.6 %) and ‘high’ (14 %) SMA risk, were supported, with symptom 5 (withdrawal) displaying the highest inter-profile disparities. These results help clarify the number of SMA profiles in the population, as past findings were inconsistent, supporting either 3, 4 or 5 SMA profiles ( Bányai et al., 2017 , Brailovskaia et al., 2021 , Luo et al., 2021 ), as well as the nature of the differences between these profiles (i.e., quantitative: “how much/how intensely one experiences SMA symptoms”, or qualitative: “the type of SMA symptoms one experiences”). Our findings are consistent with those of Bányai and colleagues (2017) and Cheng and colleagues (2022), indicating a unidimensional experience of SMA (i.e., that the intensity/severity an individual reports best defines their profile membership, rather than the type of SMA symptoms), with three profiles ranging in severity from ‘low’ to ‘moderate’ to ‘high’ and those belonging to the higher risk profiles constituting the minority. Conversely, these results stand in opposition to two past studies identifying profiles that varied qualitatively (i.e., specific SMA symptoms experienced more by certain profiles) and suggesting the occurrence of 4 and 5 profiles respectively ( Brailovskaia et al., 2021 , Luo et al., 2021 ). Such differences might be explained by variations in the targeted populations of these studies. Characteristics such as gender, nationality and age all have significant effects on how and why social media is employed ( Andreassen et al., 2016 ; Hsu et al., 2015 ; Park et al., 2015 ). Given that the two studies in question utilized European, adolescent samples, differences in the culture and age of the samples may have produced our varying results ( Brailovskaia et al., 2021 , Luo et al., 2021 ). Comparability issues may also explain these results, given that the profiling analyses implemented by Brailovskaia and colleagues (2021), as well as Luo and colleagues (2021), did not extensively consider different profile parameterizations, as the present study and Cheng et al. (2022) did. Furthermore, the results of this study closely replicated those of Cheng et al. (2022), with both studies identifying a near identical pattern of symptom experience across three ascending levels of severity. This replication may indicate the accuracy of the results, strengthening the validity of SMA experience models involving three differentiated profiles of staggered severity. Both our findings and Cheng et al.’s indicate profiles characterized by higher levels of cognitive symptoms (salience, tolerance and mood modification) in each class when compared to their experience of behavioral symptoms (relapse, withdrawal and conflict; Cheng et al., 2022 ). Further research may focus on any potentially mediating/moderating factors that may be interfering, and potentially further replicate such results, supporting their reliability. Furthermore, given that past studies (with different results) utilized European, adolescent samples, cultural and age comparability limitations need to be considered and accounted for in future research ( Bányai et al., 2017 , Brailovskaia et al., 2021 ; Cheng et al., 2022 ).

Regarding withdrawal being the symptom of highest discrepancy between profiles, the findings suggest that it may be more predictive of SMA, and thus merit specific assessment or diagnostic attention, aligning with past literature ( Bányai et al., 2017 , Luo et al., 2021 , Brailovskaia et al., 2021 , Smith and Short, 2022 ). Indeed, the experience of irritability and frustration when abstaining from usage has been shown to possess higher differentiation power in diagnosing and measuring other technological addictions, such as gaming, indicating the possibility of a broader centrality of withdrawal across the constellation of digital addictions ( Gomez et al., 2019 ; Schivinski et al., 2018 ).

Finally, the higher SMA risk profile percentage in the current study compared with previous research [i.e., 14 % in contrast to 4.5 % ( Bányai et al., 2017 ), 4.2 % ( Luo et al., 2021 ) and 7.2 % ( Brailovskaia et al., 2021 )] also invites plausible interpretations. Data collection for the present Australian study occurred between August 2019 and August 2020, while Bányai and colleagues (2017) collected their data in Hungary in March 2015, and Brailovskaia and colleagues (2021) in Lithuania and Germany between October 2019 and December 2019. The first cases of the COVID-19 pandemic outside China were reported in January 2020, and the pandemic isolation measures that began unfolding later in 2020 prompted more intense social media usage, as users compensated for the lack of in-person interactions ( Ryan, 2021 , Saud et al., 2020 ). Thus, it is likely that the higher SMA symptom scores reported in the present study were inflated by the social isolation conditions imposed while the data was collected. Furthermore, the present study involves an adult English-speaking population rather than European adolescents, unlike the studies of Bányai and colleagues (2017) and Brailovskaia and colleagues (2021). Thus, age and/or cultural differences may explain the higher proportion of the high SMA risk profile found; for instance, there may be greater SMA vulnerability among older demographics and/or across countries. The country-based explanation is reinforced by the findings of Cheng and colleagues (2022), who assessed and compared UK and US adult populations; the age-based explanation is less likely, as younger age has been shown to relate to higher SMA behaviors ( Lyvers et al., 2019 ). Overall, the present results closely align with those of Cheng and colleagues (2022), who also collected their data during a similar period (between May 18, 2020 and May 24, 2020) and from English-speaking countries (as the present study did). In line with our findings, they also supported the occurrence of three SMA behavior profiles, with the low risk profile exceeding 50 % of the general population and those at higher risk ranging above 9 %.

4.2. Concurrent addiction risk

Considering the second study aim, ascending risk profile membership was strongly related to increased experiences of internet and shopping addiction, moderately connected with gaming, gambling and sex addictions, and weakly associated with alcohol, exercise and drug addictions. These findings constitute the first semi-comprehensive cross-addiction risk ranking for individuals in the SMA high-risk profile, inviting the following implications.

Firstly, no distinction was found between the so-called “technological” and other behavioral addictions, potentially contradicting prior theory on the topic ( Gomez et al., 2022 ). Typically, the abuse of internet gaming/pornography/social media has been classified as behavioral addiction ( Enrique, 2010 , Savci and Aysan, 2017 ). However, their shared active substance (the internet) has prompted some scholars to suggest that these should be classified as a distinct subtype of behavioral addictions named “technological/Internet use addictions/disorders” ( Savci & Aysan, 2017 ). Nevertheless, the stronger association revealed between the “high” SMA risk profile and shopping addiction (which does not always necessitate the internet), compared to other technology-related addictions, challenges this conceptual distinction ( Savci & Aysan, 2017 ). This finding may point to an expanding intersection between shopping and SMA, as an increasing number of social media platforms host easily accessible product and service advertising channels (e.g., Facebook property and car selling/marketing groups, Instagram shopping; Rose & Dhandayudham, 2014 ). In turn, the desire to shop may prompt a desire to find these services online, share shopping endeavors with others or find deals one can only access through social media, creating a reciprocal effect ( Rose & Dhandayudham, 2014 ). This possibility aligns with previous studies assuming reciprocal addictive co-occurrences ( Tullett-Prado et al., 2021 ). The relationship might also be exacerbated by shared causal factors underpinning addictions in general, such as one’s drive for immediate gratification and/or impulsive tendencies ( Andreassen et al., 2016 ; Niedermoser et al., 2021 ). Although such interpretations remain to be tested, the strong SMA and shopping addiction link evidenced here suggests that clinicians should closely examine the shopping behaviors of those suffering from SMA behaviours and, if comorbidity is detected, address both addictions concurrently ( Grant et al., 2010 , Miller et al., 2019 ). Conclusively, despite some studies suggesting a distinction between technological, and especially internet-related (e.g., SMA, internet gaming), addictions and other behavioral addictions ( Gomez et al., 2022 , Zarate et al., 2022 ), the current study’s high risk SMA profile associations do not appear to differentiate based on the technological/internet nature of the other addictions involved.

Secondly, the results suggest a novel hierarchical list of the types of addictions related to the higher SMA risk profile. While previous research has established links between various addictive behaviors and SMA (i.e., gaming and SMA; Wang et al., 2015 ), to the best of the authors’ knowledge these have never before been examined simultaneously, allowing their comparison/ranking. Therefore, our findings may allow for more accurate predictions about the addictive comorbidities of SMA, aiding in SMA’s assessment and treatment. For example, internet, shopping, gambling, gaming and sex addictions were all shown to associate more strongly with the high risk SMA profile than exercise and substance-related addictive behaviors ( King et al., 2014 ; Gainsbury et al., 2016a ; Gainsbury et al., 2016b ; Rose and Dhandayudham, 2014 , Kamaruddin et al., 2018 , Leung, 2014 ). Thus, clinicians working with those with SMA may wish to screen for gaming and sex addictions. Regardless of the underlying causes, this hierarchy highlights the likelihood of one addiction precipitating and perpetuating another in a cyclical manner, guiding assessment, prevention and intervention priorities for concurrent addictions.

Lastly, these results indicate a lower relevance of the high risk SMA profile to exercise/substance addictive behaviors. Considering excessive exercise, our study reinforces literature indicating decreased physical activity among those with SMA and problematic internet users in general ( Anderson et al., 2017 , Duradoni et al., 2020 ). Naturally, those suffering from SMA behaviours spend large amounts of time sedentary in front of a screen, precluding excessive physical activity. Similarly, the lack of a significant relationship between tobacco abuse and SMA has also been identified previously, perhaps due to the cultural divide between social media and smoking in terms of their acceptance by wider society and the difference in their users ( Spilkova et al., 2017 ). Contrary to expectations, there were weak/negligible associations between the high SMA risk profile and substance and alcohol abuse behaviours. This finding contradicts current knowledge supporting their frequent comorbidity ( Grant et al., 2010 , Spilkova et al., 2017 ; Winpenny et al., 2014 ). It may potentially be explained by individual differences between these users: while one can assume many traits are shared between those vulnerable to substances and those vulnerable to SMA, these traits may be expressed differently. For example, despite narcissism being a common addiction risk factor, its predictive power is mediated by reward sensitivity in SMA, whereas in alcoholism and substance abuse no such relationship exists ( Lyvers et al., 2019 ). Perhaps the constant dopamine rewards and the addictive reward schedule of social media target this vulnerability in a way that alcohol does not. Overall, one could assume that the associations between SMA and less “traditionally” viewed (i.e., substance-related; Gomez et al., 2022 ) addictions deserve more attention. Thus, future research is recommended.

4.3. Limitations and future direction

The current findings need to be considered in light of various limitations. Firstly, limitations related to the cross-sectional, age-specific and self-reported survey data are present. These methodological restrictions do not allow for conclusions regarding the longitudinal and/or causal associations between different addictions, nor for generalization of the findings to different age groups, such as adolescents. Furthermore, the self-report questionnaires employed may accommodate subjectivity biases (e.g., subjective and/or false memory recollections; Hoerger & Currell, 2012 ; Sun & Zhang, 2020 ). The latter risk is reinforced by the non-inclusion of social desirability subscales in the current study, posing obstacles to ensuring participant responses are accurate.

Additionally, there is a conceptual overlap between SMA and Internet Addiction (IA), which operates as an umbrella construct inclusive of all online addictions (i.e., irrespective of the aspect of the Internet being abused; Anderson et al., 2017 , Savci and Aysan, 2017 ). Thus, caution is warranted when interpreting the association between the SMA profiles and IA, as SMA may constitute a specific subtype included under the IA umbrella ( Savci & Aysan, 2017 ). However, one should also consider that: (a) SMA, as a particular IA subtype, is not identical to IA ( Pontes & Griffiths, 2014 ); and (b) recent findings show that IA and addictive behaviours related to specific internet applications, such as SMA, could correlate with different types of electroencephalogram (EEG) activity, suggesting their neurophysiological distinction (e.g., gaming disorder patients experience raised delta and theta activity and reduced beta activity, while internet addiction patients experience raised gamma and reduced beta and delta activity; Burleigh et al., 2020 ). Overall, these points advocate in favour of a careful consideration of the associations between the SMA profiles and IA.

Finally, the role of demographic differences related to one’s gender and age, which have been shown to mediate the relationship between social media engagement and symptoms of other psychiatric disorders ( Andreassen et al., 2016 ), was not attended to here.

Thus, considering the present findings and their limitations, future studies should focus on a number of key avenues: (1) achieving a more granular understanding of SMA’s associations with comorbid addictions via case study or longitudinal research (e.g., cross-lagged designs); (2) further clarifying the nature of the experience of SMA symptoms; (3) investigating the link between shopping addiction and SMA, as well as potential interventions that target both of these addictions simultaneously; and (4) attending to gender and age differences related to the different SMA risk profiles, as well as how these may associate with other addictions.

5. Conclusion

The present study bears significant implications for the way that SMA behaviours are assessed among adults in the community and subsequently addressed in adult clinical populations. By profiling the ways in which SMA symptoms are experienced, three groups of adult social media users, differing in the reported intensity of their SMA symptoms, were revealed. These comprised the ‘low’ (52.4 %), ‘moderate’ (33.6 %) and ‘high’ (14 %) SMA risk profiles. Membership of the high SMA risk profile was strongly related to increased rates of reported internet and shopping related addictive behaviours, moderately associated with gaming, gambling and sex related addictive behaviours, and weakly associated with alcohol, exercise and drug related addictive behaviours, to the point that the latter associations were negligible at most. These results enable a better understanding of those experiencing higher SMA behaviours, and introduce a risk hierarchy of SMA-addiction comorbidities that needs to be taken into consideration when assessing and/or treating those suffering from SMA symptoms. Specifically, SMA and its potential addictive behaviour comorbidities may be addressed with psychoeducation and risk management techniques in the context of SMA relapse prevention and intervention plans, with a greater emphasis on shopping and general internet addictive behaviours. Regarding epidemiological implications, the inclusion of 14 % of the sample in the high SMA risk profile implies that while social media use can be a risky experience, it should not be over-pathologized. More importantly, and provided that the present findings are reinforced by other studies, SMA awareness campaigns might need to be introduced, while regulatory policies should concurrently address the risk of multiple addictions among those suffering from SMA behaviours.

Note 1: Firstly, results were compared across all converged models. In brief, the AIC and BIC are measures of prediction error which penalize goodness of fit by the number of parameters to prevent overfitting; models with lower scores are deemed better fitting ( Tein et al., 2013 ). Of the 16 possible models, the parameterization with the most consistently low AICs and BICs across models with 1–8 profiles was chosen, eliminating 8 of the possible models. Subsequently, the remaining models were more closely examined through tidyLPA using the compare solutions command, with the BLMR operating as a direct comparison between two models (i.e., the model tested and a similar model with one profile fewer) on their relative fit using likelihood ratios. A BLMR-based p value is obtained for each comparison pair, with lower p values corresponding to greater fit of the more complex model (i.e., if the BLMR p >.05, the model with the higher number of profiles should be rejected; Tein et al., 2013). Entropy is an estimate of the probability that any one individual is correctly allocated to their profile; it ranges from 0 to 1, with higher scores corresponding to a better model ( Tein et al., 2013 ; Larose et al., 2016 ). Finally, the N_min represents the minimum proportion of sample participants in any one profile and aids in determining the interpretability/parsimony of a model: if N_min is 0, then there are one or more profiles in the model empty of members, and thus the interpretability and parsimony of the model are reduced ( CRAN, 2021 ). These differing fit indices were weighed against each other in order to identify the best fitting model (Akogul & Erisoglu, 2017). This best fitting model was subsequently applied to the datasheet, and the individual profiles were then examined through descriptive statistics in order to identify their characteristics.
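For reference, these indices have standard closed forms (textbook definitions, not spelled out in the original), where k is the number of estimated parameters, N the sample size, L-hat the maximized likelihood, K the number of profiles and p-hat_ik the posterior probability that case i belongs to profile k:

```latex
\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad
\mathrm{BIC} = k\ln N - 2\ln\hat{L}, \qquad
E_K = 1 - \frac{\sum_{i=1}^{N}\sum_{k=1}^{K} -\hat{p}_{ik}\,\ln\hat{p}_{ik}}{N\ln K}
```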

Note 2: With regards to the assumptions of the LPA model, as a non-parametric technique, no assumptions were made regarding the distribution of the data. With regards to the subsequent ANOVA analyses, two assumptions were made as to the nature of the distribution: homogeneity of variances and normality. Thus, the distribution of the data was assessed via Jamovi, examining skewness and kurtosis for all measures employed in the ANOVA analyses. Skewness ranged from 0.673 to 2.49 for all variables bar the OGD-Q, which had a skewness of 3.45. Kurtosis ranged from 0.11 to 6 for all variables bar the OGD-Q, which had a kurtosis of 13.9. Thus, all measures except the OGD-Q sat within the respective acceptable ranges of −3 to +3 and −10 to +10 recommended by Brown and Moore (2012).

Dr Vasileios Stavropoulos received funding by:

The Victoria University, Early Career Researcher Fund ECR 2020, number 68761601.

The Australian Research Council, Discovery Early Career Researcher Award, 2021, number DE210101107.

Ethical Standards – Animal Rights

All procedures performed in the study involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors. The present study was approved by the Human Ethics Research Committee of Victoria University (Australia).

Informed consent

Informed consent was obtained from all individual participants included in the study.

Confirmation statement

Authors confirm that this paper has not been either previously published or submitted simultaneously for publication elsewhere.

Publication

Authors confirm that this paper is not under consideration for publication elsewhere. However, the authors do disclose that the paper has been considered elsewhere, advanced to the pre-print stage and then withdrawn.

Authors assign copyright or license the publication rights in the present article.

Availability of data and materials

Data is deposited as a supplementary file with the current document.

CRediT authorship contribution statement

Deon Tullett-Prado: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation. Vasileios Stavropoulos: Supervision, Resources, Funding acquisition, Project administration. Rapson Gomez: Supervision, Resources. Jo Doley: Supervision, Resources.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Biographies

Deon Tullett-Prado: Deon Tullett-Prado is a PhD candidate and emerging researcher in the area of behavioral addictions and in particular Internet Gaming Disorder. His expertise involves advanced statistical analysis skills and innovative techniques regarding population profiling.

Dr Vasileios Stavropoulos: Dr Vasileios Stavropoulos is a member of the Australian Psychological Society (APS) and a registered psychologist endorsed in Clinical Psychology with the Australian Health Practitioner Regulation Authority (AHPRA). Vasileios' research interests include the areas of Behavioral Addictions and Developmental Psychopathology. In that context, Vasileios is a member of the European Association of Developmental Psychology (EADP) and the EADP Early Researchers Union. Considering his academic collaborations, Vasileios maintains his research ties with the Athena Studies for Resilient Adaptation Research Team of the University of Athens, the International Gaming Centre of Nottingham Trent University, Palo Alto University and the Korean Advanced Institute of Science and Technology. Vasileios has received the ARC DECRA award 2021.

Dr Rapson Gomez: Rapson Gomez is a professor of clinical psychology who formerly directed clinical training at the School of Psychology, University of Tasmania (Hobart, Australia). He now focuses on research using innovative statistical techniques, with a particular focus on ADHD, biological methods of personality, psychometrics and cyberpsychology.

Dr Jo Doley: A lecturer at Victoria University, Dr Doley has a keen interest in the social aspects of body image and eating disorders. With expertise in a variety of quantitative methodologies, including experimental studies, Delphi studies and systematic reviews, Dr Doley has been conducting research into the ways that personal characteristics like sexual orientation and gender may impact body image. Furthermore, in conjunction with the cyberpsychology group at VU, they have been building new expertise on digital media and its potential addictive effects.

Appendix A. Supplementary material

Supplementary data to this article can be found online at https://doi.org/10.1016/j.abrep.2023.100479 .

References

  • Anderson, E. L., Steen, E., & Stavropoulos, V. (2017). Internet use and problematic Internet use: A systematic review of longitudinal research trends in adolescence and emergent adulthood. International Journal of Adolescence and Youth, 22(4), 430–454. doi: 10.1080/02673843.2016.1227716
  • Andreassen, C. S., Torsheim, T., Brunborg, G. S., & Pallesen, S. (2012). Development of a Facebook addiction scale. Psychological Reports, 110(2), 501–517. doi: 10.2466/02.09.18.PR0.110.2.501-517
  • Andreassen, C. S., Billieux, J., Griffiths, M. D., Kuss, D. J., Demetrovics, Z., Mazzoni, E., et al. (2016). The relationship between addictive use of social media and video games and symptoms of psychiatric disorders: A large-scale cross-sectional study. Psychology of Addictive Behaviors, 30(2), 252. doi: 10.1037/adb0000160
  • Bányai, F., Zsila, Á., Király, O., Maraz, A., Elekes, Z., Griffiths, M. D., et al. (2017). Problematic social media use: Results from a large-scale nationally representative adolescent sample. PLoS ONE, 12(1), e0169839.
  • Bodor, D., Tomić, A., Ricijaš, N., & Filipčić, I. (2016). Impulsiveness in alcohol addiction and pathological gambling. Alcoholism and Psychiatry Research: Journal on Psychiatric Research and Addictions, 52(2), 149–158. doi: 10.20471/apr.2016.52.02.05
  • Bouchillon, B. C. (2020). Social networking for interpersonal life: A competence-based approach to the rich get richer hypothesis. Social Science Computer Review. doi: 10.1177/0894439320909506
  • Brailovskaia, J., Truskauskaite-Kuneviciene, I., Kazlauskas, E., & Margraf, J. (2021). The patterns of problematic social media use (SMU) and their relationship with online flow, life satisfaction, depression, anxiety and stress symptoms in Lithuania and in Germany. Current Psychology, 1–12. doi: 10.1007/s12144-021-01711-w
  • Brown, R. I. F. (1993). Some contributions of the study of gambling to the study of other addictions. Gambling Behavior and Problem Gambling, 1, 241–272.
  • Brown, T. A. (2006). Confirmatory factor analysis for applied research. The Guilford Press.
  • Burleigh, T. L., Griffiths, M. D., Sumich, A., Wang, G. Y., & Kuss, D. J. (2020). Gaming disorder and internet addiction: A systematic review of resting-state EEG studies. Addictive Behaviors, 107, 106429. doi: 10.1016/j.addbeh.2020.106429
  • Calderaro, A. (2018). Social media and politics. In W. Outhwaite & S. Turner (Eds.), The SAGE Handbook of Political Sociology (Two Volume Set). SAGE Publications.
  • Chamberlain, S. R., & Grant, J. E. (2019). Behavioral addictions. In L. F. Fontenelle & M. Yücel (Eds.), A transdiagnostic approach to obsessions, compulsions and related phenomena. Cambridge University Press.
  • Chamberlain, S. R., Lochner, C., Stein, D. J., Goudriaan, A. E., van Holst, R. J., Zohar, J., et al. (2016). Behavioural addiction—a rising tide? European Neuropsychopharmacology, 26(5), 841–855. doi: 10.1016/j.euroneuro.2015.08.013
  • Cheng, C., Ebrahimi, O. V., & Luk, J. W. (2022). Heterogeneity of prevalence of social media addiction across multiple classification schemes: Latent profile analysis. Journal of Medical Internet Research, 24(1), e27000.
  • Countrymeters. (2021). World population. Retrieved from https://rb.gy/v6cqlq
  • CRAN. (2021). Introduction to tidyLPA. https://cran.r-project.org/web/packages/tidyLPA/vignettes/Introduction_to_tidyLPA.html
  • DataReportal. (2021). Digital 2021 October global statshot report. Retrieved from https://datareportal.com/reports/digital-2021-october-global-statshot
  • Duradoni, M., Innocenti, F., & Guazzini, A. (2020). Well-being and social media: A systematic review of Bergen addiction scales. Future Internet, 12(2), 24. doi: 10.3390/fi12020024
  • Enrique, E. (2010). Addiction to new technologies and to online social networking in young people: A new challenge. Adicciones, 22(2). doi: 10.20882/adicciones.196
  • Etter, J. F., Le Houezec, J., & Perneger, T. (2003). A self-administered questionnaire to measure dependence on cigarettes: The Cigarette Dependence Scale. Neuropsychopharmacology, 28, 359–370. doi: 10.1038/sj.npp.1300030
  • Grant, J. E., Potenza, M. N., Weinstein, A., & Gorelick, D. A. (2010). Introduction to behavioral addictions. The American Journal of Drug and Alcohol Abuse, 36(5), 233–241. doi: 10.3109/00952990.2010.491884
  • Griffiths, M. D., & Kuss, D. (2017). Adolescent social media addiction (revisited). Education and Health, 35(3), 49–52.
  • Gomez, R., Stavropoulos, V., Brown, T., & Griffiths, M. D. (2022). Factor structure of ten psychoactive substance addictions and behavioural addictions. Psychiatry Research, 114605. doi: 10.1016/j.psychres.2022.114605
  • Gainsbury, S. M., King, D. L., Russell, A. M., Delfabbro, P., Derevensky, J., & Hing, N. (2016a). Exposure to and engagement with gambling marketing in social media: Reported impacts on moderate-risk and problem gamblers. Psychology of Addictive Behaviors, 30(2), 270. doi: 10.1037/adb0000156
  • Gainsbury, S. M., Delfabbro, P., King, D. L., & Hing, N. (2016b). An exploratory study of gambling operators’ use of social media and the latent messages conveyed. Journal of Gambling Studies, 32(1), 125–141. doi: 10.1007/s10899-015-9525-2
  • Gomez, R., Stavropoulos, V., Beard, C., & Pontes, H. M. (2019). Item response theory analysis of the recoded Internet Gaming Disorder Scale–Short-Form (IGDS9-SF). International Journal of Mental Health and Addiction, 17(4), 859–879. doi: 10.1007/s11469-018-9890-z
  • González-Cabrera, J., Machimbarrena, J. M., Beranuy, M., Pérez-Rodríguez, P., Fernández-González, L., & Calvete, E. (2020). Design and measurement properties of the Online Gambling Disorder Questionnaire (OGD-Q) in Spanish adolescents. Journal of Clinical Medicine, 9, 120.
  • Gorwa, R., & Guilbeault, D. (2020). Unpacking the social media bot: A typology to guide research and policy. Policy & Internet, 12(2), 225–248. doi: 10.1002/poi3.184
  • Haand, R., & Shuwang, Z. (2020). The relationship between social media addiction and depression: A quantitative study among university students in Khost, Afghanistan. International Journal of Adolescence and Youth, 25(1), 780–786. doi: 10.1080/02673843.2020.1741407
  • Heffer, T., Good, M., Daly, O., MacDonell, E., & Willoughby, T. (2019). The longitudinal association between social-media use and depressive symptoms among adolescents and young adults: An empirical reply to Twenge et al. (2018). Clinical Psychological Science, 7(3), 462–470. doi: 10.1177/216770261881272
  • Hoerger, M., & Currell, C. (2012). Ethical issues in Internet research. In S. J. Knapp, M. C. Gottlieb, M. M. Handelsman, & L. D. VandeCreek (Eds.), APA handbook of ethics in psychology, Vol. 2: Practice, teaching, and research (pp. 385–400). American Psychological Association.
  • Hsu, M. H., Chang, C. M., Lin, H. C., & Lin, Y. W. (2015). Determinants of continued use of social media: The perspectives of uses and gratifications theory and perceived interactivity. Information Research.
  • Kamaruddin, N., Rahman, A. W. A., & Handiyani, D. (2018). Pornography addiction detection based on neurophysiological computational approach. Indonesian Journal of Electrical Engineering and Computer Science, 10(1), 138–145.
  • Kim, H. Y. (2013). Statistical notes for clinical researchers: Assessing normal distribution (2) using skewness and kurtosis. Restorative Dentistry & Endodontics, 38(1), 52–54.
  • King, D. L., Delfabbro, P. H., Kaptsis, D., & Zwaans, T. (2014). Adolescent simulated gambling via digital and social media: An emerging problem. Computers in Human Behavior, 31, 305–313. doi: 10.1016/j.chb.2013.10.048
  • Lanza, S. T., & Cooper, B. R. (2016). Latent profile analysis for developmental research. Child Development Perspectives, 10(1), 59–64. doi: 10.1111/cdep.12163
  • Larose, C., Harel, O., Kordas, K., & Dey, D. K. (2016). Latent class analysis of incomplete data via an entropy-based criterion. Statistical Methodology, 32, 107–121. doi: 10.1016/j.stamet.2016.04.004
  • Leung, L. (2014). Predicting Internet risks: A longitudinal panel study of gratifications-sought, Internet addiction symptoms, and social media use among children and adolescents. Health Psychology and Behavioral Medicine, 2(1), 424–439. doi: 10.1080/21642850.2014.902316
  • Luo, T., Qin, L., Cheng, L., Wang, S., Zhu, Z., Xu, J., et al. (2021). Determination of the cut-off point for the Bergen Social Media Addiction Scale (BSMAS): Diagnostic contribution of the six criteria of the components model of addiction for social media disorder. Journal of Behavioral Addictions. doi: 10.1556/2006.2021.00025
  • Lyvers, M., Narayanan, S. S., & Thorberg, F. A. (2019). Disordered social media use and risky drinking in young adults: Differential associations with addiction-linked traits. Australian Journal of Psychology, 71(3), 223–231. doi: 10.1111/ajpy.12236
  • Mabić, M., Gašpar, D., & Bošnjak, L. L. (2020). Social media and employment: Students’ vs. employers’ perspective. Proceedings of the ENTRENOVA-ENTerprise REsearch InNOVAtion Conference, 6(1), 482–492.
  • Marmet, S., Studer, J., Wicki, M., Bertholet, N., Khazaal, Y., & Gmel, G. (2019). Unique versus shared associations between self-reported behavioral addictions and substance use disorders and mental health problems: A commonality analysis in a large sample of young Swiss men. Journal of Behavioral Addictions, 8(4), 664–677. doi: 10.1556/2006.8.2019.70
  • Martinac, M., Karlović, D., & Babić, D. (2019). Alcohol and gambling addiction. In Neuroscience of alcohol (pp. 529–535). Academic Press.
  • Meshi, D., Elizarova, A., Bender, A., & Verdejo-Garcia, A. (2019). Excessive social media users demonstrate impaired decision making in the Iowa Gambling Task. Journal of Behavioral Addictions, 8(1), 169–173. doi: 10.1556/2006.7.2018.138
  • Miller, W. R., Forcehimes, A. A., & Zweben, A. (2019). Treating addiction: A guide for professionals. Guilford Publications.
  • Moretta, T., Buodo, G., Demetrovics, Z., & Potenza, M. N. (2022). Tracing 20 years of research on problematic use of the internet and social media: Theoretical models, assessment tools, and an agenda for future work. Comprehensive Psychiatry, 112. doi: 10.1016/j.comppsych.2021.152286
  • Mourão, R. R., & Kilgo, D. K. (2021). Black Lives Matter coverage: How protest news frames and attitudinal change affect social media engagement. Digital Journalism, 1–21. doi: 10.1080/21670811.2021.1931900
  • Nguyen, M. H. (2021). The impact of social media on students’ lives (case study). LAB University of Applied Sciences.
  • Niedermoser, D. W., Petitjean, S., Schweinfurth, N., Wirz, L., Ankli, V., Schilling, H., et al. (2021). Shopping addiction: A brief review. Practice Innovations. doi: 10.1037/pri0000152
  • Obar, J. A., & Wildman, S. S. (2015). Social media definition and the governance challenge: An introduction to the special issue. Telecommunications Policy, 39(9), 745–750. doi: 10.1016/j.telpol.2015.07.014
  • Panova, T., & Carbonell, X. (2018). Is smartphone addiction really an addiction? Journal of Behavioral Addictions, 7(2), 252–259. doi: 10.1556/2006.7.2018.49
  • Park, C., Jun, J., & Lee, T. (2015). Consumer characteristics and the use of social networking sites: A comparison between Korea and the US. International Marketing Review, 32(3/4), 414–437. doi: 10.1108/IMR-09-2013-0213
  • Pontes, H. M., & Griffiths, M. D. (2014). Internet addiction disorder and internet gaming disorder are not the same. Journal of Addiction Research & Therapy, 5(4). doi: 10.4172/2155-6105.1000e124
  • Pontes, H. M., & Griffiths, M. D. (2015). Measuring DSM-5 Internet Gaming Disorder: Development and validation of a short psychometric scale. Computers in Human Behavior, 45, 137–143. doi: 10.1016/j.chb.2014.12.006
  • Pontes, H. M., & Griffiths, M. D. (2016). The development and psychometric properties of the Internet Disorder Scale–Short Form (IDS9-SF). Addicta: The Turkish Journal on Addictions, 3(2). doi: 10.1016/j.addbeh.2015.09.003
  • Prinstein, M. J., Nesi, J., & Telzer, E. H. (2020). Commentary: An updated agenda for the study of digital media use and adolescent development – future directions following Odgers & Jensen (2020). Journal of Child Psychology and Psychiatry, 61(3), 349–352. doi: 10.1111/jcpp.13190
  • Rosenberg, J. M., Beymer, P. N., Anderson, D. J., Van Lissa, C. J., & Schmidt, J. A. (2019). tidyLPA: An R package to easily carry out latent profile analysis (LPA) using open-source or commercial software. Journal of Open Source Software, 3(30), 978. doi: 10.21105/joss.00978
  • Rose, S., & Dhandayudham, A. (2014). Towards an understanding of Internet-based problem shopping behaviour: The concept of online shopping addiction and its proposed predictors. Journal of Behavioral Addictions, 3(2), 83–89. doi: 10.1556/JBA.3.2014.003
  • Ryan, J. M. (2021). Timeline of COVID-19. In COVID-19: Global pandemic, societal responses, ideological solutions (pp. xiii–xxxii). Routledge.
  • Savci M., Aysan F. Technological addictions and social connectedness: Predictor effect of internet addiction, social media addiction, digital game addiction and smartphone addiction on social connectedness. Dusunen Adam: Journal of Psychiatry & Neurological Sciences. 2017; 30 (3):202–216. doi: 10.5350/DAJPN2017300304. [ CrossRef ] [ Google Scholar ]
  • Saud M., Mashud M.I., Ida R. Usage of social media during the pandemic: Seeking support and awareness about COVID-19 through social media platforms. Journal of Public Affairs. 2020; 20 (4):e2417. [ Google Scholar ]
  • Saunders J.B., Aasland O.G., Babor T.F., La Fuente De, Grant M. Development of the alcohol use disorders identification test (AUDIT): WHO collaborative project on early detection of persons with harmful alcohol consumption-II. Addiction. 1993; 88 (6):791–804. doi: 10.1111/j.1360-0443.1993.tb02093.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schivinski B., Brzozowska-Woś M., Buchanan E.M., Griffiths M.D., Pontes H.M. Psychometric assessment of the internet gaming disorder diagnostic criteria: An item response theory study. Addictive Behaviors Reports. 2018; 8 :176–184. doi: 10.1016/j.abrep.2018.06.004. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Skinner H.A. The drug abuse screening test. Addictive Behaviors. 1982; 7 (4):363–371.https. doi: 10.1016/0306-4603(82)90005-3. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Smith T., Short A. Needs affordance as a key factor in likelihood of problematic social media use: Validation, Latent Profile Analysis and comparison of TikTok and Facebook problematic use measures. Addictive Behaviors. 2022; 107259 doi: 10.1016/j.addbeh.2022.107259. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Spilkova J., Chomynova P., Csemy L. Predictors of excessive use of social media and excessive online gaming in Czech teenagers. Journal of Behavioral Addictions. 2017; 6 (4):611–619. doi: 10.1556/2006.6.2017.064. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Starcevic V. Behavioural addictions: A challenge for psychopathology and psychiatric nosology. Australian & New Zealand Journal of Psychiatry. 2016; 50 (8):721–725. doi: 10.1177/0004867416654009. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sun Y., Zhang Y. A review of theories and models applied in studies of social media addiction and implications for future research. Addictive Behaviors. 2020; 106699 doi: 10.1016/j.addbeh.2020.106699. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Szabo A., Pinto A., Griffiths M.D., Kovácsik R., Demetrovics Z. The psychometric evaluation of the Revised Exercise Addiction Inventory: Improved psychometric properties by changing item response rating. Journal of Behavioral Addictions. 2019; 8 (1):157–161. doi: 10.1556/2006.8.2019.06. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tein J.Y., Coxe S., Cham H. Statistical power to detect the correct number of classes in latent profile analysis. Structural equation modeling: a multidisciplinary journal. 2013; 20 (4):640–657. doi: 10.1080/10705511.2013.824781. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tong L.I., Saminathan R., Chang C.W. Uncertainty assessment of non-normal emission estimates using non-parametric bootstrap confidence intervals. Journal of Environmental Informatics. 2016; 28 (1):61–70. doi: 10.3808/jei.201500322. [ CrossRef ] [ Google Scholar ]
  • Tullett-Prado D., Stavropoulos V., Mueller K., Sharples J., Footitt T.A. Internet Gaming Disorder profiles and their associations with social engagement behaviours. Journal of Psychiatric Research. 2021; 138 :393–403. doi: 10.1016/j.jpsychires.2021.04.037. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Van den Eijnden R.J., Lemmens J.S., Valkenburg P.M. The social media disorder scale. Computers in Human Behavior. 2016; 61 :478–487. doi: 10.1177/0004867416654009. [ CrossRef ] [ Google Scholar ]
  • Wang C.W., Ho R.T., Chan C.L., Tse S. Exploring personality characteristics of Chinese adolescents with internet-related addictive behaviors: Trait differences for gaming addiction and social networking addiction. Addictive Behaviors. 2015; 42 :32–35. doi: 10.1016/j.addbeh.2014.10.039. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wegmann E., Billieux J., Brand M. Mental health in a digital world . Academic Press; 2022. Internet-use disorders: A theoretical framework for their conceptualization and diagnosis; pp. 285–305. [ Google Scholar ]
  • Winpenny E.M., Marteau T.M., Nolte E. Exposure of children and adolescents to alcohol marketing on social media websites. Alcohol and Alcoholism. 2014; 49 (2):154–159. doi: 10.1093/alcalc/agt174. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zarate D., Ball M., Montag C., Prokofieva M., Stavropoulos V. Unravelling the Web of Addictions: A Network Analysis Approach. Addictive Behaviors Reports. 2022; 100406 doi: 10.1016/j.abrep.2022.100406. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zhong B., Huang Y., Liu Q. Mental health toll from the coronavirus: Social media usage reveals Wuhan residents’ depression and secondary trauma in the COVID-19 outbreak. Computers in Human Behavior. 2020; 114 doi: 10.1016/j.chb.2020.106524. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zilberman N., Yadid G., Efrati Y., Neumark Y., Rassovsky Y. Personality profiles of substance and behavioral addictions. Addictive Behaviors. 2018; 82 :174–181. doi: 10.1016/j.addbeh.2018.03.007. [ PubMed ] [ CrossRef ] [ Google Scholar ]


New Report: To Reduce Online Abuse, Social Media Platforms Must Overhaul Flawed System for Users to Report Harassment and Threats

PEN America and Meedan Recommend Product Design Fixes to Reduce Online Abuse while Safeguarding Free Expression

FOR IMMEDIATE RELEASE

(NEW YORK)—Millions of social media users face harmful harassment, intimidation, and threats to their free expression online but encounter a “deeply flawed” reporting system that fails at every level to safeguard them and hold abusers to account, according to a new report by global nonprofits PEN America and Meedan.

In exposing these failures by Facebook, Twitter, TikTok, Instagram, YouTube and other social platforms, the report outlines a series of product design fixes that would help make reporting abuse online more transparent, efficient, equitable, and effective.

The report, Shouting into the Void: Why Reporting Abuse to Social Media Platforms is So Hard and How to Fix It, highlights the dangerous repercussions of such abuse for social media users, especially for women, people of color, and LGBTQ+ people, as well as journalists, writers and creators, all of whom face more severe levels of abuse online than the general population. Given how effective it is in stifling free expression, online abuse is often deployed to suppress dissent and undermine press freedom.

Viktorya Vilk, director for digital safety and free expression at PEN America and a co-author of the report, said: “The mechanisms for reporting abuse are deeply flawed and further traumatize and disempower those facing abuse. Protecting users should not be dependent on the decision of a single executive or platform. We think our recommendations can guide a collective response to reimagine reporting mechanisms—that is, if social media platforms are willing to take up the challenge to empower users and reduce the chilling effects of online abuse.”

Kat Lo, Meedan’s content moderation lead and co-author, said: “Hateful slurs, physical threats, sexual harassment, cyber mobs, and doxxing (maliciously exposing private information, such as home addresses) can lead to serious consequences, with individuals reporting anxiety, depression, and even self-harm and suicidal thoughts. Abuse can put people at physical risk, leading to offline violence, and drive people out of professions, depriving them of their livelihood. Reporting mechanisms are one of the few options that targets of abuse have to seek accountability and get redress—when blocking and muting simply aren’t enough.”

The two organizations drew on years of work training and supporting tens of thousands of writers, journalists, artists, and creators who have faced online harassment. Researchers for PEN America, which champions free expression, and Meedan, which builds programs and technology to strengthen information environments, centered their research and recommendations on the experiences of those disproportionately attacked online for their identities and professions: writers, journalists, content creators, and human rights activists, and especially women, LGBTQ+ individuals, people of color, and individuals who belong to religious or ethnic minorities.

Interviews were conducted with nearly two dozen creative and media professionals, most based in the United States, from 2021 to April 2023.

Author and YouTube creator Elisa Hansen described the difficult process of reporting the flood of abusive comments she sees in response to videos she releases on the platform: “Sometimes there are tens of thousands of comments to sift through. If I lose my place, or the page reloads, I have to start at the top again (where dozens of new comments have already been added), trying to spot an ugly needle in a blinding wall-of-text haystack: a comment telling us we deserve to be raped and should just kill ourselves. Once I report that, the page has again refreshed, and I’m ready to tear my hair out because I cannot find where I left off and have to comb through everything again.”

She said: “It’s easy for people to say ‘just ignore the hate and harassment,’ but I can’t. If I want to keep the channel safe for the audience, the only way is to find every single horrible thing and report it. It’s bad enough how much that vicious negativity can depress or even frighten me, but that the moderation process makes me have to go through everything repeatedly and spend so much extra and wasted time makes it that much worse.”

While the report acknowledges recent modest improvements to reporting mechanisms, it also states that this course correction by social platforms has been fragile, insufficient, and inconsistent. The report notes, for example, that Twitter had gradually been introducing more advanced reporting features, but that progress ground to a halt once Elon Musk bought the platform and, among other actions, drastically reduced the Trust and Safety staff overseeing content moderation and user reporting. “This pattern is playing out across the industry,” the report states.

The report found social media platforms are failing to protect and support their users in part because the mechanisms to report abuse are often “profoundly confusing, time-consuming, frustrating, and disappointing.”

The findings in the report are further supported by polls. A Pew Research Center poll found that nearly 80 percent of respondents said social media companies were doing “an only fair or poor job” of addressing online harassment. And a 2021 study by the Anti-Defamation League and YouGov found that 78 percent of Americans want companies to make it easier to report hateful content and behavior.

Because people who are harassed online often experience trauma and other forms of psychological harm, a troublesome reporting process can feel all the more frustrating.

“The experience of using reporting systems produces further feelings of helplessness. Rather than giving people a sense of agency, it compounds the problem,” said Claudia Lo, a design researcher at Wikimedia.

The research uncovered evidence that users often do not understand how reporting actually works, including where they are in the process, what to expect after they submit a report, and who will see their report. Users often do not know if a decision has been reached regarding their report or why. They are consistently confused about how platforms define specific harmful tactics and therefore struggle to figure out if a piece of content violates the rules. Few reporting systems currently take into account coordinated or repeated harassment, leaving users with no choice but to report dozens or even hundreds of abusive comments and messages piecemeal.
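
To make that design gap concrete, here is a minimal sketch, in Python, of what a single batch report covering a coordinated campaign could carry. It is purely illustrative: no platform discussed in the report exposes an interface like this, and every field name below is a hypothetical stand-in rather than a real API.

```python
# Hypothetical sketch only (Python 3.9+): a "batch report" structure that
# would let a target flag a coordinated campaign once, instead of filing
# dozens of piecemeal reports. All field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class BatchAbuseReport:
    reporter_id: str
    policy_category: str                                   # e.g. "targeted harassment"
    content_ids: list[str] = field(default_factory=list)   # many items, one report
    is_coordinated: bool = True                            # marks a campaign, not a one-off
    context_note: str = ""                                 # free-text context for moderators

report = BatchAbuseReport(
    reporter_id="user-123",
    policy_category="targeted harassment",
    content_ids=["comment-9917", "comment-9922", "dm-4410"],
    context_note="Same slur from multiple accounts within one hour.",
)
print(f"{len(report.content_ids)} abusive items filed in a single report")
```

Grouping related items in one submission would also give moderators the cross-account context that piecemeal reports strip away.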

Mikki Kendall, an author and diversity consultant who writes about race, feminism, and police violence, points out that some platforms that say they prohibit “hate speech” provide “no examples and no clarity on what counts as hate speech.” Natalie Wynn, creator of the YouTube channel Contrapoints, explained: “If there is a comment calling a trans woman a man, is that hate speech or is it harassment? I don’t know. I kind of don’t know what to click and so I don’t do it, and just block.”

The report was supported through the generosity of grants from the Democracy Fund and Craig Newmark Philanthropies.

About PEN America

PEN America stands at the intersection of literature and human rights to protect open expression in the United States and worldwide. We champion the freedom to write, recognizing the power of the word to transform the world. Our mission is to unite writers and their allies to celebrate creative expression and defend the liberties that make it possible. To learn more visit PEN.org

About Meedan

Meedan is a global technology not-for-profit that builds software and programmatic initiatives to strengthen journalism, digital literacy, and accessibility of information online and off. We develop open-source tools for creating and sharing context on digital media through annotation, verification, archival, and translation.

Contact: Suzanne Trimel, [email protected], 201-247-5057



Donald Trump fact-check: 2024 RNC speech in Milwaukee full of falsehoods about immigrants, economy


Former President Donald Trump raises his fist July 18, 2024, during his speech at the Republican National Convention in Milwaukee. (AP)

MILWAUKEE — Former President Donald Trump closed the Republican National Convention by accepting the presidential nomination and offering a speech that began somber and turned combative.

First, he recounted surviving an assassination attempt five days earlier in Butler, Pennsylvania.

"You’ll never hear it from me again a second time, because it’s too painful to tell," Trump told a hushed audience. "I stand before you in this arena only by the grace of Almighty God." When Trump said, "I'm not supposed to be here tonight," the audience chanted, "Yes you are! Yes you are!" Onstage, Trump  kissed the firefighter’s uniform of Corey Comperatore, whom Trump’s would-be assassin killed.

After about 20 minutes, Trump’s speech shifted. He countered Democrats’ claims that he endangers democracy, praised the federal judge who dismissed the classified documents case against him and called the legal charges "partisan witch hunts." 

Though he criticized the policies of his opponent, Democratic President Joe Biden, Trump said he’d avoid naming him. 

Trump occasionally offered conciliatory notes, but more often repeated questionable assertions we’ve fact-checked many times before. Here are some.

Immigrants are "coming from prisons, they’re coming from jails, they're coming from mental institutions and insane asylums." 

When Trump said earlier this year that Biden is letting in "millions" of immigrants from jails and mental institutions, we rated it Pants on Fire. Immigration officials arrested about 103,700 noncitizens with criminal convictions (whether in the U.S. or abroad) from fiscal years 2021 to 2024, federal data shows. That accounts for people stopped at and between ports of entry.

Not everyone was let in. The term "noncitizens" includes people who may have legal immigration status in the U.S., but are not U.S. citizens.

The data reflects the people the federal government knows about, but it is not exhaustive. Immigration experts said that despite those data limitations, there is no evidence to support Trump’s statement. Many people in Latin American countries face barriers to mental health treatment, so if patients are coming to the U.S., they are probably coming from their homes, not psychiatric hospitals.

"Caracas, Venezuela, really dangerous place, but not anymore. Because in Venezuela, crime is down 72%"

Although Venezuelan government data is unreliable, some data from independent organizations shows that violent deaths have recently decreased, but not by 72%. From 2022 to 2023, violent deaths dropped by 25%, according to the independent Venezuelan Observatory of Violence. 

Criminologists attribute this decline to Venezuela’s poor economy and the government’s extrajudicial killings. They said there is no evidence that Venezuela’s government is emptying its prisons and sending criminals to the United States. 

El Salvador murders are down 70% "because they're sending their murderers to the United States of America."

There has been a significant drop in crime in El Salvador, but it is not because the country is sending prisoners to the U.S. 

According to data from El Salvador’s National Police, in 2023, the country reported a 70% drop in homicides compared with 2022, as Trump noted. 

But it’s been well reported — by the country’s government , international organizations and news organizations — that El Salvador’s President Nayib Bukele has aggressively cracked down on crime. There is no evidence that Bukele’s effort involves sending prisoners to the U.S.

El Salvador has been under a state of emergency, because of gang violence and high crime rates, since March 2022. On July 10, the Legislative Assembly voted to extend its use.

The order suspends "a range of constitutional rights, including the rights to freedom of association and assembly, to privacy in communications, and to be informed of the reason for arrest, as well as the requirement that anyone be taken before a judge within 72 hours," according to a Human Rights Watch report.

The state of emergency has led multiple international human rights groups and governments, including the U.S. , to condemn human rights abuses in El Salvador such as arbitrary killings, forced disappearances and torture. 

Trump claims Bukele is "sending all of his criminals, his drug dealers, his people that are in jails. He's sending them all to the United States." But El Salvador’s prison population has drastically increased in recent years, according to InSight Crime, a think tank focused on crime and security in the Americas.

In 2020, El Salvador’s prison population stood at around 37,000. In 2023, it was more than 105,000 — around 1.7% of the country’s population, InSight Crime said.

"Behind me and to the right was a large screen that was displaying a chart of border crossings under my leadership, the numbers were absolutely amazing."

As he recounted the story of his attempted assassination, Trump mentioned a chart of illegal border crossings from fiscal year 2012 to 2024. We fact-checked the false and misleading annotations on the chart.

For example, a red arrow on the chart claims to show when "Trump leaves office. Lowest illegal immigration in recorded history." But the arrow points to a decline in immigration encounters at the beginning of the coronavirus pandemic, when migration overall plummeted as nations imposed lockdowns. Trump left office nine months later, when illegal immigration encounters were on the rise.


Later in the RNC speech, Trump said, "Under my presidency, we had the most secure border."

That’s Mostly False. Illegal immigration during Trump’s administration was higher than it was during both of former President Barack Obama’s terms.

Illegal immigration between ports of entry at the U.S. southern border dropped in 2017, Trump’s first year in office, compared with previous years. But illegal immigration began to rise after that. It dropped again when the COVID-19 pandemic started and immigration decreased drastically worldwide.

In the months before Trump left office, as some pandemic travel restrictions eased, illegal immigration was rising again. A spike in migrants, especially unaccompanied minors, started in spring 2020 during the Trump administration and generally continued to climb each month.

It’s difficult to compare pre-COVID-19 data with data since, because of changes in data reporting. But, accounting for challenges in data comparisons, a PolitiFact review found an increase of 300% in illegal immigration from Trump’s first full month in office, February 2017, to his last full month, December 2020.

The jobs that are created under Biden, "107% of those jobs are taken by illegal aliens."

Mostly False.

This Republican talking point paints the Biden years as being better for foreign-born workers than native-born Americans. But it is wrong.

Since Biden took office in early 2021, the number of foreign-born Americans who are employed has risen by about 5.6 million. But over the same period, the number of native-born Americans employed has increased by almost 7.4 million.

The unemployment rate for native-born workers under Biden is comparable to what it was during the final two prepandemic years of Trump’s presidency.
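
A back-of-the-envelope check, using the article’s own rounded figures, shows why the claim fails. The Python below is just that arithmetic; the final line illustrates that a share above 100% is only possible over a cherry-picked window in which native-born employment fell (the negative number there is invented for illustration).

```python
# Arithmetic check using the article's rounded figures, in millions of jobs.
foreign_born_gain = 5.6  # rise in foreign-born employment since early 2021
native_born_gain = 7.4   # rise in native-born employment since early 2021

total_gain = foreign_born_gain + native_born_gain
share = foreign_born_gain / total_gain
print(f"Foreign-born share of net job growth: {share:.0%}")  # ~43%, not 107%

# A share above 100% can only arise if the other group's employment *fell*
# over the chosen window (the -0.4 here is hypothetical, for illustration):
assert foreign_born_gain / (foreign_born_gain + (-0.4)) > 1
```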

Trump: "There's an interesting statistic, the ears are the bloodiest part. If something happens with the ears, they bleed more than any other part of the body."

Mostly True. 

Trump said that in reference to the injury he sustained to the top of his right ear during the assassination attempt at his July 13 rally. 

Although the ears do bleed heavily, PolitiFact could not identify statistical evidence that they are the "bloodiest part" of the body.

The ear gets most of its blood from a branch of the external carotid artery. An injury to an artery is prone to heavier bleeding, according to a study published in the European Journal of Trauma and Emergency Surgery.

But other parts of the upper body might bleed more from an external injury, doctors said.

"The scalp is perhaps the most ‘bloody’ part of the body if injured or cut," Céline Gounder, a physician, senior fellow at KFF and editor-at-large for public health at KFF Health News, told PolitiFact in an email. "But, in general, the head/neck is the ‘bloodiest’ part of the body. The ear is part of that."

"​​An injury similar to what Trump sustained to the ear would bleed less if inflicted on a part of the body below the neck," Gounder added. (KFF is the health policy research, polling, and news organization that includes KFF Health News.)

During my presidency, we had "the best economy in the history of our country, in the history of the world. … We had no inflation, soaring incomes." 

One of the strongest ways to assess the economy is the unemployment rate, which fell during Trump’s presidency to levels untouched in five decades. But his successor, Joe Biden, matched or exceeded those levels.

By another measure, annual increases in gross domestic product were broadly similar under Trump to those during the final six years under his predecessor, Barack Obama. And GDP growth under Trump was well below that of previous presidents.

Wage growth increased under Trump, but to say wages soared is an exaggeration. Adjusted for inflation, wages began rising during the Obama years and kept increasing under Trump. But those gains were modest compared with the 2%-a-year increases seen in the 1960s.

Another metric — the growth rate in personal consumption per person, adjusted for inflation — wasn’t higher under Trump than under previous presidents. For many families, this statistic serves as a bottom line for economic activity, determining how much they can spend on food, clothing, housing, health care and travel.

In Trump’s three years in office through January 2020, real consumption per person grew by 2% a year. Of the 30 nonoverlapping three-year periods from 1929 to the end of his presidency, Trump’s periods ranked in the bottom third.

As for inflation being zero, that’s also wrong. It was low, ranging from 1.8% to 2.4% increases year over year in 2017, 2018 and 2019. This is roughly the range the Federal Reserve likes to see. During the coronavirus pandemic-dominated year of 2020, inflation fell to 1.2%, because demand plummeted as entertainment and travel collapsed.

"Our current administration, groceries are up 57%, gasoline is up 60% and 70%, mortgage rates have quadrupled." 

Mostly False.

There is an element of truth, because prices have risen for all of these. But Trump exaggerated the percentages.

The price of groceries has risen by 21.5% in the more than three and a half years since Biden was inaugurated in January 2021. 

Gasoline prices are up 55% over the same period. 

Mortgage rates haven’t quadrupled. But they have more than doubled, because of Federal Reserve rate increases to curb inflation. The average 30-year fixed-rate mortgage was 2.73% in January 2021, but 6.89% in July 2024.
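
The arithmetic behind "more than doubled, not quadrupled" is simple; this quick Python check uses only the two rates quoted above.

```python
# Ratio check on the rates quoted above: "quadrupled" would mean 4x.
rate_jan_2021 = 2.73  # average 30-year fixed mortgage rate, January 2021 (%)
rate_jul_2024 = 6.89  # average 30-year fixed mortgage rate, July 2024 (%)

ratio = rate_jul_2024 / rate_jan_2021
print(f"Rates rose {ratio:.2f}x")            # ~2.52x: more than doubled
print(f"Percent increase: {ratio - 1:.0%}")  # ~152%, well short of the +300% a 4x jump implies
```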

"Our crime rate is going up." 

He’s wrong on violent crimes, but has a point for some property crimes.

Federal data shows the overall number of violent crimes, including homicide, has declined during Joe Biden’s presidency. Property crimes have risen, mostly because of motor vehicle thefts.

The FBI data shows the overall violent crime rate — which includes homicide, rape, robbery and aggravated assault per 100,000 population — fell by 1.6% from 2021 to 2022, the most recent year with full-year FBI data. 

Private sector analyses show continued crime declines. For instance, the Council on Criminal Justice, a nonpartisan think tank, samples reports from law enforcement agencies in several dozen cities to gauge crime data more quickly than the FBI. The council’s data shows the declining violent crime trends continued into 2023.

Property crime has increased under Biden, although three of the four main categories the FBI tracks — larceny, burglary and arson — were at or below their prepandemic level by 2022. The main exception has been motor vehicle theft, which rose 4% from 2020 to 2021 and 10.4% from 2021 to 2022.

The Biden administration is "the only administration that said we're going to raise your taxes by four times what you're paying now." 

Biden is proposing a tax increase of roughly 7% over the next decade, nowhere near the 300% increase that quadrupling would imply, as Trump claims.

About 83% of the proposed Biden tax increase would be borne by the top 1% of taxpayers, a level that starts at just under $1 million a year in income.

Taxpayers earning up to $60,400 would see their yearly taxes decline on average, and taxpayers earning $60,400 to $107,300 would see an annual increase of $20 on average.

The IRS hired "88,000 agents" to go after Americans. 

Mostly False. 

The figure, which has been cited as 87,000 in past statements, is related to hires the IRS approved in 2022 that included information technology and taxpayer services, not just enforcement staff. Many of those hires would go toward holding staff numbers steady in the face of a history of budget cuts at the IRS and a wave of projected retirements.

The U.S. Treasury Department previously said that people and small businesses that make less than $400,000 per year would see no change, although audits of corporations and high-net-worth people would rise. House Republicans passed a bill in 2023 to rescind the funding for the hires. Passage by the Democratic Senate majority is unlikely. President Joe Biden has vowed to veto the bill if it reaches his desk. 

"Democrats are going to destroy Social Security and Medicare, because all of these people, by the millions, they’re coming in. They’re going to be on Social Security and Medicare and other things, and you’re not able to afford it. They are destroying your Social Security and your Medicare."

Most immigrants in the U.S. illegally are ineligible for Social Security. Some people who entered the U.S. illegally and were granted humanitarian parole — a temporary permission to stay in the country — for more than one year may be eligible for Social Security for up to seven years, the Congressional Research Service said.

Immigrants in the U.S. illegally also are generally ineligible to enroll in federally funded health care coverage such as Medicare and Medicaid. (Some states provide Medicaid coverage under state-funded programs regardless of immigration status. Immigrants are eligible for emergency Medicaid regardless of status.)

It’s also wrong to say that immigration will destroy Social Security. The program’s fiscal challenges stem from a shortage of workers compared with beneficiaries. Immigrants who are legally qualified can receive Social Security retirement benefits only after they’ve worked and paid Social Security taxes for 10 years. So, for at least 10 years, these immigrants will be paying into the system before they draw any benefits.

Immigration is far from a fiscal fix-all for Social Security’s challenges. But having more immigrants in the United States would increase the worker-to-beneficiary ratio, potentially for decades, thus extending the program’s solvency, economic experts say.

Trump: "They spent $9 billion on eight chargers."

The Bipartisan Infrastructure Law, which Biden signed in November 2021, allocated $7.5 billion to electric vehicle charging. Trump exaggerated both the program’s spending and the cost of the chargers.

The Federal Highway Administration told PolitiFact that as of April 2024, the infrastructure funding had created seven open charging stations with 29 spots for electric vehicles to charge. They were installed across five states — Hawaii, Maine, New York, Ohio and Pennsylvania — the administration said in a statement.

Transportation Secretary Pete Buttigieg said in a May CBS interview that the Biden administration’s goal is to install 500,000 EV chargers by 2030. 

"And the very first handful of chargers are now already being physically built. But again, that's the absolute very, very beginning stages of the construction to come," Buttigieg said.

The cost for equipment and installation of high-speed EV chargers can range from $58,000 to $150,000 per charger, depending on wattage and other factors.

The federally funded EV charging program started slowly. The Energy Department said initial state plans were approved in September 2022. Since April, federally funded charging stations have opened in Rhode Island, Utah and Vermont.
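
To put the "$9 billion on eight chargers" framing in perspective, here is a rough, illustrative calculation in Python using the figures above. It assumes the high end of the quoted installation range and is a sanity check, not an audit of program spending.

```python
# Rough sanity check using the figures in the story (illustrative only).
open_charging_spots = 29             # spots open as of April 2024, per the FHWA
high_end_cost_per_charger = 150_000  # top of the quoted installed-cost range ($)

worst_case_spend = open_charging_spots * high_end_cost_per_charger
print(f"High-end cost for 29 spots: ${worst_case_spend:,}")  # $4,350,000

# The $7.5 billion allocation funds a national buildout toward a
# 500,000-charger goal; it was not spent on the stations open so far.
```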

"I will end the electric vehicle mandate on Day 1." False .

There is no electric vehicle mandate to begin with.

The Biden administration has set a goal — not a mandate — to have electric vehicles comprise half of all new vehicle sales by 2030.

Later in his speech, Trump said: "I am all for electric. … But if somebody wants to buy a gas-powered car… or a hybrid, they are going to be able to do it. And we’re going to make that change on Day 1." The Biden administration has introduced new regulations on gasoline-powered cars, but those policies do not ban gasoline-powered cars. They can continue to be sold, even after 2030.

"Under the Trump administration, just three and a half years ago, we were energy independent." 

There are various definitions of "energy independence," but during Trump’s presidency, the U.S. became a net energy exporter and began producing more energy than it consumed. Both milestones hadn’t been achieved in decades.

However, that achievement built on more than a decade of improvements in shale oil and gas production, along with renewable energies. The U.S. also did not achieve net exporter status for crude oil, which produces the type of energy that voters hold politicians most accountable for: gasoline.

Even during a period of greater energy independence, the U.S. energy supply is still sensitive to global developments, experts told PolitiFact in 2023. Because many U.S. refineries cannot process the type of crude oil produced in the U.S., they need to import a different type of oil from overseas to serve the domestic market.

"They used COVID to cheat."

Pants on Fire!

During the pandemic, multiple states altered rules to ease mail-in voting for people concerned about contracting COVID-19 at indoor polling places. Changes included mailing ballots to all registered voters, removing excuse requirements to vote by mail and increasing the number of ballot drop boxes. State officials used legal methods to enact these changes, and the new rules applied to all voters, regardless of party affiliation. 

The 2020 election was certified by every state and confirmed by more than 60 court cases nationwide. 

During his presidency, we had "the biggest regulation cuts ever." 

We tracked Trump’s progress on his campaign promise to "enact a temporary ban on new regulations" and rated that a Compromise.

Near the end of Trump’s presidency, an expert told us that overall the amount of federal regulations was roughly unchanged since Trump took office.

Russia’s war in Ukraine and Hamas’ attack on Israel "would have never happened if I were president."

This is unsubstantiated and ignores the complexities of global conflict. There’s no way to assess whether Russian President Vladimir Putin wouldn’t have invaded Ukraine in February 2022 if Trump were still president, or whether Hamas wouldn’t have attacked Israel in October 2023. Experts told PolitiFact that there’s a limit to how much influence U.S. presidents have over whether a foreign conflict erupts into war. "American presidents have scant control over foreign decisions about war and peace unless they show their willingness to commit American power," said Richard Betts, a Columbia University professor emeritus of war and peace studies and of international and public affairs.

During the Trump administration, there were no new major overseas wars or invasions. But during his presidency, there were still conflicts within Israel and between Russia and Ukraine. For example, Russia was intervening militarily in Ukraine’s Donbas region throughout Trump’s administration. Trump also supported weakening NATO, reducing expectations among allies that the U.S. would intervene militarily if they were attacked. Although there’s no way to know how the war in Israel would have played out, experts said the prospect of the Abraham Accords — the peace effort between Israel and Arab nations led by the Trump administration — likely helped drive Hamas’ attack. "There’s no doubt in my mind that the prospect of the Abraham Accords being embraced by countries such as Saudi Arabia was one of the main causes of the Oct. 7 attack," said Ambassador Martin Kimani, the executive director of NYU’s Center on International Cooperation.

When the U.S. withdrew from Afghanistan, we "left behind $85 billion worth of military equipment." 

This is an exaggeration. When the Taliban toppled Afghanistan’s civilian government in 2021, it inherited military hardware the U.S. gave it. But the hardware’s value did not amount to $85 billion.

A 2022 independent inspector general report informed Congress that about $7 billion of U.S.-funded equipment remained in Afghanistan and in the Taliban’s hands. According to the report, "The U.S. military removed or destroyed nearly all major equipment used by U.S. troops in Afghanistan throughout the drawdown period in 2021." We rated a similar claim False in 2021.

When he was president, "Iran was broke."

Half True.

Iran’s foreign currency reserves fell from $128 billion in 2015 to $15 billion in 2019, a dramatic drop in absolute dollars. The decline is widely believed to be a consequence of the tightened U.S. sanctions under Trump, and although Iran’s foreign currency reserves have grown since then, they remain nowhere near pre-2019 levels.

The Federal Reserve Bank of St. Louis pegged Iran’s foreign currency reserves in 2024 around $36 billion.

PolitiFact Chief Correspondent Louis Jacobson, Senior Correspondent Amy Sherman, Staff Writers Kwasi Gyamfi Asiedu, Maria Briceño, Madison Czopek, Marta Campabadal Graus, Ranjan Jindal, Mia Penner, Samantha Putterman, Sara Swann, Loreben Tuquero, Maria Ramirez Uribe, Researcher Caryn Baird, KFF Health News Senior Editor Stephanie Stapleton and KFF Health News Senior Correspondent Stephanie Armour contributed to this story. 

Our convention fact-checks rely on both new and previously reported work. We link to past work whenever possible. In some cases, a fact-check rating may be different tonight than in past versions. In those cases, either details of what the candidate said, or how the candidate said it, differed enough that we evaluated it anew. 

RELATED: In Context: Trump recounts assassination attempt. Here’s what he said at the RNC

RELATED: A guide to Trump’s 2nd term promises: immigration, economy, foreign policy and more

RELATED: 2024 RNC fact-check: What Trump VP pick J.D. Vance got right, wrong in Milwaukee speech

Our Sources

See sources linked in story.


27 Facts About J.D. Vance, Trump’s Pick for V.P.

Mr. Vance spilled scores of details about his life in his coming-of-age memoir. We’ve collected the highlights.


By Shawn McCreesh


J.D. Vance, Donald J. Trump’s choice for vice president, has not lived an unexamined life. Here are 27 things to know about him, drawn from his best-selling 2016 memoir, “Hillbilly Elegy,” and the many other things he has said or written since.

1. His name was not always James David Vance. At birth, it was James Donald Bowman. It changed to James David Hamel after his mother remarried, and then it changed one more time.

2. He longed for a role model. His father left when he was 6. “It was the saddest I had ever felt,” he wrote in his memoir. “Of all the things I hated about my childhood,” he wrote, “nothing compared to the revolving door of father figures.”

3. He had a fraught relationship with his mother, who was married five times. One of the most harrowing scenes in the book occurs when he’s a young child, in a car with his mother, who often lapsed into cycles of abuse. She sped up to “what seemed like a hundred miles per hour and told me that she was going to crash the car and kill us both,” he writes. After she slowed down, so she could reach in the back of the car to beat him, he leaped out of the car and escaped to the house of a neighbor, who called the police.

4. He was raised by blue-dog Democrats. He spent much of his childhood with his grandfather and grandmother — papaw and mamaw, in his hillbilly patois. He described his mamaw’s “affinity for Bill Clinton” and wrote about how his papaw swayed from the Democrats only once, to vote for Ronald Reagan. “The people who raised me,” he said in one interview, “were classic blue-dog Democrats, union Democrats, right? They loved their country, they were socially conservative.”

5. As a teenager, he loved Black Sabbath, Eric Clapton and Led Zeppelin. But then his biological father, who was deeply religious, re-entered his life. “When we first reconnected, he made it clear that he didn’t care for my taste in classic rock, especially Led Zeppelin,” he wrote. “He just advised that I listened to Christian rock instead.”



  1. Principles for Social Media Use by Law Enforcement

    Social media is a powerful tool for connection and civic involvement, serving myriad purposes. It facilitates community-building, connecting like-minded people and fostering alliance development, including on sensitive or controversial topics; it helps grassroots movements find financial and other support; it promotes political education; it ...

  2. Social media abuse News, Research and Analysis

Articles on social media abuse. New research shows that antisemitic posts surged as the 'free speech absolutist' took over the social media giant ...

  3. Misinformation, manipulation, and abuse on social media in the era of

    Contributions. In light of the previous considerations, the purpose of this special issue was to collect contributions proposing models, methods, empirical findings, and intervention strategies to investigate and tackle the abuse of social media along several dimensions that include (but are not limited to) infodemics, misinformation, automation, online harassment, false information, and ...

  4. You're Not Powerless in the Face of Online Harassment

    Summary. If you or someone you know is experiencing online harassment, remember that you are not powerless. There are concrete steps you can take to defend yourself and others. First, understand ...

  5. Hate Speech on Social Media: Global Comparisons

    Summary. Hate speech online has been linked to a global increase in violence toward minorities, including mass shootings, lynchings, and ethnic cleansing. Policies used to curb hate speech risk ...

  6. Racism, Hate Speech, and Social Media: A Systematic Review and Critique

In a review and critique of research on race and racism in the digital realm, Jessie Daniels (2013) identified social media platforms—specifically social network sites (SNSs)—as spaces "where race and racism play out in interesting, sometimes disturbing, ways" (Daniels 2013, 702). Since then, social media research has become a salient academic (sub-)field with its own journal (Social ...

  7. Social Media Surveillance by the U.S. Government

    Social media has become a significant source of information for U.S. law enforcement and intelligence agencies. The Department of Homeland Security, the FBI, and the State Department are among the many federal agencies that routinely monitor social platforms, for purposes ranging from conducting investigations to identifying threats to screening travelers and immigrants.

  8. The harmful effects of online abuse

A look at how the offline harm of online abuse is real and widespread, with potentially severe consequences. Related talks: Monica Lewinsky, "The price of shame"; Sebastián Bortnik, "The conversation we're not having about digital child abuse."

  9. What Is the Best Way to Stop Abusive Language Online?

    In the statement, Richard Masters, the Premier League's chief executive, said the league would continue to push social media companies to make changes to prevent online abuse. "Racist ...

  10. Hate speech in social media: How platforms can do better

    The report recommends that social media platforms: 1) enforce their own rules; 2) use data from extremist sites to create detection models; 3) look for specific linguistic markers; 4) deemphasize profanity in toxicity detection; and 5) train moderators and algorithms to recognize that white supremacists' conversations are dangerous and hateful.

  11. Online harassment is common on social media, but in other places too

    But harassment often occurs in other online locations, too. Overall, three-quarters of U.S. adults who have recently faced some kind of online harassment say it happened on social media. But notable shares say their most recent such experience happened elsewhere, including on forum or discussion sites (25%), texting or messaging apps (24% ...

  12. Social media and political violence

Social media companies are part of the potential solution, too. ... But companies and purported "free speech absolutists" including X ...

  13. The Struggle for Human Attention: Between the Abuse of Social Media and

    Abstract. Human attention has become an object of study that defines both the design of interfaces and the production of emotions in a digital economy ecosystem. Guided by the control of users' attention, the consumption figures for digital environments, mainly social media, show that addictive use is associated with multiple psychological ...

  14. How should social media platforms combat misinformation and hate speech

    Currently, social media companies have adopted two approaches to fight misinformation. The first one is to block such content outright. For example, Pinterest bans anti-vaccination content and ...

  15. The Evolving Free-Speech Battle Between Social Media and the Government

    The judge wrote, in a preliminary injunction, "If the allegations made by plaintiffs are true, the present case arguably involves the most massive attack against free speech in United States ...

  16. The Law of Students' Rights to Online Speech: The Impact of Students

    Regulating the content of student speech on social media is particularly tricky and subject to abuse because digital speech is vulnerable to cultural and contextual misreading. Social media platforms are common sources of controversial, provocative, and resistant speech. Students are no exception to the use of that platform.

  17. Online Harassment

    The National Domestic Violence Hotline conducted a survey in 2022 of 960 survivors of domestic abuse and found that 100% of respondents had experienced at least one form of online harassment or abuse. In June 2022, President Joe Biden issued a Presidential Memorandum that established the White House Task Force to Address Online Harassment and ...

  18. Why people are becoming addicted to social media: A qualitative study

    Social media addiction (SMA) led to the formation of health-threatening behaviors that can have a negative impact on the quality of life and well-being. Many factors can develop an exaggerated tendency to use social media (SM), which can be prevented in most cases. This study aimed to explore the reasons for SMA.

  19. Supreme Court's Social Media Ruling Tilts Toward Free Speech

    The US Supreme Court this month declined to rule on whether Florida and Texas laws limiting social media platforms' content moderation violates the First Amendment, sending the issue back to the lower courts. But in doing so, its guidance strongly suggested that modern day social media communications—including how they are shaped by the platforms where they appear—receive full, time ...

  20. A new approach to regulating speech on social media: Treating users as

    Workers are regularly protected from employer reprisal for whistleblowing and other forms of speech. These work laws address the competing interests and rights of employers, workers, and co ...

  21. PDF Sample Social Media Messages

    Instagram, LinkedIn, Snapchat, and Twitter. Here are some sample social media messages to get you started! NPW Sample Twitter Messages • @samhsagov's NSDUH revealed an estimated 8.5M adults (aged 18 and older) had both a mental illness and at least one substance use disorder in the past year. Together we can raise awareness

  22. Minutes after Trump shooting, misinformation started flying. Here are

    Mentions of Trump on social media soared up to 17 times the average daily amount in the hours after the shooting, according to PeakMetrics, a cyber firm that tracks online narratives. Many of those mentions were expressions of sympathy for Trump or calls for unity. But many others made unfounded, fantastical claims.

  23. Shouting into the Void: Why Reporting Abuse to Social Media Platforms

    When the Pew Research Center asked people in the U.S. how well social media companies were doing in addressing online harassment on their platforms, nearly 80 percent said that companies were doing "an only fair or poor job." 10 Emily Vogels, "A majority say social media companies are doing an only fair or poor job addressing online ...

  24. Fact-checking JD Vance's past statements and relationship with ...

    Vance, 39, won his Senate seat in 2022 with Trump's backing. He would be one of the youngest vice presidents in U.S. history.

  25. Social media use and abuse: Different profiles of users and their

    1.1. Problematic social media engagement in the context of addictions. Problematic social media use is markedly similar to the experience of substance addiction, thus leading to problematic social media use being modelled by some as a behavioural addiction - social media addiction (SMA; Sun and Zhang, 2020). In brief, an addiction loosely refers to a state where an individual experiences a ...

  26. New Report: To Reduce Online Abuse, Social Media Platforms Must

    The report, Shouting into the Void: Why Reporting Abuse to Social Media Platforms is So Hard and How to Fix It, highlights the dangerous repercussions of such abuse for social media users, especially for women, people of color, and LGBTQ+ people, as well as journalists, writers and creators, all of whom face more severe levels of abuse online ...

  27. Students Target Teachers in Group TikTok Attack, Shaking Their School

    Seventh and eighth graders in Malvern, Pa., impersonating their teachers posted disparaging, lewd, racist and homophobic videos in the first known mass attack of its kind in the U.S.

  28. Donald Trump fact-check: 2024 RNC speech in Milwaukee full of

    The state of emergency has led multiple international human rights groups and governments, including the U.S., to condemn human rights abuses in El Salvador such as arbitrary killings, forced ...

  29. 27 Facts About J.D. Vance, Trump's Pick for V.P

    Mr. Vance spilled scores of details about his life in his coming-of-age memoir. We've collected the highlights. By Shawn McCreesh Follow the latest news from the Republican National Convention ...