Don’t Click: A Survey of Youth Experiences with Hate & Violent Extremism Online

Table of Contents

Foreword
Introduction
Previous Studies and Findings on Youth Engagement with Hate and Extremism
Methodology
Results
What Can Youth Do About Extremist and Hateful Content Online?
Discussion and Suggestions for Further Research
Conclusion
Endnotes

Note: The Don’t Click! Initiative’s materials and resources can be viewed here, along with the Break the Hate documentary following the story of this initiative.

Foreword

Hate and extremism are prevalent online and pose unique and substantial risks for youth who, like myself, use various online platforms daily.

Harmful ideas, violence, and trauma can become deeply rooted in young people who experience hateful and extremist rhetoric, comments, and content on online platforms. The result can be victimization and the adoption of some of these views.

To deal with these threats and negative impacts, youth themselves must act. Yet youth voices and perspectives on this problem are often absent from research and policy discussions. As such, young people supporting young people is the basis of the Don’t Click! Initiative. The starting point for this initiative was the research discussed in this report.

The research project and survey results discussed in this publication seek to better understand how and where youth encounter hateful and extremist ideas and sentiments, and to identify the ideas youth have for addressing these problems.

I felt that for any counter-hate initiative to be effective, resonant, and taken seriously by those in power, it had to be grounded in evidence. This survey is a cornerstone of that research and part of the Don’t Click! Initiative.

Developed with peers, researchers at the Organization for the Prevention of Violence (OPV), teachers, administrators, school boards, the Edmonton Police Service (EPS), and the City of Edmonton, the survey drew more than 850 respondents and yielded new insight into the dangers of hate and extremism online for youth.

This research contributes to an understanding of where youth encounter online hate and extremism, how they responded when they encountered it, and what the next steps are in addressing these problems. I believe that, as a society, it is essential to support youth in their endeavours to address problems of significance and to listen to their voices and solutions to issues they experience firsthand.

The Don’t Click! Initiative has blossomed into an active youth group based in Edmonton. With the support of EPS, the City of Edmonton, REACH Edmonton, and the OPV, this group of committed youth has developed resources and workshops to educate young people across Alberta on the importance of digital literacy, digital citizenship, empathy, and ways to properly combat hate and extremism online.

-Henry Kerr

Introduction

This report discusses the results of a youth survey on hate and violent extremism and is part of the Don’t Click! Youth Initiative. It was conducted with the support of the Organization for the Prevention of Violence, based in Edmonton, Alberta.

In total, 859 respondents, all located in Alberta and with an average age of 16, completed the survey between December 2020 and June 2021. The survey sought to identify where youth encountered hateful and extremist content online, how they reacted to it, and what they believed were the most appropriate responses to these problems.

The definition of hate used for the survey and provided to respondents was “the use of hostile language or violence towards individuals based on their culture, race, religion, disability, sex, gender or sexual orientation.” By comparison, violent extremism was defined in the survey as “the beliefs and actions of people who support or use violence in the name of a cause.”

Of note, and of significant concern, is the finding that 86 percent of respondents reported encountering hate online and 72 percent reported encountering violent extremism. A more detailed discussion of survey results follows.

Youth are exposed to a diverse, ever-changing, and largely unregulated world of entertainment, information, disinformation, online social networks, and all of the benefits and hazards this entails.

In 2018 the Pew Research Center interviewed roughly 750 teens (13-17 years old) and found that 95% of them had a smartphone and roughly half were online “almost constantly.”1 Several years later, in the context of a pandemic that has necessitated shifts to online schooling and socialization, that dependence has most certainly grown. When asked if their school-aged children were spending more or less time online during the Covid-19 pandemic, 72% of parents indicated increased usage.2

Youth are at the forefront of a global revolution in accessing information, interacting with others, and subsequently shaping how they situate themselves in the world. There are, of course, unknown hazards and outcomes in this revolution, one of which is interacting with disinformation, hatred, bigotry, and extremist content generated by a growing and diffuse host of bad actors on the Internet.

One study, conducted by a technology company that uses artificial intelligence to detect harmful online content, found a 70% increase in hate “between children and teens” in online chat during the Covid-19 pandemic and a 40% increase in “toxicity” among young people in video game chats.3

Hate and extremism online have been extensively studied – for example, the spread of specific ideologies and belief systems, the role of online networks in driving hate-motivated violence, and the negative effects on communities targeted by hate.

However, several questions remain around the impacts of these phenomena on youth – the immediate and long-term outcomes for, inter alia, their mental health, social networks and identities, and beliefs.

Additionally, there has been little meaningful engagement with youth that seeks to elicit their ideas for how best to tackle these issues. This survey and its results represent a modest attempt to address some of these gaps in knowledge and understanding.

Previous Studies and Findings on Youth Engagement with Hate and Extremism

The current heightened awareness of and attention to extremism and hate, together with the proliferation of social media use – particularly amongst youth – have led to a series of topical studies.

In particular, the potential for youth to encounter or be the target or victim of social media propaganda tools designed to propagate extremist or hateful content is highly salient given the disproportionate amount of time this demographic spends online.

For example, during the height of the Islamic State of Iraq and the Levant’s (ISIL) power in the early-to-mid 2010s, the group weaponized social media to disseminate its propaganda and recruit young people from the West.4

To date, given the growing importance of this topic, there have been several extensive surveys conducted on the rate of exposure to and impact of online violent extremist and hate content on youth. Many of these are part of a broader body of literature that investigates criminal behaviors online, particularly cyberbullying. The key difference between cyberbullying and hate or extremist material is that the former targets a single individual while the latter targets a collective identity.5

Unsurprisingly, a growing body of evidence suggests youth exposure to hate and extremism, both on and offline, is harmful. This can include short-term emotional impacts like feelings of fear, loneliness, and unhappiness,6 and longer-term impacts like normalizing and eventually reinforcing hateful beliefs and discrimination against targeted groups.7 For individuals within the targeted group, exposure to such content can lead to modified behaviors due to fear and emotional distress,8 and defensive attitudes that may impede social relationships in the future.9

Many early studies in this area of research approached the issue through routine activity theory (RAT), which posits that youth are more likely to be victimized in spaces that contain a motivated offender and little, if any, capable guardianship.10

Researchers argue that the Internet provides an opportunistic environment where motivated offenders can easily access their targets in a mostly unsupervised and unmoderated space.

This is supported by studies comparing the use of racial or ethnic-based language in moderated and unmoderated online teen chatrooms, which found that negative racial or ethnic language is most prevalent in unmoderated chatrooms.11

What’s more, the Internet provides a unique environment where young individuals do not necessarily need to come into direct contact with a motivated offender. Instead, targets need only read or passively consume materials posted at some time before their exposure.12

By the early 2010s, researchers found that the rate of exposure to extremist or hate-related content increases the more the individual uses technology,13 especially as the individual enters spaces online with higher proportions of extremist or hate-related actors, groups, and content.14 The likelihood of accessing online hate sites is especially acute in older adolescents and youth who are male15 and in those who exhibit high levels of sensation or risk-seeking behavior.16

Other studies have found that women and people of colour, in general, are more likely to perceive hate speech and hate crimes as more serious and harmful than men and Caucasians.17

Later studies specifically investigated the rate of exposure to extremist and hate materials online. In one pioneering study, Hawdon et al. (2014) surveyed 1,032 American youth aged 15 to 30.18

They collected basic demographic data, targets of the hate speech, the website where it was encountered, how the individual found the website, and how disturbing the respondent found the material. Of this total, 53% said they had encountered online hate speech, and nearly 16% of those were the target of hate speech. 

Most respondents did not deliberately access the page but instead found it accidentally. Those who accidentally found the content were also more likely to indicate they were disturbed by it. 

As for the websites where respondents encountered the content, Facebook and YouTube were most frequently indicated, followed by general message boards, Tumblr, blogs, and image boards (such as 4chan). Hawdon et al. also included predictive analyses of the likelihood of exposure by several demographic factors.

Of all ethnic and race variables, those who identified as “White” were significantly less likely to encounter online hate materials. In contrast, Hispanic respondents were nearly 1.7 times as likely to encounter it. Younger male respondents were also more likely to encounter hate material. 

Other studies have since followed in the footsteps of Hawdon et al. (2014; 2015). For example, one European survey found that 20% of youth in Europe had been exposed to or encountered hateful messages and content online.19

Tellingly, the study found that the rate of exposure had steadily increased compared to initial findings in 2010. A subsequent study by Blaya (2019), drawing on a survey of 1,189 French youth aged 11 to 20, concluded that nearly one-third of respondents had been exposed to hate content online.20

Nienierza et al. (2021) examined the experiences of 1,061 German adolescents. They focused not only on the rate of exposure to extremist content but also on the ability of youth to identify the content as extremist.21

This study fills a gap in the literature, which for the most part assumes that youth understand and can identify dangerous content when they see it. Consistent with previous studies, the survey results indicate that roughly one-third of respondents encountered extremist content – most frequently on social networking sites. However, only 59% of the respondents understood the researchers’ definitions of extremist content and exposure to it.

Another study conducted by Harriman et al. (2020) of adolescents from two high schools in Massachusetts (USA) found that exposure to hate messages is significantly correlated with the use of Twitter and Houseparty. The study also found that communication with a stranger (vs. a youth’s peer group) increased the risk of exposure to hateful content.22

Methodology

This study created an original survey consisting of 19 questions to investigate youth online behavior as it relates to exposure to hate and extremism. 

The survey recruited Alberta youth based on a convenience sampling method. Before dissemination, the survey received approval from the Edmonton Public School Board (EPSB), and a link to the survey was posted and advertised on a variety of online school websites and through in-person announcements. These included SchoolZone (an online hub of information for students), individual school announcement boards, classrooms (via teachers sharing the survey), school Instagram pages, and the Instagram pages of other online accounts supporting the project.

Students from across the Edmonton Public School Board Division had the opportunity to participate, as did two secondary schools in Calgary. 

The survey was live from December 7th, 2020, until June 6th, 2021. For respondents under 18, the survey required prior parental approval. Parents were required to first read the questions in the survey and type their names to indicate that consent had been given. This consent allowed the youth to begin answering questions. Participants were also informed that all questions in the survey were optional, answers were anonymous, and all participants were given a list of community resources if they needed support. 

The 19 questions elicited a range of responses from youth:

  • basic demographic information (age, gender, identification as a visible minority or member of the LGBTQ2S+ community);
  • social media and online platforms used on a regular basis;
  • on which social media and online platforms they have encountered hate or extremist content;
  • whether they have encountered hate or extremist content on the Dark Web;
  • whether they have personally experienced hate online or offline;
  • how they responded to hate or extremist content online in the past;
  • and a range of questions that ask what they believe youth can do to combat hate and extremism online.

Once the survey results were anonymized and cleaned, the researchers used Stata/SE 17.0 to obtain descriptive statistics and conduct analyses.  
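
The original analyses were conducted in Stata/SE 17.0, and the project’s actual scripts and variable names are not published in this report. As a purely illustrative sketch, the descriptive step might look like the following Python snippet, in which the file name and column names are assumptions rather than the survey’s real variables.

    import pandas as pd

    # Illustrative only: "dont_click_survey_cleaned.csv" and the column names
    # below are assumed stand-ins for the survey's anonymized, cleaned data.
    responses = pd.read_csv("dont_click_survey_cleaned.csv")

    # Descriptive statistics for respondent demographics.
    print(responses["age"].describe())                       # mean, min, max age
    print(responses["gender"].value_counts(normalize=True))  # gender shares

    # Exposure rates, assuming 0/1-coded indicator columns.
    for col in ["encountered_hate", "encountered_extremism"]:
        print(col, f"{responses[col].mean():.0%}")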

Results

In total, 859 youth completed the survey. Descriptive statistics of survey respondent demographics will be presented first. 

Survey Respondent Demographics

The average age of the respondents was 16, with the oldest being 23 and the youngest being 12 years old. Most respondents (91%) were between the ages of 15 and 17; 6% were older than 18, and 3% were under 15. 

In terms of gender, 53% identified as female, 44% identified as male, and 3% identified as something other than female or male (this includes non-binary, transgender, gender-fluid, and agender). 

Of 859 respondents, 801 reported whether they identified as a member of the LGBTQ2S+ community, and 760 stated whether they identified as a visible minority. Overall, 74% of respondents stated that they do not identify as members of the LGBTQ2S+ community, 19% reported that they do, and 7% declined to answer.

The visible minority question, by comparison, produced a near-even split: 43% of respondents identified as a visible minority and 45% as a non-minority, while 12% declined to answer.

It is important to pay attention to intersectional trends in surveys such as this. 

Of all female respondents who answered the visible minority question, 49% identified as such; similarly, 48% of male respondents identified as a visible minority. Meanwhile, nearly 60% of respondents who identified as something other than male or female also identified as a visible minority.

Of all female respondents who answered the LGBTQ2S+ question, 25% identified as a member of the community, as did 9% of male respondents. Meanwhile, 100% of respondents who identified as something other than male or female also identified as a member of the LGBTQ2S+ community.

Social Media and Online Platform Use

Respondents were asked which social media or online platforms they used daily. Importantly, because this survey was conducted within the last two years, these results provide updated insights into the social media usage trends of youth in the context of extremism and hate.  

As seen in Figure 1, Instagram is the most frequently used online social media platform amongst all youth surveyed in this study. This is followed closely by YouTube and TikTok.

TikTok is of particular interest because it is one of the newer social media platforms. It also gained considerable traction amongst youth during the Covid-19 pandemic. 

For example, although TikTok only launched for iOS and Android devices in 2017 outside of mainland China, by 2020, it was ranked as the third fastest-growing brand; and in 2021, Cloudflare ranked it as the most popular website, surpassing even Google.23

Figure 1: Social Media/Online Platforms Used Daily

We can also look at the differences in social media and online platform usage by gender, provided in Figure 2. Although Instagram is the most widely used social media platform overall, this varies by gender. 

According to Figure 2, Instagram is the most widely used specifically amongst those who identify as female, whereas for those who identify as male, YouTube is the most widely used online platform daily. 

For those who identify as something other than male or female, TikTok is the most widely used daily. Additionally, those who identify as male tend to use Discord, Reddit, and Steam or other online gaming platforms much more than those who identify as female. Those who identify as female are much more likely to use TikTok and Snapchat daily. 

Figure 2: Daily Use of Social Media/Online Platforms by Gender

Additionally, youth tend to use a variety of social media or online platforms simultaneously, rather than just one. Of 862 respondents who reported using at least one social media or online platform daily, 81% reported using between 2-5 simultaneously. 

Encountering Hateful or Extremist Content 

Respondents were asked whether they had come across extremist or hate content on any of the platforms they use. Based on previous survey designs in the literature and to avoid inconsistent interpretations of the terms, the survey provided a definition for each. 

Violent extremism is defined in the survey as the beliefs and actions of people who support or use violence in the name of a cause. Hate is defined in the survey as the use of hostile language or violence towards individuals based on their culture, race, religion, disability, sex, gender, or sexual orientation.

One limitation of this survey is that, unlike some previous studies (e.g., Nienierza et al. 2021), there were no research questions related to the ability of respondents to accurately identify hateful or violent extremist content. 

While the survey provided definitions, it cannot be determined whether respondents read these definitions before completing the survey or interpreted them in a manner that allowed them to identify content accurately.

In general, most respondents reported encountering extremist or hate content online. Of the 859 respondents, 72% reported coming across extremist content on one or more of the platforms they use; and 86% reported coming across hateful content on one or more of the platforms they use. 

In total, 68% of respondents reported encountering both extremist and hate content, while only 10% reported encountering neither.

Figure 3: Percentage of Respondents Who Reported Encountering Violent Extremist Content
Figure 4: Percentage of Respondents Who Reported Encountering Hateful Content

Respondents were also asked to indicate on which platforms they encountered extremist or hate content. The results are presented in Figures 5 and 6. 

The results indicate that, for some platforms, there are more daily users who do not encounter extremist or hate content than those who do. 

For extremist content, this includes YouTube, Discord, Snapchat, Reddit (by a slim majority), Steam/other gaming platforms, and Facebook. 

For hate content, this includes YouTube (by a small majority), Discord, Snapchat, and Steam/other gaming platforms. 

On the other hand, there are several platforms where more daily users encounter extremist or hate content than those who do not. 

For extremist content, this includes Instagram, TikTok, Twitter, and 4Chan/8Chan/8Kun.

For hate content, this includes Instagram (by a large majority), TikTok, Twitter, Reddit, and 4Chan/8Chan/8Kun.

Figure 5: Daily Users Who Encountered Extremist Content
Figure 6: Daily Users Who Encountered Hate Content

We can also examine the correlation coefficients between exposure rates to hate or extremist content and reported gender, age, minority status, and identification as LGBTQ2S+. The results of a Pearson’s correlation test are presented in Table 1.

In columns (1) and (2), those who identify as female, LGBTQ2S+, and visible minorities report encountering significantly more extremist and hateful content online. For female and LGBTQ2S+ respondents, the correlation is larger for hateful content than for extremist content.

Table 1: Pearson’s Correlation Coefficients

                     (1) Encounter        (2) Encounter      (3) Encounter        (4) Encounter
                     Extremist Content    Hateful Content    Extremist Content    Hateful Content
Female               0.0773**             0.1546***          -                    -
Male                 -0.0897***           -0.1498***         -                    -
Other                0.0337               -0.0082            -                    -
Age                  0.0531               0.0661*            -                    -
LGBTQ2S+             0.0639*              0.0967***          -                    -
Visible Minority     0.0933**             0.0998***          -                    -
Female + Minority    -                    -                  0.0602               0.0512
Male + Minority      -                    -                  0.1274**             0.1366**
Female + LGBTQ2S+    -                    -                  0.0308               0.0824*
Male + LGBTQ2S+      -                    -                  0.0461               0.0772
***p<0.01, **p<0.05, *p<0.1

There is a statistically significant and negative correlation between those who identify as male and encountering both extremist and hateful content. In other words, overall, males report encountering less of each compared to females. Additionally, there is a small but statistically significant and positive correlation between age and encountering hateful content. 

Intersectionality in columns (3) and (4) also appears to moderate the exposure to this type of content. 

Specifically, although being a female is positively correlated with encountering both extremist and hateful content in columns (1) and (2), there is no statistically significant correlation between being a minority female and encountering either type of content. 

On the other hand, despite a negative correlation between males and encountering extremist and hateful content in columns (1) and (2), there is a statistically significant and positive correlation between being a minority male and encountering both extremist and hateful content. 

Aside from the correlation coefficient between females and encountering hateful content in column (2), the coefficients for minority males and encountering both extremist and hateful content are the largest of all correlations tested. 

Similarly, there is a statistically significant correlation between being a female and LGBTQ2S+ and encountering hateful content, but not extremist content. 

Finally, although it does not reach minimum statistical significance thresholds, it is worth noting the moderately sized positive correlation between male LGBTQ2S+ members and encountering hateful content compared to extremist content. 
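
For readers who want to reproduce this style of analysis, the coefficients in Tables 1 and 3 are Pearson correlations between group indicators and exposure outcomes. The sketch below is illustrative rather than the authors’ Stata code: the file and column names are assumed, indicators are taken to be 0/1-coded, and the star thresholds mirror the legend used in the tables.

    import pandas as pd
    from scipy.stats import pearsonr

    def starred(r, p):
        # Significance stars matching the tables' legend:
        # *** p<0.01, ** p<0.05, * p<0.1
        stars = "***" if p < 0.01 else "**" if p < 0.05 else "*" if p < 0.1 else ""
        return f"{r:.4f}{stars}"

    # Assumed 0/1-coded columns in a cleaned, anonymized survey DataFrame.
    df = pd.read_csv("dont_click_survey_cleaned.csv")

    # An intersectional indicator is the product of two binary indicators.
    df["male_minority"] = df["male"] * df["visible_minority"]

    for indicator in ["female", "male", "visible_minority", "male_minority"]:
        r, p = pearsonr(df[indicator], df["encountered_hate"])
        print(indicator, starred(r, p))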

This study is also one of the first to investigate youth usage of the Dark Web as it relates to encountering extremist or hateful content. The results are presented in Figure 7. In general, most respondents report they have never been on the Dark Web. However, of the 218 respondents who report using the Dark Web, 30% encountered hateful content, and 27% encountered extremist content.

Figure 7: The Dark Web and Extremist/Hate Content

Respondents were also asked whether they had personally experienced hate online or offline. Of 859 respondents, 490 (57%) reported personally experiencing hate online or offline (or both). Specifically, 37% report personally experiencing hate both online and offline. 

These responses are broken down by gender, LGBTQ2S+, and minority status in Table 2. What emerges is a troubling trend – across all gender, minority, and LGBTQ2S+ categories, more respondents report personally experiencing hate online, offline, or both than report not personally experiencing hate at all.

Table 2: Distribution of Personal Experiences of Hate (Online, Offline, or Both)

                     Has not experienced hate    Has experienced hate
                     online or offline           online, offline, or both
Other                7                           19
Female               182                         186
Female + LGBTQ2S+    35                          68
Female + Minority    69                          128
Male                 177                         198
Male + LGBTQ2S+      8                           25
Male + Minority      66                          92
LGBTQ2S+             50                          113
Visible Minority     138                         242

How Youth Responded to the Content They Encountered

The survey also asked youth how they responded to the content they encountered. The respondents were given a range of possible responses in two broad categories. 

The first category includes platform-specific responses – reporting the content to the platform, challenging the post via platform tools, and blocking the user. The second category includes turning to social support systems like talking to friends, parents, or teachers. Possible responses also include learning more about the content they encountered, reporting the content to the police, and doing nothing. 

The distribution of responses is shown in Figure 8. The most frequently selected response is turning to social support systems like talking to friends, parents, or teachers. 

Figure 8: How Youth Responded to the Content They Encountered

We can also generate Pearson’s correlation coefficients between responses and gender, minority status, and LGBTQ2S+ identification. The results are presented in Table 3. 

Interestingly, being male is negatively correlated with all responses except doing nothing, for which there is a strong, positive, and statistically significant relationship. However, when the respondent is a minority male or an LGBTQ2S+ male, there is a positive and statistically significant correlation across all actions except doing nothing.

The strongest positive correlation is between being an LGBTQ2S+ male and talking to friends, parents, or teachers; meanwhile, the strongest negative correlation is between being a male and blocking the user. 

Table 3: Pearson’s Correlation Coefficients

                     (1) Talked to        (2) Blocked    (3) Reported to    (4) Challenged    (5) Learned More    (6) Nothing
                     Friends, Parents,    the User       the Platform       the Post          About It
                     or Teachers
Female               0.1727***            0.1791***      0.1382***          -0.0183           -0.0398             -0.1073***
Male                 -0.1957***           -0.2051***     -0.1741***         -0.0306           -0.0731**           0.1312***
Other                0.0635*              0.0678**       0.0932***          0.1052***         0.0738**            -0.0628*
Age                  0.0456               0.0310         0.0395             0.0510            0.0651*             0.0153
LGBTQ2S+             0.1790***            0.1870***      0.1639***          0.1575***         0.1117***           -0.1111***
Visible Minority     0.0958***            0.0143         0.0655*            0.0878**          0.0791**            -0.0366
Female + Minority    0.1084*              0.0304         -0.0405            0.0485            0.1062**            -0.0212
Male + Minority      0.0449               0.1171**       0.0719             0.1357**          0.0544              -0.0547
Female + LGBTQ2S+    0.0952*              0.1398***      0.0979**           0.1727***         0.1192**            -0.0829*
Male + LGBTQ2S+      0.2285***            0.1036*        0.1434***          0.0798            0.0523              -0.0616
***p<0.01, **p<0.05, *p<0.1

What Can Youth Do About Extremist and Hateful Content Online? 

The final set of survey questions asked respondents what youth can do about extremist and hateful content online. 

Respondents were asked whether they believe youth can play the meaningful role called for in United Nations Security Council (UNSC) Resolution 2250, which urges youth to play a larger role in “developing strategies to counter the violent extremist narratives that can incite terrorist acts.”

Of the 827 responses, 590 (or 71%) agreed or strongly agreed that youth can play a meaningful role; 184 (or 22%) were neutral, and only 53 (or 6%) disagreed or strongly disagreed. 

Figure 9 displays the results for the question that asked what youth can do to counter violent extremism and hate online. 

One response (“Challenge hate when you hear it or see it”) was specific to hate content and therefore contains only one bar.

The most widely selected solution was to “Educate youth about the problem” of extremist or hate content, followed by “Hav[ing] more conversations between people of different backgrounds” and “Ask[ing] platforms to do more against this content online.” A small minority selected that “Nothing” could be done about extremist and hate content online.

Figure 9: What Can Youth Do About Extremist and Hate Content Online?

Discussion and Suggestions for Further Research 

Amongst the youth surveyed for this study, an overwhelming majority had come across extremist or hate content online or had been subjected to it personally, whether online or offline.

Females, visible minorities, and members of the LGBTQ2S+ community were most likely to report encountering this type of content. 

Although our findings align with previous work, which found that males tend to report encountering this content at lower rates than females, the findings here about how intersectionality impacts these relationships shed new light on youth online interaction patterns with extremist and hateful content.  

LGBTQ2S+ and minority males are some of the most vulnerable to encountering both hateful and extremist content. In contrast, minority and LGBTQ2S+ females often report encountering this content at lower levels. 

More work needs to be done to determine how intersectionality impacts these patterns of interaction and what kind of short-term and long-term impacts this has on youth. 

This survey is also one of the first to ask youth what they believe can be done about online extremist and hate content. Encouragingly, an overwhelming majority of the respondents indicated they believe something can be done about it. 

These results are important because they indicate that youth believe there are viable solutions to this problem regardless of one’s relative exposure to this type of content. Moving forward, engaging with youth who make up a large share of social media and online platform users will be a crucial step for companies in addressing this problem. 

However, youth also overwhelmingly selected community-level efforts – challenging hate, educating youth, and helping friends who may hold these beliefs – as important solutions, indicating that online solutions are only part of the puzzle. As such, more work should be done to foster meaningful youth engagement with both online and community-level solutions.

Conclusion 

The Don’t Click! survey represents a unique and timely example of youth-led research on the subjects of hate and violent extremism online. Given trends related to the growth of hate crimes and incidents in countries like Canada, and the well-documented growth of ideologically motivated violent extremism (IMVE) in Western states, its findings are perhaps unsurprising.

With nearly 9 in 10 respondents indicating they had encountered hate online and more than two-thirds indicating the same in relation to violent extremism, it is clear these experiences are part of life for youth online. 

Given the potential detrimental impacts of this exposure on youth, especially among those who identify as members of a visible minority or LGBTQ2S+ communities, it is vital that educators, parents, and policymakers – together with regulators and private Internet companies – take measures to safeguard youth. 

Another takeaway from this research is that youth have opinions on tackling this problem. As the most prolific users of the Internet, their opinions should be given priority in helping guide efforts to create a safer and more equitable online environment. 

Since the completion of this survey, the Don’t Click! Initiative has evolved into an active group of committed young people looking for solutions to the problems identified in the research.

At the forefront of this approach is the belief that digital literacy and citizenship are effective means for youth to tackle hate and extremism online. Youth-led efforts – for example, workshops delivered by youth, for youth – can make an impact in these areas. These are precisely the kinds of efforts that should be supported by international organizations, policymakers at multiple levels of government, school boards, and parents.

Endnotes

    1. Pew Research Center, “Teens, Social Media & Technology 2018.”
    2. Pew Research Center, “Parents, Their Children and School During the Pandemic.”
    3. L1ght, “Rising Levels of Hate Speech & Online Toxicity During This Time of Crisis.”
    4. Gates, Scott, and Sukanya Podder. “Social Media, Recruitment, Allegiance and the Islamic State.” Perspectives on Terrorism 9, no. 4 (2015): 107–16.
    5. Hawdon, James, Atte Oksanen, and Pekka Räsänen. (2015). “Online Extremism and Online Hate: Exposure Among Adolescents and Young Adults in Four Nations,” Nordicom Information 37 (3-4): p. 30.
    6. Tynes, B. 2006. “Children, Adolescents, and the Culture of Online Hate.” In Handbook of Children, Culture, and Violence edited by N. Dowd, D. Singer, and R. F. Wilson, 267–89. Thousand Oaks, CA: Sage; Keipi, Teo, Pekka Räsänen, Atte Oksanen, James Hawdon, and Matti Näsi. “Exposure to Online Hate Material and Subjective Well-Being: A Comparative Study of American and Finnish Youth.” Online Information Review 42, no. 1 (February 12, 2018): 2–15.
    7. Foxman, A. H., & Wolf, C. (2013). Viral hate: Containing its spread on the Internet. New York, NY: St. Martin’s Press; Harel, Tal Orian, Jessica Katz Jameson, and Ifat Maoz. “The Normalization of Hatred: Identity, Affective Polarization, and Dehumanization on Facebook in the Context of Intractable Political Conflict.” Social Media + Society 6, no. 2 (April 1, 2020): 1-10.
    8. Cowan, Gloria, and Jon Mettrick. “The Effects of Target Variables and Setting on Perceptions of Hate Speech.” Journal of Applied Social Psychology 32, no. 2 (2002): 277–99; Garnets, L., Herek, G.M., & Levy, B. (1992). “Violence and victimization of lesbians and gay men: Mental health consequences,” In G.M. Herek & K.T. Berill (Eds.), Hate Crimes: Confronting Violence Against Lesbians and Gay Men, (pp. 207-226). Newbury Park, CA: Sage; Matsuda, Mari J. “Public Response to Racist Speech: Considering the Victim’s Story.” Michigan Law Review 87, (1989): 2320-2381.
    9. Leets, Laura. “Experiencing Hate Speech: Perceptions and Responses to Anti-Semitism and Antigay Speech.” Journal of Social Issues 58, no. 2 (January 2002): 341–61.
    10. Cohen, Lawrence E., & Felson, Marcus. “Social change and crime rate trends: A routine activity approach.” American Sociological Review 44(4), (1979): 588-608
    11. Tynes, Brendesha, Lindsay Reynolds, and Patricia M. Greenfield. “Adolescence, Race, and Ethnicity on the Internet: A Comparison of Discourse in Monitored vs. Unmonitored Chat Rooms.” Journal of Applied Developmental Psychology 25, no. 6 (November 2004): 667–84.
    12. Hawdon, James, Atte Oksanen, and Pekka Räsänen. (2015). “Online Extremism and Online Hate: Exposure Among Adolescents and Young Adults in Four Nations,” Nordicom Information 37 (3-4): 29-37.
    13. Ybarra, M. L., K. J. Mitchell, and J. D. Korchmaros. “National Trends in Exposure to and Experiences of Violence on the Internet Among Children.” PEDIATRICS 128, no. 6 (December 1, 2011): 1376–86.
    14. Hawdon, James; Oksanen, Atte & Räsänen, Pekka. (2014). “Victims of online hate groups: American youth’s exposure to online hate speech,” In James Hawdon, John Ryan, & Marc Lucht (Eds.), The causes and consequences of group violence: From bullies to terrorists (pp. 165-182). Lanham, MD.: Lexington Books. 
    15. Ybarra, M. L., K. J. Mitchell, and J. D. Korchmaros. “National Trends in Exposure to and Experiences of Violence on the Internet Among Children.” PEDIATRICS 128, no. 6 (December 1, 2011): 1376–86. 
    16. Slater, M. D. “Alienation, Aggression, and Sensation Seeking as Predictors of Adolescent Use of Violent Film, Computer, and Website Content.” Journal of Communication 53 (2003): 105–21.
    17. Cowan, Gloria, and Jon Mettrick. “The Effects of Target Variables and Setting on Perceptions of Hate Speech.” Journal of Applied Social Psychology 32, no. 2 (2002): 277–99; Craig, Kellina M., and Craig R. Waldo. “‘So, What’s a Hate Crime Anyway?’ Young Adults’ Perceptions of Hate Crimes, Victims, and Perpetrators.” Law and Human Behavior 20, no. 2 (April 1996): 113–29; Inman, Mary L., and Robert S. Baron. “Influence of Prototypes on Perceptions of Prejudice.” Journal of Personality and Social Psychology 70, no. 4 (April 1996): 727–39.
    18. Hawdon, James; Oksanen, Atte & Räsänen, Pekka. (2014). “Victims of online hate groups: American youth’s exposure to online hate speech,” In James Hawdon, John Ryan, & Marc Lucht (Eds.), The causes and consequences of group violence: From bullies to terrorists (pp. 165-182). Lanham, MD.: Lexington Books.
    19. Mascheroni, Giovanna, and Kjartan Ólafsson. (2014). “Net Children Go Mobile: Risks and Opportunities. Second Edition,” Milano: Educatt.
    20. Blaya, Catherine. (2019). Cyberhaine, Les Jeunes et la Violence sur Internet. Paris: Nouveaux Mondes. 
    21. Nienierza, Angela, Carsten Reinemann, Nayla Fawzi, Claudia Riesmeyer, and Katharina Neumann. “Too Dark to See? Explaining Adolescents’ Contact with Online Extremism and Their Ability to Recognize It.” Information, Communication & Society 24, no. 9 (July 4, 2021): 1229–46.
    22. Harriman, Nigel, Neil Shortland, Max Su, Tyler Cote, Marcia A. Testa, and Elena Savoia. “Youth Exposure to Hate in the Online Space: An Exploratory Analysis.” International Journal of Environmental Research and Public Health 17, no. 22 (November 2020): 8531.
    23. https://morningconsult.com/fastest-growing-brands-2020/; https://www.nbcnews.com/tech/tech-news/tiktok-surpasses-google-popular-website-year-new-data-suggests-rcna9648