Note to Reader:
This transparency report is the first of its kind for Twitch: it takes a hard look at how we think about safety, the product choices we have made to create a safe space for all of our communities, and how our safety staff, community moderators, and technological solutions help enforce the rules we set. The result is a wide-ranging overview of service-specific data intended to give readers a meaningful understanding of safety-related matters on Twitch and the progress we are making. Future reports will build upon our endeavor of making Twitch an even safer place.
Twitch is a service built to encourage users to feel comfortable expressing themselves and entertaining one another, but we also want our community to be, and feel, safe. Our Community Guidelines attempt to balance the importance of letting users express themselves with clearly conveying the rules of what is not allowed on the service, such as anything that is harmful to others or illegal. Our goal is to foster a community that supports and sustains creators, provides a welcoming and entertaining environment for viewers, and eliminates illegal, negative, and harmful interactions.
At Twitch, we believe everyone in our community - creators, viewers, moderators, and Twitch itself - plays a big role in promoting the health and safety of our community. Through the Community Guidelines, we try to make clear which expression and behavior are allowed on the service, and which are not. We then rely on community moderation actions and user reporting, along with technological solutions such as machine learning and proactive detection, to ensure the Community Guidelines are upheld. Creators and moderators (colloquially known as “mods”) also use tools that we and third parties provide, such as AutoMod, Mod View, and moderation bots, to enforce Twitch’s service-wide standards, or to set higher standards in their own channels.
Twitch is a live-streaming service. The vast majority of the content that appears on Twitch is gone the moment it’s created and seen. That fact requires us to think about safety and community health in ways that are different from other services that are primarily based on pre-recorded and uploaded content. Content moderation solutions that work for uploaded, video-based services do not work, or work differently, on Twitch. Through experimentation and investment, we have learned that for Twitch, user safety is best protected, and most scalable, when we employ a range of tools and processes, and when we partner with, and empower, our community members.
The result is a layered approach to safety - one that combines the efforts of both Twitch (through tooling and staffing) and members of the community, working together. It starts with new and existing creators learning our Community Guidelines, which seek to balance user expression with community safety and set the expectations for behavior on Twitch. Each creator then applies Twitch’s service-wide standards in their channel, or may set an even higher bar if they choose. We provide creators with tools to set, communicate, and enforce the minimum required standards of behavior in their channel. We also provide viewer-level controls that enable viewers to control the content they see while browsing the service. At the same time, Twitch applies various machine learning algorithms to proactively detect and remove certain kinds of harmful content before users ever encounter it. Finally, we empower users to report harmful or inappropriate behavior to Twitch directly. These reports are reviewed and acted on by a team of skilled specialists who can apply service-wide enforcement actions.
We will discuss each of these pieces in detail (from bottom of the pyramid to top) and how they fit together in the following section.
Community Guidelines
Twitch’s Community Guidelines are the foundation of our safety ecosystem on Twitch. These guidelines set the guardrails for all user generated content and activity on the service. Because the Community Guidelines communicate the expectations for behavior on Twitch, clarity is important - we have tried to maximize clarity by adding descriptions and specific examples of prohibited behavior (and specific exceptions) wherever possible. We also recognize that Twitch’s community culture is constantly changing - which leads us to review and update our Community Guidelines to meet the community’s needs. We believe that by setting clear expectations that are updated as necessary, Twitch users will understand the boundaries we have set, and feel free and confident in expressing themselves within those boundaries. We also believe that clear, relevant Community Guidelines are an important foundation for establishing consistency in our enforcement actions to keep the community safe.
Service Level Safety
Service-level safety encompasses all the work we do to uphold the Community Guidelines across Twitch. It is composed of three parts: machine detection; user reporting; and review and enforcement.
Machine Detection: Over the last two years, we have implemented “machine detection” technologies that scan content on the service to remove harmful or inappropriate content - such as nudity, sexual content, gore, and extreme violence - or flag it for review by human specialists. Twitch is predominantly a live-streaming service, and most of the content that is streamed is not recorded or uploaded. Because content is viewed as it is created, live-streaming is a particularly challenging environment for machine detection to keep up with. Nevertheless, we have found ways to use machine detection to bolster proactive moderation on Twitch, and we will continue to invest in these technologies to improve them.
User Reporting: Community reports are a crucial part of maintaining the safety and trust of our community and upholding our Community Guidelines. We believe user reporting is particularly effective on Twitch because the vast majority of the content on Twitch - video and chat - is public. We encourage creators, moderators, and viewers to report content that violates our Community Guidelines so we can take appropriate service-wide action. User reports are sent to our team of content moderation professionals to review.
Review and Enforcement: At Twitch, we have a group of highly trained and experienced professionals who review user reports, and content that is flagged by our machine detection tools. These content moderation professionals work across multiple locations, and support over 20 languages, in order to provide 24/7/365 capacity to review reports as they come in across the globe. Reports are prioritized so that the most harmful behavior can be dealt with most quickly. Review time for any given report is dependent on a number of factors including the severity of the report, the availability of evidence to support the report, and the current volume of the report queue. We also employ a team of experienced investigators to delve into the most egregious reports, and work with law enforcement as necessary.
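The prioritization described above - reviewing the most harmful reports first - can be sketched as a severity-ordered queue. The category names and severity weights below are illustrative assumptions, not Twitch's actual taxonomy:

```python
import heapq

# Hypothetical severity ranking: lower number = reviewed sooner.
# These categories and weights are invented for illustration only.
SEVERITY = {
    "violent_threat": 0,
    "hateful_conduct": 1,
    "harassment": 2,
    "spam": 3,
}

def enqueue(queue, report_id, category):
    """Push a report onto the review queue, ordered by severity."""
    heapq.heappush(queue, (SEVERITY.get(category, 3), report_id))

def next_report(queue):
    """Pop the highest-priority (most severe) report for review."""
    _, report_id = heapq.heappop(queue)
    return report_id

queue = []
enqueue(queue, "r1", "spam")
enqueue(queue, "r2", "violent_threat")
enqueue(queue, "r3", "harassment")
print(next_report(queue))  # r2 - the violent-threat report is reviewed first
```

In practice, as the report notes, priority would also factor in the strength of the supporting evidence and the current volume of the queue, not severity alone.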
We recognize that our content moderation professionals spend a lot of time reviewing content that runs the gamut from negative to extremely disturbing, and we take their health and safety as seriously as we take the health and safety of the Twitch community. To fulfill this commitment, we have invested in tooling to reduce the harmful effects of certain content on reviewers - for example, by having the system our reviewers use automatically show potentially harmful videos in black-and-white, at lower resolution, and/or muted. We also provide programs and benefits to our reviewers that are designed to protect their mental well-being.
On Twitch, we believe safety should also be a reflection of the creator. We enable our creators to set their own standards of acceptable and unacceptable community behavior, with our Community Guidelines providing a baseline standard that all communities are required to uphold. To foster a culture of accountability, creators can leverage other members of their community and create a team of moderators, who assist the creator by moderating chat in the creator’s channel. (Mods can be easily identified in chat by the green sword icon that appears next to their username.) Moderators play many roles, from welcoming new viewers to the channel, to answering questions, to modeling and enforcing community standards. We provide both creators and their mods with a powerful suite of tools, such as AutoMod, Chat Modes, and Mod View, to make their roles as easy and intuitive as possible. These tools can automatically filter chat, let creators and mods see (and delete) questionable chat messages before they are displayed on the channel, give users “time outs” (locking them out of chat for a period of time), or permanently block them from the channel.
In addition to the tools that we provide creators and their mods, we also want viewers to be able to customize the safety of their experience. To enable that, we provide viewers with features - such as content warnings, chat filters, and blocking tools - that they can use to customize content and interactions they encounter across the service.
Advertising is an important part of Twitch, and brands that advertise on Twitch want to know how we are making our users safer, and promoting a more positive and less harmful environment. As a condition of advertising with us, they want to ensure that their brand is not being associated with content or conduct that doesn’t align with their brand values. We address these goals in several ways. First, we only serve advertising on channels that are run by streamers who have demonstrated a track record of streaming responsibly, and have provided us with pre-screened identity information. Further, we allow our advertisers to target the placement of their ads - toward streamers who stream particular games that fit the advertiser’s brand values, or to streamers streaming games that are rated as suitable for a general audience (based on ESRB or PEGI ratings). Advertisers can also make sure that their ads are not shown on channels with content that is flagged by the streamers as “Mature”, or when the streamer is playing a game that is not aligned with the advertiser’s brand values - such as first-person shooter games, or games rated for mature audiences.
Community Guidelines: In 2020, we updated several key areas of our Community Guidelines with the intent to provide more clarity and make the policies easier to apply. To accomplish this, we provided descriptions of prohibited behavior, further clarified with examples and exceptions. Key updates in 2020 included:
In our updates to the Nudity and Attire and the Harassment and Hateful Conduct policies, we started by convening and gathering feedback from focus groups made up of a diverse set of Twitch creators. We also reviewed draft guidelines with our Safety Advisory Council - an eight-member group of creators, academics, and NGO leaders. These steps helped to clarify our guidelines and better reflect the standards and ideals of the Twitch community. We recognize that our service, our community, and the world we live in are not static, and as such we will continue to review and evolve our standards and expectations, and update our Community Guidelines to reflect this evolution.
Operational Capacity: We are committed to ensuring that review of safety reports happens in a timely manner and have invested heavily in increasing our capacity. Over the past year alone, we have quadrupled the number of content moderation professionals available to respond to user reports.
On Twitch, we empower creators to build communities that are unique and personal, but paired with that freedom is the expectation that those communities must be healthy and abide by the Twitch Community Guidelines. To accomplish this, many Twitch creators ask trusted members of their communities to help moderate chat in the creator’s channel. These channel moderators (“mods”) and moderation tools are the foundation of chat moderation in every creator’s Twitch channel. To make this model work, we invest heavily to provide our creators and their mods with tools that are flexible and powerful enough to enforce both Twitch’s Community Guidelines and the channel-specific standards established by the creator. Our suite of moderation tools supports two objectives: identifying potentially harmful content for moderator review, and scaling moderator controls to support fast-moving Twitch chat messages.
Creators and their mods can use tools provided by Twitch to manage who can chat in their channel and what content can be seen in chat. To manage who is actively participating in their community, creators and their mods can remove bad actors from chat by issuing temporary and permanent bans - these bans delete a chatter’s recent messages from the channel, and prevent them from sending further messages in the channel during the time they are suspended. Creators and mods can also change certain settings to restrict who can chat to more trusted groups, such as followers or subscribers only. Mods can send their own chat messages, which carry their green Moderator badge, to guide the tone of the chat. To control what messages can be seen in chat, creators and mods utilize two core features: AutoMod and Blocked Terms. When enabled, AutoMod pre-screens chat messages and holds messages that contain content detected as risky, preventing them from being visible on the channel unless they are approved by a mod. Blocked Terms allow creators to further tailor AutoMod by adding custom terms or phrases that will always be blocked in their channel. These features are best utilized through Mod View, a customizable channel interface that provides mods with a toolbelt of ‘widgets’ for moderation tasks like reviewing messages held by AutoMod, keeping tabs on actions taken by other mods on their team, changing moderation settings, and more.
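As a rough illustration of how a Blocked Terms-style pre-screen might work, here is a minimal sketch; the term list, function names, and matching logic are hypothetical and do not reflect Twitch's actual implementation:

```python
import re

def build_blocklist_pattern(blocked_terms):
    """Compile a case-insensitive pattern matching any blocked term as a whole word."""
    escaped = (re.escape(term) for term in blocked_terms)
    return re.compile(r"\b(?:%s)\b" % "|".join(escaped), re.IGNORECASE)

def prescreen(message, pattern):
    """Return 'held' if the message matches a blocked term, else 'shown'.

    A held message would wait for moderator approval instead of
    appearing in chat immediately.
    """
    return "held" if pattern.search(message) else "shown"

# Hypothetical channel-specific blocklist
pattern = build_blocklist_pattern(["badword", "worse phrase"])
print(prescreen("hello everyone!", pattern))          # shown
print(prescreen("that was a BadWord move", pattern))  # held
```

A real system like AutoMod goes well beyond exact term matching (e.g., machine-learned risk scoring and obfuscation handling), but the hold-then-approve flow is the same idea.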
It’s important to remember that actions taken by a creator and their moderator(s) can only affect a user’s access in that channel. Channel bans, time-outs and chat deletion only apply within a channel, and do not affect the user’s access to other channels or other parts of the Twitch service. However, creators and moderators (or any Twitch user) can report conduct that violates our Community Guidelines through the Twitch reporting tool, which can then be actioned on a service-wide basis by Twitch moderation staff.
The following sections provide additional information regarding how creators and their moderators set and enforce standards for the chat in their own channels.
The overwhelming majority of user interaction on Twitch occurs in channels that are moderated by channel moderators, AutoMod, or both.
In H1 2020, 65% of live content viewed on Twitch (measured in terms of minutes watched) occurred on channels that had Twitch’s AutoMod feature actively monitoring chat for harmful messages; in H2 2020, this increased to 71%. We believe this substantial increase can largely be attributed to having AutoMod enabled by default for new channels in H2 2020 that did not have any assigned users as channel moderators. As shown in the chart above, the percentage of hours watched in channels that had at least one active moderator increased slightly, from 82% in H1 to 86% in H2 - we believe this high percentage shows that larger channels are very likely to have active moderators. Most importantly, throughout 2020, over 92% of live content viewed on Twitch occurred in channels with chat that was moderated by active moderators, AutoMod, or both; and that coverage increased to over 95% in H2.
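The coverage figures above are shares of total minutes watched, not shares of channels. A minimal sketch of how such a metric could be computed, using invented toy data:

```python
def moderated_coverage(minutes_by_channel, moderated_channels):
    """Share of minutes watched that occurred in moderated channels."""
    total = sum(minutes_by_channel.values())
    moderated = sum(
        mins for chan, mins in minutes_by_channel.items()
        if chan in moderated_channels
    )
    return moderated / total

# Toy data: channel -> minutes watched (hypothetical)
minutes = {"chan_a": 700, "chan_b": 200, "chan_c": 100}
# Channels with AutoMod enabled or at least one active moderator
covered = {"chan_a", "chan_b"}
print(round(moderated_coverage(minutes, covered), 2))  # 0.9
```

Weighting by minutes watched is why a few large, well-moderated channels can push coverage above 90% even if many small channels have no moderators.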
The vast majority of content removals on Twitch are removals of chat messages by channel moderators acting within individual channels. Twitch provides tools such as customizable Blocked Terms and AutoMod (described in more detail above), which allow channels to apply filters that proactively screen messages out of chat before they are seen. Channel moderators can also actively monitor chat and delete harmful or disruptive messages within seconds after they are posted.
In H1 2020, 24.4 billion chat messages were sent on Twitch; in H2, that number increased to 32.6 billion (a 33% increase). 61.5 million messages were proactively removed from chat using Blocked Terms and AutoMod in H1 2020; this number increased to 98.8 million in H2 (an increase of 61%). In H1 2020, channel moderators manually deleted 15.9 million chat messages; in H2 2020, this number increased to 31.5 million (an increase of 98%). These increases in the amount of objectionable chat content removed can be partially explained by the 33% growth in the number of chat messages sent on Twitch, and the 40% overall growth in the number of channels on Twitch, between H1 and H2 2020. We believe the remainder of the increase is due to the increase in moderation coverage discussed above, and to the March launch of the Mod View dashboard, which makes it easier for mods to remove content.
Channel Enforcement Actions
In addition to deleting messages, channel moderators can choose to remove harmful and disruptive users from a channel, using either a temporary timeout or an outright ban, to prevent any future harm they might cause in the channel. We have recently enhanced these tools to allow users who have been banned from a channel to appeal that decision to the channel moderator, and be reinstated if the moderator agrees.
In H1 2020, creators and their mods imposed 2.3 million permanent channel bans; in H2 2020, this number increased to 3.9 million channel bans (an increase of 72%). Similarly, temporary channel timeouts increased from 3.2 million in H1 2020 to 4.5 million in H2 2020 (an increase of 40%). The overall increase in channel bans and timeouts can largely be attributed to the 29% increase in the number of unique channels streaming - from 14.6 million channels in H1 2020 to 18.8 million in H2 2020.
Here again, it’s worth noting that Twitch is a live-streaming service, and the vast majority of the content on Twitch is ephemeral. For this reason, we do not focus on “content removal” as the primary means of enforcing streamer adherence to our Community Guidelines. Rather, live content is flagged - by either machine detection or user reports - to our team of content moderation professionals, who then issue “enforcements” (typically a warning or timed channel suspension) for verified violations. If there happens to be recorded content that accompanies a violation, that content is removed. But most enforcements do not require content removal, because apart from the report, there is no longer a record of the violation - the live, violative content is already gone. For this reason, we believe the most appropriate measure of our safety efforts is enforcements - and that is how we have oriented the following sections of this report.
For clarity, please note that the statistics regarding enforcements in the following sections do not include, and are not duplicative of, the channel-level enforcements discussed in the previous section.
Increase in Hours Watched and Reports Made on Twitch
As shown in the chart above, user reports for all types of violations increased from 5.9M during H1 2020 to 7.4M during H2 2020 (a 25% increase). Over the same period, Twitch experienced rapid growth, as evidenced by a 22% increase in usage (measured as hours watched). While the absolute number of user reports increased from H1 to H2, the number of reports per thousand hours watched only increased from 0.74 to 0.76 (shown as the green line in the chart above). We interpret this to mean that while Twitch usage has grown rapidly in 2020, the rate of occurrence of reported behavior has remained relatively flat.
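The reports-per-KHW metric is simply reports divided by hours watched in thousands. A small sketch using the report's H1 figure of 5.9M reports, paired with a hypothetical hours-watched total (the report states rates, not raw hours):

```python
def reports_per_khw(reports, hours_watched):
    """Reports per thousand hours watched (KHW)."""
    return reports / (hours_watched / 1000.0)

# 5.9M reports against a hypothetical 8 billion hours watched
# yields a rate close to the report's stated H1 figure of 0.74.
rate = reports_per_khw(5_900_000, 8_000_000_000)
print(f"{rate:.4f} reports per KHW")  # 0.7375 reports per KHW
```

Normalizing by hours watched is what lets the report separate platform growth from behavior trends: absolute reports rose 25%, but the rate barely moved.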
User reports are prioritized (based on a number of factors, including the classification and severity of the reported behavior, and whether or not the behavior is illegal) and sent to our content moderation team for review. If the reviewer agrees that the report demonstrates a violation of the Community Guidelines, then the reviewer will issue an enforcement action against the violator’s account. Depending on the nature of the violation, we take a range of actions including issuing a warning, a temporary suspension (1-30 days), and for the most serious offenses, an indefinite suspension from Twitch. If any content that contains the violation has been recorded on the service, we will remove it.
We also run an appeals process, so that if a user believes an enforcement is mistaken, unwarranted or unfair, they can appeal the enforcement. Appeals are managed by a separate group of specialists within our content moderation team.
For more information on account enforcement, see our Account Enforcements page.
Total enforcement actions increased from 788 thousand in H1 2020 to 1.1 million in H2 2020, a 41% increase. The majority of this increase can be attributed to the 22% overall growth in Twitch usage over that period; the remainder is reflected in the number of enforcement actions per thousand hours watched (“KHW”), which rose from 0.099 per KHW in H1 2020 to 0.114 per KHW in H2 2020 (as shown by the green line in the chart above). We believe this 15% increase in enforcement actions per KHW is within normal operating variances, and overall is not attributable to any particular cause. We would also expect a slight uptick in reporting, as we have made efforts throughout the year to educate the Twitch community on their options for reporting. In the sections below, we describe the causes of increases in certain types of enforcements.
In addition to the enforcement actions listed above, on three occasions in 2020, Twitch programmatically identified large numbers of bot accounts. These accounts, which are typically used to artificially inflate view counts, were identified and terminated. These actions are not included in the figures listed above because they do not stem from reports or machine detection of harmful content; however, these three programmatic bulk actions resulted in the issuance of 5.8 million additional enforcements.
In the sections below, we provide data and analysis on the various types of enforcement actions that Twitch has issued in 2020.
We do not tolerate conduct or speech that is hateful or harassing, or that encourages or incites others to engage in hateful or harassing conduct. This includes unwanted sexual advances and solicitations, inciting targeted community abuse, and expressions of hatred based on an identity-based protected characteristic.
User reports for hateful conduct, sexual harassment, and harassment increased by 19% from H1 to H2 2020, while reports per thousand hours watched were slightly down, at 0.219 per KHW in H2 (compared to 0.224 in H1). However, as shown in the chart below, enforcement actions in these categories increased by 214% on an absolute basis and by 158% on a per-KHW basis.
The primary reasons for the increase in enforcement rate in H2 2020 are: (1) we made improvements to the user reporting and enforcement processes in early H2 that enabled our content moderation teams to identify and enforce more of these types of reports; and (2) from May through August 2020, we significantly increased our capacity to review user reports, which allowed us to respond to more reports of harassment and hateful conduct more quickly. We will continue to invest in enforcement tools and capacity that make it easier and faster to review reports of harassing and hateful behavior going forward. Additionally, in January 2021, we implemented a revised set of Community Guidelines regarding hateful conduct, sexual harassment, and harassment, which we expect will further enhance our efforts - and those of the community - to keep these kinds of behaviors off of Twitch.
In an effort to limit community exposure to content that may be illegal, upsetting or damaging, we prohibit media and conduct that focuses on extreme gore or violence, sexual violence, violent threats, self-harm behaviors, animal cruelty, dangerous or distracted driving, and other illegal, disturbing or frightening content/conduct.
The number of enforcement actions for this type of behavior increased from 3,825 enforcements in H1 2020, to 7,429 enforcement actions in H2 2020 (an increase of 94%). As with enforcements for Hateful Conduct, Sexual Harassment and Harassment (detailed in the preceding section), this sizable jump in the number of enforcements is due to actions we took at the beginning of H2 to improve user reporting, moderation tools, and review capacity. Also during H2 2020, we made further improvements in our machine detection system for this type of content. We will continue to invest in all of these areas in 2021.
Twitch does not allow content that depicts, glorifies, encourages, or supports terrorism, or violent extremist actors or acts. This includes threatening to or encouraging others to commit acts that would result in serious physical harm to groups of people or significant property destruction. This also includes displaying or linking to terrorist or extremist propaganda, even for the purposes of denouncing such content.
We receive few reports in this category, and issue few enforcements, as the numbers in the chart above show. Nevertheless, we consider this type of conduct to be of the highest severity. In October, Twitch released a revised policy regarding Terrorism and Extremist Content, providing increased clarity on how we define terrorist organizations and how our internal safety teams categorize related content. These clarifications broadened the definition of content that fits in this category (including behaviors that were previously categorized as other types of abuse), resulting in the substantial increase in enforcements of this kind (in percentage terms, if not in absolute number). In 2020, we did not have any instances of live-streamed terrorist activity on Twitch. The enforcements issued in this category were for showing terrorist propaganda (77 enforcements in 2020), and for glorifying or advocating acts of terrorism, extreme violence, or large-scale property destruction (10 enforcements in 2020).
We limit community exposure to content that is not appropriate for a diverse audience. This includes restricting content that involves nudity, insufficient coverage of the body, inappropriate attire or is sexual in nature.
In late October, we implemented improvements to our proactive detection of nudity, which resulted in an increased volume of enforcements in this category.
Twitch prohibits disruptive activities such as spamming, because these types of activities violate the integrity of Twitch services, and diminish users’ experiences on Twitch. We also do not allow other dishonest or inappropriate behaviors such as: impersonation, broadcasting others against their wishes, ban evasion, misuse of Twitch tools, intentionally miscategorizing a stream, cheating on a game or playing a prohibited game, inappropriate usernames, and underaged user accounts.
Overall enforcements in this category increased from 734 thousand in H1 2020 to 987 thousand in H2 2020 (an increase of 34%). This increase was higher than the 22% rate of growth of Twitch usage between H1 and H2 2020. As shown by the green line in the chart above, the number of enforcements per thousand hours watched increased from 0.092 in H1 to 0.102 in H2 (an increase of 10%). Because of the variety of different violations that make up this category, there is a similarly long list of causes of the increase.
Twitch made strides in a number of different areas to combat these kinds of behaviors in 2020. For example, Whispers, Twitch’s one-to-one private chat function, has historically been the primary mode for spam abuse because these messages are private and not subject to channel moderation. To combat spam in Whispers, in 2020 we launched multiple machine detection models that either block spam messages entirely, or flag the message to the recipient and urge the recipient to file a user report. The success of these models drove a 70% decline in Whisper-related spam reports from H1 2020 to H2 2020. We are continuing to develop proactive detection and other technological solutions to reduce the prevalence of spam, ban evasion, inappropriate usernames, and other violations.
Twitch’s Law Enforcement Response team is responsible for handling all cases related to any harm against a child, escalation of violent threats or terrorist acts to appropriate authorities, any other legally required reporting to law enforcement, and requests for user data from law enforcement agencies. Our content moderation team escalates these types of cases to our Law Enforcement Response team.
We do not tolerate child sexual exploitation. When we are made aware of media depicting child sexual exploitation, or grooming behavior, we remove the content, investigate, and report to authorities via the National Center for Missing & Exploited Children. We also work directly with aligned organizations throughout the world - like INHOPE and ICMEC - to address and prevent child exploitation media and grooming from occurring on Twitch.
NCMEC reporting increased 66% between H1 and H2. This increase is driven by improvements to Twitch’s investigation process for related escalations that allowed internal teams to more holistically identify patterns of behavior and therefore make an increased volume of NCMEC reports.
Whenever and wherever Twitch identifies credible threats of violence, Twitch will proactively send user data to appropriate law enforcement agencies. Twitch had 38 such cases in 2020.
Escalations to law enforcement decreased 27% in H2. We believe this is largely due to a decrease in public gatherings caused by COVID-19: fewer public gatherings means fewer places and events for people to direct violent threats toward.
Twitch complies with data requests from law enforcement around the world in relation to crimes they may be investigating. We do so using our criminal subpoena and MLAT (“Mutual Legal Assistance Treaty”) process which requires all data requests to be served through our process server, CSC.
Subpoenas and preservation requests processed by Twitch increased 37% in H2. This is within the expected volume of valid subpoenas and preservation holds received from law enforcement.