TikTok DSA Transparency Report - 25 October 2023


At TikTok, our mission is to inspire creativity and bring joy. The safety and well-being of our community is our priority,
and we have more than 40,000 trust and safety professionals globally working to protect our users. TikTok has a
strong track record in proactive transparency reporting; we have been publishing transparency reports since 2019.
We also report every six months on our efforts to combat disinformation on our platform under the Code of Practice on Disinformation. Building on our transparency efforts and in line with our obligations under the Digital Services
Act (DSA), we are pleased to publish our first DSA transparency report for the reporting period of 1 September 2023
to 30 September 2023.

We have a number of measures designed to keep users safe across priority areas, including protection from illegal and other harmful content. We are pleased to report on the numbers underlying these measures, including the additional
reporting option we have implemented to allow people in the European Union to report content they believe is illegal.
Some key points in our report are:

● TikTok takes action against illegal and other harmful content under our Policies (defined below) proactively far
more often than in response to user reports. During September 2023, we removed 4 million items of violative
content under our Policies, seven times the volume of violative content removed following a user report.
● Since the introduction of our additional reporting option for illegal content, we have received around 35,000 illegal
content reports, corresponding to approximately 24,000 unique pieces of content. We estimate that 28% of
those items were found to violate our Policies or local law and were actioned.
● TikTok has more than 6,000 people dedicated to moderating content, covering at least one official language
for each of the 27 European Union Member States.

Providing transparency to our community about how we keep them safe has no finish line. We are proud of our efforts
in this first DSA transparency report, but we acknowledge that, as we are reporting on one month of metrics (September
2023) within a three-week turnaround period, we still have more work to do; we have set out the limitations
of our reporting in each of the annexes. We are working hard to address these points ahead of our next DSA
transparency report.

Report index

1. Content moderation (+ Annex A)

2. Illegal content reports (+ Annex B)

3. TikTok’s content moderators (+ Annex C)

4. Orders from government authorities (+ Annexes D and E)

5. Complaints and disputes (+ Annex F)

6. Suspensions

7. Monthly active recipients (+ Annex G)



Section 1. Content moderation

TikTok strives to foster an open and inclusive environment where people can create, find community, and be
entertained. To maintain that environment, we take action upon content and accounts that violate our Terms of
Service, Community Guidelines, or Advertising Policies (together, our Policies). We are committed to being
transparent with our community about the moderation actions we take. The number and type of restrictions we
impose as part of our content moderation activities are available at Annex A.

Our Policies are the starting point when it comes to how we form and operate our content moderation strategies and
practices and they contain provisions which prohibit various forms of illegal and other harmful content. We use a
combination of automation and human moderation to identify, review, and action content that violates our Policies.

Key principles

We operate our content moderation processes using automation and human moderation in accordance with the
following four pillars, which provide that we will:

1. Remove content from the platform that breaks our rules (noting that we do not allow several types of mature
themes, including gory, gruesome, disturbing, or extremely violent content);
2. Age-restrict mature content (that does not violate our Community Guidelines but which contains mature
themes) so it is only viewed by adults (18 years and older);
3. Maintain For You Feed eligibility standards to help ensure any content that may be promoted by the
recommendation system is appropriate for a broad audience; and
4. Empower our community with information, tools, and resources.

Automated Review

We place considerable emphasis on proactive detection to remove violative content. Content that is uploaded to the
platform is typically first reviewed by our automated moderation technology, which aims to identify content that
violates our Policies before it is viewed or shared by other people on the platform or reported to us. While undergoing
this review, the content is visible only to the uploader.

If our automated moderation technology identifies content that is a potential violation, it will either be automatically
removed from the platform or flagged for further review by our human moderation teams. In line with our safeguards
to help ensure accurate decisions are made, automated removal is applied when violations are the most clear-cut.
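
For illustration only, the following minimal sketch shows the general shape of such a triage flow: clear-cut violations are removed automatically, potential violations are routed to human review, and everything else is published. The single ‘violation score’, the thresholds and the function names are our own illustrative assumptions, not a description of TikTok’s actual systems.

```python
from dataclasses import dataclass
from enum import Enum


class ModerationOutcome(Enum):
    AUTO_REMOVE = "auto_remove"    # clear-cut violation, removed automatically
    HUMAN_REVIEW = "human_review"  # potential violation, routed to moderators
    PUBLISH = "publish"            # no violation detected, visible to others


@dataclass
class UploadedContent:
    content_id: str
    violation_score: float  # hypothetical score from automated models (0.0 to 1.0)


# Illustrative thresholds: automated removal only for the most clear-cut cases.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60


def triage(content: UploadedContent) -> ModerationOutcome:
    """Route newly uploaded content based on an automated violation score."""
    if content.violation_score >= AUTO_REMOVE_THRESHOLD:
        return ModerationOutcome.AUTO_REMOVE
    if content.violation_score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationOutcome.HUMAN_REVIEW
    return ModerationOutcome.PUBLISH


if __name__ == "__main__":
    print(triage(UploadedContent("vid-1", violation_score=0.98)))  # AUTO_REMOVE
    print(triage(UploadedContent("vid-2", violation_score=0.70)))  # HUMAN_REVIEW
    print(triage(UploadedContent("vid-3", violation_score=0.10)))  # PUBLISH
```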

We use a variety of automated tools, including:

● Computer Vision models, which help to detect objects (for example, visual signals, emblems, logos, or objects
that are known to be associated with extremist and hate groups) so it can be determined whether the
content likely contains material which violates our Policies.
● Keyword lists and models are used to review text and audio content to detect material in violation of our
Policies. We work with various external experts, like our fact-checking partners, to inform our keyword lists.
● Where we have previously detected content that violates our Policies, we use de-duplication and hashing
technologies that enable us to recognise copies or near copies of such content (a simplified sketch of this kind
of matching is shown after this list). This helps prevent further re-distribution of violative content on the platform.
We also work with external groups, such as Tech Against Terrorism, on hateful or violent extremist content; these
groups help us more quickly detect and remove violative content that has already been identified off the platform.
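
As a rough illustration of hash-based de-duplication, the sketch below compares a hash of newly uploaded content against a store of hashes of previously removed content. It uses a simple exact-match hash for brevity; real systems typically rely on perceptual hashing to catch near copies, and the class and method names here are illustrative assumptions rather than TikTok tooling.

```python
import hashlib


class RemovedContentHashStore:
    """Holds hashes of content previously removed for violating policy."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def add(self, content_bytes: bytes) -> None:
        self._hashes.add(self._digest(content_bytes))

    def is_known_violation(self, content_bytes: bytes) -> bool:
        return self._digest(content_bytes) in self._hashes

    @staticmethod
    def _digest(content_bytes: bytes) -> str:
        # Exact-match hash for illustration; perceptual hashes would be needed
        # to recognise near copies of images or video frames.
        return hashlib.sha256(content_bytes).hexdigest()


if __name__ == "__main__":
    store = RemovedContentHashStore()
    store.add(b"previously removed violative video bytes")

    new_upload = b"previously removed violative video bytes"
    if store.is_known_violation(new_upload):
        print("Match against removed content: block re-distribution")
    else:
        print("No match: continue normal review")
```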

We are continuing to invest in improving the precision of our automated moderation systems so that we can more
effectively remove violative content at scale, while also reducing the number of incorrect removals. If users or
advertisers believe we have made a mistake, they can appeal the removal of their content.

When assessing the effectiveness of our automated moderation technologies, we consider the relevant measure to be
the data we hold about the outcomes of user appeals against the automated removal of videos and ads. We treat the
proportion of appealed videos and ads that are not reinstated on the platform as the indicator of accuracy, and the
proportion of videos and ads that are reinstated following an appeal as the indicator of error. For September 2023, the
accuracy rate for our automated moderation technologies for videos and ads was approximately 94% and the error
rate was approximately 6%.
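
Expressed as a calculation, these rates can be derived from appeal outcomes alone. The short function below illustrates the arithmetic; the variable names are ours, and the example figures are invented purely to show the shape of the calculation, not to reproduce the reported numbers.

```python
def appeal_based_rates(appeals_total: int, appeals_reinstated: int) -> tuple[float, float]:
    """Return (accuracy_rate, error_rate) based on appeal outcomes.

    accuracy_rate: share of appealed removals that were upheld (not reinstated)
    error_rate:    share of appealed removals that were reinstated
    """
    if appeals_total == 0:
        raise ValueError("No appeals to evaluate")
    error_rate = appeals_reinstated / appeals_total
    accuracy_rate = 1.0 - error_rate
    return accuracy_rate, error_rate


# Invented example: 10,000 appeals against automated removals, 600 reinstated.
accuracy, error = appeal_based_rates(appeals_total=10_000, appeals_reinstated=600)
print(f"accuracy ≈ {accuracy:.0%}, error ≈ {error:.0%}")  # accuracy ≈ 94%, error ≈ 6%
```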

Human moderation

In order to support fair and consistent review of potentially violative content, moderators work alongside our
automated moderation systems and take into account additional context and nuance which may not always be
picked up by technology.

Human moderation also helps improve our automated moderation systems by providing feedback for the underlying
machine learning models to strengthen our ongoing detection capabilities. This continuous improvement helps to
reduce the volume of potentially distressing videos that moderators view and enables them to focus more on content
that requires a greater understanding of context and nuance (such as misinformation, hate speech and harassment).
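
As a schematic of that feedback loop, the sketch below collects final human moderation decisions as labelled examples that could later be used to retrain a detection model. It is a simplified illustration under our own naming, not a description of TikTok’s actual pipeline.

```python
from dataclasses import dataclass, field


@dataclass
class LabelledExample:
    content_id: str
    features: list[float]  # model input features for the content
    is_violation: bool     # final human moderation decision


@dataclass
class ModerationFeedbackBuffer:
    """Collects human decisions to use as training labels for detection models."""
    examples: list[LabelledExample] = field(default_factory=list)

    def record_decision(self, content_id: str, features: list[float], is_violation: bool) -> None:
        self.examples.append(LabelledExample(content_id, features, is_violation))

    def training_batch(self) -> tuple[list[list[float]], list[bool]]:
        # Features and labels in a shape a typical classifier could consume.
        return [e.features for e in self.examples], [e.is_violation for e in self.examples]


if __name__ == "__main__":
    buffer = ModerationFeedbackBuffer()
    buffer.record_decision("vid-42", features=[0.7, 0.1, 0.3], is_violation=True)
    buffer.record_decision("vid-43", features=[0.2, 0.0, 0.1], is_violation=False)
    X, y = buffer.training_batch()
    print(len(X), "labelled examples ready for retraining")
```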
The responsibilities of our content moderators include:

● Reviewing content flagged by technology: When our automated moderation systems identify potentially
problematic content but cannot make an automated decision to remove it, they send the content to our
moderation teams for further review. To support this work, we have developed technology that can identify
potentially violative items – for example, emblems associated with extremist groups – in video frames, so
that content moderators can carefully review the video and the context in which it appears. This technology
improves the efficiency of moderators by helping them more adeptly identify violative images or objects,
quickly recognise violations, and make decisions accordingly.
● Reviewing reports from our community: We offer our community easily accessible in-app and online
reporting tools so they can flag any content or account they feel is in violation of our Policies or is illegal.
These reports are an important component of our content moderation process; however, the vast majority
of removed content is identified proactively before it is reported to us (see Annex A for more information).
● Reviewing popular content: We manually review video content when it reaches certain levels of popularity
in terms of the number of video views, reducing the risk of violative content being shown in the
For You Feed or otherwise being widely disseminated.
● Assessing appeals: If someone disagrees with our decision to remove their content or account, they can
appeal the decision for reconsideration. These appeals may be sent to moderators to decide whether the
content or account should be reinstated on the platform.

Section 2. Illegal content reports

Our Policies apply to all accounts and content on the platform, and they often align with, and sometimes go beyond,
local law requirements. While we primarily enforce our Policies at our own initiative through automated and human
moderation, users can also use the reporting functions to alert TikTok to content they believe violates our Policies.
The number of reports made in the European Union to TikTok during the period of September 2023 is at Annex B.
The DSA envisages that trusted flaggers will also be able to submit illegal content reports. However, we have not
received any illegal content reports from trusted flaggers, as no trusted flaggers have yet been designated under the DSA.

As part of our requirements under the DSA, we have introduced an additional reporting channel for our community in
the European Union to ‘Report Illegal Content,’ which enables users to alert us to content they believe breaches the
law. When users report suspected illegal content, they will be asked to select a category of illegal content they are
reporting under. Reporters are also asked to provide additional information, such as the relevant country; if
possible, the specific law at issue; and a clear explanation of why they think the content violates the law. If the
report is incomplete (for example, it does not provide enough information for us to assess if the content is illegal) or
materially unsubstantiated, the report may be rejected. The reporter will be notified of this decision and provided with
an opportunity to re-submit their report with more information. This helps us properly and effectively consider and
respond to each report.
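
A minimal sketch of what that intake validation could look like is set out below. The field names, the length check and the rejection messages are illustrative assumptions drawn from the description above, not TikTok’s actual report schema.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class IllegalContentReport:
    content_id: str
    category: str                        # category of illegal content selected by the reporter
    country: str                         # Member State the report relates to
    explanation: str                     # why the reporter believes the content is unlawful
    specific_law: Optional[str] = None   # optional: the specific law said to be breached


def validate_report(report: IllegalContentReport) -> tuple[bool, str]:
    """Return (accepted, message); incomplete reports are rejected with an
    invitation to re-submit with more information."""
    if not report.category:
        return False, "Rejected: please select a category of illegal content and re-submit."
    if not report.country:
        return False, "Rejected: please indicate the relevant country and re-submit."
    if len(report.explanation.strip()) < 20:  # illustrative completeness check
        return False, "Rejected: please explain why the content breaches the law and re-submit."
    return True, "Accepted: the report will be assessed against our Policies and local law."


if __name__ == "__main__":
    report = IllegalContentReport("vid-7", "Illegal hate speech", "DE", "Too short")
    print(validate_report(report))
```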



Illegal content reports are assessed through a combination of automation and human review. TikTok will first review the
content against our Policies and, where a violation is detected, the content may be removed globally. If it is not
removed, our illegal content moderation team will further review the content to assess whether it is unlawful in the
relevant jurisdiction; this assessment is undertaken by human review. In making our determination, TikTok is required
to balance any competing legal rights, such as freedom of speech. Content found to be illegal will generally be
restricted in the country where it is illegal or, in some cases, across the EEA region, or removed from the platform
entirely. Those who report suspected illegal content will be notified of our decision, including if we consider that the
content is not illegal. Users who disagree can appeal those decisions using the appeals process.
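
The decision flow described above, first our Policies, then local law, then a choice of restriction scope, can be sketched roughly as follows. The outcome names and helper function are illustrative, not a specification of TikTok’s process.

```python
from enum import Enum


class Outcome(Enum):
    REMOVED_GLOBALLY = "removed_globally"         # violated our Policies
    RESTRICTED_IN_COUNTRY = "restricted_country"  # illegal under local law
    RESTRICTED_IN_EEA = "restricted_eea"          # in some cases, restricted region-wide
    NO_ACTION = "no_action"                       # neither violative nor illegal


def assess_illegal_content_report(
    violates_policies: bool,
    illegal_under_local_law: bool,
    restrict_across_eea: bool = False,
) -> Outcome:
    # Stage 1: review against our Policies; violations may be removed globally.
    if violates_policies:
        return Outcome.REMOVED_GLOBALLY
    # Stage 2: human legal review against the law of the relevant jurisdiction,
    # balancing competing rights such as freedom of speech.
    if illegal_under_local_law:
        return Outcome.RESTRICTED_IN_EEA if restrict_across_eea else Outcome.RESTRICTED_IN_COUNTRY
    return Outcome.NO_ACTION


if __name__ == "__main__":
    print(assess_illegal_content_report(violates_policies=False, illegal_under_local_law=True))
```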

Section 3. TikTok’s moderators

Our mission to inspire creativity and bring joy to people around the world is made possible by the critical work of our
content moderators who review and remove illegal and other harmful content and behaviour from the platform.
TikTok has 6,125 people dedicated to the moderation of content in the European Union as of the end of
September 2023 (see Annex C for more information).

Our Trust & Safety teams lead our approach to content moderation across the European Union and are responsible
for the development of our Community Guidelines and related moderation policies and for the moderation of content.
For the European Union, TikTok’s Trust & Safety work is led from Dublin, Ireland. In addition to TikTok’s Global Head of
Trust & Safety, other key members of the global Trust & Safety leadership team are also based in Dublin.
Our Monetisation Integrity and Business Integrity teams also play a key role in the moderation of ads and branded
content, and are responsible for the development of TikTok’s Advertising Policies and related moderation policies.

Training

To ensure a consistent understanding and application of our Policies, all content moderation personnel receive training
across our relevant Policies. All content moderators undergo training on TikTok’s content moderation systems and
moderator wellness issues. Personnel involved in reviewing reported illegal content receive additional focused training
on assessing the legality of reported content.

Content moderation training materials are kept under review to ensure that they are accurate and current.
Such materials include clearly defined learning objectives to ensure our content moderators understand the core
policy issues and their underlying policy rationale, key terms and policy exceptions (where applicable).

Members of our Trust & Safety teams attend regular internal sessions dedicated to knowledge sharing and discussion
about relevant issues and trends. They also participate in various external events to share expertise and support their
continued professional learning. These engagements contribute to the team’s awareness of the risks which may arise
on the platform, which in turn informs our approach to content moderation. For example, members of our
Trust & Safety team (including leaders of our fact-checking programme) attended Global Fact 10, the global
fact-checking conference hosted by the International Fact-Checking Network, in June 2023, and hosted a panel
discussion on our approach to countering harmful misinformation.

Support

At TikTok, building and maintaining a safe experience for our community is of the utmost importance. Our primary
focus is on preventative care measures that help foster resilience and minimise the risk of psychological injury, through
well-timed support, training and tools provided from recruitment through onboarding and throughout a moderator's
time reviewing TikTok content. These measures may include tools and features that allow Trust & Safety employees to
control their exposure to graphic content when reviewing or moderating content, including grayscaling, muting and
blurring; training for managers to help them identify when a team member may need additional well-being support;
and clinical and therapeutic support.

While we focus on preventative care measures, our moderators may at times be required to review potentially harmful
content, which makes providing the right support essential. We recognise this and are committed to prioritising the
health, safety, and well-being of our people. We use an evidence-based approach to develop programs and resources
that support moderators' psychological well-being.

For our Trust & Safety teams, we also provide them with membership to the Trust and Safety Professional
Association. This membership allows them to access resources for career development, participate in workshops and
events, and connect with a network of peers across the industry.

Qualifications & linguistic expertise

Some of the issues which arise on the platform are highly localised in terms of language and region, which requires
deep knowledge and awareness of relevant cultural nuances, terms and context.

To address this, and to ensure our content moderators are appropriately qualified to make decisions, we have
regional policy teams in each region, which include coverage for all European Union Member States, for example
with designated policy country managers for larger countries or policy managers covering a number of smaller
countries.

Based primarily in Dublin, the role of our EMEA regional policy team is to bring regional insights, cultural context, local
knowledge and policy understanding to ensure that global moderation policies are localised as appropriate for the
particular context (i.e. across various countries and regions within the European Union). The team plays an important
role in risk identification and mitigation at a regional and local level through its subject matter expertise; close
collaboration with cross-functional teams to detect regional and local trends; and engagement with external experts,
such as NGOs, civil society organisations and government authorities.

The localised policy outputs from the EMEA regional policy team enable our content moderation teams to take a
regionally informed approach to content moderation (e.g. rapidly evolving alternative vocabulary or terminology in
relation to an unfolding election issue, which may vary and evolve over time and between countries and languages).

We have also established a number of specialised Trust & Safety moderation teams to assist our moderators in
reviewing content relating to complex issues. For example, assessing harmful misinformation requires additional context and
assessment by our specialised misinformation moderators who have enhanced training, expertise and tools to
identify such content, including direct access to our fact-checking partners.

We moderate content in more than 70 languages globally and we are transparent in our regular Community
Guidelines Enforcement Reports about the primary languages our moderators work in globally. We have language
capabilities covering at least one official language for each of the 27 European Union Member States, as well as a
number of other languages that are commonly spoken in the region (for example, Arabic and Turkish). This language
capability complements our awareness-raising materials, like the Community Guidelines, that are also available in
multiple languages. We also have moderation personnel who are not assigned to a particular language and who assist
in reviewing content such as photos and profiles.

Section 4. Orders from government authorities

We may receive requests from government authorities in the European Union to remove content. When we receive
such requests from government authorities, we review and take action upon content in line with our Policies and the
applicable law. For September 2023, we received 17 requests from government authorities in the European Union to
remove content (see more in Annex D).

We may also receive requests from government authorities in the European Union for the disclosure of user information.
We handle such requests in a manner that respects the privacy and other rights of our users. Any request we receive is
carefully reviewed on a case-by-case basis in line with our Law Enforcement Guidelines. Our policies and procedures
govern how we handle and respond to such requests, and we only disclose user data where a request is based on a
valid legal process. For September 2023, we received 452 information requests from government authorities in the
European Union (see more in Annex E).



Section 5. Complaints and disputes

Complaints

We provide notifications to users and advertisers who have violated our Policies or applicable local laws. If content that
we do not allow is posted, or if we suspend or ban an account because of a violation, users and advertisers will be
notified in the app or directly. Anyone can appeal these decisions once they receive the notification of a content
violation or account ban or suspension. We provide information to users about how to appeal such decisions here.
We report on the number of appeals, and the action we take in response to those appeals, in Annex F.

Disputes submitted to out-of-court dispute settlement bodies

The DSA envisages that users of the platform will have the right to access a third party out-of-court dispute
settlement process to resolve any disputes that they may have with us regarding moderation actions (including in
relation to any appeals). Those processes have not yet been established under the DSA. Once they are in place,
further details will be made available on the European Commission's web page and we will engage with the
appointed bodies and report on these processes in future transparency reports.

Section 6. Suspensions

We may suspend or permanently ban accounts where we identify violations of our Policies, including where:
● the user does not meet the minimum age or other requirements as indicated in our Terms of Service;
● the account impersonates another person or entity in a deceptive manner;
● a user has a severe violation on their account (such as promoting or threatening violence);
● an account reaches the strike threshold for multiple violations within a policy or feature; or
● an account has multiple violations of our Intellectual Property Policy.

We report on the number of accounts suspended during September 2023 for violations of our Policies in section (b)
of Annex A. Separately from the suspension of accounts for violations of our Policies, TikTok did not impose suspensions
on accounts for the frequent provision of manifestly illegal content. Separately from the rejection of incomplete or
materially unsubstantiated illegal content reports, TikTok did not suspend the processing of illegal content reports or
complaints from individuals who frequently submit manifestly unfounded notices or manifestly unfounded complaints.

Section 7. Average monthly recipients per Member State

In Annex G, we report on the average number of ‘monthly active recipients’, broken down by each of the 27 European Union
Member States, during the period 1 April 2023 to 30 September 2023.



Annex A - TikTok’s own-initiative content moderation

This Annex A provides the number of moderation actions we took against content and accounts under our Policies. It consists of numbers of video and LIVE content removed and
restricted, as well as restrictions imposed on access to features (i.e. service restriction), and the number of ads removed. We are working hard to ensure we can provide numbers for
the remaining moderation actions across all of our features in future transparency reports.

Content-level moderation actions

This table sets out the number of content-level moderation actions taken where content is found to violate our Policies, broken down by the type of policy the content has
been actioned under and by the moderation action taken.

Type of policy actioned under    Content removed    Content restricted    Service restricted

Community Guidelines    3,999,960    13,975,378    137,881

Advertising Policies    38,626    -    -



This table sets out the number of content items removed where content is found to violate our Policies, broken down by the sub-policy under our Community Guidelines and our
Advertising Policies, and by the number of content items removed using our automated moderation technology. Content may violate multiple policies, and each violation is reflected in
the breakdown of each of the respective sub-policies.

Detection method

Type of policy Total content removed Content removed automatically

Community Guidelines 3,999,960 1,800,826

Youth Safety & Well-Being 807,051 - [1]

Safety & Civility 589,838 131,245

Mental & Behavioral Health 510,836 253,411

Sensitive & Mature Themes 2,058,472 1,191,426

Regulated Goods & Commercial Activities 1,134,572 581,038

Privacy & Security 77,300 47,316

Integrity & Authenticity 149,341 111,625

Advertising Policies 38,626 14,101

Ad Format 464 182

Adult & Sexual content 758 437

IP infringement 2,631 737

Misleading & False Content 2,136 698

Politics & Religion & Culture 459 339

Prohibited & Restricted Content 8,508 2,677

Prohibited & Restricted Industry 23,249 8,646

Violence & Horror & Dangerous activity 421 385

[1] The numbers for this sub-policy are not available for video and LIVE content removed automatically. This is because we consider Youth Safety & Well-Being across all the sub-policies under the Community Guidelines; the numbers reflected for each sub-policy include content TikTok removed for Youth Safety & Well-Being purposes.



Account-level moderation actions

This table sets out the number of account-level restrictions (i.e. account suspensions or bans) taken against users and advertisers who have been found to have violated our Policies,
broken down by the number of accounts actioned using our automated moderation technology.

Account bans / suspensions

Total accounts banned / suspended    829,649

Accounts banned / suspended automatically    196,880



Annex B - Illegal content reports

TikTok has introduced an additional reporting channel for our European Union community to ‘Report Illegal Content,’ which enables users to alert us to content they believe breaches
the law. This Annex B provides a breakdown of the illegal content reports we received from users within the European Union in relation to video, LIVE and ads, broken down by the
category of illegal content it has been reported under. We are working hard to ensure we can provide numbers for the remaining illegal content reports across all of our features in
future transparency reports.

We received a total number of 35,165 illegal content reports in the European Union, which corresponds to user reports on 24,017 unique items of content. Of the unique items of
content reported, we took action against (i) 3,921 items of content on the basis that it violated local laws and (ii) 2,803 items of content on the basis that it breached our Policies.
No action was taken on the remaining content reported, either because it was not found to be violative under our Policies or the relevant local laws or because the initial report did not
contain enough information.

Median time needed for taking action pursuant to the illegal content reports: The median time between our receipt of an illegal content report and deciding whether or not to
action that content under the applicable local law is approximately 13 hours. This median time reflects the time taken to review more complex user reports, which require nuanced
consideration of the legal requirements by a legal reviewer against the applicable local law. Assessing these reports can be a complex task as we strive to be consistent and equitable
in our enforcement, while also weighing our decisions against other important interests such as freedom of expression.

The median time reported does not reflect the time taken to review reports that are actioned under our Policies during the initial review stage. As part of our Community Guidelines
Enforcement Report, we report on our response times to content reported by users under our Community Guidelines. Between April and June 2023, we reported we actioned 91.6%
of such user reports within 2 hours.

Category of reported illegal content    Number of illegal content reports received

Information-related offences / contempt of court    160

Content relating to violent or organised crime    554

National security-related offences    618

Terrorist offences / content    825

Illegal goods / services    1,072

Consumer-related offences    1,099

Child sexual exploitation    1,740

Defamation    2,689

Non-consensual sharing of private or intimate images    2,995

Illegal hate speech    3,119

Illegal privacy-related violations    3,199

Harassment or threats    4,890

Financial crime    5,438

Other illegal content    6,767



Annex C - TikTok’s content moderators

This Annex C sets out the number of people who are dedicated to content moderation in line with our Policies and applicable local laws, broken down by each of the official
European Union languages. These numbers do not reflect the broader teams who also play a key role in keeping our community safe (for example, those involved in the development
of our content moderation policies).

We estimate that we have 6,125 moderators who are dedicated to moderating content in the European Union, including 395 non-language-specific moderators (moderators who
review profiles or photos) who are not reflected in the numbers below, which cover only language moderators. The estimates below are also confined to people moderating content
in the 24 official languages of the European Union; however, we also have moderators covering a number of other languages that are commonly spoken in the region (for example,
Arabic and Turkish).

Our moderators often have linguistic expertise across multiple languages. Where our moderators have linguistic expertise in more than one European Union language, that expertise
is reflected in the detailed language breakdown below. For example, the Czech, Slovakian and Slovenian languages are grouped under one category within our Trust & Safety team
and are moderated by the same moderators. The moderators allocated for the Croatian language also cover the Serbian language.

People dedicated to content moderation

Official Member State language Number of people dedicated to content moderation

Bulgarian 69

Croatian 20

Czech 62

Danish 42

Dutch 167

English 2,137

Estonian 6

Finnish 40

French 687

German 869


Greek 96

Hungarian 63

Italian 439

Irish 0

Latvian 9

Lithuanian 6

Maltese 0

Polish 208

Portuguese 75

Romanian 167

Slovak 44

Slovenian 45

Spanish 468

Swedish 108



Annex D - Orders from government authorities to remove content

TikTok has a dedicated channel through which government authorities may submit orders requesting the removal of content. This Annex D provides the number of orders received
through our dedicated channel from government authorities in the European Union to remove content, broken down by Member State and categorised under the categories of illegal content listed below.

Between 1 September 2023 and 30 September 2023, we received 17 orders from government authorities in the European Union to remove content.

Median time needed to inform government authority of receipt of order: We acknowledge receipt of an order from a government authority submitted through our dedicated
channel immediately, by sending an automatic acknowledgement.

Median time needed to give effect to the order: The median time between our receipt of a valid order from a government authority submitted through our dedicated channel and
us either removing the content, or otherwise providing a substantive response to the government authority issuing the order, is approximately 7 hours.

Orders from government authorities in the European Union to remove content

The categories of illegal content under which these orders were received are: Child Sexual Exploitation; Terrorist Offences / Content; Illegal Hate Speech; Content Relating to Violent or Organised Crime; Illegal Privacy-Related Violations; Non-Consensual Sharing of Private or Intimate Images; Illegal Goods / Services; Harassment or Threats; Defamation; Consumer-Related Offences; Information-Related Offences / Contempt of Court; Financial Crime; National Security-Related Offences; and Other Illegal Content.

Member State    Orders to remove content received

FR    13 (comprising 7, 4 and 2 orders across three categories of illegal content)

IE    1

IT    1

PT    1

SK    1

No orders to remove content were received in respect of the remaining Member States during September 2023.


Annex E - Orders from government authorities to provide information

TikTok has a dedicated channel through which government authorities may submit orders requesting the disclosure of information. This Annex E provides the number of orders we
received through our dedicated channel from government authorities in the European Union for user information disclosure, broken down by Member State and categorised under the categories of illegal content listed below.

Between 1 September 2023 and 30 September 2023, we received 452 orders from government authorities in the European Union for user information disclosure.

Median time needed to inform government authority of receipt of order: We acknowledge receipt of an order from a government authority submitted through our dedicated channel
immediately, by sending an automatic acknowledgement.

Median time needed to give effect to the order: The median time between our receipt of a valid order from a government authority submitted through our dedicated channel and
us either providing the requested information, or otherwise providing a substantive response to the government authority issuing the order, is under 3 hours.

This median time excludes situations where TikTok responds to the requesting government authority to seek clarification or further context in respect of the order, but where the
requesting government authority provides no response. Such cases are closed after 28 days.

Orders from government authorities in the European Union to provide information

The categories of illegal content under which these orders were received are: Child Exploitation; Criminal Defamation; Domestic Violence; Drug Trafficking; Environmental Crimes / Animal Welfare; Extortion / Blackmail; Faked / Hacked Account; Financial Fraud; Firearms / Weapons Possession / Explosives; Fugitive; Harassment / Bullying; Hate Speech; Homicide / Murder; Human Exploitation / Trafficking; Intellectual Property; Kidnapping; Missing Adult; Missing Minor; National Security / Terrorism Offenses; Organized Crime; Physical Assault; Road Traffic; Robbery / Theft; Sex Crimes; Sextortion; Suicide / Self Harm; and Threat To Kill.

Member State    Orders for information received

AT    6

BE    15

BG    3

CZ    1

DE    265

DK    6

ES    33

FI    5

FR    50

GR    12

IE    3

IT    22

LT    1

LV    1

MT    4

NL    8

PL    13

RO    2

SI    2

No orders for information were received in respect of the remaining Member States (CY, EE, HR, HU, LU, PT, SE, SK) during September 2023.


Annex F - Complaints and disputes

TikTok provides notifications to users and advertisers who have violated our Policies or applicable local laws, and they can appeal those decisions once they receive those
notifications. This Annex F comprises the number of appeals received from users or advertisers who uploaded content and appealed the removal of their video content or ads, or
the restriction of their access to LIVE. We are working hard to ensure we can provide the numbers for appeals of the remaining moderation actions across all of our features, as well as
the numbers for users who appeal moderation decisions on reported content, in future transparency reports.

Total number of appeals received: We received 472,254 appeals from users and advertisers who uploaded content to the platform and who, between 1 September and
30 September 2023, appealed a moderation action taken under our Policies to either remove their video or ad content or restrict their access to LIVE.

Basis for those complaints: When appealing a decision, in many cases, users and advertisers are given the opportunity to include a written explanation setting out the basis of their
appeal. As users and advertisers explain the basis of their appeal in free text, the bases of appeals necessarily vary from one user or advertiser to the next.

Decisions taken in respect of the complaints: Between 1 September and 30 September, we reinstated 267,140 items (video or ad content, or access to LIVE). This number cannot be
compared directly to the number of moderation actions taken or the number of actions appealed in that period. This is because some moderation decisions may have been appealed
within the previous time period, and the outcome of some moderation decisions may not be actioned until the next time period.

Median time needed for taking the decisions: The median time between a user or advertiser submitting an appeal, and TikTok taking a decision in relation to that appeal, across all
features, is under 10 hours.



Annex G - Monthly active recipients

This Annex G sets out the average number of ‘monthly active recipients’ in the European Union, broken down by Member State, during the period 1 April 2023 to
30 September 2023, rounded to the nearest hundred thousand.

We have produced this calculation for the purposes of complying with our obligations under Article 42(3) of the DSA and it should not be relied on for other purposes. We have
applied the same methodology used when calculating our total monthly active recipients number for the European Union, published in February and August 2023. In light of our legal
requirement to provide the number broken down by Member State, and given that users may have accessed the platform from different Member States in the relevant period, the
estimates below may, in certain limited circumstances, count a user's access more than once.

Where we have shared user metrics in other contexts, the methodology and scope may have differed. Our approach to producing this calculation may evolve or may require altering
over time, for example, because of product changes or new technologies.
