Deepfakes Research Paper
This is to certify that Mr. Shubham Saurabh (Enrolment No. 5020240063) has worked under
my guidance and supervision on his Research paper titled- DEEPFAKES: A CHALLENGE
FOR WOMEN’S SECURITY AND PRIVACY. This work is the outcome of his own efforts,
and all the help taken during the course of the study has been duly acknowledged. To the best of
my knowledge, the present research paper is original and has not been submitted either in part
or full to any other institution.
DATE: 17th Jan, 2025 (Signature of Supervisor)
Dr. Ambika (Assistant Professor of Law)
Place: HPNLU, Shimla
ACKNOWLEDGEMENT
I am extremely thankful to one and all for the assistance and support I received throughout my
work on this research paper. I would like to thank my institute, Himachal Pradesh National
Law University, Shimla, for providing me with the opportunity to work on this research paper
and for its immense support. I express my gratitude to Prof. (Dr.) Priti Saxena,
Vice-Chancellor, Himachal Pradesh National Law University, Shimla, for providing me with
an opportunity to work on this paper and for the valuable guidance.
I am also grateful to Dr. Ambika (Assistant Professor of Law), Himachal Pradesh
National Law University, Shimla, who supervised this paper and shared her wisdom with me.
Her guidance helped improve this research paper significantly. Lastly, I am grateful to my
parents, seniors, batchmates and juniors for their comments and ideas, which were helpful in
channelling my thoughts.
SHUBHAM SAURABH
LLM
5020240063
Deepfakes, as mentioned earlier, are a rapidly evolving technology that uses artificial
intelligence, specifically Generative Adversarial Networks (GANs), to generate new media
content. In their most basic sense, GANs work by having one artificial neural network (the
generator) produce fake data while a second network (the discriminator) tries to distinguish
the fabricated output from real data; through this competition, GANs learn to produce highly
detailed fake images, videos or audio. Although entirely synthetic, such media can resemble
the real thing so closely that it becomes very hard for people, professional media included, to
tell fake from real. Although the technology was introduced to society with good intentions,
such as boosting creativity in the artistic industry and making education and entertainment
more accessible, deepfakes today are tools with potentially severe negative consequences.
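The generator–discriminator dynamic described above can be reduced to a one-dimensional toy sketch. Everything here is illustrative: the "real" data are just numbers drawn from a normal distribution, and the generator and discriminator are single-parameter stand-ins for the deep networks a real deepfake system would use.

```python
import math
import random

# Toy one-dimensional GAN loop. The generator G(z) = theta + z tries to
# mimic "real" data drawn from N(5, 1); the discriminator
# D(x) = sigmoid(w*x + b) tries to tell real samples from fakes.
# All parameters and values are illustrative only.
random.seed(0)

REAL_MEAN = 5.0
theta, w, b = 0.0, 0.0, 0.0   # generator and discriminator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # numerical safety clamp
    return 1.0 / (1.0 + math.exp(-x))

for _ in range(2000):
    real = [random.gauss(REAL_MEAN, 1.0) for _ in range(batch)]
    fake = [theta + random.gauss(0.0, 1.0) for _ in range(batch)]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    grad_w = sum((1 - sigmoid(w * x + b)) * x for x in real) / batch \
           - sum(sigmoid(w * x + b) * x for x in fake) / batch
    grad_b = sum(1 - sigmoid(w * x + b) for x in real) / batch \
           - sum(sigmoid(w * x + b) for x in fake) / batch
    w += lr * grad_w
    b += lr * grad_b

    # Generator step: shift theta so the fakes fool the discriminator.
    theta += lr * sum((1 - sigmoid(w * x + b)) * w for x in fake) / batch

# After training, the generator's mean has drifted toward REAL_MEAN.
print(round(theta, 1))
```

Run long enough, `theta` drifts from 0 toward the real mean of 5: the generator learns to mimic the data only because the discriminator keeps signalling how its fakes differ, and it is this adversarial pressure, scaled up to deep networks and image data, that makes deepfake imagery so realistic.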
Over the years, however, the technology has advanced and become much more ‘mainstream’
owing to the increased availability of AI and, more importantly, of open-source tools and
platforms. Whereas use of such applications might once have required special training, today
even those with minimal technical knowledge can obtain applications that create rather
convincing deepfake content (Laffier & Rehman, 2023)1. This evolution is deeply troubling,
as the authenticity of online content becomes untrustworthy, undermining public trust in
digital media.
It is crucial to note that deepfake technology also has innovative applications, such as in
motion pictures or language dubbing for global films; however, the technology is being
abused. Negative uses of deepfakes include the production, with or without consent, of adult
films and personal sexual content; the manipulation of political events through fake news
about opposing candidates; and scammers impersonating trusted officials and organizations
(Khan & Rizvi, 2023)2. Of these, the most dangerous is the creation of non-consensual
pornography using deepfakes, which overwhelmingly targets women and intensifies violence
on the Internet. These applications raise questions about the entanglement of
1. Laffier, J., & Rehman, A., Deepfakes and Harm to Women, 3 J. Digital Life & Learning 1 (2023).
2. Khan, Z. A., & Rizvi, A., Deepfakes: A Challenge for Women Security and Privacy, 5 CMR Univ. J. Contemp. Legal Aff. 203 (2023).
new and emerging technologies with human rights abuses, which has sparked extensive public
discourse about privacy, security, and ethical practice in the emergent digital world (Viola &
Voto, 2023)3.
As deepfake technology evolves, so do the difficulties that this paper surveys. The growing
sophistication and availability of these tools demand serious consideration by policymakers,
technologists, and society at large. Left unregulated, malicious uses threaten to outweigh the
favorable ones; the ethical, legal, and social implications of this disruptive innovation
therefore require urgent attention (Okolie, 2023)4. Understanding the consequences deepfakes
pose, especially for women’s security and privacy, is critical to developing effective and
holistic approaches that limit the harm caused by the technology.
Deepfake technology poses a grave security and privacy threat to women, who are far more
vulnerable to its abuse than men. The problem builds on already existing patterns of gendered
cyber harassment, exploitation and abuse of women, and escalates those risks because
deepfake technology allows fabricated explicit material to be produced in a person’s likeness
without her consent. Deepfake pornography, especially the non-consensual kind, has become
a form of abuse in which perpetrators use fake content to threaten, bully or extort women.
Victims are left struggling to prove that such content is fabricated while coping with the
social and psychological repercussions that follow.
The consequences of these violations can be diverse and severe. Non-consensual deepfake
use often attaches a negative persona to the victim, who may suffer social rejection, job loss
and, in extreme cases, banishment from her community (Laffier & Rehman, 2023)5. Women
are at higher risk than men, especially journalists, activists, and celebrities: the greater the
visibility, the higher the risk they face.
3. Viola, M., & Voto, C., Designed to Abuse? Deepfakes and the Non-Consensual Diffusion of Intimate Images, 201 Synthese 30 (2023).
4. Okolie, C., Artificial Intelligence-Altered Videos (Deepfakes), Image-Based Sexual Abuse, and Data Privacy Concerns, 25 J. Int’l
5. Laffier, J., & Rehman, A., Deepfakes and Harm to Women, 3 J. Digital Life & Learning 1 (2023).
In such cases, the fact that the content is fake matters little, as societies in many parts of the
globe default to victim-blaming.
Compared with other cyberattacks such as discrediting campaigns and social engineering
schemes, deepfake assaults are more invasive because they disrupt tangible offline
relationships, professional life, and psychological well-being.
Recent acts of coercion involving deepfake technology also expose the systemic failure to
defend women’s rights in cyberspace. Current laws are poorly suited to the specific problems
deepfakes create, so legal redress for the harmed party is scarce. The relative anonymity of
social media sites only increases the prevalence of such incidents, as perpetrators exploit gaps
in the relevant legislation to act with virtual impunity. These problems highlight the necessity
of legal reform, stronger obligations on technology platforms and greater public awareness to
reduce the impact of this destructive technology on women’s safety and privacy (Taylor,
2023)6.
6. Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and Emerging Technologies 171-189 (Routledge, 2023).
Women are disproportionately targeted by the malicious use of deepfake technology, often in
the form of non-consensual pornography, harassment, and cyber exploitation. Deepfake videos
and images are frequently weaponized to tarnish reputations, incite violence, and exert coercive
control. The creation and dissemination of such content exacerbate existing issues of online
gendered violence and amplify the systemic discrimination that women face in both digital and
offline spaces. These abuses have far-reaching implications, including psychological trauma,
social stigmatization, career setbacks, and in some cases, physical harm.
The issue is further compounded by the anonymity of online platforms and the widespread
availability of open-source deepfake creation tools, which enable individuals with little
technical expertise to generate convincing fake content. As a result, victims often find
themselves powerless against perpetrators who exploit legal loopholes, jurisdictional
challenges, and the rapid spread of harmful content on social media and other digital platforms.
Existing legal and ethical frameworks have proven insufficient to address the unique challenges
posed by deepfake technology. Traditional laws on defamation, obscenity, and privacy
invasion often fail to adequately capture the complexities of synthetic media, leaving victims
without effective avenues for justice. Moreover, the global and cross-border nature of deepfake
dissemination complicates enforcement, as perpetrators can operate from jurisdictions with lax
or non-existent regulations.
This study seeks to address these critical gaps by investigating the unique threats posed by
deepfakes to women’s security and privacy, analyzing the inadequacies in current legal and
technological responses, and proposing solutions to mitigate the harm caused by this rapidly
evolving technology. By doing so, it aims to provide a comprehensive framework for
understanding and addressing the systemic and individual impacts of deepfake abuse.
1.3 OBJECTIVES
The primary objectives of this research are:
1. To analyze the specific threats posed by deepfake technology to women’s security and
privacy.
2. To evaluate the effectiveness of existing legal, ethical, and technological frameworks
in combating deepfake-related abuses.
3. To propose actionable recommendations for mitigating the risks associated with
deepfakes through policy, education, and technological innovation.
1.4 RESEARCH QUESTIONS
2. What are the gaps in the current legal and ethical frameworks?
4. How effective are international data protection regulations, such as the GDPR, in
addressing the misuse of deepfake technology for non-consensual pornography and
privacy violations?
Goodfellow, I., Bengio, Y., & Courville, A. (2016). "Deep Learning."7 - This foundational
text explains the principles of machine learning and neural networks, including Generative
Adversarial Networks (GANs), which underpin deepfake technology. GANs have been
instrumental in the creation of highly realistic synthetic media, laying the groundwork for both
innovative applications and potential misuse.
Kietzmann, J., Lee, L., McCarthy, I., & Kietzmann, T. (2020). “Deepfakes: Trick or
Treat?” 8 - This article examines the dual-use characteristics of deepfake technology and its
societal ramifications. In India, the widespread availability of deepfake tools has raised
7. Goodfellow, Ian, Yoshua Bengio & Aaron Courville, Deep Learning (MIT Press, 2016).
8. Kietzmann, Jan, Leyland Lee, Ian McCarthy & Tim Kietzmann, "Deepfakes: Trick or Treat?" (2020) 63 Business Horizons 135.
significant concerns regarding their potential misuse in political propaganda, social media
manipulation, and gender-based violence.
Chesney, R., & Citron, D. (2019). “Deepfakes and the New Disinformation War: The
Coming Age of Post-Truth Geopolitics.” 9 - This essay examines the ethical challenges
presented by deepfakes and their capacity to undermine confidence in digital material. In India,
the ethical ramifications are intensified by the nation's varied socio-political environment,
where deepfakes have been exploited to disseminate misinformation, provoke communal
discord, and tarnish reputations.
Harwell, D. (2018). “Fake-Porn Videos Are Being Weaponized to Harass and Humiliate
Women.”10 - This investigative article underscores actual
instances of deepfake exploitation and its effects on women. Deepfake pornography has
subjected Indian women, particularly those in the public sphere, to cultural shame and limited
legal remedies.
Henry, N., & Powell, A. (2015). “Embodied Harms: Gender, Shame, and Technology-
Facilitated Sexual Violence.” 12 - This article offers a sociological analysis of the gendered
aspects of technology-enabled abuse. The convergence of gender, caste, and class frequently
exacerbates women's susceptibility to deepfake exploitation in India, disproportionately
affecting marginalized populations.
9. Chesney, Robert & Danielle Citron, "Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics" (2019) 98 Foreign Affairs 147.
10. Harwell, Drew, "Fake-Porn Videos Are Being Weaponized to Harass and Humiliate Women," The Washington Post (2018), available at https://www.washingtonpost.com
11. Citron, Danielle Keats, Hate Crimes in Cyberspace (Harvard University Press, 2014).
12. Henry, Nicola & Anastasia Powell, "Embodied Harms: Gender, Shame, and Technology-Facilitated Sexual Violence" (2015) 21 Violence Against Women 758.
Agarwal, S., Farid, H., Gu, Y., He, M., Nagano, K., & Li, H. (2020). “Protecting World
Leaders Against Deep Fakes.”13 - This research introduces improvements in AI-based deepfake
detection techniques. India is progressively investigating these technologies to counter political
misinformation and protect individuals from harmful content.
1.6 METHODOLOGY
Data Analysis
i. Primary Source: The study employs qualitative and quantitative data derived from
verified real-life case studies of victims of deepfake exploitation. We derived these
cases from publicly accessible records, legal proceedings, and authenticated victim
testimonies to examine the psychological, social, and professional ramifications of
deepfakes.
ii. Secondary Source: The research integrates information from scholarly literature, legal
assessments, regulatory structures, and policy papers. These secondary sources offer
contextual and theoretical perspectives on the technological, ethical, and legal concerns
presented by deepfake technology.
13. Agarwal, Shruti et al., "Protecting World Leaders Against Deep Fakes" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (2020) 38.
14. Verdoliva, Luisa, "Media Forensics and Deepfake Detection: A Technical Survey" (2020) 37 IEEE Signal Processing Magazine 20.
iii. Quantitative Analysis: We conduct an analysis on statistical data regarding the
prevalence of deepfake abuse instances, the demographic distribution of victims, and
the response times and efficacy of takedown requests on digital platforms. We analyse
and interpret recent trends to evaluate the increasing sophistication of deepfake
technology and its broader societal implications.
iv. Qualitative Analysis: A thematic analysis examines the narratives of victims,
emphasizing privacy infringement, psychological distress, and professional
repercussions. Legal assessments evaluate the effectiveness of existing frameworks by
analyzing deficiencies, jurisdictional obstacles, and cases involving synthetic media.
v. Cross-Jurisdictional Comparison: The study analyses legislative strategies in various
nations to ascertain optimal practices. We specifically focus on jurisdictions like South
Korea and select U.S. states that have explicit deepfake regulations, evaluating their
effectiveness in mitigating harm and providing victims with remedies.
This comprehensive methodology ensures a thorough grasp of the security and privacy
issues presented by deepfakes, establishing a foundation for substantive policy recommendations.
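To make the quantitative step concrete, the grouping-and-summarising work described in item iii could be organised along the lines below. The platform names and figures are invented purely for illustration; they are not data from this study.

```python
from statistics import median
from collections import defaultdict

# Hypothetical sketch: group reported deepfake-abuse incidents by
# platform and summarise takedown response times. Every record here
# is made up for illustration.
incidents = [
    {"platform": "PlatformA", "takedown_hours": 48},
    {"platform": "PlatformA", "takedown_hours": 72},
    {"platform": "PlatformB", "takedown_hours": 6},
    {"platform": "PlatformB", "takedown_hours": 12},
    {"platform": "PlatformB", "takedown_hours": 30},
]

by_platform = defaultdict(list)
for rec in incidents:
    by_platform[rec["platform"]].append(rec["takedown_hours"])

# Median response time per platform; the median is a robust summary
# when a few extreme delays would distort the mean.
medians = {p: median(hours) for p, hours in by_platform.items()}
print(medians)  # median takedown hours per platform
```

The same pattern extends naturally to victim demographics or incident prevalence: one grouping key, one robust summary statistic per group.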
1.7 CHAPTERISATION
Chapter 1: Introduction
This chapter lays the groundwork for understanding deepfake technology's effects on women's
security and privacy online. It defines deepfakes as artificial intelligence-generated synthetic
media, mostly employing Generative Adversarial Networks (GANs), and examines its creative
and malevolent uses. As deepfake techniques have made it easier to create realistic modified
content, ethical, legal, and security problems have arisen.
This chapter integrates several theories to comprehend deepfakes’ social and legal effects.
Privacy theories based on John Locke and Immanuel Kant link autonomy and dignity to
privacy rights. Synthetic media present regulatory issues, which are examined using
libertarian and social responsibility frameworks of media regulation. Feminist theory
illuminates how deepfakes reinforce patriarchal norms and gendered violence, while
intersectionality shows how overlapping identities increase vulnerability. Together these
approaches help in crafting more equitable and effective responses to deepfake harms.
This chapter discusses how deepfakes threaten women's privacy and security, highlighting the
unique issues female victims of non-consensual synthetic media face. The discussion shows
how deepfake pornography undermines trust and agency, worsening online gender inequality.
It studies privacy violations caused by misusing personal data to create modified content and
the emotional and social effects on victims. The chapter also examines how deepfakes weaken
digital authenticity, making truth detection harder. It illustrates the societal consequences of
deepfake privacy violations through case studies and current research.
This chapter discusses deepfake technology's legal and ethical implications. It examines
privacy, defamation, and cybercrime laws and their inability to solve synthetic media's specific
difficulties. The varied and frequently ineffective legal remedies to deepfake-related problems
are shown by reviewing case laws and regulatory methods from different jurisdictions. The
chapter also covers the ethical obligations of technology platforms, content providers, and
policymakers in balancing freedom of expression and privacy. It emphasises the necessity for comprehensive
regulation and ethical principles to balance innovation with damage prevention.
2.1 INTRODUCTION
This chapter synthesises multiple theoretical frameworks to enhance comprehension of the
sociological and legal ramifications of deepfakes. Theories of privacy based on the writings of
John Locke and Immanuel Kant are examined, connecting personal autonomy and dignity to
private rights. Theories of media regulation, such as libertarian and social responsibility
models, are analysed to underscore the regulatory issues presented by synthetic media.
Feminist theory gives essential insights into the ways deepfakes sustain gendered violence and
strengthen patriarchal systems, while intersectionality offers a perspective to comprehend how
intersecting identities intensify vulnerabilities. These concepts jointly guide the creation of
more equitable and effective remedies to the damages associated with deepfakes.
Legal systems across jurisdictions have recognized privacy as a rights-bearing value, given
that technological advancement poses immense challenges to liberty. In Indian constitutional
law, the landmark 2017 judgment in K.S. Puttaswamy v. Union of India saw the apex court
recognize the right to privacy as an intrinsic part of Article 21 of the Constitution. The
judgment re-emphasized that privacy covers not only personal space but also personal data
and personal choice, with regard
15. Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and Emerging Technologies 171-189 (Routledge, 2023).
to one’s identity remaining invulnerable to control, manipulation or exploitation (Kaushal,
2023)16. As in the case of the Privacy Shield framework, the EU’s General Data Protection
Regulation (GDPR) is rooted in the presumption that the individual has the right to control
the collection, processing and use of his or her personal data, with privacy regarded as part of
human dignity and freedom (Viola & Voto, 2023)17.
This is in line with theoretical frameworks that treat privacy as a socially embedded value.
Privacy acts as insulation that allows people to engage in social, economic and political life
with the assurance that they will not be watched or exploited. Deepfake technology
undermines this principle by enabling unauthorized manipulation of individuals’ images,
making it impossible for them to control how their likenesses are used or to trust image-based
interactions. Several theoretical frameworks accordingly stress the need for stronger
protections against the misappropriation of people’s identifiable attributes and the erosion of
personality wrought by ever-advancing technology.
Theories of media regulation help to balance freedom of speech against the recognized need
for content control on large social networks. The libertarian view, drawing on John Milton
and John Stuart Mill, supports minimal government involvement and full speech with no
censorship imposed. Applying this approach to deepfake technology, however, invites debate
about media freedom being manipulated to promote harms such as non-consensual
pornography or disinformation (Yan, 2022)18. Purely libertarian frameworks can thus leave
the ethical and societal problems posed by new technology unaddressed.
16. Kaushal, T., Women, Deepfake Pornography, and the Imperative of Legal Education in the Age of AI (2023).
17. Viola, M., & Voto, C., Designed to Abuse? Deepfakes and the Non-Consensual Diffusion of Intimate Images, 201 Synthese 30 (2023).
18. Yan, Y., Deep Dive into Deepfakes-Safeguarding Our Digital Identity, 48 Brook. J. Int’l L. 767 (2022).
The social responsibility theory of media offers a more balanced view, holding that media
entities have an ethical responsibility to serve the public good without causing harm to
specific individuals or groups. This theory supports content moderation measures such as
flagging, deleting or limiting prohibited content types, including deepfakes. Platforms such as
Facebook and YouTube, for instance, have stepped up their adoption of techniques to identify
and remove fake news and doctored images, and other providers have embraced a similar
spirit of social responsibility. Critics note, however, that these measures are not always
transparent or consistent, and are therefore of limited effectiveness (Taylor, 2023)19.
Censorship as a regulatory tool poses further challenges. While states deploy such laws to
stop societal menaces from circulating, those laws can also hinder free speech and protest.
Theories of regulatory capture, discussed earlier, are a reminder of the risks, especially of
censorship laws being bent to suit the powers that be in a government or corporate capacity.
Laws fighting deepfake propaganda, for instance, must be drawn within sufficiently narrow
frameworks to avoid censoring satire and art. The tug of war between demands for content
moderation and the preservation of free speech persists, and remains a matter of significant
controversy and of subtle, complex policy.
Theories of media regulation also stress technology-supported self-regulation. Alongside
human moderation, machine-learning algorithms are used to moderate high volumes of
content. These systems have drawbacks, however, including overlooking context and being
highly sensitive to biases in their training data, and as a result they may over-censor or miss
genuinely malicious deepfake content. Theoretical frameworks demand greater accountability
and transparency in the production and use of such technologies so that they conform to
democratic values and individual rights (Laffier & Rehman, 2023)20.
19. Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and Emerging Technologies 171-189 (Routledge, 2023).
20. Laffier, J., & Rehman, A., Deepfakes and Harm to Women, 3 J. Digital Life & Learning 1 (2023).
2.4 GENDERED ONLINE VIOLENCE AND FEMINIST THEORY
Gendered online violence is a manifestation of broader societal power imbalances, and feminist
theory provides a critical framework for understanding the intersection of gender dynamics and
technological abuse. Feminist theorists argue that digital platforms, while offering
opportunities for expression and connection, also reflect and amplify existing gendered
inequalities. The rise of deepfake technology exemplifies this, as women are disproportionately
targeted with non-consensual pornography, harassment, and character defamation, often with
the intention of silencing or controlling them. These abuses are not merely random acts but are
deeply rooted in patriarchal structures that commodify and objectify women's bodies and
identities.
Deepfakes highlight how technological tools are weaponized against women to perpetuate
control and harm. Feminist theorists like Catharine MacKinnon and Andrea Dworkin have long
discussed how pornography and visual exploitation contribute to the systemic subjugation of
women. Deepfake pornography intensifies these harms by erasing the boundaries of consent
entirely, fabricating explicit content that never existed yet carries the same societal stigma and
consequences for the victim. This abuse not only undermines women’s autonomy but also
reinforces stereotypes that limit their participation in public and professional life (Taylor,
2023)21.
Feminist theory also critiques the insufficient responses of platforms and institutions to
gendered online violence. Many digital spaces operate under the guise of neutrality, failing to
recognize the disproportionate harm experienced by women. For example, algorithms that
moderate content often fail to detect the nuances of gendered abuse, leaving victims with little
recourse. Feminist frameworks advocate for a more intersectional approach to designing and
regulating technology, one that considers the specific vulnerabilities of women and other
marginalized groups.
21. Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and Emerging Technologies 171-189 (Routledge, 2023).
2.5 INTERSECTIONALITY
LGBTQ+ individuals also face unique risks when targeted by deepfake technology. In societies
where their identities are already stigmatized or criminalized, deepfake abuse can lead to severe
social, legal, and even physical consequences. For example, the manipulation of their images
into explicit content could out them against their will, exposing them to harassment or violence.
These cases illustrate how overlapping marginalized identities interact with deepfake abuse to
create compounded layers of harm.
22. Mayoyo, N., The Influence of Social Media Use in the Wake of Deepfakes on Kenyan Female University Students’ Perceptions on Sexism, Their Body Image and Participation in Politics, in Black Communication in the Age of Disinformation: DeepFakes and Synthetic Media 89-103 (Cham: Springer Int’l Pub., 2023).
23. Kaushal, T., Women, Deepfake Pornography, and the Imperative of Legal Education in the Age of AI (2023).
Intersectionality also reveals the global disparities in responses to deepfake abuse. Women in
developing countries often face greater challenges due to weaker legal protections, limited
digital literacy, and patriarchal norms that dismiss or trivialize online abuse. In such contexts,
the social stigma attached to explicit content—whether real or fabricated—can have life-
altering consequences, including ostracization, loss of employment, or even honor-based
violence (Viola & Voto, 2023) 24.
24. Viola, M., & Voto, C., Designed to Abuse? Deepfakes and the Non-Consensual Diffusion of Intimate Images, 201 Synthese 30 (2023).
CHAPTER 3: GENDER DISPARITIES IN ONLINE VIOLENCE
3.1 INTRODUCTION
The relationship between gender inequality and the increase of online violence made possible
by deepfake technology is examined in this chapter. Women are markedly more susceptible to
online abuse, and the chapter situates deepfake assaults within the wider frameworks of cyber
harassment, stalking, and image-based sexual exploitation. Using real-world examples, the
chapter demonstrates how deepfake-generated non-consensual pornography exemplifies and
exacerbates existing societal power disparities. It explores the causes of gendered violence through
the lens of deepfakes and evaluates its psychological, societal, and professional ramifications.
The chapter prepares for a thorough assessment of the structural failings that enable such
abuses to flourish and emphasises the necessity of protective measures to uphold women’s
digital rights.
Women around the world are defamed on the Internet far more often than men, and
technologies such as deepfakes have worsened the problem. Studies report that women are
harassed online 27 times more than men, and 73% of women and girls report having been
victims of at least one form of digital abuse such as stalking, doxxing, or image abuse
(Okolie, 2023)25. Deepfake technology makes the problem even more complex, as AI can
now be used for acts of revenge, coercion, or public shaming.
Most deepfakes focus primarily on women, especially in relation to power structures and
social taboos. There are, for instance, many fake sex tapes made purely to demoralize a
female political or professional figure and stain her reputation forever. Rana Ayyub, an
Indian journalist, had fake nudes produced and spread to put pressure on her because she
covered areas of political controversy (Brieger, 2021)26. The idea is not new; her attackers were not
25. Okolie, C., Artificial Intelligence-Altered Videos (Deepfakes), Image-Based Sexual Abuse, and Data Privacy Concerns, 25 J. Int’l
26. Brieger, A., Taking Back Their Faces: The Damages of Non-Consensual Deepfake Pornography on Female Journalists (2021).
the first to run such a campaign targeting a woman for harassment and intimidation. Targeted
harassment campaigns of this kind form part of a larger narrative of technology being used
for injustices against women.
Ordinary women of all ages are also frequently targeted. Targets commonly describe
receiving threats that faked material will be sent to their families, employers or colleagues, or
posted on social media, unless demands are met, such as paying money or yielding to other
pressures. These threats fuel a chain of emotional, psychological and, in some cases, physical
abuse, because victims are helpless before an unknown assailant who hides behind
insufficiently protected digital platforms.
The consequences reach beyond the direct targets: this war on women affects others as well,
stifling women’s speech, especially where sexuality or control of one’s own body is taboo.
Deepfake attacks undermine women’s participation in public life and on social media, and in
other spheres of endeavour such as activism and career progression (Mayoyo, 2023)27. Such
impacts underscore the growing call for structural interventions to protect women from
current forms of online harassment.
27. Mayoyo, N., The Influence of Social Media Use in the Wake of Deepfakes on Kenyan Female University Students’ Perceptions on Sexism, Their Body Image and Participation in Politics, in Black Communication in the Age of Disinformation: DeepFakes and Synthetic Media 89-103 (Cham: Springer Int’l Pub., 2023).
3.3.1 REVENGE PORN: DIGITAL MANIPULATION TO HUMILIATE
Revenge porn is one of the most pervasive forms of online violence against women, where
deepfake technology has amplified its reach and impact. Unlike traditional revenge porn, which
involves actual private photos or videos, deepfake-generated content uses AI to fabricate
explicit material. These fabrications are designed to humiliate victims by depicting them in
compromising positions, often as a means of retaliation for personal disputes or perceived
slights (Laffier & Rehman, 2023) 28. For instance, women who refuse advances or leave toxic
relationships often find themselves targeted by former partners, who create and distribute these
fabricated materials as a form of punishment. Victims endure not only public humiliation but
also legal and social challenges in proving the falsity of these depictions, often facing
skepticism from authorities and communities.
Social media platforms have become fertile ground for the distribution of deepfake content,
exposing victims to widespread harassment. Explicit deepfake videos are often uploaded and
shared on platforms like Twitter, Reddit, and pornography websites, reaching millions within
hours (Scott, 2023). This creates an unrelenting cycle of abuse, as victims must deal with
repeated reposting of the material despite efforts to have it removed. Moreover, these platforms'
slow responses and inadequate moderation policies exacerbate the harm by allowing such
content to remain accessible for extended periods (Taylor, 2023) 29. Women targeted by these
campaigns often report receiving derogatory messages, threats, and even demands for further
compromising material, deepening the emotional and psychological toll of such attacks.
28
Laffier, J., & Rehman, A., Deepfakes and Harm to Women, 3 J. Digital Life & Learning 1 (2023).
29
Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and Emerging Technologies 171-189 (Routledge,
2023).
Doxxing, the practice of publicly revealing personal information about an individual without
their consent, has taken on a new dimension with the use of deepfakes. Perpetrators often
combine explicit deepfake content with personal details, such as the victim’s address,
workplace, or family information, to amplify the harm and incite further harassment. For
instance, deepfake videos are sometimes paired with captions that falsely allege the victim's
involvement in immoral or illegal activities, prompting mobs to target them both online and
offline. This convergence of deepfakes and doxxing creates a highly volatile environment, as
victims face threats not only to their reputation but also to their physical safety. The fear of
retaliation often silences victims, deterring them from reporting these incidents or seeking legal
recourse.
Deepfake technology has revolutionized the nature of online abuse, amplifying existing forms
of violence against women and introducing new dimensions of harm. By leveraging artificial
intelligence to manipulate images and videos, perpetrators are able to create hyper-realistic,
fabricated content that is weaponized to exploit, humiliate, and silence victims. This technology
not only intensifies the severity of online abuse but also broadens its reach, making it
increasingly difficult for victims to protect themselves or seek justice (Okolie, 2023). Women,
who already experience disproportionately higher levels of online harassment, are particularly
vulnerable to the malicious applications of deepfakes.
One of the most significant ways deepfakes amplify online abuse is by providing perpetrators
with a highly effective tool for non-consensual pornography. Unlike traditional forms of
revenge porn, where actual images or videos are shared, deepfakes enable abusers to create
entirely fabricated content, often portraying victims in explicit or compromising situations
(Laffier & Rehman, 2023)30. These deepfakes are then disseminated on social media platforms,
messaging apps, or adult websites, causing irreparable harm to the victim's reputation, mental
health, and personal relationships. The fabricated nature of these materials complicates legal
recourse, as victims must not only contend with the societal stigma associated with explicit
content but also prove the falsity of the media itself.
30
Laffier, J., & Rehman, A., Deepfakes and Harm to Women, 3 J. Digital Life & Learning 1 (2023).
A stark example of this is the case of Rana Ayyub, a journalist in India, who became a target
of deepfake pornography as a retaliation for her investigative work. Fabricated videos depicting
her in explicit scenarios were circulated online, accompanied by a barrage of abusive comments
and threats (Brieger, 2021). These attacks were intended to discredit her professionally and
intimidate her into silence. Despite the falsified nature of the content, Ayyub faced severe
personal and professional repercussions, highlighting how deepfake technology can be
weaponized to perpetuate gendered violence and suppress dissent.
Another notable case is that of a university student in South Korea who discovered that her
image had been used without consent to create explicit deepfake videos that were shared across
various adult content platforms (Viola & Voto, 2023). The victim reported severe
psychological distress, including anxiety and depression, stemming from the knowledge that
countless strangers had seen and potentially believed the fabricated content. These experiences
illustrate how deepfake technology not only violates privacy but also causes profound
emotional and mental harm to victims.
The role of deepfakes in gendered violence extends beyond individual cases. It normalizes the
dehumanization of women by creating a digital culture where their autonomy and consent are
routinely disregarded. The anonymity afforded by the internet enables perpetrators to operate
with impunity, emboldened by the lack of immediate legal and social consequences. This
perpetuates a cycle of abuse where women are repeatedly targeted, silenced, and left to bear
the brunt of societal judgment (Taylor, 2023) 31.
31
Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and Emerging Technologies 171-189 (Routledge,
2023).
CHAPTER 4: UNDERSTANDING DEEPFAKE TECHNOLOGY
4.1 INTRODUCTION
The technical intricacy of deepfake technology is examined in this chapter, which also
describes how neural networks, machine learning, and artificial intelligence are used to produce
incredibly lifelike synthetic media. It elucidates the fundamental function of GANs,
characterised by a dual-system architecture wherein one network produces synthetic content
while the other endeavours to identify it, yielding increasingly refined outputs. The chapter
classifies distinct varieties of deepfakes, such as facial swaps, voice cloning, and lip-syncing,
along with their applications in diverse areas. Although deepfake technology presents novel
opportunities in entertainment and education, the chapter also examines its nefarious
applications in misinformation, political manipulation, and personal defamation, thereby
offering a comprehensive perspective on its societal implications.
Deepfakes are a product of artificial intelligence (AI) that uses deep learning algorithms to
produce synthetic media: images, videos, and audio imitating real people's likenesses and
voices. The technique chiefly relies on a class of machine learning frameworks known as
Generative Adversarial Networks (GANs), introduced by Ian Goodfellow in 2014. GANs
function through a dynamic interplay between two neural networks: a generator and a
discriminator. The generator synthesizes new data, while the discriminator compares that data
against real examples and tries to tell the two apart.
[Figure 5: CycleGAN training]
With every iteration the quality improves, eventually yielding content so realistic that it is
often nearly impossible to distinguish from genuine material (Chesney & Citron, 2018; Yan,
2022)32.
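The adversarial loop just described can be sketched in miniature. The toy below is purely illustrative, not any production deepfake system: it pits a one-parameter "generator" against a logistic-regression "discriminator" over scalar data, and every name and number is invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: scalar samples clustered around 3.0.
    return rng.normal(3.0, 0.5, n)

theta = 0.0        # generator: a single learnable shift applied to noise
w, b = 0.1, 0.0    # discriminator: logistic regression on a scalar
lr, decay = 0.05, 0.1   # small weight decay damps the adversarial oscillation

for _ in range(2000):
    fake = theta + rng.normal(0.0, 0.5, 64)   # generator's samples
    real = real_batch(64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake) - decay * w)
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake) - decay * b)

    # Generator step: shift theta so that D(fake) rises toward 1.
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - d_fake) * w)
```

The generator never sees the real samples directly; it learns only from the discriminator's feedback, yet theta drifts from 0 toward the real data's centre near 3. Full-scale deepfake GANs replace these two scalar players with deep convolutional networks operating on images.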
GANs are central to deepfakes because they support many different ways of manipulating
media. In video deepfakes, for instance, the generator maps the facial movements of a source
person onto footage of the target so that the result looks
32
Chesney, R., & Citron, D., Deepfakes: A Looming Crisis for National Security, Democracy, and Privacy, The Lawfare Blog (2018).
natural and smooth. GANs can likewise imitate voice, reproducing tone, inflection, and
rhythm. These capabilities have carried deepfakes beyond entertainment into advertising and
film production, while at the same time enabling harmful uses such as revenge porn and fake
news.
Open-source tools such as DeepFaceLab have further fuelled the development of deepfake
technology. DeepFaceLab, one of the most widely used platforms for producing deepfakes,
lets users operate separate modules to swap faces, manipulate lip movements, and transfer
expressions. Although conceived for academic and research use, applications such as
DeepFaceLab are also turned to criminal ends. They are easy to use, and most come with
step-by-step guides, so that even individuals with little technical skill can obtain convincing
results (Rizzica, 2021)33. The ready availability of such software is evidence that deepfake
technology, however valuable in the film industry, is potentially dangerous to society.
Deepfake creation depends on particular techniques, chiefly facial recognition algorithms and
audio synthesis. Facial recognition algorithms matter in generating convincing video
deepfakes because they let the AI model learn the textures of a face, the movement of the
eyes, and other subtle details of how the skin reacts. These algorithms map the facial features
of a source subject onto a target person and simulate real-world motion. The work involves
extracting hundreds of frames from videos or pictures, which are then used to train the neural
network to produce realistic results.
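The overlay step described above, pasting learned source features onto a target frame, can be hinted at with a toy blend. A real pipeline would first align the crop using facial landmarks and correct colour; here the position, mask, and images are all invented stand-ins.

```python
import numpy as np

def blend_face(target_frame, source_face, top, left, mask):
    """Paste a source face crop onto a target frame using a soft alpha mask."""
    out = target_frame.astype(float).copy()
    h, w = source_face.shape[:2]
    region = out[top:top + h, left:left + w]
    alpha = mask[..., None]                     # broadcast mask over colour channels
    out[top:top + h, left:left + w] = alpha * source_face + (1 - alpha) * region
    return out.astype(np.uint8)

frame = np.zeros((128, 128, 3), np.uint8)       # stand-in target frame (black)
face = np.full((32, 32, 3), 200, np.uint8)      # stand-in source crop (flat grey)
# Radial mask: opaque at the centre, fading to transparent at the edges.
soft = np.clip(1 - np.hypot(*np.ogrid[-1:1:32j, -1:1:32j]), 0, 1)
result = blend_face(frame, face, 48, 48, soft)
```

The soft mask is what hides the seam between the two faces; production tools achieve the same effect with learned segmentation masks and more sophisticated blending.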
Audio synthesis, the other essential component of deepfake videos, uses artificial intelligence
to mimic human voices. AI models are fed voice datasets so that they can replicate a
speaker's sound, accent, and tone. Recent developments in text-to-speech (TTS) technology
also allow deepfakes to lip-sync generated speech to the subject's actual mouth movements.
These advances make synthetic speech and lip-syncing ever more convincing.
33
Rizzica, A., Sexually Explicit Deepfakes: To What Extent Do Legal Responses Protect the Depicted Persons? (Doctoral Dissertation,
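Genuine voice cloning learns a speaker's timbre from large recorded datasets; still, the underlying idea of reshaping one signal toward another speaker's characteristics can be glimpsed in a toy pitch shift by resampling (all values here are illustrative).

```python
import numpy as np

def pitch_shift(signal, factor):
    """Crudely raise (factor > 1) or lower (factor < 1) pitch by resampling.

    Real voice-conversion models preserve duration and timbre;
    this toy merely rescales the time axis.
    """
    idx = np.arange(0, len(signal) - 1, factor)
    return np.interp(idx, np.arange(len(signal)), signal)

sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 120 * t)     # stand-in for a 120 Hz voiced sound
higher = pitch_shift(voice, 1.5)        # shifted toward a higher-pitched speaker
```

Resampling by a factor of 1.5 moves the 120 Hz tone to roughly 180 Hz, a coarse analogue of adjusting one voice toward another's register.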
Deepfakes have become highly realistic thanks to advances in deep-learning models. Early
deepfake algorithms tended to betray themselves through inaccuracies such as inconsistent
illumination, unnatural motion, and similar discrepancies, but recent developments in deep
learning have largely eliminated these limitations. Current models, for instance, train on
higher-resolution datasets and use more effective loss functions, producing better texture
mapping and smoother transitions. Approaches such as reinforcement learning and
self-supervision have further improved models' capacity to learn independently and generate
high-quality output (Franks & Waldman, 2018)34.
Even so, deepfake technology continues to evolve rapidly, creating problems of detection and
mitigation. Ever more convincing model output, together with steadily improving creation
tools, carries serious implications for misuse and for society at large. As the technology
matures, pressure is mounting for measures that lock out its many evils while preserving its
benefits.
Although deepfake technology is most often linked with its negative impacts, it also has
many uses that count among the advancement's benefits. On the constructive side, it has
opened new possibilities in the entertainment and education sectors. In film and television it
serves for special effects, such as giving a character the body of a different actor or de-ageing
an actor as a role requires. Deepfake-style technology was used in Star Wars: Rogue One, for
instance, to recreate Carrie Fisher's Princess Leia (Chesney & Citron, 2018) 35. It has
similarly been proposed as a way of dubbing films into different languages while preserving
the actors' facial expressions, improving the films' distribution around the world.
34
Franks, M. A., & Waldman, A. E., Sex, Lies, and Videotape: Deep Fakes and Free Speech Delusions, 78 Md. L. Rev. 892 (2018).
35
Chesney, R., & Citron, D., Deepfakes: A Looming Crisis for National Security, Democracy, and Privacy, The Lawfare Blog (2018).
In education, deepfakes can make subjects such as history and literature more engaging.
Another useful application is the restoration of damaged archival material, where AI helps
reinstate missing or clipped images in historical footage (Rizzica, 2021)36. These
applications show that deepfakes can be valuable tools for creativity, accessibility, and
archiving.
The same advances, however, have also been exploited for harmful and unauthorized ends.
Deepfakes are routinely used to generate disinformation, amplify hate speech, and erode the
credibility of the media among the population. For example, fabricated videos of politicians
using abusive language or making endorsements they never gave have been used to sway
elections and shape public opinion. Another domain is fake pornographic material created
with a victim's likeness; women are the group most vulnerable to such non-consensual
pornography, suffering severe psychological, emotional, and social repercussions. Political
misinformation is equally dangerous, because deepfakes can manipulate the populace and
cause destabilization. Such misuse shows that advanced deepfake capability has both a
beneficial side and an ethically and socially problematic one.
The efficiency with which deepfakes can now be created has outpaced the means of detecting
them, a major problem for the people and organizations trying to stop misuse of the
technique. One difficulty is the detection algorithms themselves, which cannot always keep
up with new, high-quality deepfakes. Present-day detection systems are typically configured
to expose minuscule irregularities, for example in blinking patterns, lighting, or pixel-level
artefacts. But because deepfake algorithms are continually refined, such inconsistencies
become ever harder to find, rendering many existing detection tools ineffective.
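Beyond blink and lighting cues, some detectors look for statistical fingerprints of the generation pipeline, for example, the way upsampling suppresses genuine high-frequency detail in an image's spectrum. The toy below illustrates only that one idea; the "images" are synthetic stand-ins and the measure is not a real deepfake detector.

```python
import numpy as np

rng = np.random.default_rng(1)

def high_freq_ratio(img):
    """Share of spectral energy outside the low-frequency core of a grayscale image."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - h // 2, x - w // 2)        # distance from the spectrum's centre
    return spec[r > min(h, w) // 4].sum() / spec.sum()

# Stand-ins: broadband "camera" detail vs. a 4x nearest-neighbour upsample,
# the kind of resampling a face-swap pipeline performs.
natural = rng.normal(size=(64, 64))
upsampled = np.kron(rng.normal(size=(16, 16)), np.ones((4, 4)))

r_nat, r_up = high_freq_ratio(natural), high_freq_ratio(upsampled)
# Upsampling concentrates energy at low frequencies, so r_up falls below r_nat.
```

A real detector would combine many such cues, and, as noted above, each new GAN generation tends to erase the artefacts the previous detectors relied on.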
High-quality deepfakes produced with state-of-the-art Generative Adversarial Networks
(GANs) illustrate the point: these models continually improve themselves and therefore
generate output that slips past existing detectors.
36
Rizzica, A., Sexually Explicit Deepfakes: To What Extent Do Legal Responses Protect the Depicted Persons? (Doctoral Dissertation,
Another enormous problem is the overwhelming volume of content produced daily on digital
media. Given everything users do on these sites, from watching videos to posting pictures,
automated systems have only limited capacity to analyse every piece of user-generated
content. Detection measures remain weak enough that even on platforms such as YouTube,
Facebook, and Twitter a person can fake an existing persona; the platforms do take certain
measures, but these are inadequate against the mass production and rapid spread of deepfake
material (Taylor, 2023) 37. Moreover, most available deepfake tools are open source, which
means that even people with limited technical skill can develop sophisticated fakes that are
hard to unmask.
The problem is made worse by the absence of any standard procedure for detecting deepfakes
and of any shared database of identified deepfakes. This fragmentation prevents stakeholders
from acting cohesively against those who create or promote damaging content. In addition,
deepfake detection tools are cumbersome and costly to develop, making it hard for small
organizations and individuals to protect themselves or to pursue legal action against anyone
they believe is deepfaking them (Samuel-Okon et al., 2024)38.
37
Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and Emerging Technologies 171-189 (Routledge,
2023).
38
Samuel-Okon, A. D., Akinola, O. I., Olaniyi, O. O., Olateju, O. O., & Ajayi, S. A., Assessing the Effectiveness of Network Security Tools
in Mitigating the Impact of Deepfakes AI on Public Trust in Media, 24 Archives of Current Res. Int’l 355-375 (2024).
CHAPTER 5: SECURITY AND PRIVACY IMPLICATIONS FOR
WOMEN
5.1 INTRODUCTION
This chapter emphasizes the distinctive obstacles encountered by female victims of non-
consensual synthetic media, thereby addressing the significant threats that deepfakes pose to
women's privacy and security. The discourse emphasizes how deepfake pornography
undermines personal liberty and trust, intensifying pre-existing gender disparities in digital
environments. It analyses privacy infringements arising from the illicit use of personal data to
generate modified content and the extensive emotional and social repercussions for the victims.
Additionally, the chapter examines how deepfakes erode digital authenticity, blurring the
differentiation between truth and deception. Through the examination of case studies and
contemporary research, it offers a sophisticated comprehension of the extensive societal
ramifications of privacy violations resulting from deepfakes.
Deepfakes violate privacy and exploit individuals' personal information and identity in ways
never before possible. The technology, which can generate highly realistic media, depends on
personal data drawn from social media accounts or other sources, with or without the owner's
permission. Impersonation, that is, using a real person's identity and digital attributes to
manufacture new content, especially content of a sexual or scandalous nature, is therefore a
clear infringement of the victim's right to privacy. Deepfakes go further than traditional
forms of privacy violation and compound the offence by making it almost impossible for the
victim to prove that the material being spread is fake (Viola & Voto, 2023)39.
Deepfake technology is especially cruel because it grafts one person's face onto another
person's body, striking at the autonomy of the individual. For instance, these fake
39
Viola, M., & Voto, C., Designed to Abuse? Deepfakes and the Non-Consensual Diffusion of Intimate Images, 201 Synthese 30 (2023).
videos or images are commonly used to extort money from the target, or to shame or
blackmail them. Such actions destroy the victim's control over the boundaries of her own
image. Nor is the violation confined to creating these materials; they are shared on large
social networking platforms where, within a short span of time, a video can go viral and
reach millions of viewers (Chesney & Citron, 2018). Exposure on this scale worsens the
harm, as victims are further stigmatized and may even be ostracized by society on the
strength of fabricated content.
The damage from such privacy violations is most visible at the psychological level. Most
victims of deepfake abuse report anxiety, depression, or paranoia arising from the knowledge
that their image has been manipulated without their permission (Laffier & Rehman, 2023)40.
For many of those targeted, the inability to control the narrative about their identity, or to
stop the content from spreading further, has severe long-term emotional effects. This is
particularly true for women, who are usually on the receiving end of non-consensual
pornography and sexually explicit deepfake material. Such violations damage trust in
personal relationships, employment opportunities, and social reputation, and thereby affect
many other aspects of the victim's life (Taylor, 2023) 41.
40
Laffier, J., & Rehman, A., Deepfakes and Harm to Women, 3 J. Digital Life & Learning 1 (2023).
41
Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and Emerging Technologies 171-189 (Routledge,
2023).
The privacy vulnerabilities opened up by deepfakes also demonstrate that current legal
provisions and technological data-protection measures are lacking. Present-day frameworks
do not capture the intricacies of AI-generated content, leaving victims in a tight corner when
seeking redress. For instance, establishing that a video is a deepfake may require tools that
are expensive or simply unavailable to the ordinary citizen (Yan, 2022)42. This gap in
protection shows how far the relevant policies still need to be reformed, and better strategies
developed, to prevent the exploitation of deepfake innovation.
Deepfake technology poses clear and present security threats to women that go beyond
cyberbullying, infiltrating their professional lives, reputations, and personal safety.
Deepfakes are actively used to orchestrate online abuse, and women bear the brunt of it, with
their pictures and identities exploited to wreak havoc. This misuse erodes their professional
authority, objectifies them in society and, in some instances, generates actual calls to
violence against them. The blending of physical and virtual menace makes the problem all
the more sensitive in societies already contending with gender-based violence (GBV)
(Okolie, 2023).
Career and reputational harm is among the most significant security threats women face. For
a career-oriented woman, an ambitious journalist, a politician, a media personality, or a film,
television, or music star, a single fake scandalous video can waste years of effort. Fabricated
videos, or real videos that have been edited, are typically shared on social media alongside
comments that undermine the victim. Women politicians, for example, have been targeted
with fake videos depicting corruption and other wrongdoing, which not only erodes public
confidence but also reduces their prospects of advancement (Kaushal, 2023). Likewise,
working women fear that their integrity will be called into question by unfavourable
deepfakes created and circulated by malicious individuals within the corporate world.
42
Yan, Y., Deep Dive into Deepfakes-Safeguarding Our Digital Identity, 48 Brook. J. Int’l L. 767 (2022).
The social stigma attached to sexually explicit images, whether real or fake, deters many
victims from civic and corporate life.
Deepfakes are also associated with real-world violence, in that online targeting often presages
physical attack. The most common pattern among women who face deepfake abuse is the
receipt of threats of physical violence and stalking from people hiding behind the screens of
the online world (H. Ri8er, 2021). Occasionally the threats turn real, with culprits acting on
the fabricated information found on social media platforms. The danger is worst in societies
conditioned by traditional patriarchal values, where such material is used to punish women
into submission. Deepfake pornography has, for example, been used to incite honour killings
in some cultures, with victims treated brutally for perceived misconduct they never
committed (Mayoyo, 2023)43.
What makes these threats doubly dangerous is the psychological toll that accompanies them.
Threats of violence, violent imagery imposed on women's bodies, virtual stalking, and
harassment push women into defensive changes of behaviour, such as minimizing contact
with other people or avoiding the use of social media altogether. Confinement of this kind not
only excludes them socially but also hinders their chances of building relationships and
advancing in their careers. The impossibility of knowing when or where the next batch of
deepfake content will surface adds further to the anxiety (Taylor, 2023).
The situation also exposes the absence of protection against deepfake abuse that escalates
into real-world violence. Most law enforcement agencies lack the capacity to identify and
arrest those responsible, so individual victims cannot obtain adequate protection or justice.
The anonymity of digital platforms adds to the problem, as attackers deliberately find ways
around whatever measures are in place (Yan, 2022) 44.
43
Mayoyo, N., The Influence of Social Media Use in the Wake of Deepfakes on Kenyan Female University Students’ Perceptions on
Sexism, Their Body Image and Participation in Politics, in Black Communication in the Age of Disinformation: DeepFakes and Synthetic
Media 89-103 (Cham: Springer Int’l Pub., 2023).
44
Yan, Y., Deep Dive into Deepfakes-Safeguarding Our Digital Identity, 48 Brook. J. Int’l L. 767 (2022).
5.4 PSYCHOLOGICAL AND EMOTIONAL IMPACT ON VICTIMS
The psychological and emotional toll of deepfake abuse on victims is profound and enduring,
often leaving them grappling with severe mental health issues such as anxiety, depression, and
trauma. For individuals targeted by this form of digital manipulation, the experience transcends
the online sphere, affecting their sense of self, their relationships, and their ability to engage
with the world around them. Deepfake attacks exploit the victim’s identity and autonomy,
making the violation deeply personal and uniquely damaging compared to other forms of online
abuse.
One of the most immediate effects of deepfake abuse is acute anxiety. Victims often describe
the overwhelming fear of being judged or ostracized by their social and professional circles
due to fabricated content portraying them in a negative or explicit light. The virality of
deepfakes exacerbates this fear, as victims have little control over the spread of such material
and remain constantly anxious about who might view it next. This persistent state of worry
disrupts their daily lives, often leading to sleep disturbances, social withdrawal, and difficulty
concentrating on work or studies (Laffier & Rehman, 2023)45.
Depression is another common consequence for victims of deepfake abuse. The erosion of their
public image, combined with the feeling of powerlessness to counteract the damage, creates a
profound sense of hopelessness. For many victims, the stigma surrounding explicit or
defamatory deepfake content compounds their isolation, as they may feel unable to seek help
from friends, family, or authorities for fear of judgment or disbelief (Taylor, 2023) 46. This
isolation, in turn, deepens their depressive symptoms, sometimes leading to more severe
outcomes such as self-harm or suicidal ideation. Women, in particular, are disproportionately
affected due to societal tendencies to blame victims for perceived misconduct, even when it is
fabricated (Brieger, 2021).
45
Laffier, J., & Rehman, A., Deepfakes and Harm to Women, 3 J. Digital Life & Learning 1 (2023).
46
Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and Emerging Technologies 171-189 (Routledge,
2023).
Trauma is another significant impact of deepfake abuse, especially for individuals whose
images or videos have been manipulated into non-consensual pornography. For these victims,
the violation of their autonomy feels akin to a form of digital assault, leaving lasting emotional
scars. Many report experiencing flashbacks, nightmares, and heightened paranoia, symptoms
commonly associated with post-traumatic stress disorder (PTSD) (Okolie, 2023). For instance,
a young woman in South Korea who discovered her face had been superimposed onto explicit
videos shared online described feeling a “complete loss of control over her own body,” a
reaction that mirrors the trauma experienced by survivors of physical assault (Viola & Voto,
2023)47. The trauma is further compounded by the realization that the internet's permanence
means the content may resurface repeatedly, forcing victims to relive the violation.
Case examples highlight the devastating psychological impact of deepfakes. Rana Ayyub, an
Indian journalist, faced relentless harassment and threats after deepfake pornography falsely
depicting her was circulated online. The attack was not only an assault on her professional
reputation but also left her dealing with significant emotional distress and feelings of
vulnerability. Despite the video’s fabricated nature, the abuse she endured both online and
offline was real, underscoring the far-reaching consequences of deepfake technology (Brieger,
2021). Similarly, another case involved a teenage girl whose images were manipulated into
explicit content and shared within her school community, leading to bullying, social exclusion,
and a complete withdrawal from academic life. These examples illustrate the life-altering harm
that deepfakes can inflict on victims across different age groups and professions (Mayoyo,
2023)48.
In addition to individual psychological harm, deepfake abuse fosters a broader culture of fear
and silence. Victims often avoid public platforms or reduce their online visibility to prevent
further attacks, limiting their personal and professional opportunities. This self-censorship
47
Viola, M., & Voto, C., Designed to Abuse? Deepfakes and the Non-Consensual Diffusion of Intimate Images, 201 Synthese 30 (2023).
48
Mayoyo, N., The Influence of Social Media Use in the Wake of Deepfakes on Kenyan Female University Students’ Perceptions on
Sexism, Their Body Image and Participation in Politics, in Black Communication in the Age of Disinformation: DeepFakes and Synthetic
Media 89-103 (Cham: Springer Int’l Pub., 2023).
creates a chilling effect, particularly for women and marginalized groups, as the fear of being
targeted hinders their ability to fully participate in digital spaces (Chesney & Citron, 2018) 49.
49
Chesney, R., & Citron, D., Deepfakes: A Looming Crisis for National Security, Democracy, and Privacy, The Lawfare Blog (2018).
CHAPTER 6: LEGAL AND ETHICAL CONSIDERATIONS
6.1 INTRODUCTION
This chapter addresses the ethical and legal implications of deepfake technology. It assesses
existing legal frameworks on privacy, defamation, and cybercrime, highlighting their
inadequacy in tackling the distinct challenges presented by synthetic media, and it reviews
case law and regulatory approaches across various jurisdictions, showing how fragmented
and frequently insufficient legal responses to deepfake-related harms remain. The chapter
then examines the ethical conflict between freedom of expression and the right to privacy by
analysing the responsibilities of technology platforms, content creators, and policymakers,
underscoring the need for comprehensive legislation and ethical standards that reconcile
innovation with harm mitigation.
Deepfakes are relatively recent, and they have found legal systems worldwide wanting,
particularly on questions of privacy and digital rights. There are currently no unified rules to
combat deepfakes, and many legal systems have no laws that directly target the problem;
victims must instead fall back on privacy protections, defamation law, or cybercrime statutes.
In India, the Indian Penal Code (IPC) and the Information Technology Act (IT Act) contain
rules against privacy invasion and impersonation on the Internet. For example, Section 66E
of the IT Act penalizes knowingly capturing or transmitting images of a person's private area
without that person's consent, while Section 67 penalizes publishing or transmitting obscene
material online. Nonetheless, these provisions do not map cleanly onto deepfakes; synthetic
media was never envisioned when they were drafted, which leaves a string of shortcomings
in addressing it (Kaushal, 2023) 50. Likewise, Section 499 of the IPC, the defamation
provision, can be invoked where fabricated material harms an individual's
50
Kaushal, T., Women, Deepfake Pornography, and the Imperative of Legal Education in the Age of AI (2023).
reputation but it isn’t equipped to deal with the novel technology of artificial intelligence, or
the identity of its users (Khan & Rizvi, 2023) 51.
Among jurisdictions worldwide, the European Union offers one of the most effective legal protections of privacy rights through the General Data Protection Regulation (GDPR). The GDPR defines personal data as any information relating to an identified or identifiable individual and gives data subjects the right to access that data and decide how it is used. Under the GDPR, the creation and distribution of deepfake videos can amount to an infringement of personal data rights, particularly where the content is of an obscene nature. Producers and distributors of non-consensual deepfakes have been prosecuted under the GDPR where it applies, though member states have pursued varied legal avenues (Viola &
Voto, 2023)52. Although the GDPR serves as a gold standard for data privacy protection worldwide, it lacks specificity in the context of deepfakes: it addresses the misuse of data but does not adequately address the psychological and reputational damage deepfake videos cause, and dedicated legislation is therefore needed.
At the international level, several instruments touch on privacy rights, the best known being the ICCPR and the UDHR. Although Article 17 of the ICCPR guarantees protection of privacy, family, home and correspondence, these protections carry no enforcement mechanism. Other treaties, such as the Budapest Convention on Cybercrime, provide measures for handling cybercrime but make no direct reference to deepfakes or AI-generated content (Yan, 2022) 53.
The failure of existing law to address deepfake-related harms while safeguarding free speech shows the need for legal development. It falls to policymakers to craft rules that adequately address privacy concerns while preserving freedom of expression in the face of synthetic media. Collective engagement at the international level is especially important, as deepfakes are created and distributed across borders.
51
Khan, Z. A., & Rizvi, A., Deepfakes: A Challenge for Women Security and Privacy, 5 CMR Univ. J. Contemp. Legal Aff. 203 (2023).
52
Viola, M., & Voto, C., Designed to Abuse? Deepfakes and the Non-Consensual Diffusion of Intimate Images, 201 Synthese 30 (2023).
53
Yan, Y., Deep Dive into Deepfakes-Safeguarding Our Digital Identity, 48 Brook. J. Int’l L. 767 (2022).
6.3 CASE LAWS OF DEEPFAKES
The landmark judgment in K.S. Puttaswamy v. Union of India 54 established the right to privacy
as a fundamental right under Article 21 of the Indian Constitution. While the case primarily
dealt with data privacy and state surveillance, its implications extend to deepfake technology.
The recognition of privacy as intrinsic to human dignity provides victims of deepfake abuse
with a constitutional foundation to challenge violations of their rights (Kaushal, 2023) 55.
However, translating this principle into actionable legal remedies remains a challenge, as India
lacks specific laws addressing deepfake creation or dissemination.
In the United States, People v. Golb56 is a notable case addressing digital impersonation, which
is closely related to the misuse of deepfake technology. The case involved the use of fake
emails to impersonate individuals, leading to reputational harm. The court upheld that such acts
constituted criminal impersonation under New York law, setting a precedent for addressing
online identity theft. While this case predates the rise of deepfakes, its principles are
increasingly relevant in holding individuals accountable for digital impersonation and fraud
(Chesney & Citron, 2018). However, the fragmented nature of U.S. state laws means that legal
responses to deepfake abuse vary significantly across jurisdictions.
Under the GDPR, several cases have highlighted the use of its provisions to combat non-
consensual pornography, including deepfake content. For example, in a case involving the
dissemination of explicit deepfake videos, the victim successfully argued that their personal
data, including facial imagery, was used without consent, resulting in significant harm. The
ruling underscored the importance of consent in data usage and affirmed the victim’s right to
compensation under GDPR (Viola & Voto, 2023)57. Despite these successes, enforcement
54
K.S. Puttaswamy v. Union of India, (2017) 10 S.C.C. 1 (India).
55
Kaushal, T., Women, Deepfake Pornography, and the Imperative of Legal Education in the Age of AI (2023).
56
People v. Golb, 23 N.Y.3d 455 (N.Y. 2014).
57
Viola, M., & Voto, C., Designed to Abuse? Deepfakes and the Non-Consensual Diffusion of Intimate Images, 201 Synthese 30 (2023).
challenges persist, as many perpetrators operate anonymously or outside the jurisdiction of
GDPR-compliant states.
One of the most significant challenges in addressing deepfake-related abuses lies in navigating
the jurisdictional complexities associated with the cross-border dissemination of content. The
internet's global nature enables perpetrators to operate from any location, often beyond the
jurisdiction of the victim's country. For example, a deepfake video targeting an individual in
India could be created by someone in another country and hosted on servers in a third region.
This decentralization complicates efforts to identify perpetrators, enforce laws, and remove
harmful content from platforms that may not fall under local legal purview (Kaushal, 2023) 58.
Extradition treaties and international agreements on cybercrime, such as the Budapest
Convention, are often insufficient to address the rapid pace at which deepfake content spreads,
leaving victims with limited options for recourse.
Another hurdle stems from the limited legal definitions surrounding deepfake-related crimes.
Many existing laws were drafted before the advent of AI-generated media, making them
inadequate to capture the nuances of deepfake technology. For instance, laws on defamation or
obscenity may apply in some cases but fail to address the specific harm caused by non-
consensual deepfake pornography or the fraudulent use of synthetic content. This gap often
leaves law enforcement and judicial systems ill-equipped to prosecute such crimes effectively.
The absence of standardized legal definitions also creates inconsistencies in how cases are
handled across jurisdictions, further complicating the pursuit of justice for victims. These
limitations underscore the urgent need for legislation explicitly addressing the misuse of
deepfake technology and providing clear pathways for victims to seek legal redress.
58
Kaushal, T., Women, Deepfake Pornography, and the Imperative of Legal Education in the Age of AI (2023).
6.5 ETHICAL ISSUES
The central ethical concern with deepfake technology lies in the tension between freedom of speech and the right to consent. While countries worldwide uphold freedom of expression, embraced in international covenants and most national constitutions, that freedom should not override an individual's autonomy or dignity. Deepfakes are often defended as free expression when used for art or satire, which raises the problem of unauthorized portrayal. For victims, especially those subjected to harassment through the display of such material, the absence of consent turns what might be described as 'expression' into abuse (Taylor, 2023) 59. Resolving this ethical dilemma requires a legal distinction between acceptable creative uses and improper exploitation.
Another ethical issue raised by the emergence of deepfakes is the role of technology companies in addressing the resulting harms. Deepfake content is distributed primarily over social platforms such as YouTube, Facebook, and Twitter, yet their stances towards such misconduct remain ambiguous and defensive. Although some measures have been taken, including policies against the dissemination of non-consensual deepfake pornography and politically manipulated material, enforcement is insufficient. Automated content moderation and related safety mechanisms often fail to evolve at the same pace as deepfake technology, enabling the spread of such content (Lee et al., 2024)60. Furthermore, the business interests of these platforms, including visibility and advertising revenue, are sometimes at odds with their duty to keep users safe (Chesney & Citron, 2018).
Ethical responsibility extends beyond the creators of deepfakes to the developers who build the tools that enable them. Although most of these tools are marketed for legitimate purposes such as entertainment or education, their availability makes it easy for wrongdoers to
59
Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and Emerging Technologies 171-189 (Routledge, 2023).
60
Lee, H. P., Yang, Y. J., Von Davier, T. S., Forlizzi, J., & Das, S., Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks, in Proceedings of the CHI Conference on Human Factors in Computing Systems 1-19 (May 2024).
use the same tools for unethical ends. Software developers have an ethical responsibility to build in protections, such as watermarks or restrictions on use. This is made difficult, however, by the fact that many of these tools are open source, so anyone can modify the application to remove protective features (Okolie, 2023). This situation raises larger questions of accountability in the development and deployment of emerging technologies.
Anti-deepfake legislation and its implementation also differ across countries, depending on technological advancement, legislative approaches, and national values. Some countries have already acted decisively on the use and circulation of deepfake content, while others have struggled to grasp the consequences of this new technology. The result is a disjointed global strategy against deepfake exploitation, which makes combating cross-border use of the technology challenging.
In the EU, the most solid regulatory instrument is the General Data Protection Regulation (GDPR). Under the GDPR, the non-consensual use of a person's images and videos to create deepfakes violates privacy rights, and individuals may sue for unlawful use of their likeness, especially where the content is sexual and made without the subject's consent. Commentators stress, however, that enforcement is not uniform across member states, with some applying stricter standards and sanctions than others. The GDPR thus offers a baseline approach to privacy breaches, even as legal loopholes remain for AI-generated media, as discussed further in the following sections.
The United States, by contrast, lacks a federal framework on deepfakes; the issue is instead governed by a patchwork of state laws. California and Texas were among the first states to regulate aspects of deepfake technology, covering electoral manipulation and non-consensual pornographic use. For example, California has banned deepfake content intended to manipulate voters within sixty days of an election. These laws represent positive change in discouraging deepfake misuse, but being relatively new, piecemeal, and uncoordinated at the federal level, they cannot comprehensively address the broad spectrum of deepfake abuse (Chesney & Citron, 2018). In addition, First Amendment protection of free speech makes it even more difficult to regulate deepfakes, especially when the creator claims to be producing art or satire.
Asian countries' responses to deepfake technology, meanwhile, vary widely. South Korea is among the countries that have responded most strictly to the misuse of deepfakes and non-consensual pornography. In 2020 the country amended its laws so that creating deepfakes without permission became punishable by imprisonment and substantial fines. This legislation shows the extent to which South Korea has recognized the real psychological and social damage caused by deepfake exploitation, especially of women. By contrast, many other Asian countries have no legislation dedicated to deepfakes and instead rely on general cybercrime or obscenity laws that may not fit the technology's features.
Deepfakes pose a formidable regulatory challenge for India, a country steadily transitioning to a digital society. The Information Technology Act, 2000 contains sections on cybercrime and personal data protection, but it is silent on deepfake technology. Survivors therefore turn to defamation or obscenity laws, which do not keep pace with the emergence and adaptation of deepfake misuse. The 2017 judgment in K.S. Puttaswamy v. Union of India recognized the right to privacy as a fundamental right, but deepfake cases remain uncharted territory (Kaushal, 2023) 61. This legal void is especially acute for one of the world's largest democracies, and India will need to pass legislation that fills the gaps.
At the international level, the Budapest Convention on Cybercrime provides a legal framework for enforcing cyber laws against cross-border cybercrime such as deepfake misuse. Deepfakes, however, are not mentioned in the convention at all, which is explained by the fact that it was adopted in an era of different cyber threats. The process of
61
Kaushal, T., Women, Deepfake Pornography, and the Imperative of Legal Education in the Age of AI (2023).
updating international agreements has been laborious, owing to diverse national priorities and the rapid pace of technological change (Yan, 2022) 62. The lack of a coherent international standard creates problems for prosecuting deepfake-based offences that involve several countries.
62
Yan, Y., Deep Dive into Deepfakes-Safeguarding Our Digital Identity, 48 Brook. J. Int’l L. 767 (2022).
CHAPTER 7: MITIGATION STRATEGIES
7.1 INTRODUCTION
This chapter focuses on mitigation measures to address the adverse effects of deepfakes. It examines technological solutions, including AI-driven detection tools and blockchain for content verification, highlighting their advantages and drawbacks. The chapter calls for enhanced policy frameworks, including regulatory reforms to criminalize non-consensual synthetic media and enforce accountability on digital platforms. It also addresses the significance of digital literacy and public education as preventive strategies, and finishes with proposals for establishing victim support systems, legal assistance programs, and cooperative international frameworks to tackle the worldwide problem of deepfake abuse.
Blockchain technology offers another avenue for content verification. By recording data on the origin and modification history of media, blockchain can help identify the source of published videos and images. For example, every piece of content entering the system can be assigned a cryptographic identifier at the time of creation and recorded on a public ledger. Any later modification of the content would no longer match the original identifier, flagging possible tampering. This method is effective not only in identifying fake content but also in deterring deepfake creation by making verification harder to bypass (Samuel-Okon et
al., 2024)63. A number of organizations are already exploring blockchain-based provenance solutions for digital content as a line of defense against deepfakes. Yet such solutions cannot be effective without large-scale deployment across creators, platforms, and governments.
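The registration-and-verification scheme described above can be sketched in a few lines of Python. This is a minimal illustration only: it uses SHA-256 as the cryptographic identifier and an in-memory dictionary as a stand-in for the public ledger, and the content, source name, and function names are hypothetical; a real deployment would anchor the records on an actual blockchain.

```python
import hashlib
import time

# Illustrative stand-in for a public ledger: in a real deployment these
# records would be written to a blockchain, not kept in a dictionary.
ledger: dict[str, dict] = {}

def register(content: bytes, source: str) -> str:
    """Assign a cryptographic identifier to new content and record its provenance."""
    digest = hashlib.sha256(content).hexdigest()
    ledger[digest] = {"source": source, "timestamp": time.time()}
    return digest

def verify(content: bytes, claimed_digest: str) -> bool:
    """Content matches its registered identifier only if it is unmodified."""
    return (claimed_digest in ledger
            and hashlib.sha256(content).hexdigest() == claimed_digest)

# Hypothetical example: register original footage, then check two copies.
original = b"frame data of the original video"
cid = register(original, source="newsroom-camera-07")

assert verify(original, cid)                    # untouched content passes
assert not verify(b"tampered frame data", cid)  # any edit breaks the match
```

Because any single-bit change to the content yields a different hash, a manipulated copy can never match the identifier recorded at creation time, which is the property the provenance argument above relies on.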
Purely technological solutions, however, are not fully satisfactory. Detection tools face a problem of scale, given the enormous volume of content shared across the Internet every day. Moreover, most deepfake creation tools are open source, allowing wrongdoers to study and evade detection algorithms. So while technology is essential, sound policies and legislation must fill the gaps and provide a solid barrier against deepfake misuse. Policy and legislative solutions are thus a necessary component of any comprehensive response.
Solving the problems deepfakes have brought also requires significant changes to legal regulation and the addition of targeted policies that match the technology's potential. Present laws are inadequate in defining deepfake-related crimes and prescribing appropriate penalties, producing loopholes that are readily exploited. To close these gaps, legislators must introduce changes that clearly define malicious deepfake content as a criminal offence, prohibiting its creation, distribution and possession. For instance, laws could classify profiting from non-consensual deepfake pornography as a distinct crime, with enhanced punishment on account of its severe effects on victims' psychological and reputational wellbeing (Khan & Rizvi, 2023)64. Similarly, regulations should cover politically motivated
63
Samuel-Okon, A. D., Akinola, O. I., Olaniyi, O. O., Olateju, O. O., & Ajayi, S. A., Assessing the Effectiveness of Network Security Tools
in Mitigating the Impact of Deepfakes AI on Public Trust in Media, 24 Archives of Current Res. Int’l 355-375 (2024).
64
Khan, Z. A., & Rizvi, A., Deepfakes: A Challenge for Women Security and Privacy, 5 CMR Univ. J. Contemp. Legal Aff. 203 (2023).
deepfakes designed to sway the public during elections, with severe consequences for their use before or during polls.
The law must also incorporate stern penalties to deter any misuse of deepfake technology. This could mean substantial financial penalties and years of imprisonment for individuals or organizations convicted of creating or sharing non-consensual synthetic intimate content. South Korea, for instance, makes producing deepfake content without consent punishable by up to five years' imprisonment, setting a good example for other countries (Mayoyo, 2023)65. Implementing such penalties demands not only comprehensive legislation but also functional investigative and prosecutorial mechanisms, including well-equipped and well-staffed specialized cybercrime units familiar with the specifics of deepfake investigation.
Policy frameworks must likewise ensure that technology platforms are accountable for the content they broadcast to the public. Video hosting sites and social media companies should maintain well-developed content moderation policies that combine AI detection systems with manual review. Platforms should face consequences for failing to remove flagged deepfakes promptly, especially where the content is non-consensual pornography or disinformation. Clear and consistent moderation policies, together with frequent audits, can provide additional assurance that these platforms participate in preventing deepfake abuse.
Last but not least, collaboration between countries is needed so that policies and legislation can support cross-border interventions. Since social media is a global phenomenon, laws and mechanisms for prosecuting offenders must be harmonized across countries to allow the tracking and arrest of offenders who operate from abroad. Agreements such as amendments to the Budapest Convention on Cybercrime could anchor such collaboration
65
Mayoyo, N., The Influence of Social Media Use in the Wake of Deepfakes on Kenyan Female University Students’ Perceptions on
Sexism, Their Body Image and Participation in Politics, in Black Communication in the Age of Disinformation: DeepFakes and Synthetic
Media 89-103 (Cham: Springer Int’l Pub., 2023).
and ensure that no jurisdiction becomes a haven for deepfake abusers (Yan, 2022) 66.
A further line of defense against fake images and videos lies in increased digital literacy and the educational programmes now being rolled out. Awareness initiatives are crucial in preparing individuals to spot deepfakes and understand the risks they pose. Such campaigns can be directed at the general public as well as at specialized groups such as media and education professionals and corporate audiences. These programs draw attention to recognizable signs of deepfake content, such as unnatural lighting, facial distortion, or unusual movement, which can help people become more skeptical of what they see online. Such campaigns can be popularized through social media and more conventional channels of information sharing, and made effective through interactive tools, video clips, and workshops tailored to different audiences (Samuel-Okon et al., 2024)67.
Training programs for law enforcement are as vital as public-facing initiatives. Police and legal professionals are generally not trained well enough in technology to properly address deepfake-related crimes. Dedicated programs can educate them about the mechanics of deepfake technology, the tools involved, and the legislation surrounding its use and misuse. For example, such programs could include practical scenarios with hands-on exposure to detection instruments like Deepware
66
Yan, Y., Deep Dive into Deepfakes-Safeguarding Our Digital Identity, 48 Brook. J. Int’l L. 767 (2022).
67
Samuel-Okon, A. D., Akinola, O. I., Olaniyi, O. O., Olateju, O. O., & Ajayi, S. A., Assessing the Effectiveness of Network Security Tools
in Mitigating the Impact of Deepfakes AI on Public Trust in Media, 24 Archives of Current Res. Int’l 355-375 (2024).
Scanner and case studies of different types of deepfake crimes (Taylor, 2023) 68. Furthermore, law enforcement officers can be trained on the psychological trauma resulting from deepfake abuse so that victims are better served. Such training can be enriched and expanded in cooperation with IT companies, universities, and global organizations combating cybercrime.
Education should also reach schools and universities. Incorporating media literacy, the ethical use of artificial intelligence, and internet safety into curricula can equip young people to handle future technologies. Prompting students to think critically about personal privacy, consent, and the responsibilities inherent in digital content production can shape responsible citizens from an early age. Building literacy in deepfake technology through academic institutions and professional courses helps societies develop mechanisms to combat its misuse.
68
Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and Emerging Technologies 171-189 (Routledge,
2023).
specialized support can help victims cope and reclaim a sense of agency as they work towards regaining their strength (Laffier & Rehman, 2023) 69.
Legal aid programs aimed at helping victims are another important category. Some victims lack the financial means or legal assistance to seek justice, and should therefore be offered free or subsidized legal aid services. Legal aid clinics and NGOs working on digital rights can advise victims on filing complaints, collecting evidence, and handling jurisdictional questions in deepfake cases (Kaushal, 2023) 70. These services can also connect victims with law enforcement and help ensure that their cases are heard with empathy and urgency. Moreover, online services delivering deepfakes must be named as enablers of abuse; legal aid programs can spare victims valuable time, energy, and resources when requesting takedowns or seeking compensation from platforms (Chesney & Citron, 2018).
Finally, measures are needed that offer long-term solutions for supporting victims and preventing recurrence. The development of survivor networks, for instance, can foster a sense of community and healing among victims of deepfake-related crimes. Advocacy groups also have roles to play in strengthening national laws and enforcement and in increasing corporate responsibility. As with other forms of violence, publicly funded victim support programs would ensure the sustainability of these efforts, making them available to everyone in need (Mayoyo, 2023)71.
69
Laffier, J., & Rehman, A., Deepfakes and Harm to Women, 3 J. Digital Life & Learning 1 (2023).
70
Kaushal, T., Women, Deepfake Pornography, and the Imperative of Legal Education in the Age of AI (2023).
71
Mayoyo, N., The Influence of Social Media Use in the Wake of Deepfakes on Kenyan Female University Students’ Perceptions on
Sexism, Their Body Image and Participation in Politics, in Black Communication in the Age of Disinformation: DeepFakes and Synthetic
Media 89-103 (Cham: Springer Int’l Pub., 2023).
CHAPTER 8: CONCLUSION AND SUGGESTIONS
8.1 CONCLUSION
Deepfake technology poses a profound and multifaceted threat to individuals and society,
undermining privacy, security, and trust in digital environments. This research has illuminated
the scope and severity of these challenges, particularly their disproportionate impact on
women. The ability of deepfake technology to create hyper-realistic but fabricated content has
enabled new forms of harassment, blackmail, and reputational damage, exacerbating pre-
existing issues of gendered violence and discrimination. For instance, non-consensual deepfake
pornography has become a pervasive tool for silencing and shaming women, leveraging
societal stigmas around sexuality to inflict both psychological and professional harm.
Beyond personal violations, deepfakes have broader implications for societal stability and
democratic processes. Politically motivated deepfakes, such as falsified speeches or fake
endorsements, have the potential to manipulate public opinion, disrupt elections, and erode
trust in legitimate institutions. Similarly, their use in spreading disinformation and fake news
complicates efforts to discern truth from fabrication in an already polarized media landscape.
These applications of deepfakes undermine the credibility of digital media and foster a climate
of distrust, where even authentic content is viewed with skepticism.
One of the most alarming findings is the inadequacy of existing legal frameworks and
technological solutions in addressing these threats. While jurisdictions like the European Union
and South Korea have made strides in legislating against deepfake misuse, many countries lack
targeted laws, leaving victims without sufficient avenues for redress. Legal gaps, coupled with
jurisdictional complexities in cross-border cases, highlight the urgent need for international
cooperation and harmonized policies. Additionally, current detection technologies, although
advancing, struggle to keep pace with the sophistication of deepfake generation methods,
underscoring the need for continuous innovation in this area.
The psychological toll on victims of deepfake abuse further reinforces the critical need for
intervention. Many victims experience anxiety, depression, and long-term trauma,
compounded by societal stigma and inadequate support systems. The social impact extends
beyond individuals, creating a chilling effect that discourages women and marginalized groups
from participating freely in online and public spaces. This silencing effect represents a
significant loss of voices and perspectives, weakening the inclusivity and diversity of digital
platforms.
Given the pervasive and evolving nature of deepfake threats, there is an urgent need for
comprehensive legal and societal interventions. Legislators must prioritize the creation of
explicit laws that address deepfake-related crimes, including non-consensual pornography and
political disinformation, with clear definitions and strict penalties. Technology platforms must
enhance their content moderation policies and invest in advanced detection tools, ensuring
timely removal of harmful content. Education and awareness campaigns are equally critical,
equipping individuals with the knowledge to identify deepfakes and protect themselves against
potential misuse.
8.2 SUGGESTIONS
Governments should implement targeted legislation that explicitly criminalizes the creation and dissemination of malicious deepfakes, including non-consensual pornography, political disinformation, and fraud. Such rules should promote cross-border collaboration and victim protection, following the example of South Korea's deepfake penalties. Deepfake misuse should be addressed by expediting the removal of harmful content, streamlining the collection of evidence, and strengthening data privacy worldwide, guided by the EU's GDPR.
BIBLIOGRAPHY
BOOKS
1. Taylor, D., Technologies of Women's (Sexual) Humiliation, in Feminist Philosophy and
Emerging Technologies, 171-189 (Routledge, 2023).
2. Lake, J., Deepfake and Non-Consensual Pornography: Recent Iterations of the
Gendered Battle for Rights in a Photograph, in A Research Agenda for Intellectual
Property Law and Gender, 221-249 (Edward Elgar Pub., 2024).
3. Thomasen, K., & Dunn, S., Reasonable Expectations of Privacy in an Era of Drones and
Deepfakes: Expanding the Supreme Court of Canada’s Decision in R v Jarvis, in The
Emerald International Handbook of Technology-Facilitated Violence and Abuse, 555-
576 (Emerald Pub. Ltd., 2021).
JOURNALS
1. Khan, Z. A., & Rizvi, A., Deepfakes: A Challenge for Women Security and Privacy, 5 CMR
Univ. J. Contemp. Legal Aff. 203 (2023).
2. Okolie, C., Artificial Intelligence-Altered Videos (Deepfakes), Image-Based Sexual
Abuse, and Data Privacy Concerns, 25 J. Int’l Women’s Stud. 11 (2023).
3. Laffier, J., & Rehman, A., Deepfakes and Harm to Women, 3 J. Digital Life & Learning 1
(2023).
4. Chesney, R., & Citron, D. K., 21st Century-Style Truth Decay: Deep Fakes and the
Challenge for Privacy, Free Expression, and National Security, 78 Md. L. Rev. 882 (2019).
5. Hall, M., Pester, A., & Atanasov, A., AI Threats to Women’s Rights: Implications and
Legislations, 2 J. L. & Emerging Tech. 88 (2022).
6. Kaushal, T., Women, Deepfake Pornography, and the Imperative of Legal Education in
the Age of AI (2023).
7. Samuel-Okon, A. D., et al., Assessing the Effectiveness of Network Security Tools in
Mitigating the Impact of Deepfakes AI on Public Trust in Media, 24 Archives of Current
Res. Int’l 355 (2024).
8. Viola, M., & Voto, C., Designed to Abuse? Deepfakes and the Non-Consensual Diffusion
of Intimate Images, 201 Synthese 30 (2023).
9. Yan, Y., Deep Dive into Deepfakes-Safeguarding Our Digital Identity, 48 Brook. J. Int’l L.
767 (2022).
10. Franks, M. A., & Waldman, A. E., Sex, Lies, and Videotape: Deep Fakes and Free Speech
Delusions, 78 Md. L. Rev. 892 (2019).
11. Molina, S. E., Lying Beneath the Surface: The Impacts of Deepfake Technology on the
Privacy and Safety of the LGBTQ+ Community, 46 Nova L. Rev. 251 (2021).
12. Rini, R., & Cohen, L., Deepfakes, Deep Harms, 22 J. Ethics & Soc. Phil. 143 (2022).
13. Han, M., The Infringement of Deepfake Technology on Personal Privacy and Legal
Protection: A Discussion Based on Article 1032 of the Civil Code, 41 J. Educ.,
Humanities & Soc. Sci. 188 (2024).
14. Scott, L., Your Body Should Not Belong to the Internet: Online Bodily Integrity in the
World of Deepfake Pornography, in EAI Int’l Conf. on AI for People, Democratizing AI,
105-116 (Cham: Springer Nature Switzerland, 2023).
WEBSITES
1. Chesney, R., & Citron, D., Deepfakes: A Looming Crisis for National Security,
Democracy, and Privacy, The Lawfare Blog (2018).
2. Desai, A., Face/Off: The Damaging Impacts of Deepfakes, NY Times (2020).
3. Brieger, A., Taking Back Their Faces: The Damages of Non-Consensual Deepfake
Pornography on Female Journalists (2021).
4. Rizzica, A., Sexually Explicit Deepfakes: To What Extent Do Legal Responses Protect the
Depicted Persons? (Master’s Thesis, Tilburg L. Sch., 2021).