IMS555 Group Assignment (AI Deepfakes)
PREPARED BY:
SUBMISSION DATE:
18 JANUARY 2024
TABLE OF CONTENTS
ACKNOWLEDGEMENT...................................................................................... 3
1.0 INTRODUCTION............................................................................................. 4
3.1 STRENGTH................................................................................................. 10
3.2 WEAKNESS................................................................................................ 10
3.3 OPPORTUNITIES........................................................................................11
3.4 THREAT....................................................................................................... 11
6.0 CONCLUSION................................................................................................ 19
REFERENCES...................................................................................................... 20
ACKNOWLEDGEMENT
Alhamdulillah. We thank Allah SWT, who with His will gave us the opportunity, internal
strength, and a chance to accomplish this project till the end.
We also would like to express our deep gratitude to various people for supporting us to
finish our project, especially to our dearest parents. Lots of love to all of them. We also would
like to extend our thanks to our friends for providing us with moral support, and also their
friendly advice throughout this group project. We would not be able to get this far without all of
your great assistance. This work would not have been possible without the financial
support and cooperation of all the participating members of this group project.
1.0 INTRODUCTION
Artificial Intelligence, or AI, is a way for computers to learn and function in a way that is
similar to how humans use our brains to learn and make decisions. AI uses computer programs to
accomplish this, so computers can perform tasks that would be difficult for humans to perform
on their own, such as helping doctors diagnose diseases, helping us predict the weather more
accurately, and even playing video games. AI works by using software to look at large amounts
of data and identify patterns, much as humans learn to tell a dog from a cat by seeing
examples. AI is important because it makes tasks faster, more accurate, and sometimes even
better than we could do on our own. It also solves complex problems that we might not be able
to figure out ourselves. There are three main approaches to machine learning: supervised,
unsupervised, and reinforcement learning. Each type learns differently, but all of them use
large amounts of data to get better at what they do. The potential of artificial
intelligence (AI) is unlimited; it's revolutionising both our personal and professional lives. So,
the next time you hear about AI, simply keep in mind that it's essentially a computer's clever
brain, enabling us to solve complex issues and do things we never would have imagined (GCF
Learn Free, 2023).
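The pattern-finding idea described above can be made concrete with a small sketch. The code below is an illustrative toy, not any specific system mentioned in this report: a nearest-centroid classifier, one of the simplest forms of supervised learning, which "learns" dog-versus-cat labels from a handful of made-up measurements.

```python
# Toy supervised learning: nearest-centroid classification.
# "Training" summarises labelled examples into one centroid per class;
# prediction assigns a new point to the closest centroid.

def train(examples):
    """examples: list of (features, label) pairs -> per-class centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is nearest to the given features."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist2)

# Hypothetical (weight_kg, ear_length_cm) measurements with labels.
data = [((30, 10), "dog"), ((25, 9), "dog"), ((4, 6), "cat"), ((5, 7), "cat")]
model = train(data)
print(predict(model, (28, 9)))   # prints "dog"
print(predict(model, (4.5, 6)))  # prints "cat"
```

Real systems use far richer features and models, but the principle is the same: the program improves its predictions by summarising large amounts of labelled data.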
“Disinformation was spread over social media in the run-up to the Slovak elections that
took place over the weekend, with videos featuring artificial intelligence (AI) produced deepfake
voices. One video shows a conversation in which Slovakian Progressive Party leader Michal
Simecka appears to discuss vote-buying from the Roma minority with a journalist, which experts
deemed synthesized by an AI tool trained on samples of the speakers’ voices. Technological
democracy research group Reset’s Rolf Fredheim said, “With the examples from the Slovak
election, there’s every reason to think that professional manipulators are looking at these tools to
create effects and distribute them in a coordinated way” (Solon, 2023).
Recent generative model-based methods have been successfully used for cloned image,
audio, and video generation. Generative models based on deep neural networks have been
successfully applied to many domains such as image generation, speech synthesis, and language
modelling. Advances in artificial intelligence (AI), speech synthesis, and image and video
generation technologies pose new security and privacy threats to biometric-based access control
systems and voice-driven interfaces. For instance, applications of voice-driven interfaces and
services including Amazon Alexa, Google Home, Apple Siri, Microsoft Cortana, and so on, are
on the rise. The private banking division of Barclays is the first financial services firm to deploy
voice biometrics as the primary means to authenticate customers to their call centres. Since then,
many voice biometric-based solutions have been deployed across several financial institutions,
including Banco Santander, Royal Bank of Canada, Tangerine Bank, Chase Bank, Citi Bank,
Bank of America, Manulife, and HSBC, and the list keeps growing (Malik & Changalvala, 2019).
2.0 EMERGING TECHNOLOGIES IN ARTIFICIAL INTELLIGENCE
2.1 FACE SWAPPING
Face swapping refers to the task of swapping faces between images or in a video
while maintaining the rest of the body and environment context (Papers With Code, n.d.).
Advances in digital photography have made it possible to capture large collections of
high-resolution images and share them on the Internet. While the size and availability of
these collections are leading to many exciting new applications, it is also creating new
problems. One of the most important of these problems is privacy. Online systems such
as Google Street View (http://maps.google.com/help/maps/streetview) and EveryScape
(http://everyscape.com) allow users to interactively navigate through panoramic images
of public places created using thousands of photographs. Many of the images contain
people who have not consented to be photographed, much less to have these photographs
publicly viewable. Identity protection by obfuscating the face regions in the acquired
photographs using blurring, pixelation, or simply covering them with black pixels is often
undesirable as it diminishes the visual appeal of the image (Bitouk et al., 2008).
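Bitouk et al. mention blurring, pixelation, or masking as the conventional ways to obfuscate faces. As a minimal sketch of one of those options, the function below (an illustration, not the method used by any system named above) pixelates a grayscale image, represented as a 2D list of 0-255 values, by replacing each tile with its average.

```python
def pixelate(image, block):
    """Replace each block x block tile of a grayscale image (a 2D list of
    0-255 ints) with the tile's average value. Fine detail such as facial
    features is destroyed while the overall scene is preserved."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            tile = [image[y][x] for y in ys for x in xs]
            avg = sum(tile) // len(tile)
            for y in ys:
                for x in xs:
                    out[y][x] = avg
    return out

# A 4x4 checkerboard "image": after 2x2 pixelation every tile averages to 105.
face = [[10, 200, 10, 200],
        [200, 10, 200, 10],
        [10, 200, 10, 200],
        [200, 10, 200, 10]]
print(pixelate(face, 2))  # every value becomes 105
```

As the paper observes, this kind of obfuscation protects identity at the cost of visual appeal, which is exactly what motivates face replacement as an alternative.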
According to Deshkar (2023), earlier this year, a lawsuit was filed in the US against
the viral face-swapping application Reface for commercially exploiting photos of
celebrities. The Reface application uses artificial intelligence (AI) to create
hyper-realistic face swaps. The app allows anyone to digitally paste a photograph of any
individual’s face onto a picture or video of another individual. There are several such
applications available for free over app stores. While face-swapping apps are meant for
harmless fun, their potential threats came to the fore earlier this year when a scamster
used this technology to impersonate the victim’s close friend and duped him of over 4.3
million yuan.
2.2 VOICE CLONING
It is no secret that AI has gotten really good over the last couple of years, and
scammers are already taking advantage of that fact. For starters, they are using
programs like ChatGPT to rewrite their notoriously bad
scam emails. So things like broken English and constant misspellings that are the
hallmarks of a scam email are slowly going to disappear. But the scariest thing that
scammers are doing right now is using voice cloning technology that allows them to
impersonate you and your loved ones. All they need is just a few seconds of your audio to
clone your voice. So they can grab that from a social media post that you made publicly
online or by calling you and recording when you pick up. Things have gotten so bad
recently that the Federal Trade Commission has had to make an announcement letting
Americans know how bad this scam is getting. As AI continues to get better and better,
so will the scams (Scammer Payback, 2023).
ElevenLabs' voice cloning software gained attention as users started imitating celebrity voices, generating content
where famous individuals say things they never did. Despite being primarily designed for
personal voice cloning, instances of hate speech and rule violations emerged, prompting
the company to limit voice cloning to paid users and develop tools to track misuse. The
conversation with ElevenLabs' CEO reveals the company's acknowledgement of the
misuse and its commitment to addressing the issue. Additionally, the transcript touches
on the broader implications of AI-generated voices, including concerns about scams and
their potential impact on trust in audio content. Experts suggest that as AI advances,
society needs to become more aware of the prevalence of AI-generated content and
establish guidelines for responsible use. The video concludes by emphasizing that voice
cloning is likely here to stay, posing both positive and negative implications for society
(BBC News, 2023).
The video from Al Jazeera English (2023) discusses the development of highly
realistic voice cloning technology by a Silicon Valley startup called Resemble AI. The
company can recreate voices, including that of deceased artist Andy Warhol, for various
applications, such as television documentaries. The CEO, Zohaib Ahmed, explains that
the technology doesn't imitate a voice but uses the actual voice data to learn patterns and
reproduce them convincingly. The cloned voice is demonstrated by having the speaker,
likely the journalist or interviewer, record a few phrases. The cloned voice then recites a
19th-century poem, showcasing the realism of the technology. Resemble AI claims to
have strict safeguards, requiring informed consent from speakers and implementing a
digital watermark on its products for verification. However, the transcript also raises
concerns about the potential misuse of voice cloning technology. The risk of rogue
programmers using the technology for harmful purposes, such as impersonation for
fraudulent activities or manipulating public opinion, is discussed. The example of a
14-year-old's voice being cloned to extort money from parents is mentioned, highlighting
the potential dangers. Experts and business leaders are calling for government regulations
and safeguards to prevent the misuse and uncontrolled spread of this form of AI
technology (Al Jazeera English, 2023).
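Resemble AI's actual watermarking scheme is not public. Purely as an illustration of the general idea, the sketch below hides a bit pattern in the least significant bit of 16-bit PCM audio samples, a classic and deliberately simple watermarking technique; all sample values and the mark itself are made up.

```python
def embed(samples, bits):
    """Hide watermark bits in the least significant bit of each sample.
    A change of at most 1 in a 16-bit sample is inaudible."""
    marked = [(s & ~1) | b for s, b in zip(samples, bits)]
    return marked + samples[len(bits):]

def extract(samples, n_bits):
    """Read the watermark back from the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

audio = [1000, -824, 317, 52, -7, 441]   # made-up 16-bit PCM samples
mark = [1, 0, 1, 1]
tagged = embed(audio, mark)
print(extract(tagged, len(mark)))  # prints [1, 0, 1, 1]
```

A scheme this naive is fragile (lossy compression or resampling destroys the mark), which is why production watermarking uses more robust, undisclosed methods and why verification alone cannot fully prevent misuse.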
2.3 SYNTHETIC MEDIA CREATION
Synthetic media is a catch-all term to describe video, image, text, or voice that has
been fully or partially generated using artificial intelligence (AI) algorithms (Synthesia,
2022). In recent years, the term “synthetic media” has emerged in common parlance as a
catch-all to describe video, image, text, or voice that has been fully or partially generated
by computers. The ways that people communicate have always been closely tied to the
technologies available at a given time. People didn’t use phones back in the days of the
Renaissance, for example, and we no longer paint in caves. But we do communicate via
snaps, TikTok, and DMs — completely new forms of content never seen before. What
we’re seeing is a constant improvement of technology that leads to new ways of
communicating, i.e., new media formats that vary in terms of creation, consumption, and
contextualisation (Riparbelli, 2023).
3.0 SWOT ANALYSIS
3.1 STRENGTH
3.1.1 MARKETING
3.1.2 CUSTOMIZATION
Allows the creation of personalized and unique voice content where individuals
with speech impairments or disabilities can create a synthetic voice of their
choice. Provides opportunities for customization in various applications, such as
virtual assistants and voiceovers.
It is a valuable tool for education and training, offering realistic simulations and
interactive learning experiences that allow students to express their creativity and
produce engaging content.
3.2 WEAKNESS
This technology raises ethical concerns related to privacy, consent, and the
potential for the creation of content that could harm individuals or organizations.
A person's privacy can be threatened because AI can imitate a person's face even
without that person's involvement or consent.
3.3 OPPORTUNITIES
3.4 THREAT
This technology poses a major threat in terms of generating fake news or realistic
misinformation that can have serious consequences for public opinion and
decision-making. Things like this can also cost lives due to depression caused by
people spreading false news about them.
4.0 DECISION MATRIX
c) Assign Weight to Factors (State of Nature)
Assign a weight to each of the factors listed above, based on relevant
references.
Factors       Weight
Technology    3
Social        3
Individual    2
Law           2
Government    1

Factors       Weight
Technology    3
Social        2
Individual    2
Law           1
Government    1
e) Tabulate Factors and Criteria
Explain the criteria more clearly for each factor so that it is easy to evaluate each
criterion based on the Scoring System.
Factors       Weight   Problem 1   Problem 2   Problem 3
Technology    3        3           3           3
Social        3        3           1           1
Individual    2        3           1           2
Law           2        1           1           2
Government    1        2           2           2
f) Total of The Scores
The table shows the level for each problem based on the factors and evaluations
done. The purpose is to identify which one problem deserves more attention and
solve that problem first.
              Problem 1   Problem 2   Problem 3
Total Score   27          18          22
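The weighted totals in (f) can be reproduced programmatically. The sketch below multiplies each score from table (e) by the corresponding weight from the first table in (c); the three problems are unnamed in this excerpt, so they are simply indexed here. Note that with these inputs the first total computes to 28 rather than the 27 shown above, so the original table may have used slightly different figures.

```python
# Weighted decision matrix: total_j = sum over factors of (weight * score_j).
weights = {"Technology": 3, "Social": 3, "Individual": 2, "Law": 2, "Government": 1}
scores = {                       # factor -> scores for problems 1..3 (table e)
    "Technology": [3, 3, 3],
    "Social":     [3, 1, 1],
    "Individual": [3, 1, 2],
    "Law":        [1, 1, 2],
    "Government": [2, 2, 2],
}

def weighted_totals(weights, scores):
    n = len(next(iter(scores.values())))
    totals = [0] * n
    for factor, row in scores.items():
        for j, s in enumerate(row):
            totals[j] += weights[factor] * s
    return totals

print(weighted_totals(weights, scores))  # prints [28, 18, 22]
```

The problem with the highest weighted total is the one that, per the text above, deserves attention and should be solved first.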
5.0 STRATEGY ON THE EXECUTION
Any issue arising in a particular context invariably possesses its corresponding solution.
The challenge within AI Deep Fakes lies in the potential for users to perpetrate fraudulent
activities across diverse dimensions through the editing capabilities afforded by existing AI Deep
Fake technologies. This raises significant apprehensions within both the community and the
government, given the escalating dissemination of misinformation, capable of transforming
genuine content into false narratives. In this scenario, the editor utilizes an authentic video
featuring Person A but manipulates it by substituting Person B's face for Person A's. What adds
to the complexity is that Person B's face seamlessly adopts the facial expressions and contours of
Person A, creating a remarkably realistic result. As Chesney and Citron (2018) highlighted,
many of these synthetic videos are pornographic and there is now the risk that malicious users
may synthesise fake content to harass victims. Nonetheless, there exist several alternatives aimed
at mitigating the escalating prevalence of AI Deep Fake-related problems across the nation.
These include:
This necessitates a holistic strategy to address the surge in crimes facilitated by Artificial
Intelligence (AI). Implementation of measures is essential to effectively counteract such
incidents, involving an exploration of diverse legal and technological solutions aimed at
detecting and mitigating AI-related crimes. It's crucial to acknowledge that legal solutions may
encounter limitations due to imposed legal constraints. The adopted approach introduces the
concept of "code as law" (Lessig 1999), wherein the software code serves as a regulatory
framework or a distinct code functioning as a legal reference for dealing with various crimes,
particularly those related to AI. If the legal code requires enhancement, addressing Artificial
Intelligence Crimes (AIC) will introduce an additional layer atop legal reasoning, comprising
normative elements.
The perpetration-by-others liability model rests on the
assertion that, for practical purposes, any party can be treated as if it possesses a mental state.
Furthermore, elucidating the intention becomes crucial in the context of perpetration by others.
Regarding social media, "developers who deliberately create social bots to engage in unethical
actions are guilty" (de Lima Salge and Berente 2017, 30). This underscores the idea that liability
requires the party to harbour evil intentions that can impact the position of the affected party.
Therefore, through collaborative efforts with technological control, one can perceive Artificial
Agents (AAs) strictly as tools for committing Artificial Intelligence Crimes (AIC). Moving
forward, the command responsibility model dovetails with the perpetration by others. This model
finds applicability in scenarios featuring a hierarchical chain of command, typical in settings like
the military and police forces. It inherently clarifies how liability is apportioned from
commanders to officers when investigating charges linked to the participation of an Artificial
Agent (AA). Next, the natural-probable-consequence liability model is not a novel concept; both
the natural-probable-consequence principle and command responsibility have historical antecedents.
These principles trace back to rules and laws in Rome, where the owner of an enslaved individual was
held accountable for the damages caused by that individual (Floridi 2017b, 4). This historical
context underscores the notion that not every prohibition should be viewed as a definitive
solution.
The AI component used to detect AIC will also have traceability limitations, even with
watermarking and similar techniques.
In conclusion, the psychological aspect of Artificial Intelligence Crimes (AIC) raises two
concerns: user manipulation and the creation of users with criminal intent. While this study
presents suggested solutions for these issues, one proposal deems social bots unacceptable due to
the potential for anthropomorphic imitation, including gender or ethnic perceptions. In the
context of sexual offences, an additional suggestion is to reinforce the ban or law as part of a
comprehensive package of laws aimed at enhancing social sexual morality and clearly expressing
the norm of intolerance (Danaher 2017, 29-30). The second recommendation involves leveraging
anthropomorphic Artificial Agents (AAs) as a strategy to combat sexual offences. For instance,
in addressing the misuse of artificial pedagogical agents, a suggestion is made to reprogram the
agents to provide responses to prevent or suppress student abuse (Veletsianos et al. 2008, 8). The
implementation of this proposal necessitates a decision on whether to criminalize the demand
side, the supply side, or both aspects of a transaction. Users may be subject to penalties within
the scope of this approach.
6.0 CONCLUSION
A key challenge in executing strategies against AI deepfakes is the privacy concern
that arises around data protection. Privacy concerns occur because addressing AI deepfakes may
require intensive monitoring and detection techniques, which can entail the collection
and analysis of personal information. Detecting AI deepfakes may demand extensive
observation in both physical and digital spaces, and such surveillance has the potential to
breach the privacy rights of individuals, giving rise to ethical and legal concerns.
Besides, adequate resources must be allocated to confront the AI deepfake
issue. Establishing and operating flexible monitoring and detection systems may
necessitate substantial financial investment, skilled personnel, and continuous maintenance.
Research and development of detection technologies may require funding, as may the legal
frameworks and infrastructure needed to develop and employ effective strategies to detect AI
deepfakes. Ensuring that personnel, including law enforcement agencies, legal professionals,
and the general public, receive adequate training is essential to recognising and responding
to this threat effectively. Governments and organisations may struggle to allocate sufficient
resources to stay ahead of emerging AI threats.
In a nutshell, despite all the challenges that these strategies may face in
execution, addressing AI deepfakes could positively impact everyone.
REFERENCES
Bitouk, D., Kumar, N., Dhillon, S., Belhumeur, P., & Nayar, S. K. (2008). Face swapping. ACM
SIGGRAPH 2008 Papers. https://doi.org/10.1145/1399504.1360638
Chesney, R., & Citron, D. (2018). Deep fakes: A looming crisis for national security, democracy
and privacy? Lawfare, February 21, 2018.
https://www.lawfareblog.com/deep-fakes-looming-crisis-national-security-democracy-an
d-privacy
Clayton, J. [BBC News]. (2023, April 24). What Could ‘Voice Cloning’ Technology Mean for
Society [Video]. YouTube. https://www.youtube.com/watch?v=A-1A8XoA3Qo
Danaher, J. (2017). Robotic rape and robotic child sexual abuse: Should they be criminalised?
Criminal Law and Philosophy, 11(1), 71–95. https://doi.org/10.1007/s11572-014-9362-x.
De Lima Salge, C. A., & Berente, N. (2017). Is that social bot behaving unethically?
Communications of the ACM, 60(9), 29–31. https://doi.org/10.1145/3126492.
Deshkar, A. (2023, October 28). Beyond Fun & Games: The Dangers of AI Face Swapping
Technology. Indian Express.
https://indianexpress.com/article/technology/artificial-intelligence/ai-face-swapping-tech
nology-dangers-9003123/
Floridi, L. (2017b). Robots, jobs, taxes, and responsibilities. Philosophy and Technology, 30(1),
1–4.
GCF Learn Free. (2023, July 28). What is AI? - AI Basics [Video]. YouTube.
https://www.youtube.com/watch?v=J4RqCSD--Dg
King, T. C., Aggarwal, N., Taddeo, M., & Floridi, L. (2019, February 14). Artificial intelligence
crime: An interdisciplinary analysis of foreseeable threats and solutions. Science and
Engineering Ethics. SpringerLink.
https://link.springer.com/article/10.1007/s11948-018-00081-0
Lessig, L. (1999). Code and other laws of cyberspace. New York: Basic Books.
Malik, H., & Changalvala, R. (2019, June 8). Fighting AI with AI: Fake speech detection using
deep learning. https://www.aes.org/e-lib/browse.cfm?elib=20479
Reynold, R. [Al Jazeera English]. (2023, April 22). Voice Cloning AI Technology Present Risk
& Opportunities [Video]. YouTube. https://www.youtube.com/watch?v=oLHlf0Xihog
Scammer Payback. (2023, April 06). A.I. Voice Cloning is Scary [Video]. YouTube.
https://www.youtube.com/shorts/BqRLYI84WYU
Sharkey, N., Goodman, M., & Ross, N. (2010). The coming robot crime wave. IEEE Computer
Magazine, 43(8), 6–8
Smith, J.B. (1996). Using a Decision Matrix to Assess Climate Change Adaptation Options. In:
Smith, J.B., et al. Adapting to Climate Change. Springer, New York, NY.
https://doi.org/10.1007/978-1-4613-8471-7_7
Solon, O. (2023, September 29). Trolls in Slovakian Election Tap AI Deepfakes to Spread
Disinfo. Bloomberg.
https://www.bloomberg.com/news/articles/2023-09-29/trolls-in-slovakian-election-tap-ai-
deepfakes-to-spread-disinfo?leadSource=uverify%20wall
Veletsianos, G., Scharber, C., & Doering, A. (2008). When sex, drugs, and violence enter the
classroom: Conversations between adolescents and a female pedagogical agent.
Interacting with Computers, 20(3), 292–301.
https://doi.org/10.1016/j.intcom.2008.02.007.
Wellman, M. P., & Rajan, U. (2017). Ethical issues for autonomous trading agents. Minds and
Machines, 27(4), 609–624.