
SCHOOL OF INFORMATION SCIENCE STUDIES
COLLEGE OF COMPUTING, INFORMATICS, AND MATHEMATICS

BACHELOR OF INFORMATION SCIENCE (HONS.) RECORDS MANAGEMENT

IMS555: DECISION THEORY

GROUP ASSIGNMENT:
AI DEEP FAKES

PREPARED BY:

NO.  NAME                               STUDENT ID  GROUP
1.   AINAA SHAREENA BINTI RAFEZAM       2021878176  JIM2465B
2.   MUHAMMAD ARIFF BIN HARUDIN         2021485246  JIM2465B
3.   NUR A’ISYAH BINTI AZIZI            2021847272  JIM2465B
4.   SOFEA SHAHEERAH BINTI SHAMSUDDIN   2021878034  JIM2465B


PREPARED FOR:
MS SUGUNA A/P OLAKANATHAN

SUBMISSION DATE:
18 JANUARY 2024
TABLE OF CONTENTS

ACKNOWLEDGEMENT
1.0 INTRODUCTION
2.0 EMERGING TECHNOLOGIES IN ARTIFICIAL INTELLIGENCE
3.0 SWOT ANALYSIS
    3.1 STRENGTHS
    3.2 WEAKNESSES
    3.3 OPPORTUNITIES
    3.4 THREATS
4.0 DECISION MATRIX
5.0 STRATEGY ON THE EXECUTION
6.0 CONCLUSION
REFERENCES

ACKNOWLEDGEMENT

Alhamdulillah. We thank Allah SWT, who by His will gave us the opportunity, inner strength, and the chance to see this project through to the end.

We would like to express our gratitude to Ms Suguna A/P Olakanathan, lecturer of IMS555 – Decision Theory at Universiti Teknologi MARA (Segamat Campus), for her guidance, kindness, and patience while we completed this group project assignment. Her willingness to give us time to complete this project has been very much appreciated.

We would also like to express our deep gratitude to the various people who supported us in finishing this project, especially our dearest parents; lots of love to all of them. We also extend our thanks to our friends for their moral support and friendly advice throughout this group project. We would not have been able to get this far without all of your great assistance. This work would not have been possible without the support and cooperation of all the participating members of this group project.

1.0 INTRODUCTION

Artificial Intelligence, or AI, is a way for computers to learn and function in a manner similar to how humans use their brains to learn and make decisions. AI uses computer programs to accomplish this, so computers can perform tasks that would be difficult for humans to perform on their own, such as helping doctors diagnose diseases, helping us predict the weather more accurately, and even playing video games. AI works by using software to look at large amounts of data and identify patterns, much as a person learns to tell a dog from a cat. AI is important because it makes tasks faster, more accurate, and sometimes even better than we could manage on our own. It also solves complex problems that we might not be able to figure out ourselves. There are three main types of machine learning: supervised, unsupervised, and reinforcement learning. Each type learns differently, but all use large amounts of data to get better at what they do. The potential of artificial intelligence (AI) is unlimited; it is revolutionising both our personal and professional lives. So, the next time you hear about AI, simply keep in mind that it is essentially a computer's clever brain, enabling us to solve complex issues and do things we never would have imagined (GCF Learn Free, 2023).

“Disinformation was spread over social media in the run-up to the Slovak elections that took place over the weekend, with videos featuring artificial intelligence (AI) produced deepfake voices. One video shows a conversation in which Slovakian Progressive Party leader Michal Simecka appears to discuss vote-buying from the Roma minority with a journalist, which experts deemed synthesized by an AI tool trained on samples of the speakers’ voices. Technological democracy research group Reset’s Rolf Fredheim said, “With the examples from the Slovak election, there’s every reason to think that professional manipulators are looking at these tools to create effects and distribute them in a coordinated way” (Solon, 2023).

Recent generative model-based methods have been successfully used for cloned image,
audio, and video generation. Generative models based on deep neural networks have been
successfully applied to many domains such as image generation, speech synthesis, and language
modelling. Advances in artificial intelligence (AI), speech synthesis, and image and video
generation technologies pose new security and privacy threats to biometric-based access control
systems and voice-driven interfaces. For instance, applications of voice-driven interfaces and
services including Amazon Alexa, Google Home, Apple Siri, Microsoft Cortana, and so on, are on the rise. The private banking division of Barclays is the first financial services firm to deploy
voice biometrics as the primary means to authenticate customers to their call centres. Since then,
many voice biometric-based solutions have been deployed across several financial institutions,
including Banco Santander, Royal Bank of Canada, Tangerine Bank, Chase Bank, Citi Bank, Bank of America, Manulife, and HSBC Bank, and this list keeps growing (Malik & Changalvala, 2019).

2.0 EMERGING TECHNOLOGIES IN ARTIFICIAL INTELLIGENCE

2.1 FACE SWAPPING

Face swapping refers to the task of swapping faces between images or in a video
while maintaining the rest of the body and environment context (Papers With Code, n.d.).
Advances in digital photography have made it possible to capture large collections of
high-resolution images and share them on the Internet. While the size and availability of these collections are enabling many exciting new applications, they are also creating new problems. One of the most important of these problems is privacy. Online systems such
as Google Street View (http://maps.google.com/help/maps/streetview) and EveryScape
(http://everyscape.com) allow users to interactively navigate through panoramic images
of public places created using thousands of photographs. Many of the images contain
people who have not consented to be photographed, much less to have these photographs
publicly viewable. Identity protection by obfuscating the face regions in the acquired
photographs using blurring, pixelation, or simply covering them with black pixels is often
undesirable as it diminishes the visual appeal of the image (Bitouk et al., 2008).
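
As a concrete illustration of the obfuscation techniques mentioned above, the short sketch below pixelates detected face regions in a photograph. It is a minimal example, assuming the opencv-python package and its bundled Haar cascade face detector; the function name, block size, and file names are illustrative choices, not taken from any of the cited systems.

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade alongside the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def pixelate_faces(in_path: str, out_path: str, blocks: int = 12) -> None:
    """Detect faces and replace each with a coarse pixelated mosaic."""
    img = cv2.imread(in_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        face = img[y:y + h, x:x + w]
        # Downscale to a tiny grid, then upscale with nearest-neighbour
        # interpolation so the face becomes unrecognisable.
        small = cv2.resize(face, (blocks, blocks))
        img[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST)
    cv2.imwrite(out_path, img)

pixelate_faces("street_scene.jpg", "street_scene_anonymised.jpg")  # hypothetical files
```

As Bitouk et al. (2008) note, this kind of masking protects identity at the cost of visual appeal, which is precisely the gap that face-swapping methods aim to close.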

According to Deshkar (2023), earlier this year, a lawsuit was filed in the US against
the viral face-swapping application Reface for commercially exploiting photos of
celebrities. The Reface application uses artificial intelligence (AI) to create
hyper-realistic face swaps. The app allows anyone to digitally paste a photograph of any
individual’s face onto a picture or video of another individual. There are several such
applications available for free over app stores. While face-swapping apps are meant for
harmless fun, their potential threats came to the fore earlier this year when a scamster used this technology to impersonate the victim’s close friend and duped him out of over 4.3 million yuan.

2.2 VOICE CLONING

It is no secret that AI has gotten really good over the last couple of years, and scammers are already taking advantage of that fact. For starters, they are using programs like ChatGPT to rewrite their notoriously bad scam emails, so things like broken English and constant misspellings that are the hallmarks of a scam email are slowly going to disappear. But the scariest thing that scammers are doing right now is using voice cloning technology that allows them to impersonate you and your loved ones. All they need is a few seconds of your audio to clone your voice, and they can grab that from a social media post you made publicly online or by calling you and recording when you pick up. Things have gotten so bad recently that the Federal Trade Commission has had to make an announcement letting Americans know how serious this scam is getting. As AI continues to get better and better, so will the scams (Scammer Payback, 2023).

According to Malik and Changalvala (2019), voice cloning technologies have found applications in a variety of areas ranging from personalized speech interfaces to advertisement, video gaming, and so on. Existing voice cloning systems are capable of learning speaker characteristics from a few samples and generating perceptually indistinguishable speech. These advances pose new security and privacy threats to voice-driven interfaces. Artificial human speech synthesis from text, also known as text-to-speech (TTS), is an essential feature in many applications including voice-driven interfaces, humanoid robots, navigation systems, video games, chatbots, and accessibility for the visually impaired. Modern TTS systems are based on complex, multistage processing pipelines, each of which may rely on hand-engineered features and heuristics.
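
To make the TTS building block concrete, here is a minimal sketch of driving an off-the-shelf speech synthesizer from Python. The pyttsx3 library is an assumption chosen for illustration (it is not mentioned in the cited sources); it wraps the operating system's native TTS engine.

```python
# Minimal text-to-speech sketch using pyttsx3 (pip install pyttsx3).
# pyttsx3 is an illustrative choice, not a library named in the cited sources.
import pyttsx3

engine = pyttsx3.init()            # binds to the platform's native TTS engine
engine.setProperty("rate", 160)    # speaking rate in words per minute
engine.say("Text-to-speech turns written text into audible speech.")
engine.runAndWait()                # block until the utterance finishes
```

Voice cloning systems differ from such general-purpose TTS in that they additionally condition the synthesizer on a few samples of a target speaker, which is what makes the impersonation threats described above possible.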

In January 2023, ElevenLabs released a trial version of its AI speech software intended for dubbing movies. Given a minute of a person speaking, the software can quickly put together a ‘clone’ of that person’s voice. It went viral, with the public using the technology to clone the voices of celebrities and politicians. The BBC’s James Clayton investigates the impact voice cloning technology could have on society – from scams to the 2024 US election. The video discusses the release and impact of ElevenLabs' AI translation software, initially intended for movie dubbing. However, the software gained attention as users started imitating celebrity voices, generating content
where famous individuals say things they never did. Despite being primarily designed for
personal voice cloning, instances of hate speech and rule violations emerged, prompting
the company to limit voice cloning to paid users and develop tools to track misuse. The
conversation with ElevenLabs' CEO reveals the company's acknowledgement of the
misuse and its commitment to addressing the issue. Additionally, the transcript touches
on the broader implications of AI-generated voices, including concerns about scams and
their potential impact on trust in audio content. Experts suggest that as AI advances,
society needs to become more aware of the prevalence of AI-generated content and
establish guidelines for responsible use. The video concludes by emphasizing that voice
cloning is likely here to stay, posing both positive and negative implications for society
(BBC News, 2023).

The video from Al Jazeera English (2023) discusses the development of highly
realistic voice cloning technology by a Silicon Valley startup called Resemble AI. The
company can recreate voices, including that of deceased artist Andy Warhol, for various
applications, such as television documentaries. The CEO, Zohaib Ahmed, explains that
the technology doesn't imitate a voice but uses the actual voice data to learn patterns and
reproduce them convincingly. The cloned voice is demonstrated by having the speaker,
likely the journalist or interviewer, record a few phrases. The cloned voice then recites a
19th-century poem, showcasing the realism of the technology. Resemble AI claims to
have strict safeguards, requiring informed consent from speakers and implementing a
digital watermark on its products for verification. However, the transcript also raises
concerns about the potential misuse of voice cloning technology. The risk of rogue
programmers using the technology for harmful purposes, such as impersonation for
fraudulent activities or manipulating public opinion, is discussed. The example of a
14-year-old's voice being cloned to extort money from parents is mentioned, highlighting
the potential dangers. Experts and business leaders are calling for government regulations
and safeguards to prevent the misuse and uncontrolled spread of this form of AI
technology (Al Jazeera English, 2023).
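
Resemble AI's actual watermarking scheme is proprietary, but the idea of verifying provenance by embedding an inaudible signature can be sketched in a few lines. The following is a toy illustration, assuming audio represented as a numpy float array; the key-seeded pseudorandom signature and the embedding strength are invented for the example and bear no relation to any vendor's product.

```python
import numpy as np

STRENGTH = 0.05  # embedding amplitude for this toy example

def embed_watermark(audio: np.ndarray, key: int) -> np.ndarray:
    """Add a low-amplitude pseudorandom signature derived from a secret key."""
    rng = np.random.default_rng(key)
    signature = rng.standard_normal(audio.shape[0])
    return audio + STRENGTH * signature

def has_watermark(audio: np.ndarray, key: int) -> bool:
    """Correlate against the keyed signature; only keyed audio scores high."""
    rng = np.random.default_rng(key)
    signature = rng.standard_normal(audio.shape[0])
    score = float(np.dot(audio, signature)) / audio.shape[0]
    # The score concentrates near STRENGTH for watermarked audio and near
    # zero for unrelated audio, so half of STRENGTH is a safe threshold here.
    return score > STRENGTH / 2

clip = np.random.default_rng(1).standard_normal(48000)  # stand-in for real audio
marked = embed_watermark(clip, key=1234)
print(has_watermark(marked, key=1234), has_watermark(clip, key=1234))  # True False
```

The verification side only works for parties who hold the key, which is why watermarking supports a vendor's own provenance checks but does not by itself stop a rogue programmer from generating unmarked clones.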

2.3 SYNTHETIC MEDIA CREATION

Synthetic media is a catch-all term to describe video, image, text, or voice that has been fully or partially generated using artificial intelligence (AI) algorithms (Synthesia, 2022). In recent years, the term “synthetic media” has emerged in common parlance as a
catch-all to describe video, image, text, or voice that has been fully or partially generated
by computers. The ways that people communicate have always been closely tied to the
technologies available at a given time. People didn’t use phones back in the days of the
Renaissance, for example, and we no longer paint in caves. But we do communicate via
snaps, TikTok, and DMs — completely new forms of content never seen before. What
we’re seeing is a constant improvement of technology that leads to new ways of
communicating, i.e., new media formats that vary in terms of creation, consumption, and contextualisation (Riparbelli, 2023).

3.0 SWOT ANALYSIS

3.1 STRENGTHS

3.1.1 MARKETING

Deepfake technology enables imaginative storytelling and character development, with high potential to create entertaining and engaging content in the film, television, and social media industries. It is a valuable tool for marketing and advertising, especially in campaigns that require personalized or unique visuals.

3.1.2 CUSTOMIZATION

Voice cloning allows the creation of personalized and unique voice content: individuals with speech impairments or disabilities can create a synthetic voice of their choice. It also provides opportunities for customization in various applications, such as virtual assistants and voiceovers.

3.1.3 EDUCATION APPLICATIONS

It is a valuable tool for education and training, offering realistic simulations and
interactive learning experiences that allow students to express their creativity and
produce engaging content.

3.2 WEAKNESSES

3.2.1 MISUSE AND MANIPULATION

The main weakness of Deepfake AI technology is its potential for abuse by


unscrupulous individuals, including the creation of misleading or malicious
content for the purpose of fraud, spreading false information or defaming people.

3.2.2 ETHICAL CONCERNS

This technology raises ethical concerns related to privacy, consent, and the potential for the creation of content that could harm individuals or organizations. An individual’s privacy is threatened because AI can imitate their face without any involvement from the real person.

3.3 OPPORTUNITIES

3.3.1 ENTERTAINMENT INDUSTRY

Deepfake AI has the potential to revolutionize the entertainment industry by creating realistic CGI characters and enhancing special effects in movies and video games, making them more visually engaging than before.

3.3.2 EDUCATION AND TRAINING

Deepfake technology can be used for educational purposes, such as simulating realistic scenarios for training, medical simulations, or language learning, making lessons more engaging and easier for students to understand.

3.4 THREATS

3.4.1 FAKE NEWS AND MISINFORMATION

This technology poses a major threat in terms of generating fake news or realistic misinformation that can have serious consequences for public opinion and decision-making. Such content can even cost lives, for example through depression caused by false news being spread about a person.

3.4.2 PRIVACY CONCERNS

Deepfake technology raises concerns about the unauthorized use of personal images or videos by unscrupulous individuals, with the potential to compromise individual privacy and security.

4.0 DECISION MATRIX

A decision matrix is a methodology or procedure that can be used to define problems, examine them, and come to a decision for each one that arises. The methodology incorporates problem tables, problem factor evaluation tables, and score calculation tables. As Joel B. Smith (1996) put it, “A decision matrix can be used to analyze the effectiveness of adaptation options in meeting specific policy goals under various climate change scenarios”. The steps of the decision matrix are as follows:

a) Identify The Problem

List the problems that arise from AI deepfakes:
i) Face Swapping
ii) Voice Cloning
iii) Synthetic Media Creation

b) State of Nature (Factors of The Problems)

List the factors that contribute to the AI deepfake problem:
i) Technology
ii) Social
iii) Individual
iv) Law
v) Government

c) Assign Weight to Factors (State of Nature)

Assign a weight to each factor listed above, based on relevant references.

Factors       Weight
Technology    3
Social        3
Individual    2
Law           2
Government    1

Scale: 1 = A little, 2 = Simple, 3 = A lot

d) Design Scoring System

Rate the factors listed above based on the level of importance of each factor, i.e., how much each contributes to the deepfake AI problem.

Factors       Weight
Technology    3
Social        2
Individual    2
Law           1
Government    1

Scale: 1 = Low, 2 = Medium, 3 = High

13
e) Tabulate Factors and Criteria

Spell out the criteria for each factor so that each criterion is easy to evaluate against the scoring system.

Factor        Weight  Face Swapping         Voice Cloning            Synthetic Media Creation
Technology    3       Abuse of technology   Identity fraud           The existence of a false identity
Social        3       Spread of fake news   Fights with communities  Social changes
Individual    2       Self-carelessness     Self-carelessness        Violation of personal rights
Law           2       Violation of the law  Violation of the law     Violation of the law
Government    1       Weak cyber defences   Less empowering law      Political changes

● Based on the Scoring System

Factor        Weight  Face Swapping  Voice Cloning  Synthetic Media Creation
Technology    3       3              3              3
Social        3       3              1              1
Individual    2       3              1              2
Law           2       1              1              2
Government    1       1              2              2

Scale: 1 = Low, 2 = Medium, 3 = High

f) Total of the Scores

The table shows the weighted score for each problem, computed from the factor weights and the scores above (weight x score per cell). The purpose is to identify which problem deserves the most attention and should be solved first.

Factor        Weight  Face Swapping  Voice Cloning  Synthetic Media Creation
Technology    3       3 x 3 = 9      3 x 3 = 9      3 x 3 = 9
Social        3       3 x 3 = 9      1 x 3 = 3      1 x 3 = 3
Individual    2       3 x 2 = 6      1 x 2 = 2      2 x 2 = 4
Law           2       1 x 2 = 2      1 x 2 = 2      2 x 2 = 4
Government    1       1 x 1 = 1      2 x 1 = 2      2 x 1 = 2
Total Score           27             18             22

Face Swapping records the highest total score (27), so it is the problem that deserves attention first.
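
For readers who want to reproduce the totals, the short sketch below implements the weighted-sum computation from the tables above in Python. The dictionaries simply transcribe the weights and scores from steps c) and e); nothing here goes beyond the report's own figures.

```python
# Weighted decision matrix from Section 4.0: total = sum(weight * score).
weights = {"Technology": 3, "Social": 3, "Individual": 2, "Law": 2, "Government": 1}

# Scores per problem (1 = Low, 2 = Medium, 3 = High), transcribed from step e).
scores = {
    "Technology": {"Face Swapping": 3, "Voice Cloning": 3, "Synthetic Media Creation": 3},
    "Social":     {"Face Swapping": 3, "Voice Cloning": 1, "Synthetic Media Creation": 1},
    "Individual": {"Face Swapping": 3, "Voice Cloning": 1, "Synthetic Media Creation": 2},
    "Law":        {"Face Swapping": 1, "Voice Cloning": 1, "Synthetic Media Creation": 2},
    "Government": {"Face Swapping": 1, "Voice Cloning": 2, "Synthetic Media Creation": 2},
}

problems = ["Face Swapping", "Voice Cloning", "Synthetic Media Creation"]
totals = {p: sum(weights[f] * scores[f][p] for f in weights) for p in problems}
for problem, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{problem}: {total}")
# Face Swapping: 27, Synthetic Media Creation: 22, Voice Cloning: 18
```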

5.0 STRATEGY ON THE EXECUTION

Every issue arising in a particular context has a corresponding solution. The challenge with AI deep fakes lies in the potential for users to perpetrate fraudulent activities across diverse dimensions through the editing capabilities afforded by existing AI deep fake technologies. This raises significant apprehension within both the community and the government, given the escalating dissemination of misinformation capable of transforming genuine content into false narratives. In a typical scenario, the editor takes an authentic video featuring Person A but manipulates it by substituting Person B’s face for Person A’s. What adds to the complexity is that Person B’s face seamlessly adopts the facial expressions and contours of Person A, creating a remarkably realistic result. As Chesney and Citron (2018) highlighted, many of these synthetic videos are pornographic, and there is now the risk that malicious users may synthesise fake content to harass victims. Nonetheless, several measures can help mitigate the escalating prevalence of AI deep fake-related problems across the nation. These include the following.

First, a holistic strategy is needed to address the surge in crimes facilitated by Artificial Intelligence (AI). Implementing effective countermeasures involves exploring diverse legal and technological solutions aimed at detecting and mitigating AI-related crimes. It is crucial to acknowledge that legal solutions may encounter limitations due to imposed legal constraints. The adopted approach introduces the concept of "code as law" (Lessig, 1999), wherein software code serves as a regulatory framework, or a distinct code functions as a legal reference for dealing with various crimes, particularly those related to AI. If the legal code requires enhancement, addressing Artificial Intelligence Crimes (AIC) will introduce an additional layer atop legal reasoning, comprising normative elements.

Secondly, addressing liability assumes paramount importance. While liability is a broad and multifaceted topic, Hallevy's (2012) literature review has identified four distinct models: direct liability, perpetration by others, command responsibility, and natural probable consequences. Some proponents advocate for holding individuals directly and fully responsible, contending that "the process of analysis in AI systems is parallel to current human understanding" (Hallevy 2012, 15). This viewpoint resonates with Daniel Dennett's (1987) assertion that, for practical purposes, any party can be treated as if it possesses a mental state.
Furthermore, elucidating the intention becomes crucial in the context of perpetration by others.
Regarding social media, "developers who deliberately create social bots to engage in unethical
actions are guilty" (de Lima Salge and Berente 2017, 30). This underscores the idea that liability requires the party to harbour malicious intent capable of harming the affected party.
Therefore, through collaborative efforts with technological control, one can perceive Artificial
Agents (AAs) strictly as tools for committing Artificial Intelligence Crimes (AIC). Moving
forward, the command responsibility model dovetails with perpetration by others. This model
finds applicability in scenarios featuring a hierarchical chain of command, typical in settings like
the military and police forces. It inherently clarifies how liability is apportioned from
commanders to officers when investigating charges linked to the participation of an Artificial
Agent (AA). Next, the natural probability effect liability model is not a novel concept, and both
the natural probability principle and command responsibility have historical antecedents. These
principles trace back to rules and laws in Rome, where the owner of an enslaved individual was
held accountable for the damages caused by that individual (Floridi 2017b, 4). This historical
context underscores the notion that not every prohibition should be viewed as a definitive
solution.

Thirdly, regarding monitoring, a suggested approach is the development of an Artificial Intelligence Crime (AIC) predictor leveraging domain knowledge. Establishing such a tool holds
the potential to address limitations inherent in more intricate machine learning classification.
This is crucial because the features employed by the machine for detection can sometimes be
exploited to circumvent AIC prevention. The second recommendation involves employing social
simulation to uncover criminal patterns (Wellman and Rajan 2017, 14). However, in the process
of pattern generation, there needs to be a competitive element, often with constrained capacity, to
ensure that the pattern created becomes the exclusive property of a party, thereby preventing any
crime of imitation. The third recommendation involves addressing detectability by intentionally
leaving traces in the components that can be detected for Artificial Intelligence Crimes (AIC).
For instance, manufacturers typically leave physical traces in Artificial Agent (AA) equipment,
such as Unmanned Underwater Vehicles (UUVs) used for drug distribution, or fingerprints in
third-party AI software (Sharkey et al. 2010). However, a lack of knowledge of and control over the AI component used to detect AIC will limit the traceability offered by watermarking and similar techniques.
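
The AIC-predictor idea can be made concrete with a toy classifier. The sketch below, assuming scikit-learn and two invented hand-engineered audio features, shows the general shape of such a detector; the features, data, and labels are placeholders for illustration, not a working deepfake detector.

```python
# Toy "AIC predictor": a logistic-regression detector over two hand-crafted
# audio features. Features, data, and labels are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(clip: np.ndarray) -> np.ndarray:
    """Two crude spectral statistics of a mono audio clip."""
    spectrum = np.abs(np.fft.rfft(clip)) + 1e-9
    hf_ratio = spectrum[len(spectrum) // 2:].sum() / spectrum.sum()
    flatness = np.exp(np.log(spectrum).mean()) / spectrum.mean()
    return np.array([hf_ratio, flatness])

# Placeholder corpus: random clips with stand-in genuine/synthetic labels.
rng = np.random.default_rng(0)
X = np.stack([extract_features(rng.standard_normal(16_000)) for _ in range(200)])
y = rng.integers(0, 2, size=200)  # 1 = synthetic, 0 = genuine (stand-in labels)

clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))  # flags for the first five clips
```

The sketch also illustrates the caveat raised in the text: because the model's decision rests on a handful of inspectable features, an adversary who learns those features can craft content that evades them.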

Finally, the psychological aspect of Artificial Intelligence Crimes (AIC) raises two concerns: user manipulation and the creation of users with criminal intent. While this study presents suggested solutions for these issues, one proposal deems social bots unacceptable due to the potential for anthropomorphic imitation, including gender or ethnic perceptions. In the context of sexual offences, an additional suggestion is to reinforce the ban or law as part of a comprehensive package of laws aimed at enhancing social sexual morality and clearly expressing the norm of intolerance (Danaher 2017, 29-30). The second recommendation involves leveraging anthropomorphic Artificial Agents (AAs) as a strategy to combat sexual offences. For instance, in addressing the misuse of artificial pedagogical agents, a suggestion is made to reprogram the agents to provide responses that prevent or suppress student abuse (Veletsianos et al. 2008, 8). Implementing this proposal necessitates a decision on whether to criminalize the demand side, the supply side, or both sides of a transaction. Users may be subject to penalties within the scope of this approach.

6.0 CONCLUSION

Information falsification on social media platforms is extremely prevalent. Individuals who lack accountability may target professionals with such activities, perhaps out of dissatisfaction or an intent to sabotage them. Several recommendations have been given to address or reduce these unnecessary activities, including the development of Artificial Intelligence Crime (AIC) predictors, legal and technological solutions, and the clarification of liability. However, executing the strategies mentioned above comes with its own challenges.

The first challenge in executing strategies against AI deep fakes is the privacy concern involved in protecting data. Privacy concerns arise when addressing AI deep fakes because of the potential need for intensive monitoring and detection techniques, which may entail the collection and analysis of personal information. Detecting AI deep fakes may require extensive observation in both physical and digital spaces, and such surveillance has the potential to breach individuals’ privacy rights, giving rise to ethical and legal concerns.

Besides privacy, substantial resources are required to face the AI deepfake issue. The establishment and operation of flexible monitoring and detection systems may necessitate substantial financial investment, skilled personnel, and continuous maintenance. The research and development of detection technologies need funding, as do the legal frameworks and the infrastructure required to develop and deploy effective detection strategies. Ensuring that personnel, including law enforcement agencies, legal professionals, and the general public, receive adequate training is essential to recognising and responding to this threat effectively. Governments and organisations may struggle to allocate sufficient resources to stay ahead of emerging AI threats.

In a nutshell, despite all the challenges that executing these strategies may bring, addressing AI deep fakes could positively impact everyone.

REFERENCES

Bitouk, D., Kumar, N., Dhillon, S., Belhumeur, P., & Nayar, S. K. (2008). Face swapping. ACM
SIGGRAPH 2008 Papers. https://doi.org/10.1145/1399504.1360638

Chesney, R., & Citron, D. (2018). Deep fakes: A looming crisis for national security, democracy
and privacy? Lawfare, February 21, 2018.
https://www.lawfareblog.com/deep-fakes-looming-crisis-national-security-democracy-and-privacy

Clayton, J. [BBC News]. (2023, April 24). What Could ‘Voice Cloning’ Technology Mean for
Society [Video]. YouTube. https://www.youtube.com/watch?v=A-1A8XoA3Qo

Danaher, J. (2017). Robotic rape and robotic child sexual abuse: Should they be criminalised?
Criminal Law and Philosophy, 11(1), 71–95. https://doi.org/10.1007/s11572-014-9362-x.

De Lima Salge, C. A., & Berente, N. (2017). Is that social bot behaving unethically?
Communications of the ACM, 60(9), 29–31. https://doi.org/10.1145/3126492.

Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press.

Deshkar, A. (2023, October 28). Beyond Fun & Games: The Dangers of AI Face Swapping
Technology. Indian Express.
https://indianexpress.com/article/technology/artificial-intelligence/ai-face-swapping-technology-dangers-9003123/

Floridi, L. (2017b). Robots, jobs, taxes, and responsibilities. Philosophy and Technology, 30(1),
1–4.

GCF Learn Free. (2023, July 28). What is AI? - AI Basics [Video]. YouTube.
https://www.youtube.com/watch?v=J4RqCSD--Dg

Hallevy, G. (2012). Unmanned vehicles—Subordination to criminal law under the modern
concept of criminal liability. Journal of Law, Information and Science, 21, 200–211.

King, T. C., Aggarwal, N., Taddeo, M., & Floridi, L. (2019, February 14). Artificial Intelligence
Crime: An interdisciplinary analysis of foreseeable threats and solutions. Science and
Engineering Ethics. SpringerLink.
https://link.springer.com/article/10.1007/s11948-018-00081-0

Lessig, L. (1999). Code and other laws of cyberspace. New York: Basic Books.

Malik, H., & Changalvala, R. (2019, June 8). Fighting AI with AI: Fake Speech Detection Using
Deep Learning. https://www.aes.org/e-lib/browse.cfm?elib=20479

Papers With Code. (n.d.). Face Swapping.
https://paperswithcode.com/task/face-swapping#task-home

Reynold, R. [Al Jazeera English]. (2023, April 22). Voice Cloning AI Technology Present Risk
& Opportunities [Video]. YouTube. https://www.youtube.com/watch?v=oLHlf0Xihog

Riparbelli, V. (2023, November 1). The Future of Synthetic Media. Synthesia.
https://www.synthesia.io/post/the-future-of-synthetic-media

Scammer Payback. (2023, April 06). A.I. Voice Cloning is Scary [Video]. YouTube.
https://www.youtube.com/shorts/BqRLYI84WYU

Sharkey, N., Goodman, M., & Ross, N. (2010). The coming robot crime wave. IEEE Computer
Magazine, 43(8), 6–8.

Smith, J.B. (1996). Using a Decision Matrix to Assess Climate Change Adaptation Options. In:
Smith, J.B., et al. Adapting to Climate Change. Springer, New York, NY.
https://doi.org/10.1007/978-1-4613-8471-7_7

Solon, O. (2023, September 29). Trolls in Slovakian Election Tap AI Deepfakes to Spread
Disinfo. Bloomberg.
https://www.bloomberg.com/news/articles/2023-09-29/trolls-in-slovakian-election-tap-ai-deepfakes-to-spread-disinfo?leadSource=uverify%20wall

Synthesia. (2022, December 22). What is Synthetic Media? [Video]. YouTube.
https://www.youtube.com/watch?v=n_1v1kkTyiY

Veletsianos, G., Scharber, C., & Doering, A. (2008). When sex, drugs, and violence enter the
classroom: Conversations between adolescents and a female pedagogical agent.
Interacting with Computers, 20(3), 292–301.
https://doi.org/10.1016/j.intcom.2008.02.007.

Wellman, M. P., & Rajan, U. (2017). Ethical issues for autonomous trading agents. Minds and
Machines, 27(4), 609–624.

