
Uppsala University logotype

IT mDV 25 001

Degree Project 30 credits


January, 2025

Generative AI in Higher
Education: The Students’
Perception.

Udit Verma

Master’s Programme in Computer Science



Generative AI in Higher Education: The Students’ Perception.


Udit Verma

Abstract
The integration of Generative AI (GenAI) tools into computing education represents a significant
shift in the education sector, influencing both students’ learning processes and academic
practices. This research investigates the use of GenAI tools through semi-structured interviews
with 25 computer science students at Uppsala University. It explores participants’ experiences,
perceptions, and expectations of GenAI tools in terms of educational purposes, trustworthiness
of outputs, and ethical implications in academic work. The findings confirm that GenAI tools play
a crucial role in fostering innovative educational practices and promoting independent learning.
However, they also introduce ethical challenges, particularly in relation to academic misconduct.
The research offers strategies to address these challenges and provides four key
recommendations for developing effective approaches to ensure the responsible use of GenAI
in computing education.

Faculty of Science and Technology


Uppsala University, Uppsala
Supervisor: Anna Eckerdal Subject Reviewer: Anna Eckerdal
Examiner: Mats Daniels
Contents

1 Introduction 1

2 Related Work 2

3 Methodology 8

4 Results 15
4.1 GenAI usage patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Timing of GenAI usage . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3 Trustworthiness of GenAI outputs . . . . . . . . . . . . . . . . . . . . 19
4.4 Perceptions of Academic Misconduct involving GenAI . . . . . . . . . 21

5 Results in Context 23

6 Discussion 25

7 Recommendation for Education 28

8 Limitations 29

9 Future Works 30

10 Conclusion 31

11 References 32

12 Appendix A 39

13 Appendix B 44

1 Introduction

The term Generative AI refers to computational techniques that are capable of gener-
ating seemingly new, meaningful content such as text, images or audio from training
data [24]. The introduction of Generative AI (GenAI) models has led to significant
excitement in the computing education community [5]. These models can solve basic
programming assignments [73] and offer substantial benefits to students as they become
increasingly integrated into educational frameworks [5]. Understandably, these substan-
tial and rapid advances in the performance of generative models are causing excitement
and consternation among students. Tools such as ChatGPT have transitioned rapidly
from experimental concepts to essential components of organizational workflows, offer-
ing support in areas ranging from content generation and coding assistance to complex
problem solving.
However, the rapid adoption of GenAI in educational practices also brings about sig-
nificant operational and ethical challenges that need careful oversight and monitoring.
These technological innovations are revolutionary but also disrupt traditional work-
flows, including evaluation metrics, decision-making processes, dependencies and feed-
back loops.
The growing reliance on GenAI in educational processes calls for careful reassessment
because of its impact on both the development and the potential of computing students.
GenAI tools can boost productivity and efficiency, but they also introduce the risk of
over-reliance, plagiarism and other forms of misuse of AI-generated outputs, which
sidelines students’ own expertise and creativity [73].
To understand how and when computing students engage with GenAI tools in their
day-to-day tasks, and their concerns related to GenAI, a study was conducted at Uppsala
University with twenty-five students. This study seeks to provide detailed insight into
the utilization patterns among computing students and into their perceptions of
reliability and ethical usage in academic practices.

The research rests on four foundational research questions:

1. How do university computing students use GenAI in their educational activities?

2. When do they use GenAI tools in their educational process?

3. How trustworthy do CS students think the output from GenAI is?

4. What usage of GenAI do the students think is academic misconduct?


2 Related Work

Introduction
Recent advances in generative AI and language processing have enabled the develop-
ment of large language models (LLMs) that show impressive capabilities in generating
and reasoning about code [19]. Major LLM-based products like Generative Pre-trained
Transformer (GPT-4), CodeX, GitHub Copilot, Bard and ChatGPT have significant im-
plications for computer science education and practice [57].
This literature review provides essential definitions of GenAI proposed by researchers,
which aid in understanding its importance and impact on computer science education. It
further explains ‘when’ computer science students feel the significance of GenAI tools
in their studies and also explores the patterns of ‘how’ computer science students use
GenAI in their academic tasks and practicals. As researchers have noted, a growing
body of work has begun to examine empirically how these LLMs perform on tasks and
assessments commonly used in programming courses [25] & [26]. Furthermore, this
section examines the factors that shape the interaction between GenAI and its users
(computer science students). The section concludes by addressing the risks associated
with GenAI: over-reliance impeding learning [68], circumventing assessments [15] &
[18], challenges related to plagiarism detection [64] & [58], and inherent biases in the
technology [43].

Generative Artificial Intelligence


According to Kingma & Welling [39], Generative AI (GenAI) encompasses a group of
machine learning algorithms designed to generate new data samples that mimic existing
datasets. One of the foundational techniques in GenAI is the Variational Autoencoder
(VAE), which is a type of neural network that learns to encode and decode data in a
way that maintains its essential features. Similarly, Goodfellow [28] describes another
GenAI method, Generative Adversarial Networks (GANs), which consist of two neural
networks in competition to generate realistic data samples. GenAI models use advanced
algorithms to learn patterns and generate new content such as text, images, sounds,
videos and code. Some examples of GenAI tools include ChatGPT, Bard, Stable Diffu-
sion and Dall-E. Its ability to handle complex prompts and produce human-like output
has led to research and interest into the integration of GenAI in various fields such as
education, healthcare, medicine, media, tourism etc.
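To make the adversarial idea concrete, the following is a minimal illustrative sketch (not drawn from the cited works) of the GAN training dynamic in Python with NumPy: a one-parameter "generator" shifts noise toward the mean of the real data, while a logistic-regression "discriminator" tries to tell real samples from generated ones. All names and numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian centred at 4.0
real_mean = 4.0
def sample_real(n):
    return rng.normal(real_mean, 1.0, n)

# Generator: maps noise z to g_shift + z (a single learnable parameter)
g_shift = 0.0
# Discriminator: logistic regression d(x) = sigmoid(w*x + b)
w, b = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(0.0, 1.0, 64)
    fake = g_shift + z
    real = sample_real(64)

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    for x, target in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w * x + b)
        grad = p - target              # dLoss/dlogit for cross-entropy
        w -= lr * np.mean(grad * x)
        b -= lr * np.mean(grad)

    # Generator step: push d(fake) toward 1 by moving g_shift
    p = sigmoid(w * fake + b)
    # dLoss/dg_shift = (p - 1) * w   (chain rule through the logit)
    g_shift -= lr * np.mean((p - 1.0) * w)

print(f"learned shift: {g_shift:.2f} (real data mean: {real_mean})")
```

Real GANs replace these scalar models with deep networks, but the alternating optimization above is the same competitive structure: each player's update assumes the other player's current parameters.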

Generative AI in Higher Education


Given the emerging nature of this field, peer-reviewed articles on Generative AI (GenAI)
and its application in higher education are scarce. Despite the extensive body of research
on large language models within the educational sector, our study focuses specifically
on literature pertaining to ChatGPT [8] & [31]. Recent efforts have begun to scrutinize
ChatGPT’s impact on education more rigorously. According to Aljanabi [2] & Baidoo-
Anu [6] it’s important to recognize that GenAI distinguishes itself by generating not
just responses, but also the content within those responses, surpassing the capabilities of
traditional Conversational AI. With the progression of GenAI programming assistants,
research has increasingly concentrated on deploying AI tools in computing education,
exploring both the opportunities and challenges this presents [1] & [42]. Key inquiries
have been made into the benefits GenAI provides to those without formal education, its
effects on existing pedagogical methods, and concerns over plagiarism, potential biases,
and the fostering of detrimental habits among students.
According to Mejia & Sargent [49], in addition to theoretical discussions, considerable
effort has been directed towards helping students generate code, provide explanations,
and address problems within their code. These efforts have shown that Large Language
Models (LLMs) can offer significant assistance, although the extent of benefit varies
with task complexity [49]. Building on these insights, researchers have investigated
ways to optimize students’ use of AI tools, noting that the effectiveness of AI-generated
output largely depends on the quality of user prompts [60]. Denny and colleagues [18]
notably improved GitHub Copilot’s efficacy in introductory programming tasks from
about 50 percent to 80 percent by refining prompt strategies.
According to Halaweh [31], GenAI technologies, such as ChatGPT, have the poten-
tial to foster collaborative problem-solving and achievement among students, thereby
cultivating community. There’s ongoing debate about ChatGPT’s capacity to support
educators, students, and researchers significantly. The tool’s effectiveness and promise
have been rigorously evaluated in numerous studies [31] & [65]. For instance, a survey
by Omar Ibrahim Obaid and colleagues [53] highlighted ChatGPT’s role in propelling
scientific research by generating new ideas, offering fresh perspectives, and boosting
productivity. ChatGPT has also been shown to support visually impaired individuals
through text-to-speech applications, enhance digital accessibility, and tackle significant
issues like bias and misinformation [65]. Its effectiveness in search engine applications
further underscores its utility in delivering precise and relevant answers to user queries.
However, fully leveraging ChatGPT’s capabilities while addressing ethical concerns in
AI requires ongoing research, which is vital for ensuring more ethical and seamless fu-
ture human-AI interactions.
According to Halaweh [31], ChatGPT’s applications are wide-ranging. In the educa-
tional domain, it can power chatbots and online tutors to help students improve their
language skills. The model has attracted considerable research interest due to its educa-
tional potential. Studies by Raman [60] and Uddin et al. [67], demonstrate its utility in
higher education and safety education, respectively. Furthermore, its role in enhancing
students’ critical thinking skills and its potential effects on traditional evaluation meth-
ods in higher education have been critically assessed, underscoring the need for careful

3
2 Related Work

application and ethical considerations [1]. ChatGPT employs machine learning to pro-
vide proactive recommendations, solve problems automatically, and offer personalized
advice, illustrating its multifaceted utility in advancing educational goals [2].

How and When do Computer Science students use Generative AI


A study in 2023 by Lund and Wang [44] highlights that ‘ChatGPT has considerable
power to advance academia and librarianship in both anxiety-provoking and exciting
new ways’. According to Yilmaz and Karaoglan Yilmaz [72], AI-powered tools and
environments can help students solve programming-related problems and give them
instant feedback. Also, AI-powered tools can help students code by providing
suggestions, error detection and automatic code generation. This can help students write
more efficient and accurate code and reduce the time and effort required to complete
programming assignments. They further suggested that AI-powered tools and
environments can increase students’ engagement and motivation by
interacting with students and providing them with personalized support and feedback as
they learn to program.
In academia, ChatGPT can be utilized for a diverse range of applications beyond what
has been mentioned above. To begin with, it provides valuable support for writing re-
ports, essays and scientific articles. According to Kohnke [41] and Kasneci [37], it
can also proofread the provided text for structural, punctuation and grammatical errors.
According to Hodges [40] and Mollick [50], ChatGPT can act as a virtual tutor as it
can break down a complex concept into easier-to-understand language. For research
projects, Huang [36] and Hill-Yardin [34] noted that ChatGPT can not only aid in
literature review but also generate innovative ideas in brainstorming sessions. Also,
Surameery [66] suggested that it can aid computer science students by debugging their
code and suggesting programming solutions.

Trustworthiness
Trust is one of the main factors affecting the interaction between GenAI and its users.
Ruijia [16] and Oleksandra [69] defined trust in GenAI as “the user’s judgment or expec-
tations about how the AI system can help when the user is in a situation of uncertainty
or vulnerability”. Both lack of trust [54] and blind trust [55] & [56] affect how users
interact with an AI tool and introduce risks in the human-AI interaction.
According to Amoozadeh [3], somewhat paradoxically, many students (especially those
from advanced Computer Science courses) who found GenAI helpful also expressed
distrust in the ability or output of AI. They recognize that AI is not infallible and should
be subject to human oversight to ensure accuracy and reliability. This lack of trust
stemmed from biases, errors, and limitations in AI algorithms. Moreover, GenAI is
likely to become a commonly used tool among Computer Science students (especially
programmers). Furthermore, Amoozadeh claims that if students don’t trust their tools,
they will be unable to receive the benefits of using these tools. This
could result in these students falling behind their peers who are able to use these GenAI
tools. Thus, CS educators should understand and identify the appropriate level of trust
that students should have in GenAI tools. Once this level of trust is identified, educators
should find ways to help students calibrate their trust correctly.
Chan & Hu [14] examined university students’ perceptions of using Generative AI
(GenAI) technologies in higher education, focusing on both the benefits and challenges
as seen through the lens of the students themselves. Overall, students displayed a generally
positive attitude towards GenAI, recognizing its potential to revolutionize teaching and
learning by providing personalized support, aiding in writing and brainstorming, and fa-
cilitating research and analysis. Students valued GenAI’s ability to offer 24/7 assistance,
provide immediate and personalized feedback, and support a wide range of learning ac-
tivities, from generating new ideas to polishing their writing skills. However, alongside
these perceived benefits, students expressed concerns regarding GenAI’s accuracy and
reliability, privacy and ethical implications, and the potential impact on personal devel-
opment and career prospects. There was apprehension about over-reliance on AI, which
might undermine the value of education and hinder the development of critical think-
ing, creativity, and interpersonal skills. Concerns were also raised about GenAI’s role
in widening societal inequalities and its implications for future employment, as automa-
tion could replace jobs students are currently training for.
According to Sallam [62], the use of ChatGPT in education poses challenges related
to its accuracy and reliability. Because ChatGPT is trained on a large corpus of data,
it may be biased or contain inaccuracies. Fang and colleagues investigate gender and
racial biases in news content generated by several large language models (LLMs), in-
cluding ChatGPT. They assess biases at the word, sentence and document levels, and
their findings reveal significant biases across all models, with ChatGPT showing the
lowest level of bias and the highest resistance to generating content from biased
prompts. Moreover, ChatGPT’s knowledge is limited and has not (yet) been updated
with data after 2023 [27], [38] & [6]. Therefore, its responses may not always
be accurate or reliable, particularly for specialized subject matters and recent events.
Furthermore, ChatGPT may generate incorrect or even fake information [48],[59] &
[22]. This issue can be problematic for students who rely on ChatGPT to inform their
learning.
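As a rough illustration of what word-level bias measurement can look like (this is a toy sketch, not Fang and colleagues' actual method), one can count gender-associated words in generated texts and compare their frequencies. The snippets and word lists below are invented for the example:

```python
from collections import Counter
import re

# Hypothetical generated news snippets (illustrative only)
texts = [
    "He led the board meeting while she took the notes.",
    "The engineer said he would review the design himself.",
    "She presented the findings and he asked questions.",
]

MALE = {"he", "him", "his", "himself"}
FEMALE = {"she", "her", "hers", "herself"}

def word_level_gender_counts(docs):
    """Count male- vs female-associated pronouns across documents."""
    counts = Counter()
    for doc in docs:
        for token in re.findall(r"[a-z']+", doc.lower()):
            if token in MALE:
                counts["male"] += 1
            elif token in FEMALE:
                counts["female"] += 1
    return counts

counts = word_level_gender_counts(texts)
total = sum(counts.values())
ratio = counts["male"] / total
print(counts, f"male share: {ratio:.2f}")
```

Sentence- and document-level analyses in the literature go well beyond such counts (for example, using classifiers or embeddings), but the word level is the simplest point of entry.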
AI hallucination refers to the phenomenon where generative AI systems produce false or
fabricated information. Brameier et al. [9] describe AI hallucination as the presentation
of ‘untruths or half-truths with misleading confidence’. Pophal [17] states that AI
hallucinations ‘are inaccurate, implausible, or wholly made-up outputs provided in re-
sponse to a prompt made in a generative AI application’. Some experts advocate using
the term AI fabrication instead, arguing that "hallucination" may lead to misconceptions
and wrong inferences about how learning models operate. According to Obrien,
hallucinations are prevalent in generative AI because, he notes, language models are
fundamentally designed to invent content. According to Brender [11] and Wang et al. [70]
this issue is widely recognized in platforms like ChatGPT and is frequently discussed
in the trade press, though it is not yet extensively covered in academic research. As an
example, Obrien [11] pointed to a recent AP news article titled ‘Chatbots sometimes
make things up. Is AI’s hallucination problem fixable?’, which highlighted the ongoing
challenge of AI hallucination and its significant impact on businesses, organizations,
and students relying on AI for content creation.

Cheating and Plagiarism


Plagiarism is an ongoing issue in higher educational institutions. Martin [47] defined
plagiarism in the simplest form as “claiming credit for ideas or creations without proper
acknowledgement”. According to Mares [46] and Dobrovska [20], cheating in the aca-
demic context is the ‘theft’ of ideas and other forms of copyrighted material. According
to Howard [35], both cheating and plagiarism are considered subcategories of academic
dishonesty.
A survey by Henriksson [33] conducted at Uppsala University found that neither students
nor teachers were clear in their understanding or definition of plagiarism. According to
Ma et al. [45], the reasons that contribute to an increase in academic cheating include:
Peer culture, websites that facilitate plagiarism, pressure for high academic achieve-
ment, few consequences and/or punishments and the lack of understanding of the con-
cept of plagiarism.
Plagiarism is a complex issue which has been studied using a variety of frameworks.
Research by Angell [4], Rettinger & Kramer [61] and Williams, Nathanson & Paulhus
[71] identifies student characteristics that predict a greater likelihood of committing
plagiarism, including levels of moral reasoning and self-esteem as well as achievement
and motivation orientations. This perspective attributes the decision to plagiarize to
characteristics of the students, discounting outside factors that might contribute to the
choice to plagiarize.
According to Barnas [7], the major cause of plagiarism is teaching style, while according
to Brown [12] and Feldman [23] it is classroom culture; both indicate that the cause of
plagiarism originates outside the student. From these perspectives, instructors are seen as con-
tributing to students’ beliefs that they can submit another author’s work as their own
by not providing an adequate level of rigor in their classrooms or by not checking stu-
dent work for plagiarism. Engler et al. [21], Hard et al. [32] and the current study
have examined plagiarism through the lens of social or peer norms. Social norms theory
posits that individuals gauge acceptable behaviors by observing what is commonly prac-
ticed by others. For instance, it has been shown that young adults often overestimate
the prevalence of negative behaviors like substance abuse among their peers, leading to
a distorted perception of what is socially acceptable and, consequently, an increase in
these behaviors (Berkowitz, 2004; Perkins, 2003; Perkins & Berkowitz, 1986). Sim-
ilarly, if students believe that plagiarism is frequently practiced and rarely penalized
among their peers, they may be more inclined to engage in plagiarism themselves.
McDowell and Brown [13] state that changes in the form of assessment (like group
projects), the communication and information dilemma, focusing on obtaining high
grades and the fear of future unemployment, all contribute to increasing incidents of
cheating and plagiarism.


3 Methodology

To better understand the perceptions, experiences and trust of computer science students
in computing courses as they relate to Generative AI tools, semi-structured interviews
were conducted with bachelor’s and master’s students at Uppsala University (see
Appendix B). It included questions that assessed participants’ awareness of AI, their
beliefs about the potential benefits and challenges of these technologies, and also their
attitude toward using them in the classroom & studies. Conducting the study allowed
us to gather insights from a diverse group of participants, and provided better general-
izability for conclusions [30].

Interview Questions
The interviews included a range of open- and closed-ended questions,
covering demographics, participants’ confidence, trust in AI and their experiences &
opinions about using AI tools.
The interview script was divided into four sections. (See Appendix A for the whole
script)

1. The first section was dedicated to participants’ experience with GenAI, which
   elicited ‘how’ computer science students use GenAI for their academics.
2. The second section addressed participants’ need for GenAI, which depicts
   ‘when’ computer science students find the need to use GenAI for their aca-
   demics.
3. The third section described participants’ trust and faith in GenAI in terms of the
   results and guidance they receive for academic-related purposes.
4. The fourth section reflects participants’ opinions and experiences on GenAI in
terms of cheating and plagiarism.

In the study, the research questions were pre-determined with the primary objec-
tive of understanding the nuanced perceptions and experiences of students, their
interaction with Generative AI (GenAI), and its impact on their academic prac-
tices. The value of this research lies in its potential to help educators identify if
and where they might need to adapt their teaching strategies to account for the
integration of these new tools.
Data Collection and Analysis Methods
This study employed semi-structured interviews to collect data, and content analysis to
analyze it.

Semi-structured Interviews
Semi-structured interviews [52] are a flexible qualitative research method that combines
a predetermined set of open-ended questions with the
opportunity for the interviewer to explore particular themes or responses further. This
method allows for guided conversations in which the interviewer can probe for depth
and clarification, ensuring that specific topics are addressed while still allowing the
conversation to flow naturally based on the interviewee’s responses.
These interviews are particularly valuable for exploring complex behaviors, motiva-
tions, or perceptions, providing rich, detailed data that are not easily obtainable through
more structured methods. Semi-structured interviews require careful preparation of an
interview guide, a framework of themes to be discussed while also allowing for the flex-
ibility to diverge from the guide to pursue interesting or pertinent topics as they arise
during the interaction.

Content Analysis
Content analysis is a research method used to systematically interpret and analyze tex-
tual, visual, or audio data to identify patterns, themes, and meaningful insights. It is
commonly employed in qualitative research but can also involve quantitative techniques
to count and classify specific elements within data.

Deductive Analysis: The deductive analysis [29] approach begins with a theory or hypoth-
esis and seeks to see if the data conforms to these pre-existing concepts. It’s a top-down
approach where the researcher tests the data against theoretical constructs to either con-
firm or refute them. The challenge here lies in how to handle data that does not fit the
theory, deciding whether it indicates a need for adjusting the theory or whether it’s an
outlier. In practice, deductive analysis [51] may involve categorizing data based on pre-
defined criteria and observing if these categories adequately capture the nuances of the
data.

Inductive Analysis: Inductive analysis [51] starts with observations and builds up to
generalizations or theories, thus moving from specific to general. This bottom-up ap-
proach involves identifying patterns, themes, and categories emerging directly from the
data without prior theoretical expectations. The process is inherently explorative, aim-
ing to generate theories that are grounded in the observed data. The risk here is getting
trapped at the descriptive surface level of the data without reaching deeper, more insightful
generalizations.
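As a toy illustration of the deductive step (this sketch is hypothetical; the categories and quotes below are invented and not the study's actual codebook), predefined categories derived from the research questions can be matched against interview quotes, with unmatched quotes set aside for a later inductive pass:

```python
# Deductive coding sketch: quotes are assigned to pre-defined categories
# via keyword matching; anything unmatched is flagged for inductive
# theme-building. Codebook and quotes are illustrative only.

CODEBOOK = {
    "trust": ["trust", "reliable", "accurate", "verify"],
    "misconduct": ["cheat", "plagiar", "misconduct"],
    "usage": ["debug", "explain", "summarise", "code"],
}

quotes = [
    "I always verify what ChatGPT gives me before submitting.",
    "Using it to write the whole essay feels like plagiarism.",
    "Mostly I use it to debug my code.",
    "It makes studying less lonely somehow.",
]

def deductive_code(quote, codebook):
    """Return the categories whose keywords appear in the quote."""
    q = quote.lower()
    return [cat for cat, kws in codebook.items()
            if any(kw in q for kw in kws)]

coded, uncoded = {}, []
for quote in quotes:
    cats = deductive_code(quote, CODEBOOK)
    if cats:
        coded[quote] = cats
    else:
        uncoded.append(quote)   # candidates for inductive theme-building

print(coded)
print("needs inductive review:", uncoded)
```

In practice qualitative coding is done by human judgment rather than keyword matching, but the two-pass structure — test data against pre-existing categories, then build new themes from what remains — mirrors the combined deductive and inductive approach described above.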


Both methods have their significant place in qualitative research and are chosen based
on the research goals, questions, and the nature of the data. In some cases, researchers
might employ a combination of both in a single study to enrich the analysis and ensure
robustness in their findings.


The Study
The purpose of this study was to explore computer science students’ perceptions, expe-
riences, and ethical considerations regarding the use of Generative AI (GenAI) tools in
their academic activities. By conducting semi-structured interviews, this research aimed
to gather in-depth insights into students’ usage patterns, trust in AI-generated outputs,
and views on academic misconduct. The qualitative approach was chosen to allow a
comprehensive understanding of the nuanced ways students interact with GenAI, ad-
dressing key themes such as timing of usage, trustworthiness, and ethical implications.

Data Collection
The interviews were conducted with bachelor’s and master’s students at Uppsala Uni-
versity. A total of 25 interviews were carried out. Initially, in December 2023, a group
project involving three students was the focus, with nine interviews conducted and ana-
lyzed. An additional 16 interviews were carried out in April 2024 for the master’s thesis
project of the researcher. The motivation for utilizing previous data and comparing it
with new data was to understand how the thoughts and perceptions of computer science
students evolve over time. Comparing the 2023 data with that of 2024 revealed findings
that clearly illustrated the changing usage and perceptions of Generative AI among
computer science students.
Participants in both Phase I (December 2023) and Phase II (April 2024) joined voluntar-
ily and were free to withdraw at any time. Approximately 45 students were approached
during these phases. Most interactions occurred at the Ångström campus of Uppsala
University, where interviews were proposed in an open format. While some students
agreed to participate, others declined. Additionally, some participants were acquain-
tances or dorm mates of the researchers. It is important to note that not everyone ap-
proached was willing to participate in the study.


Table 1: Data collected for Analysis

Id Gender Programme Year of study Level Origin Interviewed


S1 Male CS 3rd Year Bachelor Sweden Dec, 2023
S2 Male CS 2nd Year Bachelor Sweden Dec, 2023
S3 Male Data Analysis 4th Year Masters Sweden Dec, 2023
S4 Male CS 4th Year Bachelor Sweden Dec, 2023
S5 Male CS 4th Year Bachelor Sweden Dec, 2023
S6 Male CS 2nd Year Bachelor Sweden Dec, 2023
S7 Male IT 2nd Year Bachelor Sweden Dec, 2023
S8 Male Machine Learning 2nd Year Masters India Dec, 2023
S9 Female CS 4th Year Bachelor Canada Dec, 2023
S10 Male CS 2nd Year Masters India Apr, 2024
S11 Male CS 2nd Year Masters Iran Apr, 2024
S12 Female CS 2nd Year Masters India Apr, 2024
S13 Male CS 1st Year Masters India Apr, 2024
S14 Female CS 2nd Year Masters France Apr, 2024
S15 Female CS 2nd Year Masters Bangladesh Apr, 2024
S16 Female CS 2nd Year Masters India Apr, 2024
S17 Female CS 3rd Year Bachelor USA Apr, 2024
S18 Female Embedded Systems 2nd Year Masters India Apr, 2024
S19 Female CS 2nd Year Masters India Apr, 2024
S20 Female CS 1st Year Bachelor Sweden Apr, 2024
S21 Female CS 1st Year Bachelor Sweden Apr, 2024
S22 Male CS 2nd Year Masters India Apr, 2024
S23 Male Data Science 2nd Year Masters Pakistan Apr, 2024
S24 Female CS 2nd Year Masters Sri Lanka Apr, 2024
S25 Male CS 2nd Year Masters Pakistan Apr, 2024

The Informed Consent form provided participants with essential information about the
study, such as its voluntary nature, the estimated time required (here, no more than
15 minutes), and the assurance of confidentiality. It emphasized that no identifiable
information would be collected or stored. The form highlighted that there were no fore-
seeable risks associated with participating in the research and encouraged participants
to ask any questions they had before deciding to participate.

Data Analysis
Data analysis [10] was conducted using a combination of deductive and inductive anal-
ysis. Initially, an established framework based on the research questions guided the
process, followed by an inductive analysis to identify emerging themes. This approach
allowed for a comprehensive understanding of the diverse ways students interact with
GenAI tools, the contexts in which these tools are most frequently employed, and the
underlying ethical concerns that influence their usage. All 25 interviews—nine from the
first phase and sixteen from the second phase—were recorded on a phone, transcribed
into text using Trint software (Mobile Application), and then reviewed by listening to
the recordings to ensure clarity and completeness.
The analysis was conducted in two phases. During the first phase, from December 2023 to January 2024, the researcher worked on a course project together with two other student researchers; together we developed the interview script, interviewed nine students, and performed an initial analysis identifying various categorizations based on the themes of the analysis. In qualitative analysis, a theme represents a pattern or underlying meaning identified while analyzing text, interview transcripts, survey responses, or other qualitative data. According to Braun and Clarke [2], themes are, in essence, central to understanding deeper meanings within qualitative data and are critical for drawing conclusions and making sense of the information collected. Each section of the interview guide formed a theme in the analysis. Also during the first phase, the computing education researchers shared the quotes and preliminary categories they had identified, followed by a group discussion of these initial findings before continuing with individual analysis.
For the fourth research theme, the GenAI tool Bard was used to help generate categories from quotes. Later, we reconvened, and each team member presented the categories they had developed independently. Despite significant variations in progress and presentation among team members, we discussed until we reached consensus. One of the researchers then created tables for each category, including descriptions and data samples. According to Groves [1], in data analysis a category refers to a classification or grouping of data or concepts based on shared characteristics, criteria, or attributes. Categories structure the analysis by breaking complex data into more manageable segments from which meaningful conclusions can be drawn. These tables significantly influenced the final results, although they were slightly modified following feedback and discussions with other researchers and course coordinators.
In the second phase, from April 2024 to May 2024, the researcher conducting the Mas-
ter’s thesis continued the study using the same interview guide and research themes,
conducting sixteen additional interviews. The analysis of these sixteen interviews fol-
lowed a two-step approach, starting with deductive analysis and concluding with induc-
tive analysis. The deductive analysis aimed to test the categories established in the first
phase by identifying similarities in the data collected during the second phase. Once
this process was complete and the data had been organized into the relevant categories, the inductive analysis began. This phase involved re-examining the data to identify any
new categories that aligned with the research themes. These new categories were then
structured into a table based on the research themes for the final results. Each category
was thoroughly explained in the Results section, supported by participant data samples
to highlight the key findings.
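As an illustration only (not part of the study's actual tooling), the deductive-then-inductive pass described above can be sketched in a few lines of Python: quotes are first matched against the Phase I categories, and anything left unmatched is flagged for inductive coding. The category names and keyword lists here are hypothetical, not the study's codebook.

```python
# Minimal sketch of a deductive-then-inductive coding pass.
# Categories and keyword lists are illustrative, not the study's codebook.
phase1_categories = {
    "Debugging and problem solving": ["error", "debug", "doesn't work"],
    "Studying for exams": ["quiz", "exam", "practice question"],
}

def deductive_code(quote):
    """Return the first Phase I category whose keywords match, else None."""
    text = quote.lower()
    for category, keywords in phase1_categories.items():
        if any(kw in text for kw in keywords):
            return category
    return None  # left for the inductive pass

quotes = [
    "Sometimes I can ask about error messages if I don't understand them.",
    "It generated a quiz for me and corrected my answers.",
    "I hate reading documentation, it's just incredibly dull.",
]

# Deductive step: assign existing categories where possible.
coded = {q: deductive_code(q) for q in quotes}
# Inductive step (manual in the study): re-examine the uncoded remainder
# to propose new categories aligned with the research themes.
uncoded = [q for q, c in coded.items() if c is None]
```

In the study itself this matching was done by human researchers reading transcripts; the sketch only conveys the two-step structure of the analysis.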


4 Results

This study used a qualitative approach, collecting data through semi-structured interviews with twenty-five computing science students at Uppsala University (nine in Phase 1 and sixteen in Phase 2), to explore the use and impact of generative artificial intelligence (GenAI) in the learning practices of computing students in a university setting. The interviews were designed to elicit in-depth responses, allowing for a nuanced exploration of how computing students engage with GenAI tools, their perceived benefits and drawbacks, and their views on ethical considerations in academic settings.
As the analysis progressed, recurring patterns among the more specific categories highlighted the need for generalization, resulting in a representation of the findings better aligned with the research's aims and objectives. This section presents the generalized results derived from the analysis. The generalizations account for the recurring patterns among specific categories, including individual descriptions of similar ideas or opinions, all within the scope of each research question.
The results are organized into four main themes: GenAI Usage Patterns, Timing of GenAI Usage, Trustworthiness of GenAI Outputs, and Perceptions of Academic Misconduct Involving GenAI, each explained in detail below. These four themes form the foundation of this study, providing essential insights into the four research questions. They also clarify students' perspectives on prevalent issues and concerns, particularly those related to academic dishonesty or cheating associated with the use of generative AI technologies. Each theme is further subdivided into categories to provide a detailed examination of the findings, supported by direct quotations from participants to illustrate key points. Table 2 lists the four general themes identified and the key categories into which they are broken down:


Table 2: General themes and categories

General theme                            Category

GenAI usage patterns                     Content Creation
                                         Debugging and Problem solving
                                         Assistance
                                         Studying for exams
                                         Learning new concepts
                                         Motivation & Productivity

Timing of GenAI usage                    When stuck
                                         When time is finite
                                         When motivation is low

Trustworthiness of GenAI outputs         Reliable in solving problem
                                         Efficiency and speed
                                         Motivation and Task automation
                                         Unreliable in results
                                         Hallucination
                                         Fear of misuse of personal data

Perception of Academic Misconduct        GenAI tools as author
involving GenAI                          GenAI tools as tutor
                                         GenAI tools as an inspirational source

The results from the initial study (Phase I) were previously published at an IEEE conference. The findings from the present study (Phase II) provide valuable insights into the evolving role of Generative AI (GenAI) in higher education, particularly within computing disciplines, where students frequently encounter complex problem-solving scenarios that benefit from technological assistance. The results highlight both the potential advantages of GenAI in enhancing learning efficiency and comprehension and the significant ethical dilemmas it poses, particularly regarding academic integrity.
These findings contribute to the ongoing discourse on the appropriate integration of GenAI tools in educational contexts, offering practical recommendations for educators and policymakers to balance innovation with ethical responsibility. While the findings from Phase II share similarities with the categories identified in Phase I, they also yielded new data, which were categorized into distinct themes, as highlighted in Table 2.


4.1 GenAI usage patterns


1. Content Creation
Students used GenAI to generate and refine content, particularly for coding tasks
and writing assignments. For instance, one student shared, “I’ve used it to make like a data structure library. I did this to make a queue library in C,” demonstrating the practical applications of GenAI in programming. Another student mentioned using GenAI to rewrite paragraphs for better clarity and inspiration.

2. Debugging and problem solving

GenAI tools were frequently used for debugging and problem-solving purposes. Students highlighted the utility of these tools in understanding and resolving errors in their code. One participant noted, "Sometimes I can ask about error messages if I don't understand them," indicating a reliance on GenAI for interpreting complex error feedback. Another student mentioned, "Okay, so when I'm programming I would say that I use it almost at least once every session."

3. Assistance
Generative AI also plays a significant role in helping students grasp difficult con-
cepts and theories, particularly in technical subjects. Instead of only providing
answers to direct questions, AI tools can break down complex ideas into more di-
gestible parts. One student mentioned, "I use AI to help explain coding concepts
that are difficult to understand, or when I am struggling with a particular piece of
code."

4. Studying for exams


Another significant area where students utilize Generative AI for assistance is
exam preparation. AI tools can generate quizzes, practice questions, and mock
exams based on course materials provided by the students. As one participant
described, “I sent my course plan to ChatGPT, and it generated a quiz for me.
It asked me questions, corrected my answers, and gave feedback.” This indicates
how AI can simulate an interactive learning environment, keeping students en-
gaged while reinforcing key concepts.

5. Learning new concepts


Generative AI serves as an indispensable assistant for students learning new concepts, from technical coding tasks to writing. One participant described turning to it "when I have difficulty understanding a concept, or if I have spent quite a long time with a code and I don't know why it doesn't work."

6. Motivation & Productivity

Generative AI tools help motivate students to start new tasks and to maintain momentum throughout, providing assistance that supports both the quality of their results and their productivity. One participant described, “I take reference from ChatGPT before starting any new tasks or assignment for best ideas and motivation to get valuable results.”

4.2 Timing of GenAI usage

The use of Generative AI tools for assistance in educational activities is often deter-
mined by specific needs and situational factors that arise throughout the academic pro-
cess.

1. When stuck
When students face challenges with their assignments or projects, they often rely
on GenAI for support. These tools help identify issues, offer explanations, or pro-
pose solutions, allowing students to navigate their difficulties more effectively.
One student described their experience, saying, "I turn to it when I’m stuck."
They added, "It helps when I can’t grasp a concept, or if I’ve spent a lot of time
on some code and can’t figure out why it’s not working." The appeal of GenAI lies
in its instant availability, which contrasts with the sometimes delayed responses
from peers or instructors. Many students value this speed, with one comment-
ing, "If I know someone in the same class, I might ask them for help, but that’s
not always possible because I don’t know as many people in my classes anymore."

2. When time is finite


Efficiency plays a significant role in why students turn to GenAI, especially when
they’re under time pressure and need to organize their tasks more efficiently.
These tools provide faster solutions, helping them avoid the time-consuming pro-
cess of searching through traditional resources. One student explained, "It saves
time compared to searching on Google, as you can get answers instantly." When
it comes to coding issues, many students prefer GenAI for immediate help with
code syntax, noting, "It’s just much quicker to ask something like, ’How do you
write this function in Python?’" Another student emphasized the benefit of quicker
responses, stating, "If I’ve been stuck on code for a long time and can’t figure out
the issue, ChatGPT gives me a faster answer than waiting two days after emailing on a Friday evening."

3. When motivation is low


GenAI is also a useful tool for managing periods of low motivation, especially
when students face mundane or less engaging tasks. By using GenAI for repet-
itive or tedious activities, students can focus more on the aspects of their work
that interest them. One student highlighted this convenience, saying, "If you
want a quick answer and don’t feel like searching yourself." This dependency is
even more pronounced when it comes to tasks considered boring: "I hate reading
documentation, it’s just incredibly dull." For routine lookups that require effort,
another student shared, "Instead of searching through something like Stack Ex-
change, I just ask ChatGPT, especially for syntax or coding questions."

4.3 Trustworthiness of GenAI outputs

The trustworthiness of Generative AI tools in educational settings is a key consideration for students relying on them for assistance. The findings suggest that trust in GenAI varies depending on the context and the nature of the task. Participants viewed the reliability of the AI's output in broadly similar ways, though their levels of trust differed: overall, they considered the output more dependable for certain tasks while expressing less confidence in its accuracy for others.

1. Reliable in solving problem


When it comes to problem-solving and code debugging, Generative AI tools
are often viewed as highly trustworthy, particularly for tasks that require quick
and straightforward solutions. Many students trust AI-generated responses when
they are stuck in their coding assignments, especially when they are seeking help
with syntax errors, code formatting, or general programming logic. One student
shared, “When I don’t know why my code isn’t working, I turn to ChatGPT, and
it usually points out the error right away.” Another student offered similar insight about difficulty understanding concepts or persistent code issues, stating, "When I have difficulty understanding a concept, or if I have spent quite a long time with a code and I don't know why it doesn't work." This suggests
a high level of trust in AI’s capacity to offer accurate information and practical
solutions in real-time.


2. Efficiency and speed


Students generally trust Generative AI tools when the focus is on efficiency rather
than deep understanding. For example, when time is limited, or when students are
dealing with repetitive tasks, AI tools are relied upon for their speed in generating
accurate results. One student noted, “It’s faster to ask the AI for a Python function
than to search it up myself,” highlighting the efficiency gains from using AI tools.

3. Motivation and Task automation


When students feel unmotivated, they often rely on GenAI to automate routine
or repetitive tasks, trusting it to manage the less engaging aspects of their work.
This reflects a selective trust in GenAI’s ability to handle tasks they find dull or
tedious. One student expressed their preference, saying, "I hate reading documen-
tation. It’s incredibly boring." This trust also extends to seeking help with coding
syntax, as another student noted, "Rather than searching through Stack Exchange,
I just ask ChatGPT, especially for syntax or code." This highlights how GenAI is
favored for its convenience in these situations.

4. Unreliable in results
While students trust AI to identify errors, they often feel the need to cross-check
its suggestions with their own knowledge or external sources before fully relying
on the output. They exercise greater caution when using AI for complex tasks that
demand deeper understanding or creativity. In situations requiring critical think-
ing or detailed analysis, students are more skeptical of the AI’s reliability. They
may rely on it for initial ideas or a basic framework but do not expect it to provide
thorough or nuanced solutions. As one participant explained, "I trust it to give me
an idea, but I always double-check its responses when it’s something important."

5. Hallucination
AI hallucination refers to the phenomenon in which a generative AI system, such as ChatGPT, produces outputs that are factually incorrect, nonsensical, or entirely fabricated, despite the input or prompt being accurate. Students had issues trusting the results or outputs from GenAI. One student stated, “GenAI is human bias because it creates incorrect references or non-existent facts because it is made by humans”. Another student described GenAI as a “black box technology”.

6. Fear of misuse of personal data


Students expressed concerns about their personal data being compromised when using GenAI. Despite efforts to avoid sharing sensitive information online, some
have experienced instances where their data was misused. One student, after com-
pleting an internship, shared their experience: "We were discouraged from using
GenAI for debugging or assistance because it records everything, which violates
company policy." This highlights the caution students feel regarding privacy risks
associated with GenAI, especially in professional or sensitive environments.

4.4 Perceptions of Academic Misconduct involving GenAI

The use of Generative AI (GenAI) in educational settings has opened up discussions on the boundaries of ethical usage, especially regarding academic integrity. Students
have varied perceptions of what constitutes academic misconduct when using GenAI,
and these perceptions often revolve around the distinction between responsible use for
learning purposes and unethical practices that compromise academic standards.

1. GenAI tools as author


One of the most universally agreed-upon forms of academic misconduct involves
using AI-generated content directly in assignments or exams without any mod-
ification or attribution. Students perceive this as equivalent to plagiarism, as it
involves passing off the work generated by an AI as their own. A student in the
study noted, "If you take its answer and put it directly in your assignment, then
it’s definitely cheating." This sentiment reflects a clear ethical boundary in which
students recognize that copying and pasting AI-generated code, essays, or other
content without understanding or modifying it is academically dishonest. This
form of misconduct is seen as undermining the learning process, as it allows stu-
dents to submit work without engaging in critical thinking, analysis, or original
thought.

2. GenAI tools as tutor


Students generally perceive using GenAI for learning and understanding as eth-
ical, provided the AI is used as a tutoring tool rather than as the author of their
work. Many students view GenAI as a helpful resource for clarifying concepts,
generating study materials, or guiding their thought processes. However, they
draw the line when the AI’s outputs are directly integrated into assignments with-
out critical engagement. As one student pointed out, “Using AI to help explain
something or provide feedback is fine, but copying what it generates is cheating.”
This suggests that students see a clear difference between using AI to supplement

21
4 Results

their learning and using it to complete their assignments. When AI is used as a


tutor—helping to explain difficult concepts, quiz students, or provide alternative
perspectives—it is considered a legitimate educational tool. However, when AI
becomes the primary source of content for submission, it is viewed as academic
misconduct.

3. GenAI tools as an inspirational source


Students make a clear distinction between using GenAI for inspiration and di-
rectly copying its output. They generally see it as acceptable to use GenAI for
generating ideas or improving their understanding, as long as they ensure the fi-
nal work is their own. One student explained, "It’s good at generating text, so for
a writing assignment, it could easily produce something decent. But that doesn’t
mean you’ve understood it." Another student noted, "If I take its answer and put it
straight into my assignment, that’s definitely cheating. It gets a bit blurry if you’re
just inspired by the response, but... yes."


5 Results in Context

Generative Artificial Intelligence (GenAI) has emerged as a transformative force in education, particularly in computer science, through tools like ChatGPT, GitHub Copilot, and Codex. These AI-driven platforms have changed traditional learning methods by generating human-like responses, automating code, and offering personalized feedback in real time. GenAI not only assists students with assignments and programming tasks but also reshapes how education is delivered.
This research explored the nuanced role of GenAI in computer science education, focusing on how students use GenAI tools, when they adopt them, the impact on their learning behaviors, and the ethical considerations surrounding their adoption. The study helps in understanding both the risks and the benefits, and aims to provide insights into responsible AI integration that complements, rather than compromises, students' educational development.
According to Yilmaz and Karaoglan Yilmaz [72], GenAI tools and environments help students solve programming-related problems by giving instant feedback and by providing suggestions, error detection, and code generation. This helps students write more efficient and accurate code and reduces the time and effort required to complete programming assignments. Our study aligns with this body of work by shedding light on students' perceptions of using GenAI for programming assistance in computer science education. Similarly, the studies by Hodges [40] and Mollick [50] suggest that ChatGPT (a GenAI tool) can act as a virtual tutor, breaking down complex concepts into easier-to-understand language; correspondingly, our results show that computer science students consider GenAI tools a tutor for understanding difficult concepts and for guidance in their studies.
Trust is one of the main factors shaping the interaction between GenAI and its users. According to Amoozadeh [3], GenAI tools are helpful for academic and educational support, but Amoozadeh also reported distrust in the ability and output of AI. This closely mirrors the present Master's thesis study, in which students' reliance on GenAI tools varied with the nature of the task. Overall, participants tended to consider the AI's output more dependable for certain tasks while expressing less confidence in its accuracy for others, which aligns with Sallam's [62] claim that the use of ChatGPT (a GenAI tool) in education poses challenges to accuracy and reliability, because ChatGPT is trained on a large corpus of data that may be biased or contain inaccuracies. These claims of inaccuracy relate to AI hallucination, a phenomenon in which generative AI systems produce false or fabricated information. Brameier et al. [9] and Pophal [17] describe how hallucination may lead to misconceptions and wrong inferences about how learning models operate.
The related work and the results from the collected data show many similarities with the research themes. The researcher found that the usage patterns of GenAI among computer
science students displayed notable similarities, with common usage practices that align
with the purposes outlined in the related work. This indicates that GenAI effectively
supports various aspects of study and fulfills its intended roles. Over time, students are
becoming more experienced with GenAI, using it for a wider range of purposes.


6 Discussion

The results of this study provide important insights into how computing students engage
with Generative AI (GenAI) in their learning and the ethical challenges it presents.
These findings echo our previous research [5] on AI in education while introducing
new concerns related to academic integrity, student motivation, and the reliability of
AI-generated content.

1. The Role and Benefits of GenAI in Enhancing Learning


GenAI has emerged as a powerful support system for students, particularly in
computing and technical disciplines. The ability of AI tools to assist with coding
tasks, provide technical problem-solving, and offer quick solutions to routine aca-
demic challenges highlights their increasing importance. Students appreciate the
ease with which GenAI can provide immediate feedback, particularly in scenarios
where time constraints limit deeper engagement with the material. In this sense,
GenAI acts as a catalyst for efficiency, allowing students to tackle academic work-
loads more effectively. Beyond simple time management, GenAI’s role extends
to enhancing learning outcomes. By helping students understand complex coding
structures, correcting syntax errors, and clarifying difficult concepts, AI tools can
function as personalized tutors. These benefits are particularly evident in content
creation and debugging, where students can rely on AI for instant solutions to
technical hurdles. The broader implications suggest that GenAI can democratize
access to academic support, particularly for students who may not have access to
traditional resources like tutoring or peer support.

2. Risks of Overdependence and Surface-Level Engagement


While GenAI offers substantial benefits in technical problem-solving and routine
academic tasks, the study highlights concerns about overreliance. The conve-
nience of AI-generated answers risks reducing students’ opportunities for deep
learning and critical engagement with course material. In scenarios where stu-
dents use GenAI to quickly solve problems under time pressure, there is a risk
of surface-level understanding, where the focus shifts from mastering the subject
matter to merely completing tasks. This can hinder the development of foun-
dational skills essential for long-term success in computing and other technical
fields. The potential for overdependence on GenAI raises important questions
about the balance between efficiency and learning. While students benefit from
the immediate problem-solving capabilities of AI, there is a danger that prolonged
reliance on these tools could stifle the development of essential cognitive skills
such as critical thinking, creativity, and problem-solving. As students become accustomed to using AI for quick solutions, they may bypass the deeper cognitive processes required to fully understand complex problems, leading to gaps in their knowledge.

3. Trust and Reliability of GenAI Outputs


Another critical theme that emerged in the study is the issue of trust and reliabil-
ity in AI outputs. While students generally trust GenAI for straightforward tasks
like debugging and answering simple syntax questions, there is significant skep-
ticism about its use in more complex or creative assignments. The phenomenon
of "AI hallucination" where AI tools generate plausible but incorrect information,
underlines the need for caution when using these tools for high-stakes academic
work. Students acknowledged the need to verify AI-generated content, particu-
larly for assignments that require deeper conceptual understanding or innovative
thinking. Although AI can offer useful insights and suggestions, its limitations
become apparent in tasks that demand creativity, critical analysis, or advanced
problem-solving. This highlights a key distinction between tasks where GenAI
is a reliable aid and those where it cannot substitute for human judgment. Con-
sequently, while GenAI can accelerate learning and provide quick solutions, it
cannot fully replace the delicate and context-driven decision-making processes
inherent in human cognition.

4. Ethical Considerations and Academic Misconduct


The ethical use of GenAI in education forms a central pillar of this discussion.
As AI tools become more integrated into academic workflows, concerns about
academic integrity and misconduct have intensified. Most students in the study
agreed that directly copying AI-generated content into assignments without mod-
ification or attribution constitutes academic dishonesty. However, ambiguity re-
mains regarding the appropriate use of GenAI as a source of inspiration or as
a digital tutor. The blurred lines between legitimate use and plagiarism reflect
a broader uncertainty among students regarding the ethical boundaries of AI-
assisted learning. The lack of clear institutional guidelines exacerbates this confu-
sion, leaving students to navigate the ethical gray areas of AI usage. While some
argue that using AI as a learning aid, when critically engaged with, is acceptable,
others remain unsure about where to draw the line between responsible use and
misconduct.

5. Differences in First Phase and Second Phase of Research Data


In both phases, the research was guided by the same interview framework and
research themes. An article based on the data collected in the first phase was
published and presented at the FIE 2024 Conference. The data from both phases
provide a comprehensive understanding of how and when computing students use
Generative AI for learning, as well as the concerns associated with its usage. In
the first phase, only nine participants were involved, and many details remained unclear or under-defined. In the second phase, sixteen additional participants were interviewed, yielding new and interesting insights into GenAI usage, dependency, and ethical concerns, particularly regarding trust, a topic that was not as evident in the earlier research. The researcher observed that second-phase participants expressed concerns about the reliability of, and their dependency on, the results generated by GenAI tools; they noted issues with trusting these results and mentioned the phenomenon of 'AI hallucination.'


7 Recommendation for Education

The findings underscore the importance of adapting teaching practices to the reality of
AI-assisted learning. First, there is a clear need for explicit guidelines on how and when
GenAI can be used in educational settings. Without clear rules, students are left to nav-
igate these ethical issues on their own, resulting in inconsistent AI usage [6]. Providing
clear instructions on the ethical use of AI for brainstorming, debugging, or drafting
would help students make informed decisions.
Second, educators should emphasize deeper learning by designing assignments that re-
quire critical thinking and originality, reducing the likelihood of students relying on
AI for easy solutions [6]. Adopting problem-based learning (PBL) or active learning
methods [63] can create assignments that AI cannot easily complete, fostering more
meaningful engagement.
Furthermore, the study was confined to Uppsala University, which limits the general-
izability of the findings. Expanding the research to other universities in Sweden with
computer science students could provide more comprehensive insights. Additionally,
conducting similar studies across different European countries would offer a broader
perspective on the use of GenAI in education and highlight the necessary precautions
required for integrating these tools effectively into computer science education.
Finally, promoting GenAI as a learning tool rather than a shortcut is essential. Of-
fering tutorials or workshops on effective AI usage for tasks like quizzing, explaining
concepts, or coding help, could enhance students’ understanding of these tools while
ensuring academic integrity [31].


8 Limitations

While this study offers valuable insights into the use of GenAI in computing education, several limitations should be taken into account when interpreting the findings. First, the research involved a relatively small sample size of twenty-five participants, all from a single university. This limited scope, combined with the lack of geographic and institutional diversity, may affect the generalizability of the results. The experiences and perspectives of students from one institution may not reflect those of students at other universities or in different countries, where educational practices and technological access vary.
Additionally, the study focused specifically on computing students, who are likely to have a more in-depth understanding and familiarity with GenAI tools compared to students from other disciplines. This targeted approach narrows the research’s applicability, as students in other fields might engage with GenAI in different ways or have varying perceptions of its ethical implications.
Moreover, data collection occurred over a short timeframe, which may not fully capture the changing nature of GenAI usage and students’ evolving perceptions. Given the rapid advancement of GenAI technologies and their increasing presence in education, the findings could quickly become outdated. This underscores the need for continuous research to monitor shifts in usage patterns and attitudes over time, ensuring that future studies account for the dynamic nature of GenAI in educational settings.

9 Future Work

Building on the findings of this study and acknowledging its limitations, several key areas for future research are proposed to further explore the role of GenAI in education. A crucial next step involves expanding the participant pool to include a larger and more diverse sample. Future studies should involve various educational institutions from different geographic regions and academic disciplines to improve the generalizability of the results. By broadening the scope, researchers can gain a more comprehensive understanding of how GenAI is utilized across diverse educational cultures and settings.
Conducting longitudinal studies is another important avenue for research, as this approach could offer valuable insights into how students’ use of GenAI evolves over time. As students progress through their academic careers and as AI technologies continue to develop, a longitudinal approach would reveal the long-term impacts of GenAI on learning outcomes, student engagement, and academic integrity.
Additionally, integrating quantitative research methods would complement the qualitative findings of this study. Collecting quantitative data could provide a statistical foundation for evaluating the frequency, intensity, and outcomes of GenAI usage in education. This would enable more objective analysis and facilitate comparisons across different educational contexts.
Further research should also investigate how the integration of GenAI affects the role of educators and influences pedagogical strategies. Understanding these changes could inform the development of training programs for teachers, equipping them to effectively incorporate GenAI into their teaching while ensuring academic integrity and enhancing the learning experience.
Another potential area of investigation is the use of GenAI in the corporate sector, specifically within IT companies. Research could explore how these companies utilize GenAI tools to improve productivity while addressing concerns related to confidentiality and security. Additionally, such studies could examine the skills and knowledge that IT companies expect from candidates in relation to GenAI, providing insight into the evolving demands of the industry.

10 Conclusion

In conclusion, this study shows that GenAI holds significant potential to transform education by providing valuable academic support, but its integration must be handled with care to ensure that these tools enrich rather than detract from the learning experience. Clear institutional policies, alongside thoughtful and adaptive pedagogical strategies, will be crucial in guiding students toward the responsible and productive use of GenAI in their academic endeavors. This balance will help maximize the benefits of AI while preserving the integrity and depth of the learning process.

11 References
[1] T. Adıgüzel, M. H. Kaya, and F. K. Cansu, “Revolutionizing education with ai: Exploring the transformative potential of chatgpt,” Contemporary Educational Technology, 2023.

[2] M. Aljanabi, M. Ghazi, A. H. Ali, S. A. Abed et al., “Chatgpt: open possibilities,” Iraqi Journal For Computer Science and Mathematics, vol. 4, no. 1, pp. 62–64, 2023.

[3] M. Amoozadeh, D. Daniels, D. Nam, A. Kumar, S. Chen, M. Hilton, S. Srinivasa Ragavan, and M. A. Alipour, “Trust in generative ai among students: An exploratory study,” in Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1, 2024, pp. 67–73.

[4] L. R. Angell, “The relationship of impulsiveness, personal efficacy, and academic motivation to college cheating,” College Student Journal, vol. 40, no. 1, 2006.

[5] A. Axelsson, D. T. Wallgren, U. Verma, Å. Cajander, M. Daniels, A. Eckerdal, and R. McDermott, “From assistance to misconduct: Unpacking the complex role of generative ai in student learning,” in Proceedings of the IEEE Frontiers in Education Conference, Washington, DC, USA, 2024, to appear.

[6] D. Baidoo-Anu and L. O. Ansah, “Education in the era of generative artificial intelligence (ai): Understanding the potential benefits of chatgpt in promoting teaching and learning,” Journal of AI, vol. 7, no. 1, pp. 52–62, 2023.

[7] M. Barnas, “‘Parenting’ students: Applying developmental psychology to the college classroom,” Teaching of Psychology, vol. 27, no. 4, pp. 276–277, 2000.

[8] V. Bozic and I. Poola, “Chat gpt and education,” Preprint, vol. 10, 2023.

[9] D. T. Brameier, A. A. Alnasser, J. M. Carnino, A. R. Bhashyam, A. G. von Keudell, and M. J. Weaver, “Artificial intelligence in orthopaedic surgery: can a large language model “write” a believable orthopaedic journal article?” JBJS, vol. 105, no. 17, pp. 1388–1392, 2023.

[10] V. Braun and V. Clarke, “Using thematic analysis in psychology,” Qualitative research in psychology, vol. 3, no. 2, pp. 77–101, 2006.

[11] T. D. Brender, “Chatbot confabulations are not hallucinations—reply,” JAMA Internal Medicine, vol. 183, no. 10, pp. 1177–1178, 2023.

[12] G. Brown, “Student disruption in a global college classroom: Multicultural issues as predisposing factors,” ABNF Journal, vol. 23, no. 3, 2012.

[13] S. Brown, L. McDowell, and F. Duggan, Assessing students: cheating and plagiarism. University of Northumbria at Newcastle, Materials and Resources Centre for . . . , 1998.

[14] C. K. Y. Chan and W. Hu, “Students’ voices on generative ai: Perceptions, benefits, and challenges in higher education,” International Journal of Educational Technology in Higher Education, vol. 20, no. 1, p. 43, 2023.

[15] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. D. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman et al., “Evaluating large language models trained on code,” arXiv preprint arXiv:2107.03374, 2021.

[16] R. Cheng, R. Wang, T. Zimmermann, and D. Ford, ““It would work for me too”: How online communities shape software developers’ trust in ai-powered code generation tools,” ACM Transactions on Interactive Intelligent Systems, vol. 14, no. 2, pp. 1–39, 2024.

[17] J. Christensen, J. M. Hansen, and P. Wilson, “Understanding the role and impact of generative artificial intelligence (ai) hallucination within consumers’ tourism decision-making processes,” Current Issues in Tourism, pp. 1–16, 2024.

[18] P. Denny, V. Kumar, and N. Giacaman, “Conversing with copilot: Exploring prompt engineering for solving cs1 problems using natural language,” in Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, 2023, pp. 1136–1142.

[19] P. Denny, J. Prather, B. A. Becker, J. Finnie-Ansley, A. Hellas, J. Leinonen, A. Luxton-Reilly, B. N. Reeves, E. A. Santos, and S. Sarsa, “Computing education in the era of generative ai,” Communications of the ACM, vol. 67, no. 2, pp. 56–67, 2024.

[20] D. Dobrovska and A. Pokorny, “Avoiding plagiarism and collusion,” paper presented at the International Conference on Engineering Education, 2007. Retrieved from http://icee2007.dei.uc.pt/proceedings/papers/112.pdf

[21] J. N. Engler, J. D. Landau, and M. Epstein, “Keeping up with the joneses: Students’ perceptions of academically dishonest behavior,” Teaching of Psychology, vol. 35, no. 2, pp. 99–102, 2008.

[22] X. Fang, S. Che, M. Mao, H. Zhang, M. Zhao, and X. Zhao, “Bias of ai-generated content: an examination of news produced by large language models,” Scientific Reports, vol. 14, no. 1, p. 5224, 2024.

[23] L. J. Feldmann, “Classroom civility is another of our instructor responsibilities,” College teaching, vol. 49, no. 4, pp. 137–140, 2001.

[24] S. Feuerriegel, J. Hartmann, C. Janiesch, and P. Zschech, “Generative ai,” Business & Information Systems Engineering, vol. 66, no. 1, pp. 111–126, 2024.

[25] J. Finnie-Ansley, P. Denny, B. A. Becker, A. Luxton-Reilly, and J. Prather, “The robots are coming: Exploring the implications of openai codex on introductory programming,” in Proceedings of the 24th Australasian Computing Education Conference, 2022, pp. 10–19.

[26] J. Finnie-Ansley, P. Denny, A. Luxton-Reilly, E. A. Santos, J. Prather, and B. A. Becker, “My ai wants to know if this will be on the exam: Testing openai’s codex on cs2 programming exercises,” in Proceedings of the 25th Australasian Computing Education Conference, 2023, pp. 97–104.

[27] A. Gilson, C. W. Safranek, T. Huang, V. Socrates, L. Chi, R. A. Taylor, D. Chartash et al., “Correction: How does chatgpt perform on the united states medical licensing examination (usmle)? the implications of large language models for medical education and knowledge assessment,” JMIR Medical Education, vol. 10, no. 1, p. e57594, 2024.

[28] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” Advances in neural information processing systems, vol. 27, 2014.

[29] U. H. Graneheim, B.-M. Lindgren, and B. Lundman, “Methodological challenges in qualitative content analysis: A discussion paper,” Nurse education today, vol. 56, pp. 29–34, 2017.

[30] R. M. Groves, F. J. Fowler Jr, M. P. Couper, J. M. Lepkowski, E. Singer, and R. Tourangeau, Survey methodology. John Wiley & Sons, 2011.

[31] M. Halaweh, “Chatgpt in education: Strategies for responsible implementation,” Contemporary educational technology, vol. 15, no. 2, 2023.

[32] S. F. Hard, J. M. Conway, and A. C. Moran, “Faculty and college student beliefs about the frequency of student academic misconduct,” The Journal of Higher Education, vol. 77, no. 6, pp. 1058–1080, 2006.

[33] A.-S. Henriksson, Att förebygga plagiat i studentarbeten: en pedagogisk utvecklingsmöjlighet [Preventing plagiarism in student work: an educational development opportunity]. Avdelningen för universitetspedagogisk utveckling, 2008.

[34] E. L. Hill-Yardin, M. R. Hutchinson, R. Laycock, and S. J. Spencer, “A chat (gpt) about the future of scientific publishing,” Brain, behavior, and immunity, vol. 110, pp. 152–154, 2023.

[35] R. M. Howard, “The ethics of plagiarism,” The ethics of writing instruction: Issues in theory and practice, vol. 4, pp. 79–89, 2000.

[36] J. Huang and M. Tan, “The role of chatgpt in scientific communication: writing better scientific review articles,” American journal of cancer research, vol. 13, no. 4, p. 1148, 2023.

[37] E. Kasneci, K. Seßler, S. Küchemann, M. Bannert, D. Dementieva, F. Fischer, U. Gasser, G. Groh, S. Günnemann, E. Hüllermeier et al., “Chatgpt for good? on opportunities and challenges of large language models for education,” Learning and individual differences, vol. 103, p. 102274, 2023.

[38] R. A. Khan, M. Jawaid, A. R. Khan, and M. Sajjad, “Chatgpt-reshaping medical education and clinical management,” Pakistan journal of medical sciences, vol. 39, no. 2, p. 605, 2023.

[39] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv preprint arXiv:1312.6114, 2013.

[40] P. Kirschner, C. Hendrick, and J. Heal, How teaching happens: Seminal works in teaching and teacher effectiveness and what they mean in practice. Routledge, 2022.

[41] L. Kohnke, B. L. Moorhouse, and D. Zou, “Chatgpt for language teaching and learning,” Relc Journal, vol. 54, no. 2, pp. 537–550, 2023.

[42] A. Kuzdeuov, O. Mukayev, S. Nurgaliyev, A. Kunbolsyn, and H. A. Varol, “Chatgpt for visually impaired and blind,” in 2024 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). IEEE, 2024, pp. 722–727.

[43] Y. Liu, T. Han, S. Ma, J. Zhang, Y. Yang, J. Tian, H. He, A. Li, M. He, Z. Liu et al., “Summary of chatgpt-related research and perspective towards the future of large language models,” Meta-Radiology, p. 100017, 2023.

[44] B. D. Lund and T. Wang, “Chatting about chatgpt: how may ai and gpt impact academia and libraries?” Library hi tech news, vol. 40, no. 3, pp. 26–29, 2023.

[45] H. Ma, E. Y. Lu, S. Turner, and G. Wan, “An empirical investigation of digital
cheating and plagiarism among middle school students,” American Secondary Ed-
ucation, pp. 69–82, 2007.
[46] J. Mareš, “Tradiční a netradiční podvádění ve škole [Traditional and non-traditional cheating in school],” Pedagogika, vol. 55, no. 2, pp. 310–335, 2005.
[47] B. Martin, “Plagiarism: policy against cheating or policy for learning?” 2004.
[48] F. M. Megahed, Y.-J. Chen, J. A. Ferris, S. Knoth, and L. A. Jones-Farmer, “How
generative ai models such as chatgpt can be (mis) used in spc practice, education,
and research? an exploratory study,” Quality Engineering, vol. 36, no. 2, pp. 287–
315, 2024.
[49] M. Mejia and J. M. Sargent, “Leveraging technology to develop students’ critical
thinking skills,” Journal of Educational Technology Systems, vol. 51, no. 4, pp.
393–418, 2023.
[50] E. R. Mollick and L. Mollick, “Using ai to implement effective teaching strategies
in classrooms: Five strategies, including prompts,” The Wharton School Research
Paper, 2023.
[51] J. M. Morse and C. Mitcham, “Exploring qualitatively-derived concepts: In-
ductive—deductive pitfalls,” International journal of qualitative methods, vol. 1,
no. 4, pp. 28–35, 2002.
[52] N. Naz, F. Gulab, and M. Aslam, “Development of qualitative semi-structured in-
terview guide for case study research,” Competitive Social Science Research Jour-
nal, vol. 3, no. 2, pp. 42–52, 2022.
[53] O. I. Obaid, A. H. Ali, and M. G. Yaseen, “Impact of chat gpt on scientific research:
Opportunities, risks, limitations, and ethical issues,” Iraqi Journal for Computer
Science and Mathematics, vol. 4, no. 4, pp. 13–17, 2023.
[54] A. M. O’Connor, G. Tsafnat, J. Thomas, P. Glasziou, S. B. Gilbert, and B. Hutton,
“A question of trust: can we build an evidence base to gain trust in systematic
review automation technologies?” Systematic reviews, vol. 8, pp. 1–8, 2019.
[55] H. Pearce, B. Ahmad, B. Tan, B. Dolan-Gavitt, and R. Karri, “Asleep at the key-
board? assessing the security of github copilot’s code contributions,” in 2022 IEEE
Symposium on Security and Privacy (SP). IEEE, 2022, pp. 754–768.
[56] N. Perry, M. Srivastava, D. Kumar, and D. Boneh, “Do users write more insecure
code with ai assistants?” in Proceedings of the 2023 ACM SIGSAC Conference on
Computer and Communications Security, 2023, pp. 2785–2799.


[57] J. Prather, P. Denny, J. Leinonen, B. A. Becker, I. Albluwi, M. Craig, H. Keuning, N. Kiesler, T. Kohn, A. Luxton-Reilly et al., “The robots are here: Navigating the generative ai revolution in computing education,” in Proceedings of the 2023 Working Group Reports on Innovation and Technology in Computer Science Education, 2023, pp. 108–159.

[58] B. Puryear and G. Sprint, “Github copilot in the classroom: learning to code with ai assistance,” Journal of Computing Sciences in Colleges, vol. 38, no. 1, pp. 37–47, 2022.

[59] J. Qadir, “Engineering education in the era of chatgpt: Promise and pitfalls of generative ai for education,” in 2023 IEEE Global Engineering Education Conference (EDUCON). IEEE, 2023, pp. 1–9.

[60] R. Raman, S. Mandal, P. Das, T. Kaur, J. Sanjanasri, and P. Nedungadi, “University students as early adopters of chatgpt: Innovation diffusion study,” 2023.

[61] D. A. Rettinger and Y. Kramer, “Situational and personal causes of student cheating,” Research in higher education, vol. 50, pp. 293–313, 2009.

[62] M. Sallam, “The utility of chatgpt as an example of large language models in healthcare education, research and practice: Systematic review on the future perspectives and potential limitations,” MedRxiv, pp. 2023–02, 2023.

[63] K. Sanders, J. Boustedt, A. Eckerdal, R. McCartney, and C. Zander, “Folk pedagogy: Nobody doesn’t like active learning,” in Proceedings of the 2017 ACM Conference on International Computing Education Research, 2017, pp. 145–154.

[64] J. Savelka, A. Agarwal, C. Bogart, and M. Sakr, “Large language models (gpt) struggle to answer multiple-choice questions about code,” arXiv preprint arXiv:2303.08033, 2023.

[65] H. Singh, M.-H. Tayarani-Najaran, and M. Yaqoob, “Exploring computer science students’ perception of chatgpt in higher education: A descriptive and correlation study,” Education Sciences, vol. 13, no. 9, p. 924, 2023.

[66] N. M. S. Surameery and M. Y. Shakor, “Use chat gpt to solve programming bugs,” International Journal of Information Technology and Computer Engineering, no. 31, pp. 17–22, 2023.

[67] S. J. Uddin, A. Albert, A. Ovid, and A. Alsharef, “Leveraging chatgpt to aid construction hazard recognition and support safety education and training,” Sustainability, vol. 15, no. 9, p. 7121, 2023.

[68] P. Vaithilingam, T. Zhang, and E. L. Glassman, “Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models,” in Chi conference on human factors in computing systems extended abstracts, 2022, pp. 1–7.

[69] O. Vereschak, G. Bailly, and B. Caramiaux, “How to evaluate trust in ai-assisted decision making? a survey of empirical methodologies,” Proceedings of the ACM on Human-Computer Interaction, vol. 5, no. CSCW2, pp. 1–39, 2021.

[70] F.-Y. Wang, Q. Miao, X. Li, X. Wang, and Y. Lin, “What does chatgpt say: The dao from algorithmic intelligence to linguistic intelligence,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 3, pp. 575–579, 2023.

[71] K. M. Williams, C. Nathanson, and D. L. Paulhus, “Identifying and profiling scholastic cheaters: their personality, cognitive ability, and motivation,” Journal of experimental psychology: applied, vol. 16, no. 3, p. 293, 2010.

[72] R. Yilmaz and F. G. K. Yilmaz, “The effect of generative artificial intelligence (ai)-based tool use on students’ computational thinking skills, programming self-efficacy and motivation,” Computers and Education: Artificial Intelligence, vol. 4, p. 100147, 2023.

[73] C. Zastudil, M. Rogalska, C. Kapp, J. Vaughn, and S. MacNeil, “Generative ai in computing education: Perspectives of students and instructors,” in 2023 IEEE Frontiers in Education Conference (FIE). IEEE, 2023, pp. 1–9.

12 Appendix A

The following are the results from the initial phase (Phase I) of the study.


Figure 1 How do CS students use GenAI in their education?


Figure 2 When do CS students use GenAI in their education?


Figure 3 Do CS students think the output from GenAI is trustworthy?


Figure 4 When do CS students consider the use of GenAI as cheating?

13 Appendix B

Here is the interview script that was followed while interviewing students for the study.

1. Do you use GenAI tools for your studies?

2. What AI tools do you use in your education?

3. Do you use the paid version or the free version?

4. In what situations do you use GenAI tools?

5. How familiar are you with ChatGPT?

6. Are you an expert or a beginner in using ChatGPT?

7. When do you use ChatGPT in your education or coursework?

8. Would you like to explain how you use GenAI tools?

9. What do you think your teachers think about ChatGPT? Do they assume that you
all use it?

10. Has the teaching been adapted to use GenAI tools (like ChatGPT)?

11. Do you google first before you use ChatGPT or do you go straight to ChatGPT?

12. Do you use GenAI outside of education?

13. Why do you choose ChatGPT instead of googling your query?

14. Do you use it to generate code also from a description?

15. How often do you use ChatGPT? Do you use it daily or sometimes?

16. What do you do before using GenAI tools when you’re stuck? What does the
process look like there?

17. Do you think ChatGPT is reliable? (Rate it on a scale of 1 to 10, where 10 is very reliable and 1 is not reliable at all.)

18. When do you think it’s cheating to use ChatGPT?

19. If you were to consider what a course leader might think about whether students
use AI or not, what would you think?


20. Have you had any professor who has encouraged you to use GenAI in any course?

21. What do you think teachers think about students using ChatGPT?

22. How do you make the assessment that the answer ChatGPT gives you is correct
and not that it has made something up?

23. Did you find any kind of a threat to your privacy or security when it comes to the
ChatGPT?

24. Do you think students are dependent on GenAI for their studies?

25. What are your views on whether students’ dependency on GenAI is making them less efficient?

26. Do you think students need proper education and training to use GenAI tools?

27. What are your views on whether the use of GenAI tools for studies is directly related to plagiarism?

28. Do you recommend that your friends use GenAI tools?

29. How much do you support the use of GenAI tools in education?

30. When was the last time you used any GenAI tools?

31. Do you cross-check the results from GenAI tools?

32. Have you ever heard the term ‘hallucination’ in the context of AI?

33. What discourages you from using GenAI tools?

34. Do you have any other views on GenAI tools or their future?
