IT mDV 25 001
Generative AI in Higher
Education: The Students’
Perception.
Udit Verma
Abstract
The integration of Generative AI (GenAI) tools into computing education represents a significant
shift in the education sector, influencing both students’ learning processes and academic
practices. This research investigates the use of GenAI tools through semi-structured interviews
with 25 computer science students at Uppsala University. It explores participants’ experiences,
perceptions, and expectations of GenAI tools in terms of educational purposes, trustworthiness
of outputs, and ethical implications in academic work. The findings confirm that GenAI tools play
a crucial role in fostering innovative educational practices and promoting independent learning.
However, they also introduce ethical challenges, particularly in relation to academic misconduct.
The research offers strategies to address these challenges and provides four key
recommendations for developing effective approaches to ensure the responsible use of GenAI
in computing education.

Faculty of Science and Technology, Uppsala University.
1 Introduction
2 Related Work
3 Methodology
4 Results
4.1 GenAI usage patterns
4.2 Timing of GenAI usage
4.3 Trustworthiness of GenAI outputs
4.4 Perceptions of Academic Misconduct involving GenAI
5 Results in Context
6 Discussion
7 Recommendation for Education
8 Limitations
9 Future Works
10 Conclusion
11 References
12 Appendix A
13 Appendix B
1 Introduction
The term Generative AI refers to computational techniques that are capable of gener-
ating seemingly new, meaningful content such as text, images or audio from training
data [24]. The introduction of Generative AI (GenAI) models has led to significant
excitement in the computing education community [5]. These models can solve basic
programming assignments [73] and offer substantial benefits to students as they become
increasingly integrated into educational frameworks [5]. Understandably, these substan-
tial and rapid advances in the performance of generative models are causing excitement
and consternation among students. Tools such as ChatGPT have transitioned rapidly
from experimental concepts to essential components of organizational workflows, offer-
ing support in areas ranging from content generation and coding assistance to complex
problem solving.
However, the rapid adoption of GenAI in educational practices also brings significant operational and ethical challenges that require careful oversight and monitoring. These technologies are revolutionary, but they also disrupt traditional workflows, including evaluation metrics, decision-making processes, dependency, and feedback loops.

The growing reliance on GenAI in educational processes calls for careful reassessment because of its impact on both the development and the potential of computing students. GenAI tools promise to boost productivity and efficiency, but they also introduce the risk of over-reliance, plagiarism, and other forms of misuse of AI-generated outputs, which can sideline students' own expertise and creativity [73].
To understand how and when computing students engage with GenAI tools in their day-to-day tasks, and what concerns they have about GenAI, a study was conducted at Uppsala University with twenty-five students. The study seeks to provide detailed insight into the utilization patterns among computing students, as well as their perceptions of reliability and ethical usage in academic practices.
2 Related Work
Introduction
Recent advances in generative AI and language processing have enabled the develop-
ment of large language models (LLMs) that show impressive capabilities in generating
and reasoning about code [19]. Major LLM-based products like Generative Pre-trained
Transformer (GPT-4), CodeX, GitHub Copilot, Bard and ChatGPT have significant im-
plications for computer science education and practice [57].
This literature review provides essential definitions of GenAI proposed by researchers, which aid in understanding its importance and impact on computer science education. It further explains 'when' computer science students feel the need for GenAI tools in their studies and explores the patterns of 'how' computer science students use GenAI in their academic tasks and practicals. A growing body of work has begun to empirically examine how these LLMs perform on tasks and assessments commonly used in programming courses [25] & [26]. Furthermore, this section examines the factors that influence the interaction between GenAI and its users (computer science students). The section concludes by addressing the risks associated with GenAI: over-reliance impeding learning [68], circumventing assessments [15] & [18], challenges related to plagiarism detection [64] & [58], and inherent biases in the technology [43].
on literature pertaining to ChatGPT [8] & [31]. Recent efforts have begun to scrutinize
ChatGPT’s impact on education more rigorously. According to Aljanabi [2] & Baidoo-
Anu [6] it’s important to recognize that GenAI distinguishes itself by generating not
just responses, but also the content within those responses, surpassing the capabilities of
traditional Conversational AI. With the progression of GenAI programming assistants,
research has increasingly concentrated on deploying AI tools in computing education,
exploring both the opportunities and challenges this presents [1] & [42]. Key inquiries
have been made into the benefits GenAI provides to those without formal education, its
effects on existing pedagogical methods, and concerns over plagiarism, potential biases,
and the fostering of detrimental habits among students.
According to Mejia & Sargent [49], in addition to theoretical discussions, considerable
effort has been directed towards helping students generate code, provide explanations,
and address problems within their code. These efforts have shown that Large Language
Models (LLMs) can offer significant assistance, although the extent of benefit varies
with task complexity [49]. Building on these insights, researchers have investigated
ways to optimize students’ use of AI tools, noting that the effectiveness of AI-generated
output largely depends on the quality of user prompts [60]. Denny and colleagues [18]
notably improved GitHub Copilot’s efficacy in introductory programming tasks from
about 50 percent to 80 percent by refining prompt strategies.
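The gain reported by Denny and colleagues suggests how much prompt wording matters. The sketch below is purely illustrative: the prompts and the function name is_valid_password are invented for this example and are not taken from [18]. It contrasts a vague prompt with a refined one that pins down the function name, signature, rules, and constraints, leaving far less for the model to guess:

```python
# Illustrative only: neither prompt comes from Denny et al. [18]; they sketch
# the kind of refinement that improves LLM code-generation accuracy.
vague_prompt = "Write a function that checks a password."

refined_prompt = (
    "Write a Python function is_valid_password(password: str) -> bool that "
    "returns True only if the password is at least 8 characters long, "
    "contains at least one digit, and contains at least one uppercase "
    "letter; return False otherwise. Do not use regular expressions."
)

# The refined prompt fixes the function name, the signature, every rule,
# and an explicit constraint, all of which the vague prompt leaves open.
print(len(vague_prompt), len(refined_prompt))
```

In practice such refinement is iterative: a student inspects the model's output against the intended behavior and tightens the prompt where the model guessed wrong.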
According to Halaweh [31], GenAI technologies, such as ChatGPT, have the poten-
tial to foster collaborative problem-solving and achievement among students, thereby
cultivating community. There’s ongoing debate about ChatGPT’s capacity to support
educators, students, and researchers significantly. The tool’s effectiveness and promise
have been rigorously evaluated in numerous studies [31] & [65]. For instance, a survey
by Omar Ibrahim Obaid and colleagues [53] highlighted ChatGPT’s role in propelling
scientific research by generating new ideas, offering fresh perspectives, and boosting
productivity. ChatGPT has also been shown to support visually impaired individuals
through text-to-speech applications, enhance digital accessibility, and tackle significant
issues like bias and misinformation [65]. Its effectiveness in search engine applications
further underscores its utility in delivering precise and relevant answers to user queries.
However, fully leveraging ChatGPT’s capabilities while addressing ethical concerns in
AI requires ongoing research, which is vital for ensuring more ethical and seamless fu-
ture human-AI interactions.
According to Halaweh [31], ChatGPT’s applications are wide-ranging. In the educa-
tional domain, it can power chatbots and online tutors to help students improve their
language skills. The model has attracted considerable research interest due to its educational potential. Studies by Raman [60] and Uddin et al. [67] demonstrate its utility in
higher education and safety education, respectively. Furthermore, its role in enhancing
students’ critical thinking skills and its potential effects on traditional evaluation meth-
ods in higher education have been critically assessed, underscoring the need for careful
application and ethical considerations [1]. ChatGPT employs machine learning to pro-
vide proactive recommendations, solve problems automatically, and offer personalized
advice, illustrating its multifaceted utility in advancing educational goals [2].
Trustworthiness
Trust is one of the main factors affecting the interaction between GenAI and its users. The authors of [16] and [69] define trust in GenAI as "the user's judgment or expectations about how the AI system can help when the user is in a situation of uncertainty or vulnerability". Both lack of trust [54] and blind trust [55] & [56] affect how users interact with an AI tool and introduce risks into the human-AI interaction.
According to Amoozadeh [3], somewhat paradoxically, many students (especially those in advanced Computer Science courses) who found GenAI helpful also expressed distrust in the ability or output of AI. They recognize that AI is not infallible and should be subject to human oversight to ensure accuracy and reliability. This lack of trust stemmed from biases, errors, and limitations in AI algorithms. Also, GenAI will be the tool that will be commonly used by Computer Science students (especially
tions and wrong inference about how learning models operate. According to O'Brien, hallucinations are prevalent in generative AI because, as he notes, language models are fundamentally designed to invent content. According to Brender [11] and Wang et al. [70], this issue is widely recognized in platforms like ChatGPT and is frequently discussed in the trade press, though it is not yet extensively covered in academic research. O'Brien [11], by way of example, pointed to a recent AP news article on the ongoing challenge of AI hallucination, titled 'Chatbots sometimes make things up. Is AI's hallucination problem fixable?', noting its significant impact on businesses, organizations, and students relying on AI for content creation.
posits that individuals gauge acceptable behaviors by observing what is commonly prac-
ticed by others. For instance, it has been shown that young adults often overestimate
the prevalence of negative behaviors like substance abuse among their peers, leading to
a distorted perception of what is socially acceptable and, consequently, an increase in
these behaviors (Berkowitz, 2004; Perkins, 2003; Perkins & Berkowitz, 1986). Sim-
ilarly, if students believe that plagiarism is frequently practiced and rarely penalized
among their peers, they may be more inclined to engage in plagiarism themselves.
McDowell and Brown [13] state that changes in the form of assessment (like group
projects), the communication and information dilemma, focusing on obtaining high
grades and the fear of future unemployment, all contribute to increasing incidents of
cheating and plagiarism.
3 Methodology
To better understand the perceptions, experiences, and trust of computer science students in computing courses as they relate to Generative AI tools, semi-structured interviews were conducted with bachelor's and master's students at Uppsala University (see Appendix B). The interviews included questions that assessed participants' awareness of AI, their beliefs about the potential benefits and challenges of these technologies, and their attitudes toward using them in the classroom and in their studies. Conducting the study allowed us to gather insights from a diverse group of participants and provided better generalizability for the conclusions [30].
Interview Questions
The interviews included a range of open- and closed-ended questions, covering demographics, participants' confidence, trust in AI, and their experiences and opinions about using AI tools.
The interview script was divided into four sections (see Appendix A for the whole script):
1. The first section was dedicated to participants' experience with GenAI, eliciting 'how' computer science students use GenAI in their academics.
2. The second section addressed participants' need for GenAI, depicting 'when' computer science students feel the need to use GenAI in their academics.
3. The third section explored participants' trust in GenAI in terms of the results and guidance they receive for academic purposes.
4. The fourth section reflected participants' opinions and experiences regarding GenAI in terms of cheating and plagiarism.
In the study, the research questions were pre-determined with the primary objec-
tive of understanding the nuanced perceptions and experiences of students, their
interaction with Generative AI (GenAI), and its impact on their academic prac-
tices. The value of this research lies in its potential to help educators identify if
and where they might need to adapt their teaching strategies to account for the
integration of these new tools.
Data Collection and Analysis Methods
This study employed semi-structured interviews to collect data and content analysis to analyze it.
Content Analysis
Content analysis is a research method used to systematically interpret and analyze tex-
tual, visual, or audio data to identify patterns, themes, and meaningful insights. It is
commonly employed in qualitative research but can also involve quantitative techniques
to count and classify specific elements within data.
Deductive Analysis: A deductive approach [29] begins with a theory or hypothesis and tests whether the data conforms to these pre-existing concepts. It is a top-down approach in which the researcher tests the data against theoretical constructs to either confirm or refute them. The challenge lies in handling data that does not fit the theory: deciding whether it indicates a need to adjust the theory or whether it is an outlier. In practice, deductive analysis [51] may involve categorizing data based on predefined criteria and observing whether these categories adequately capture the nuances of the data.
Inductive Analysis: Inductive analysis [51] starts with observations and builds up to
generalizations or theories, thus moving from specific to general. This bottom-up ap-
proach involves identifying patterns, themes, and categories emerging directly from the
data without prior theoretical expectations. The process is inherently explorative, aiming to generate theories that are grounded in the observed data. The risk here is getting trapped at the descriptive surface level of the data without reaching deeper, more insightful generalizations.
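The quantitative side of content analysis mentioned earlier can be sketched in code. The codebook, keywords, and excerpts below are hypothetical, and real deductive coding relies on human judgment rather than keyword matching; this is only a minimal illustration of assigning excerpts to predefined categories and counting how often each category occurs:

```python
from collections import Counter

# Hypothetical deductive codebook: predefined categories and indicator
# keywords. Actual qualitative coding is done by a human analyst.
CODEBOOK = {
    "debugging": ["stuck", "error", "doesn't work", "bug"],
    "concept_explanation": ["explain", "concept", "understand"],
    "trust": ["double-check", "verify", "trust"],
}

def code_excerpt(excerpt):
    """Return the codebook categories whose keywords appear in an excerpt."""
    text = excerpt.lower()
    return [cat for cat, kws in CODEBOOK.items()
            if any(kw in text for kw in kws)]

# Invented excerpts standing in for interview transcript fragments.
excerpts = [
    "I turn to it when I'm stuck and my code doesn't work.",
    "I use AI to help explain coding concepts.",
    "I trust it for ideas but I always double-check its responses.",
]

# Quantitative element of content analysis: frequency of each category.
counts = Counter(cat for e in excerpts for cat in code_excerpt(e))
print(counts)
```

An inductive pass would work in the opposite direction: reading uncoded excerpts first and letting new categories emerge before any codebook exists.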
Both methods have their significant place in qualitative research and are chosen based
on the research goals, questions, and the nature of the data. In some cases, researchers
might employ a combination of both in a single study to enrich the analysis and ensure
robustness in their findings.
The Study
The purpose of this study was to explore computer science students’ perceptions, expe-
riences, and ethical considerations regarding the use of Generative AI (GenAI) tools in
their academic activities. By conducting semi-structured interviews, this research aimed
to gather in-depth insights into students’ usage patterns, trust in AI-generated outputs,
and views on academic misconduct. The qualitative approach was chosen to allow a
comprehensive understanding of the nuanced ways students interact with GenAI, ad-
dressing key themes such as timing of usage, trustworthiness, and ethical implications.
Data Collection
The interviews were conducted with bachelor’s and master’s students at Uppsala Uni-
versity. A total of 25 interviews were carried out. Initially, in December 2023, a group
project involving three students was the focus, with nine interviews conducted and ana-
lyzed. An additional 16 interviews were carried out in April 2024 for the master’s thesis
project of the researcher. The motivation for utilizing the earlier data and comparing it with the new data was to understand how the thoughts and perceptions of computer science students evolve over time. Comparing the 2023 data with that of 2024 revealed clear changes in the usage and perceptions of Generative AI among computer science students.
Participants in both Phase I (December 2023) and Phase II (April 2024) joined voluntar-
ily and were free to withdraw at any time. Approximately 45 students were approached
during these phases. Most interactions occurred at the Ångström campus of Uppsala
University, where interviews were proposed in an open format. While some students
agreed to participate, others declined. Additionally, some participants were acquain-
tances or dorm mates of the researchers. It is important to note that not everyone ap-
proached was willing to participate in the study.
The Informed Consent form provided participants with essential information about the
study, such as its voluntary nature, the estimated time required (here, no more than
15 minutes), and the assurance of confidentiality. It emphasized that no identifiable
information would be collected or stored. The form highlighted that there were no fore-
seeable risks associated with participating in the research and encouraged participants
to ask any questions they had before deciding to participate.
Data Analysis
Data analysis [10] was conducted using a combination of deductive and inductive anal-
ysis. Initially, an established framework based on the research questions guided the
the inductive analysis began. This phase involved re-examining the data to identify any
new categories that aligned with the research themes. These new categories were then
structured into a table based on the research themes for the final results. Each category
was thoroughly explained in the Results section, supported by participant data samples
to highlight the key findings.
4 Results
This study, based on data collected through semi-structured qualitative interviews with twenty-five computing science students (nine in Phase I and sixteen in Phase II) at Uppsala University, aimed to explore the use and impact of generative artificial intelligence (GenAI) in the learning practices of computing students in a university setting. The interviews were designed to elicit in-depth responses, allowing a nuanced exploration of how computing students engage with GenAI tools, the benefits and drawbacks they perceive, and their views on ethical considerations in academic work.
As the research analysis progressed, the identification of recurring patterns among more
specific categories highlighted the need for generalization. This approach resulted in
a more appropriate representation of the findings in relation to the research’s aims and
objectives. This section presents the generalized results derived from the research anal-
ysis. The generalizations account for the recurring patterns among specific categories,
including individual descriptions of similar ideas or opinions, all within the scope of
each research question.
The results are organized into four main themes: GenAI Usage Patterns, Timing of
GenAI Usage, Trustworthiness of GenAI Outputs, and Perceptions of Academic
Misconduct Involving GenAI explained in detail in the text. These four themes form
the foundation of this study, providing essential insights into understanding the
four research questions. They also clarify students’ perspectives on prevalent is-
sues and concerns, particularly those related to academic dishonesty or cheating
associated with the use of Generative AI technologies. Each theme is further sub-
divided into categories to provide a detailed examination of the findings, supported by
direct quotations from participants to illustrate key points. Table 2 illustrates four gen-
eral themes identified and how they are broken down into key categories:
The results from the initial study (Phase I) were previously published in an IEEE confer-
ence. The findings from this study (Phase II) provide valuable insights into the evolving
role of Generative AI (GenAI) in higher education, particularly within computing dis-
ciplines, where students frequently encounter complex problem-solving scenarios that
benefit from technological assistance. The results highlight both the potential advan-
tages of GenAI in enhancing learning efficiency and comprehension, as well as the
significant ethical dilemmas it poses, particularly regarding academic integrity.
These findings contribute to the ongoing discourse on the appropriate integration of
GenAI tools in educational contexts, offering practical recommendations for educators
and policymakers to balance innovation with ethical responsibility. While the find-
ings from Phase II share similarities with the categories identified in Phase I, they also
yielded new data, which were categorized into distinct themes, as highlighted in Table 2.
3. Assistance
Generative AI also plays a significant role in helping students grasp difficult con-
cepts and theories, particularly in technical subjects. Instead of only providing
answers to direct questions, AI tools can break down complex ideas into more di-
gestible parts. One student mentioned, "I use AI to help explain coding concepts
that are difficult to understand, or when I am struggling with a particular piece of
code."
spent quite a long time with a code and I don’t know why it doesn’t work.”
The use of Generative AI tools for assistance in educational activities is often deter-
mined by specific needs and situational factors that arise throughout the academic pro-
cess.
1. When stuck
When students face challenges with their assignments or projects, they often rely
on GenAI for support. These tools help identify issues, offer explanations, or pro-
pose solutions, allowing students to navigate their difficulties more effectively.
One student described their experience, saying, "I turn to it when I’m stuck."
They added, "It helps when I can’t grasp a concept, or if I’ve spent a lot of time
on some code and can’t figure out why it’s not working." The appeal of GenAI lies
in its instant availability, which contrasts with the sometimes delayed responses
from peers or instructors. Many students value this speed, with one comment-
ing, "If I know someone in the same class, I might ask them for help, but that’s
not always possible because I don’t know as many people in my classes anymore."
on a Friday evening."
4. Unreliable results
While students trust AI to identify errors, they often feel the need to cross-check
its suggestions with their own knowledge or external sources before fully relying
on the output. They exercise greater caution when using AI for complex tasks that
demand deeper understanding or creativity. In situations requiring critical think-
ing or detailed analysis, students are more skeptical of the AI’s reliability. They
may rely on it for initial ideas or a basic framework but do not expect it to provide
thorough or nuanced solutions. As one participant explained, "I trust it to give me
an idea, but I always double-check its responses when it’s something important."
5. Hallucination
AI hallucination refers to the phenomenon where a generative artificial intelli-
gence (AI) system, such as ChatGPT, produces outputs that are factually incor-
rect, nonsensical, or entirely fabricated, despite the input or prompt being accurate. Students sometimes have difficulty trusting the results or outputs from GenAI. One student stated, “GenAI is human bias because it creates incorrect references or non-existent facts because it is made by humans”. Another student described GenAI as a “Black Box technology”.
using GenAI. Despite efforts to avoid sharing sensitive information online, some
have experienced instances where their data was misused. One student, after com-
pleting an internship, shared their experience: "We were discouraged from using
GenAI for debugging or assistance because it records everything, which violates
company policy." This highlights the caution students feel regarding privacy risks
associated with GenAI, especially in professional or sensitive environments.
5 Results in Context
search themes. The researcher found that the usage patterns of GenAI among computer
science students displayed notable similarities, with common usage practices that align
with the purposes outlined in the related work. This indicates that GenAI effectively
supports various aspects of study and fulfills its intended roles. Over time, students are
becoming more experienced with GenAI, using it for a wider range of purposes.
6 Discussion
The results of this study provide important insights into how computing students engage
with Generative AI (GenAI) in their learning and the ethical challenges it presents.
These findings echo our previous research [5] on AI in education while introducing
new concerns related to academic integrity, student motivation, and the reliability of
AI-generated content.
7 Recommendation for Education
The findings underscore the importance of adapting teaching practices to the reality of
AI-assisted learning. First, there is a clear need for explicit guidelines on how and when
GenAI can be used in educational settings. Without clear rules, students are left to nav-
igate these ethical issues on their own, resulting in inconsistent AI usage [6]. Providing
clear instructions on the ethical use of AI for brainstorming, debugging, or drafting
would help students make informed decisions.
Second, educators should emphasize deeper learning by designing assignments that re-
quire critical thinking and originality, reducing the likelihood of students relying on
AI for easy solutions [6]. Adopting problem-based learning (PBL) or active learning
methods [63] can create assignments that AI cannot easily complete, fostering more
meaningful engagement.
Furthermore, the study was confined to Uppsala University, which limits the general-
izability of the findings. Expanding the research to other universities in Sweden with
computer science students could provide more comprehensive insights. Additionally,
conducting similar studies across different European countries would offer a broader
perspective on the use of GenAI in education and highlight the necessary precautions
required for integrating these tools effectively into computer science education.
Finally, promoting GenAI as a learning tool rather than a shortcut is essential. Of-
fering tutorials or workshops on effective AI usage for tasks like quizzing, explaining
concepts, or coding help, could enhance students’ understanding of these tools while
ensuring academic integrity [31].
8 Limitations
While this study offers valuable insights into the use of GenAI in computing education,
several limitations should be taken into account when interpreting the findings. First,
the research involved a relatively small sample size of twenty-five participants, all from
a single university. This limited scope, combined with the lack of geographic and in-
stitutional diversity, may affect the generalizability of the results. The experiences and
perspectives of students from one institution may not reflect those of students at other
universities or in different countries, where educational practices and technological ac-
cess vary.
Additionally, the study focused specifically on computing students, who are likely to
have a more in-depth understanding and familiarity with GenAI tools compared to stu-
dents from other disciplines. This targeted approach narrows the research’s applica-
bility, as students in other fields might engage with GenAI in different ways or have
varying perceptions of its ethical implications.
Moreover, data collection occurred over a short timeframe, which may not fully cap-
ture the changing nature of GenAI usage and students’ evolving perceptions. Given the
rapid advancement of GenAI technologies and their increasing presence in education,
the findings could quickly become outdated. This underscores the need for continuous
research to monitor shifts in usage patterns and attitudes over time, ensuring that future
studies account for the dynamic nature of GenAI in educational settings.
9 Future Works
Building on the findings of this study and acknowledging its limitations, several key
areas for future research are proposed to further explore the role of GenAI in education.
A crucial next step involves expanding the participant pool to include a larger and more
diverse sample. Future studies should involve various educational institutions from dif-
ferent geographic regions and academic disciplines to improve the generalizability of
the results. By broadening the scope, researchers can gain a more comprehensive un-
derstanding of how GenAI is utilized across diverse educational cultures and settings.
Conducting longitudinal studies is another important avenue for research, as this ap-
proach could offer valuable insights into how students’ use of GenAI evolves over time.
As students progress through their academic careers and as AI technologies continue
to develop, a longitudinal approach would reveal the long-term impacts of GenAI on
learning outcomes, student engagement, and academic integrity.
Additionally, integrating quantitative research methods would complement the qual-
itative findings of this study. Collecting quantitative data could provide a statistical
foundation for evaluating the frequency, intensity, and outcomes of GenAI usage in ed-
ucation. This would enable more objective analysis and facilitate comparisons across
different educational contexts.
Further research should also investigate how the integration of GenAI affects the role
of educators and influences pedagogical strategies. Understanding these changes could
inform the development of training programs for teachers, equipping them to effectively
incorporate GenAI into their teaching while ensuring academic integrity and enhancing
the learning experience.
Another potential area of investigation is the use of GenAI in the corporate sector,
specifically within IT companies. Research could explore how these companies utilize
GenAI tools to improve productivity, while addressing concerns related to confidential-
ity and security. Additionally, such studies could examine the skills and knowledge that
IT companies expect from candidates in relation to GenAI, providing insight into the
evolving demands of the industry.
10 Conclusion
In conclusion, this study shows that GenAI holds significant potential to transform education by providing valuable academic support, but its integration must be handled with care to ensure that GenAI tools enrich rather than detract from the learning experience. Clear institutional policies, alongside thoughtful and adaptive pedagogical strategies, will be crucial in guiding students toward the responsible and productive use of GenAI in their academic endeavors. This balance will help maximize the benefits of AI while preserving the integrity and depth of the learning process.
11 References
[1] T. Adıgüzel, M. H. Kaya, and F. K. Cansu, “Revolutionizing education with AI: Exploring the transformative potential of ChatGPT,” Contemporary Educational Technology, 2023.
[6] D. Baidoo-Anu and L. O. Ansah, “Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning,” Journal of AI, vol. 7, no. 1, pp. 52–62, 2023.
[8] V. Bozic and I. Poola, “ChatGPT and education,” Preprint, vol. 10, 2023.
[10] V. Braun and V. Clarke, “Using thematic analysis in psychology,” Qualitative Research in Psychology, vol. 3, no. 2, pp. 77–101, 2006.
[13] S. Brown, L. McDowell, and F. Duggan, Assessing Students: Cheating and Plagiarism. University of Northumbria at Newcastle, Materials and Resources Centre for …, 1998.
[14] C. K. Y. Chan and W. Hu, “Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education,” International Journal of Educational Technology in Higher Education, vol. 20, no. 1, p. 43, 2023.
[16] R. Cheng, R. Wang, T. Zimmermann, and D. Ford, ““It would work for me too”: How online communities shape software developers’ trust in AI-powered code generation tools,” ACM Transactions on Interactive Intelligent Systems, vol. 14, no. 2, pp. 1–39, 2024.
[17] J. Christensen, J. M. Hansen, and P. Wilson, “Understanding the role and impact of generative artificial intelligence (AI) hallucination within consumers’ tourism decision-making processes,” Current Issues in Tourism, pp. 1–16, 2024.
[21] J. N. Engler, J. D. Landau, and M. Epstein, “Keeping up with the Joneses: Students’ perceptions of academically dishonest behavior,” Teaching of Psychology, vol. 35, no. 2, pp. 99–102, 2008.
[22] X. Fang, S. Che, M. Mao, H. Zhang, M. Zhao, and X. Zhao, “Bias of AI-generated content: An examination of news produced by large language models,” Scientific Reports, vol. 14, no. 1, p. 5224, 2024.
[32] S. F. Hard, J. M. Conway, and A. C. Moran, “Faculty and college student beliefs
about the frequency of student academic misconduct,” The Journal of Higher Ed-
ucation, vol. 77, no. 6, pp. 1058–1080, 2006.
[35] R. M. Howard, “The ethics of plagiarism,” The Ethics of Writing Instruction: Issues in Theory and Practice, vol. 4, pp. 79–89, 2000.
[36] J. Huang and M. Tan, “The role of ChatGPT in scientific communication: Writing better scientific review articles,” American Journal of Cancer Research, vol. 13, no. 4, p. 1148, 2023.
[40] P. Kirschner, C. Hendrick, and J. Heal, How Teaching Happens: Seminal Works in Teaching and Teacher Effectiveness and What They Mean in Practice. Routledge, 2022.
[41] L. Kohnke, B. L. Moorhouse, and D. Zou, “ChatGPT for language teaching and learning,” RELC Journal, vol. 54, no. 2, pp. 537–550, 2023.
[43] Y. Liu, T. Han, S. Ma, J. Zhang, Y. Yang, J. Tian, H. He, A. Li, M. He, Z. Liu et al., “Summary of ChatGPT-related research and perspective towards the future of large language models,” Meta-Radiology, p. 100017, 2023.
[44] B. D. Lund and T. Wang, “Chatting about ChatGPT: How may AI and GPT impact academia and libraries?” Library Hi Tech News, vol. 40, no. 3, pp. 26–29, 2023.
[45] H. Ma, E. Y. Lu, S. Turner, and G. Wan, “An empirical investigation of digital cheating and plagiarism among middle school students,” American Secondary Education, pp. 69–82, 2007.
[46] J. Mareš, “Tradiční a netradiční podvádění ve škole” [Traditional and non-traditional cheating at school], Pedagogika, vol. 55, no. 2, pp. 310–335, 2005.
[47] B. Martin, “Plagiarism: Policy against cheating or policy for learning?” 2004.
[48] F. M. Megahed, Y.-J. Chen, J. A. Ferris, S. Knoth, and L. A. Jones-Farmer, “How generative AI models such as ChatGPT can be (mis)used in SPC practice, education, and research? An exploratory study,” Quality Engineering, vol. 36, no. 2, pp. 287–315, 2024.
[49] M. Mejia and J. M. Sargent, “Leveraging technology to develop students’ critical thinking skills,” Journal of Educational Technology Systems, vol. 51, no. 4, pp. 393–418, 2023.
[50] E. R. Mollick and L. Mollick, “Using AI to implement effective teaching strategies in classrooms: Five strategies, including prompts,” The Wharton School Research Paper, 2023.
[51] J. M. Morse and C. Mitcham, “Exploring qualitatively-derived concepts: Inductive-deductive pitfalls,” International Journal of Qualitative Methods, vol. 1, no. 4, pp. 28–35, 2002.
[52] N. Naz, F. Gulab, and M. Aslam, “Development of qualitative semi-structured interview guide for case study research,” Competitive Social Science Research Journal, vol. 3, no. 2, pp. 42–52, 2022.
[53] O. I. Obaid, A. H. Ali, and M. G. Yaseen, “Impact of ChatGPT on scientific research: Opportunities, risks, limitations, and ethical issues,” Iraqi Journal for Computer Science and Mathematics, vol. 4, no. 4, pp. 13–17, 2023.
[54] A. M. O’Connor, G. Tsafnat, J. Thomas, P. Glasziou, S. B. Gilbert, and B. Hutton, “A question of trust: Can we build an evidence base to gain trust in systematic review automation technologies?” Systematic Reviews, vol. 8, pp. 1–8, 2019.
[55] H. Pearce, B. Ahmad, B. Tan, B. Dolan-Gavitt, and R. Karri, “Asleep at the keyboard? Assessing the security of GitHub Copilot’s code contributions,” in 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022, pp. 754–768.
[56] N. Perry, M. Srivastava, D. Kumar, and D. Boneh, “Do users write more insecure code with AI assistants?” in Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2023, pp. 2785–2799.
[58] B. Puryear and G. Sprint, “GitHub Copilot in the classroom: Learning to code with AI assistance,” Journal of Computing Sciences in Colleges, vol. 38, no. 1, pp. 37–47, 2022.
[59] J. Qadir, “Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education,” in 2023 IEEE Global Engineering Education Conference (EDUCON). IEEE, 2023, pp. 1–9.
[61] D. A. Rettinger and Y. Kramer, “Situational and personal causes of student cheating,” Research in Higher Education, vol. 50, pp. 293–313, 2009.
[67] S. J. Uddin, A. Albert, A. Ovid, and A. Alsharef, “Leveraging ChatGPT to aid construction hazard recognition and support safety education and training,” Sustainability, vol. 15, no. 9, p. 7121, 2023.
[70] F.-Y. Wang, Q. Miao, X. Li, X. Wang, and Y. Lin, “What does ChatGPT say: The DAO from algorithmic intelligence to linguistic intelligence,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 3, pp. 575–579, 2023.
12 Appendix A
The following are the results from the initial phase (Phase I) of the study.
13 Appendix B
The following interview script was followed when interviewing students for the study.
9. What do you think your teachers think about ChatGPT? Do they assume that you
all use it?
10. Has the teaching been adapted to use GenAI tools (like ChatGPT)?
11. Do you Google first before using ChatGPT, or do you go straight to ChatGPT?
15. How often do you use ChatGPT? Do you use it daily or only occasionally?
16. What do you do before using GenAI tools when you’re stuck? What does the
process look like there?
19. If you were to guess what a course leader thinks about whether students use AI, what would you say?
20. Have you had any professor who has encouraged you to use GenAI in any course?
21. What do you think teachers think about students using ChatGPT?
22. How do you assess whether the answer ChatGPT gives you is correct, rather than something it has made up?
23. Have you found any kind of threat to your privacy or security when using ChatGPT?
24. Do you think students are dependent on GenAI for their studies?
25. What are your views on whether students’ dependency on GenAI is making them less efficient?
26. Do you think students need proper education and training to use GenAI tools?
27. What are your views on whether the usage of GenAI tools for studies is directly related to plagiarism?
30. When was the last time you used any GenAI tool?
32. Have you ever heard of the term “hallucination” in the context of AI?
34. Do you have any views on GenAI tools or the future of GenAI tools?