
Exploring ethical implications and ethical considerations of AI in eLearning in UAE higher education using UTAUT and social constructivism: An exploratory study of UAE postgraduates

Full paper

Published on Oct 28, 2024

Abstract

This empirical exploratory study investigates the ethical implications of artificial intelligence (AI) in education and eLearning, a pressing concern in modern digital learning, especially highlighted during the Covid-19 pandemic. The study’s significance lies in its focus on understanding how AI integration affects ethical considerations in educational settings, a critical aspect given the rapid digital transformation in learning. The research involved surveying sixteen postgraduate students in the United Arab Emirates, using the Unified Theory of Acceptance and Use of Technology (UTAUT) as the analytical framework. The UTAUT model, while primarily used for technology acceptance, is employed here in an exploratory capacity to identify potential ethical dimensions related to AI adoption, including data privacy, bias and transparency. Key findings reveal that 94% of participants were optimistic about AI’s role in enhancing education, with 50% strongly believing that AI can significantly boost academic performance. However, significant ethical concerns also emerged, with 56% of participants expressing moderate to high levels of concern regarding data privacy and 88% worried about the potential for AI to introduce biases in educational outcomes. These findings underscore the necessity for ethical oversight, robust data governance, and the creation of inclusive, transparent educational AI systems. This exploratory study contributes to the discourse on ethical AI by offering preliminary insights into user perceptions and laying the groundwork for more extensive future research. Nevertheless, these findings should be considered preliminary due to the small sample size and limited geographic scope. Future research should draw on a larger, more diverse sample and employ qualitative approaches to probe these insights further and to examine emerging ethical considerations as AI technologies advance in educational contexts.

Keywords: artificial intelligence in education; ethical implications of AI; AI and educational ethics; educational data privacy; UTAUT model in education; technology acceptance in eLearning; AI transparency in education

Part of the Special Issue Generative AI and education

1. Introduction

This exploratory research investigates the ethical aspects of AI in education and eLearning, examining AI’s role in enhancing learning alongside challenges like data privacy and algorithmic bias. As AI continues to be integrated into educational systems, the rapid expansion of AI-driven technologies has raised substantial ethical issues, including risks to data privacy, the possibility of algorithmic bias shaping educational outcomes, and wider implications for student autonomy and the role of instructors. This study focuses on postgraduate students from one academic institution in Abu Dhabi, United Arab Emirates, who were surveyed through Microsoft Forms. The motivation for this research stems from the growing dependence on AI in educational contexts, especially highlighted during the Covid-19 pandemic when online learning surged. As AI’s influence in education continues to expand, there is an urgent need to understand its ethical implications, particularly in regions like the UAE, where digital transformation in education is rapidly advancing. Despite the potential benefits, there is a lack of comprehensive studies addressing the ethical challenges associated with AI integration in higher education settings. This gap is particularly evident in the context of the UAE, where the cultural and educational landscapes are unique and may present distinct ethical considerations. This study is particularly relevant to Technology-Enhanced Learning (TEL) scholars as it intersects technology with education, offering initial insights into AI’s impact on learning and the ethical considerations involved. Given the exploratory nature of this study, the findings are not intended to be generalisable but to provide a foundation for further research in this evolving field. Drawing from Bligh and Lee (2020) and Kirkwood and Price (2014), the study articulates AI’s transformative potential in learning environments and its ethical complexities. The recent global shift towards online learning induced by the Covid-19 pandemic has been characterised by increased dependence on AI to provide a rich, adaptable learning experience tailored to the specific needs of learners across various academic specialities.

This study seeks to address the following research gap: while much has been written about the technical benefits of AI in education, limited empirical research explores the ethical implications, particularly in Middle Eastern contexts like the UAE. By focusing on the perspectives of postgraduate students in the UAE, this study aims to contribute to the discourse on ethical AI integration in education, ultimately providing insights that could inform policy and practice in similar educational settings globally.

AI’s evolving role in online education is not just a technological evolution but an educational revolution, enhancing the accessibility and personalisation of learning (VanLehn, 2011). AI’s multifaceted role in online education has become increasingly critical, particularly its capacity to analyse extensive learner behaviour and performance data to deliver customised educational content. This application of AI in education uses advanced analytics to craft individual learning paths, elevating student engagement and facilitating academic excellence. Implementing such adaptive learning technologies increases the efficacy of educational delivery and enables a more intuitive grasp of complex concepts.

AI’s role in providing instant feedback to learners is crucial, significantly improving academic performance and aiding in the quick correction of misunderstandings (Seo et al., 2021). This immediate feedback allows learners to continuously assess and adjust their learning strategies for better outcomes. Furthermore, AI fosters a dynamic interaction between learners and educators, enhancing engagement and collaboration through AI-powered platforms (Seo et al., 2021). This interaction reshapes the learner-educator dynamic into a more interactive and mutually beneficial relationship. AI’s incorporation into education brings innovations like personalised learning paths and automated tools, significantly enhancing educational experiences and student performance (Luckin et al., 2016).

However, this integration also introduces ethical concerns, including data privacy and the risk of perpetuating biases, requiring careful consideration and responsible application of AI in educational settings. This case study explores the ethical dimensions of AI application in the educational context, focusing on postgraduate students’ experiences at academic institutions in the United Arab Emirates. The research seeks to clarify the ethical effects of AI’s expanding influence on education and eLearning. It aims to analyse the myriad ways AI technologies intersect with ethical principles and map the contours of this evolving landscape. The present study explores the nuances of AI’s role in personalising learning and the ethical consequences accompanying its integration into online education. This exploration includes an analysis of AI’s capacity to shape educational content based on algorithmic interpretations of user data and the potential risks associated with such data-driven personalisation.

This case study will investigate the ethical considerations inherent in managing sensitive educational data, the integrity of AI-generated feedback, and the degree to which AI systems can maintain objectivity without infringing upon educational fairness. Moreover, the research critically examines the dialogical dynamics caused by AI tools, considering how these systems affect the interactive aspects of the educational experience. It evaluates the extent to which AI-facilitated communication platforms might enhance or impede the academic journey of postgraduate students.

2. Theoretical framework

The fast assimilation of Artificial Intelligence (AI) in education and eLearning necessitates an in-depth exploration of its ethical dimensions. This study is grounded in social constructivism as the ‘Grand Theory’, which posits that knowledge and meaning are actively constructed through human experiences and interactions (Driscoll, 2000). This theoretical lens is essential for examining how postgraduate students in the UAE interpret and engage with AI technologies in educational settings, shaping their ethical considerations. The Unified Theory of Acceptance and Use of Technology (UTAUT) serves as the ‘Middle Range Theory’, providing a structured framework to analyse the factors influencing AI adoption, including performance expectancy, effort expectancy, social influence, and facilitating conditions (Venkatesh et al., 2012). Together, these theories support the development of the research instrument and guide the analysis of the ethical implications observed in the study.

2.1 Grand theory—Social constructivism

Drawing on social constructivism theory, this study emphasises that knowledge and meaning are actively constructed through human experiences and interactions, a concept especially relevant to AI in eLearning (Driscoll, 2000). In this context, educators, learners, technologists, and policymakers are not only users but active participants in shaping AI within ethical frameworks. This approach is important as it explores how stakeholders, with their varied cultural, social, and professional backgrounds, perceive and interact with AI technologies.

The study investigates the complex ways these individuals understand and integrate the ethical aspects of AI in education, aiming to reveal how these technologies blend into educational practices. It seeks to understand how stakeholders’ experiences with AI in eLearning shape the collective ethical norms and standards in this rapidly advancing domain.

2.2 Middle range theory—UTAUT model

The Unified Theory of Acceptance and Use of Technology (UTAUT) is a theoretical model developed to explain and predict user behaviours regarding the adoption and use of technology. Proposed by Venkatesh et al. in 2003, UTAUT synthesises elements from previous models, including the Theory of Reasoned Action, the Technology Acceptance Model, and the Theory of Planned Behaviour (Venkatesh et al., 2003).

The model suggests that four key constructs - performance expectancy, effort expectancy, social influence, and facilitating conditions - are fundamental determinants of technology acceptance and usage behaviour (see Figure 1). UTAUT has been widely applied and validated in various contexts, making it a robust framework for understanding the complexities of technology adoption across different settings (Venkatesh et al., 2003).

Figure 1: UTAUT Model

Venkatesh et al.’s comprehensive study synthesised elements from existing technology acceptance models into the Unified Theory of Acceptance and Use of Technology (UTAUT), a framework renowned for predicting and explaining technology adoption behaviours. This research utilises UTAUT to examine how educators and learners accept AI in education, highlighting its relevance in analysing AI’s adoption within educational contexts. The model’s key constructs - performance expectancy, effort expectancy, social influence, and facilitating conditions - have been validated across various studies, demonstrating their applicability in different technological and educational settings (Venkatesh et al., 2003). The UTAUT model is instrumental in this study for exploring how AI in education meets performance expectations and commitment to ethical standards in academia.
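For readers less familiar with UTAUT, its core structure can be summarised as a pair of regression-style equations. The sketch below is illustrative only: the coefficient symbols are ours, not the paper’s, and Venkatesh et al. (2003) additionally model gender, age, experience, and voluntariness of use as moderators of these relationships.

```latex
% Illustrative linear form of UTAUT (notation ours, not the paper's).
% Behavioural intention (BI) is driven by performance expectancy (PE),
% effort expectancy (EE) and social influence (SI); use behaviour (UB)
% is driven by intention and facilitating conditions (FC).
\begin{align}
  \mathrm{BI} &= \beta_0 + \beta_1\,\mathrm{PE} + \beta_2\,\mathrm{EE} + \beta_3\,\mathrm{SI} + \varepsilon_1 \\
  \mathrm{UB} &= \gamma_0 + \gamma_1\,\mathrm{BI} + \gamma_2\,\mathrm{FC} + \varepsilon_2
\end{align}
```

In the present study these constructs are not estimated statistically; they serve as a conceptual map linking each survey item to an ethical dimension of AI adoption.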

Effort expectancy, another cornerstone of the UTAUT model, assesses the perceived ease associated with the operation of AI tools. This aspect could significantly influence their ethical acceptability and sustained use (Venkatesh et al., 2012). The model also acknowledges the weight of social influence, recognising that the opinions and attitudes of peers and authorities can profoundly impact one’s stance towards the ethical deployment of AI in eLearning scenarios.

Facilitating conditions, which refer to the availability of technical and infrastructural support, are also integral to this model, highlighting the practical aspects that could sway the ethical adoption of AI. This research considers these constructs to be focal in shaping the ethical landscape of AI within education, positing that their interplay could dictate the route of AI adoption in alignment with ethical norms and practices.

Applying the UTAUT model, this study is poised to dissect and comprehend the complex dynamics at play in the ethical acceptance of AI in education/eLearning. This framework is not just about acceptance in a technical sense but encompasses a broader ethical purview, accounting for the complexities of human values and societal expectations. The goal is to provide an overarching view summarising the ethical implications of AI technologies, thus guiding stakeholders through the intricate decision-making process that AI in education commands. Thus, this model, coupled with the principles of social constructivism, serves as a solid theoretical base for a comprehensive examination of the ethical considerations that AI in eLearning entails. Through this lens, the study seeks to contribute a nuanced, ethically informed narrative to the ongoing discourse on AI in education, ultimately fostering a reflective and conscientious integration of these pervasive technologies.

The UTAUT framework offers valuable insights into technology adoption by evaluating key determinants such as performance expectancy, effort expectancy, social influence, and facilitating conditions. It also helps address the ethical dimensions critical to AI integration in education. This paper aims to justify the appropriateness of the UTAUT model by supplementing it with ethical theories and frameworks. Specifically, it integrates principles from ethical theories such as deontology and consequentialism to examine issues of data privacy, algorithmic bias, and transparency, which are pivotal in the ethical deployment of AI. The study also references relevant ethical guidelines from international bodies such as The United Nations Educational, Scientific and Cultural Organization (UNESCO) and the European Union, ensuring a comprehensive ethical analysis that aligns with global standards. By bridging the gap between technology acceptance and ethical considerations, this approach provides a more holistic understanding of AI’s role in education, ensuring that the benefits of AI are realised while mitigating potential ethical risks.

3. Literature review

In the wake of the Covid-19 pandemic, there has been a pronounced assimilation of Artificial Intelligence (AI) tools and applications in education and eLearning. This case study is structured around focal insights derived from an extensive academic corpus, emphasising the transformative potential of AI in revolutionising educational approaches and facilitating customised learning experiences. At the same time, the study sheds light on the noticeable ethical challenges inherent in AI integration, addressing data privacy, potential biases within AI algorithms, the sovereignty of pedagogical decisions, and the evolving function of educators. As AI’s influence increases within educational models, the analysis highlights the need for a discerning balance, encouraging leveraging AI’s capabilities whilst steadfastly preserving ethical standards in educational endeavours.

The literature was identified through a comprehensive database search, including Lancaster University Library resources, JSTOR, and Google Scholar. The inclusion criteria were articles, books, and conference papers published between 2019 and 2023, focusing on AI applications in education and their ethical implications. Sources were also selected for their relevance to the integration of AI in eLearning, particularly in the context of the Covid-19 pandemic. Exclusion criteria involved sources older than 2019 (to ensure current relevance), and studies not directly addressing AI’s role in education. This methodological approach ensured a focused yet comprehensive collection of literature, allowing for a nuanced analysis of AI’s educational benefits and ethical complexities.

In this case study, an extensive review of the literature was conducted, categorising the findings according to the three ethical concerns that have emerged with the integration of artificial intelligence (AI) in educational settings as follows.

3.1 Privacy and autonomy

Privacy and autonomy of student data have become significant concerns in the expanding landscape of e-learning brought about by learning analytics. Thomas D. Parsons (2020) argues that personalised learning through AI tools, which are becoming more prevalent, raises critical ethical dilemmas even as it proves useful. The growing collection and analysis of large amounts of students’ private information make learner privacy a more serious concern for educational institutions. Moreover, when algorithms start influencing how students learn, it becomes difficult to protect their autonomy, since students may neither understand the process nor meaningfully consent to it.

Artificial Intelligence in Education (AIED) has transformative potential but poses great risks concerning data protection and cybersecurity. Nguyen et al. (2022) describe how AIED generates large volumes of data which, if mismanaged, could lead to severe violations of privacy rights. They conclude that mechanisms such as robust data protections and informed consent should be implemented, especially in multicultural education environments where privacy may be conceived differently.

Bird et al. (2020) stress the ethical challenges at the intersection of AI and privacy. As AI tools like Intelligent Personal Assistants become more common, they tend to blur the boundaries of privacy. Beyond their use within academic institutions, these devices are also utilised by many other entities for surveillance purposes, threatening personal liberties. While AI offers many positive impacts, it is important not to neglect individual rights and freedoms.

Generally speaking, the literature points to a fine balance between the ethical requirements of privacy and those of autonomy. Individualised teaching via AI has clear advantages over conventional approaches, yet the increasing use of learning analytics discussed by Parsons (2020) reflects a wider trend whereby educators must decide whether customised instruction is worth the cost of data mining and algorithmic manipulation. Similarly, Nguyen et al. (2022) and Bird et al. (2020) argue that AI in education can pose wider societal risks, calling for caution whenever these technologies are deployed so that ethical considerations and individual rights are prioritised.

3.2 Ethical concerns surrounding inequality

AI’s role in education is promising but raises questions about societal fairness. Bird et al. (2020) note that AI can address significant global challenges; however, they insist that if the distribution of such technologies is not equal globally, this could increase inequality. They suggest that AI should be seen as a common good for all and not merely a business tool. This reflects a wider ethical debate about balancing technological development and fair social practices.

According to Parsons (2020), divergent approaches to privacy and data exist worldwide, further highlighting ethical dilemmas when implementing learning analytics. In America, utilitarian thinking tends to prevail over individual privacy concerns, emphasising the collective benefits derived from data use. Europe, on the other hand, often takes a deontological approach, where individual rights and privacy matter most. This indicates how difficult it is to create universally binding ethics for AI in education.

On another note, Nguyen et al. (2022) assert that specific, contextualised ethical rules are essential for AI applications in schools, because generic codes may not capture details unique to AIED, such as the sociotechnical nature of educational settings. Instead, they propose that guidelines should differ depending on the type of education to which they are applied, drawing on UNESCO’s international guidelines and those made available by the European Union (Nguyen et al., 2022), among others, so that expansive and responsible models of Artificial Intelligent Systems (AIS) are in place.

The literature on inequality heavily emphasises ensuring that AI implementation promotes social justice. Bird et al. (2020) argue that AI should be treated as a shared resource rather than a privately owned asset. Additionally, Parsons (2020) shows the practical difficulties educators face when applying artificial intelligence across cultural boundaries. Moreover, Nguyen et al. (2022) point out the need for ethical frameworks suited to AIED, given the complexity of blending technology and social contexts. Overall, these findings imply that the equitable distribution of AI is not a technical problem but one with deep ethical implications that requires tailor-made responses.

3.3 Transparency

Transparency is critical in ethically deploying AI in education (Wiesner, 2020). Wiesner argues that trust can only be built among educators and learners by clearly articulating how data will be collected, used, and owned. In addition, there is an unresolved dispute about who owns students’ data: while students generate it, universities also acquire and store it. Institutions therefore need to adopt open policies that define these boundaries.

In his article, Parsons (2020) focuses on student-generated and internal uses of student data, which are largely unmonitored within educational institutions. While some processes might be implemented to vet research proposals before utilising student data, internally produced data from teachers and technologists go unchecked. This absence of oversight creates ethical issues around the possible misuse of information, making it necessary for AI tools used in education to be deployed openly and thus monitored.

Bird et al. (2020) focus on deep learning-based AI systems that act as ‘black boxes’. Such systems often make opaque or incomprehensible decisions that carry serious real-world repercussions, for instance in legal judgments and educational assessments. According to the authors, this requires making them more transparent and understandable, especially as they become increasingly integrated into daily life. Moreover, IEEE (2019) highlights transparency as critical for autonomous intelligent systems (A/IS), which often have far-reaching societal ramifications. The IEEE report argues for new standards that allow the measurement and testing of transparency, enhancing public confidence and guaranteeing ethical operation. It considers these principles essential guidelines for the ethical management of AI systems, keeping such systems accountable and sustaining public trust.

The transparency literature collectively emphasises its importance as a building block for ethically implementing AI in education. As such, Nguyen et al. (2022) suggest that there need to be clear transparency policies concerning data usage. Furthermore, Parsons (2020) discusses the lack of oversight of internally generated student data. Moreover, it is important to make AI systems transparent to enhance trust and ensure that ethical practices are maintained (Bird et al., 2020; IEEE, 2019). These works point out that transparency is not just a technical requirement but an ethical imperative that determines how responsibly AI is adopted in education.

3.4 Identification of the research gap

This case study has carefully navigated the wide landscape of academic literature, spotlighting the transformative impact and ethical complexities introduced by integrating Artificial Intelligence (AI) in education and eLearning. From Parsons’s (2020) exploration of privacy and autonomy in the face of learning analytics to Nguyen et al.’s (2022) discourse on the need for tailored ethical frameworks, the study encapsulates a diverse array of perspectives and concerns. The literature highlights a critical divide: on the one side, AI’s capacity to transform education, and on the other, its tendency to create significant ethical dilemmas, including data privacy issues, biases, and concerns around transparency and ownership. Notably, the discussions around regional ethical approaches and the call for specialised ethical guidelines in education (Bird et al., 2020; Nguyen et al., 2022; Parsons, 2020) highlight a critical stage where global consensus and contextualised policies intersect.

Even with the rich insights provided, a visible gap emerges at the confluence of these themes. There appears to be a lack of empirical research specifically addressing the ethical dimensions of AI within the educational sector, particularly in the context of the Unified Theory of Acceptance and Use of Technology (UTAUT). This gap is where the current paper positions itself, attempting to bridge the divide by empirically investigating the nuanced interplay of ethical considerations and AI acceptance in education. Through this endeavour, the study aspires to contribute a unique perspective to the discourse, enriching the understanding of AI’s ethical integration in educational settings and providing informed guidance for stakeholders navigating this evolving area.

4. Research questions

RQ1: To what extent do postgraduate students in academic institutions in the United Arab Emirates perceive and engage with the ethical implications of AI integration in education, particularly concerning data privacy, algorithmic bias and transparency?

RQ2: How do the constructs of performance expectancy, effort expectancy, social influence, and facilitating conditions (as outlined in the UTAUT model) influence these perceptions and engagements within the context of their cultural and social settings?

5. Research design

5.1 Methodology

Given the exploratory nature of this research, the study focuses on understanding how postgraduate students engage with and interpret the ethical use of AI in eLearning environments. The study is exploratory because it aims to investigate a relatively new and evolving area of study - AI ethics in education - without the intent of making statistically significant generalisations from a small sample size. While small, the sample size is appropriate for identifying early patterns and themes rather than aiming for broad generalisation. The focus was on depth, aiming to provide insights that will guide future research with larger, more diverse groups, laying the foundation for further studies in this field.

This exploratory study employed an online survey targeting sixteen postgraduate learners from an academic institution in Abu Dhabi, United Arab Emirates. Participants were selected based on their active engagement with AI tools in their academic pursuits, making them suitable for providing initial insights into the ethical implications of AI in eLearning.

The survey was facilitated through Microsoft Forms, consisting of fifteen questions designed to explore participants’ perceptions of AI’s ethical impact. The Unified Theory of Acceptance and Use of Technology (UTAUT) model informed the survey questions, specifically examining how performance expectancy, effort expectancy, social influence, and facilitating conditions relate to ethical considerations in AI use.

Due to the small sample size, this study does not aim for statistical significance but rather seeks to uncover patterns and themes that can inform future research. Descriptive statistics, such as mean and standard deviation, were calculated to identify preliminary trends in the data, while qualitative analysis was conducted to interpret open-ended responses, providing richer insights into the participants’ perceptions.

5.1.1 Ontology and epistemology

Drawing from an interpretivist ontology and a social constructivist epistemology, the study emphasises the socially constructed nature of ethical perceptions and experiences with AI in education (Crotty, 1998). These philosophical perspectives informed the design of the survey instrument and the interpretative approach used in analysing the data. The UTAUT model was specifically adapted to integrate these ethical dimensions, ensuring that the survey questions reflected both the theoretical constructs of technology acceptance and the ethical concerns pertinent to AI integration in education. This framework provides a comprehensive lens through which the ethical implications of AI in eLearning can be understood and analysed, particularly in the context of higher education in the UAE.

The study is grounded in social constructivism as the ‘Grand Theory’, which posits that knowledge and meaning are actively constructed through human experiences and interactions (Driscoll, 2000). This perspective is crucial for exploring how educators, learners, technologists, and policymakers use AI and actively shape its integration within ethical frameworks in education. The Unified Theory of Acceptance and Use of Technology (UTAUT) serves as the ‘Middle Range Theory’. UTAUT (Venkatesh et al., 2003) provides a structured approach to analysing technology adoption behaviours, but this study extends its application by integrating ethical dimensions. The four key constructs of UTAUT - performance expectancy, effort expectancy, social influence, and facilitating conditions - are used to explore how these factors influence the ethical considerations related to AI adoption in education.

5.1.2 Instrument development

The survey instrument was meticulously developed based on the Unified Theory of Acceptance and Use of Technology (UTAUT) model, with additional considerations drawn from social constructivist theory. The UTAUT model was the foundation for identifying the key determinants of technology acceptance, such as performance expectancy, effort expectancy, social influence, and facilitating conditions. However, to comprehensively address the ethical implications of AI integration in education, the survey was further informed by social constructivist theory, which emphasises the socially constructed nature of knowledge and meaning.

This dual-theoretical approach ensured that the survey questions were designed to capture the participants’ acceptance of AI and their ethical concerns, particularly data privacy, algorithmic bias, and transparency. These ethical dimensions were framed within the participants’ social and cultural contexts, acknowledging that their interactions with AI are shaped by the norms and values of their environment. For example, questions related to data privacy were crafted to understand how participants perceived the risks associated with AI based on their experiences and the broader societal discourse on privacy. Similarly, items addressing algorithmic bias were designed to elicit responses that reflected the participants’ concerns about fairness and equity in AI applications within their educational settings.

Integrating UTAUT and social constructivist theory in the data analysis phase provided a robust framework for interpreting the results. The UTAUT model allowed for a structured analysis of AI acceptance factors. At the same time, the social constructivist lens facilitated a deeper understanding of how these factors are interwoven with ethical considerations. This methodological synergy enabled a nuanced exploration of how postgraduate students in the UAE engage with AI in education, not only from a technological standpoint but also through an ethical and socially contextualised perspective.

5.2 Research methods

This study employed an online survey targeting sixteen postgraduate learners from an academic institution in Abu Dhabi, United Arab Emirates. The survey was conducted online using Microsoft Forms, with participants providing informed consent before participation (see Appendix A). Invitations were sent via email to a selected group of students who had previously engaged with AI tools in their studies. The survey consisted of fifteen questions (see Appendix B) designed to capture nuanced insights into participants’ perceptions of the ethical dimensions associated with AI’s integration in educational settings. The survey covered the four key elements of the UTAUT model: performance expectancy, effort expectancy, social influence, and facilitating conditions.

The responses were subjected to descriptive statistical analysis in the analytical phase (using MS Excel). Key metrics, including mean and standard deviation, were computed to distil patterns and tendencies.

5.2.1 Study area and population

This research explored the confluence of artificial intelligence (AI) with education and e-learning, concentrating on the ethical issues and observations within these digitally enhanced learning environments. The focal point of this case study was a critical examination of the integration of AI tools within educational settings, addressing the ethical considerations that this integration precipitates. Specifically, the study inspected the implications of AI deployment in teaching and learning processes, where ethical challenges are increasingly salient. This encompassed analysing how AI technologies influence educational practices and the potential ethical dilemmas they might introduce, from data privacy to the fairness of AI-mediated outcomes. By exploring the intersections of AI applications within the educational landscape, the study aimed to highlight the complex ethical terrain that educators, learners, and technologists must navigate in advancing e-learning environments.

The study focused on postgraduate students in Abu Dhabi, United Arab Emirates, due to the region’s rapidly advancing educational environment and its emphasis on integrating technology, particularly in eLearning. Following the Covid-19 pandemic, the UAE had strongly supported the digital transformation of education, making it a fitting context to explore the ethical considerations surrounding AI.

For this exploratory study, a purposive sample of sixteen postgraduate students who were using AI-powered eLearning applications was selected. The sample size, though small, is suitable for this type of study, aimed at identifying early patterns and themes rather than producing widely generalisable results. The emphasis was on depth rather than scope, with the goal of generating insights that could guide future research involving larger, more diverse groups. These findings were intended to lay the groundwork for further studies in this developing field.

In this study, the population comprised learners currently engaged in education and e-learning: a diverse group of learners and students from various backgrounds and disciplines, of different ages and levels of experience in education/eLearning. Data were collected through surveys to capture a range of viewpoints on the ethical impacts and ethical considerations of using AI in education/eLearning.

5.2.2 Sampling procedure

There are different methods for sampling depending on the purpose of the case study. In this study, the sampling procedure was as follows:

  • Define the population.

  • Determine the sample size.

  • Select a sampling technique.

  • Collect data.

  • Analyse data.

  • Draw conclusions.

5.2.3 Data collection method

In this case study, data were collected through a survey of fifteen questions (see Appendix B), which was shared with the participating learners and students through email invitations.

5.2.4 Data management

Data management for the case study involved ensuring that the collected data were accurate, relevant, and properly stored to enable meaningful conclusions to be drawn from the study. The participants’ personal information was treated as private and was not shared, ensuring privacy protection.

5.2.5 Data analysis

The responses were subjected to descriptive statistical analysis using Microsoft Excel. Key metrics such as mean, standard deviation, and frequency distributions were computed to distil patterns and tendencies. This analysis provided a quantitative foundation for understanding the ethical implications of AI as perceived by the participants.
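To make these analytical steps concrete, the short sketch below reproduces this kind of descriptive summary in Python with pandas. It is a minimal illustration only: the study itself used Microsoft Excel, and the column names and sixteen response values here are hypothetical, not the actual survey data.

```python
# Minimal sketch of the descriptive analysis described above.
# Assumes hypothetical 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) from sixteen participants; values are illustrative.
import pandas as pd

data = pd.DataFrame({
    "performance_expectancy": [5, 4, 5, 4, 5, 5, 4, 3, 5, 4, 5, 4, 5, 4, 5, 3],
    "data_privacy_concern":   [4, 3, 5, 4, 2, 4, 3, 5, 4, 3, 4, 2, 5, 4, 3, 4],
})

# Mean and standard deviation per item, the key metrics reported here.
summary = data.agg(["mean", "std"]).round(2)
print(summary)

# Frequency distributions (% of respondents per response level),
# mirroring the percentages reported in the Findings section.
for col in data.columns:
    freq = data[col].value_counts(normalize=True).sort_index() * 100
    print(f"\n{col} (% of respondents):\n{freq.round(1)}")
```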

5.2.6 Ethical considerations

5.2.6.1 Protection from harm

In this research, specific measures were implemented to protect participants from potential harm. Key actions included ensuring the anonymity of responses from postgraduate students, securing all data, and restricting access solely to the researcher. This approach was essential to maintain confidentiality and protect the integrity of participant contributions. Data were stored in Microsoft OneDrive, protected by two-factor authentication using the Microsoft Authenticator One Time Password (OTP) application. Because the survey was created using Microsoft Forms and did not request any personal information, even the researcher could not identify who had responded.

5.2.6.2 Relational ethics

A foundational aspect of the study involved establishing clear, transparent communication with participants. This required clarifying the research’s purpose, potential impacts, and the participants’ roles, thus fostering an environment of trust and respect. Informed consent was an essential part of this process: it explained that participation was fully voluntary and allowed participants to withdraw at any point, ensuring their autonomy and comfort throughout the study. There was also no direct contact with students and no effect on their marks. Furthermore, the survey link was sent through the Student Affairs office.

5.2.6.3 Reflexive ethics

As an information technology (IT) and learning technology specialist, reflexive ethics played a central role in my approach to the research. Constant self-reflection on personal biases and assumptions, particularly regarding AI in education, was a continuous process. This reflexivity was instrumental in interpreting the research findings and in interactions with participants, ensuring that my perspective did not dominate but rather complemented the diverse viewpoints within the study.

6. Findings

The study, grounded in the Unified Theory of Acceptance and Use of Technology (UTAUT) model, delved into the ethical implications of AI in the education and eLearning sectors. This analysis categorises the findings according to the four key UTAUT elements while explicitly connecting them to the ethical considerations of AI use in education.

Given the study’s exploratory nature, these findings should be interpreted as initial insights rather than definitive conclusions. The small sample size limits the generalisability of the results. Still, the study nonetheless provides valuable perspectives that can guide future, more extensive research on the ethical use of AI in education.

6.1 Performance expectancy

According to 94% of the survey participants (see Appendix C, Survey Question 1), AI could potentially improve education quality and accessibility. This optimism stems from AI’s capacity to adapt teaching materials and individualise learning. Moreover, as seen in Figure 2, half of the respondents (50%) strongly agreed that the application of AI can enhance academic results, indicating a strong belief that AI could significantly boost academic performance (see Appendix C, Survey Question 2).

These findings have ethical implications in two ways. First, while improved learning outcomes have been attributed to AI-driven personalisation, privacy concerns and possible biases exist. Using algorithms to customise instructional content may lead to further widening of gaps if the base data are biased. Second, high anticipation for AI’s effect on academic performance could imply a potential over-dependence on technology, thereby overshadowing human-centred pedagogy, which is crucial.

Figure 2: Participant optimism about AI’s role in education

6.2 Effort expectancy

The study shows that 63% of the participants (see Figure 3) rated AI as ‘neutral’ in terms of its complexity (see Appendix C, Survey Question 3), implying it was not too hard for them to adopt, while a significant share (31%) perceived it as easy. Nonetheless, an appreciable learning curve remains, calling for user-centred design and targeted training programmes (see Appendix C, Survey Question 4).

One ethical consideration involved here is ensuring equal access to training and resources. Such tools can become exclusionary if they are not designed for the end user, especially users who are less digitally literate. This finding also points to a possible digital divide in which only those with adequate resources and support can use AI appropriately, thereby widening educational disparities.

Figure 3: Ease of use of AI tools

6.3 Social influence

The study found that social influence had a modest impact on the participants’ decisions about adopting AI-based resources. For example, 44% of the respondents reported a neutral influence from peers (see Appendix C, Survey Question 5), while 31% indicated a low level of social influence. At the same time, Figure 4 shows that 75% of respondents had engaged with AI educational tools following peer recommendations, implying a positive yet indirect social influence on adoption trends (see Appendix C, Survey Question 6).

This finding suggests that direct peer pressure itself is not very important; rather, the larger social context in which AI tools are introduced is crucial to their adoption. The ethical issue here revolves around the possibility of individuals being influenced to embrace AI technologies without fully understanding what they entail. This highlights the importance of informed and voluntary adoption practices, whereby users are aware of the ethical issues surrounding artificial intelligence as applied to education.

Figure 4: Social influence on AI adoption

6.4 Facilitating conditions

Regarding facilitating conditions, which indicate how able respondents felt their organisations were to provide technical assistance and organisational support for AI integration (see Figure 5), 25% responded very positively and 56% positively (see Appendix C, Survey Question 7), believing their institutions were well prepared for such improvements. Nonetheless, 19% noted some doubts about institutional readiness (see Appendix C, Survey Question 8). This may imply imbalances in resource allocation when integrating AI across different learning centres.

This result raises ethical concerns regarding fairness and equality in AI deployment. The fact that almost one out of five respondents had doubts about whether their organisations were ready suggests a possible unfairness in resource allocation. This might eventually lead to a state where only well-endowed centres could fully exploit AI’s advantages, thereby deepening educational disparities as they exist today. Half of the respondents also identified technical constraints, which underscores the need to continue investing in the infrastructure needed to use AI appropriately.

Figure 5: Institutional readiness for AI integration

6.5 Ethical concerns

A majority of respondents (56%) said they were concerned about the confidentiality of their behavioural data in interactions with AI-based educational systems (see Appendix C, Survey Question 9). It also emerged that only half of them (50%) were aware of their institution’s data privacy policies, as seen in Figure 6, which means that education providers must be more open and engaged (see Appendix C, Survey Question 10).

Figure 6: Awareness of data privacy policies

In the critical domain of ethical data usage, three-quarters (75%) of those surveyed agreed that any collection and utilisation of behavioural data should be strictly confined to enhancing the learning experience (see Appendix C, Survey Question 11). This underlines a significant expectation for ethical stewardship of data by educational institutions. A resounding 94% of participants considered it important for these institutions to maintain transparency regarding the use of such data, reiterating the importance of trust and ethical responsibility in educational practices (see Appendix C, Survey Question 12).

When it comes to the potential for inherent biases within AI technologies, a vast majority (88%) expressed apprehension about the possibility of these tools introducing bias or discrimination in educational outcomes (see Appendix C, Survey Question 13). This concern is further solidified by the strong consensus (75%) on the urgent need for the development and implementation of guidelines to address and reduce such biases (see Figure 7), ensuring equity and fairness in AI-augmented educational environments (see Appendix C, Survey Question 14). This emphasises the ethical responsibility of educational institutions to develop and implement frameworks that ensure fairness and equity in AI applications.

These results emphasise the ethical obligations of data management and bias reduction in AI systems. Institutional practices must fill the gap in knowledge about privacy policies to ensure user trust. The strong concern about partiality also indicates a wider anxiety about the fairness of AI systems and their influence on academic life.

Figure 7: Concern about AI bias

Finally, the research examined informed consent: 69% of the respondents agreed that they were given enough information on how data collected by AI technologies is used (see Appendix C, Survey Question 15). Nevertheless, 31% responded negatively, pointing to an area where institutions need to improve information dissemination to inform all stakeholders fully.

This shows that while most participants know about data handling practices, some remain unaware, raising ethical questions about the sufficiency of consent mechanisms. It is important to ensure that every user understands what it means for their data to be used by AI technologies, so that educational ethics remain intact.

7. Discussion

This exploratory study revealed several important findings regarding the ethical implications of AI in education, particularly within the context of UAE higher education. Most participants (94%) expressed optimism about AI’s potential to enhance educational quality, with half of them strongly believing in its capacity to improve academic performance. However, significant ethical concerns also emerged, with 56% of respondents expressing moderate to high levels of concern over data privacy and 88% indicating apprehension about the potential for AI to introduce bias into educational outcomes. Furthermore, while most participants did not find AI tools difficult to use, disparities in digital literacy and access to training resources were noted. Participants also highlighted the need for transparency in data usage, with only 50% aware of their institution’s data privacy policies. These findings underscore the importance of ethical oversight, transparency, and equitable access to AI-driven educational tools.

The outcomes of this exploratory study contribute to the existing literature on the ethical implications of AI in education and technology-enhanced learning (TEL). While the findings align with broader themes in the literature, such as the importance of data privacy and transparency, the small sample size limits their generalisability. Therefore, the study should be seen as a starting point for further research rather than providing conclusive evidence.

7.1 Performance expectancy

The high-performance expectancy observed in this study aligns with previous research that has emphasised AI’s potential to enhance academic outcomes. Studies by Luckin et al. (2016) and VanLehn (2011) have supported the idea that AI can significantly improve student achievement through personalised learning environments and adaptive technologies. This optimism is corroborated by the findings of this research, where 94% of participants agreed that AI could be beneficial in enhancing education. Nonetheless, an essential addition to this discourse is the ethical dimension concerning data governance, which must ensure that AI-driven personalised learning does not compromise student privacy or introduce bias, as previously raised by Nguyen et al. (2022) regarding the safe use of personal information in AI-enabled digital spaces.

7.2 Effort expectancy

The mixed responses regarding effort expectancy, with many respondents finding AI tools relatively easy to use, reflect the challenges identified in the Technology Enhanced Learning (TEL) literature over time. Earlier works, such as those by Venkatesh et al. (2003), have demonstrated that ease of use is crucial in adopting technological systems. While these tools are generally seen as accessible, there remains a need for user-oriented design and targeted training, given the disparities in digital literacy among educators and learners. This is why Seo et al. (2021) emphasised the importance of intuitive designs for AI in educational settings to facilitate widespread adoption.

7.3 Social influence

The modest role of social influence observed in this study is consistent with the UTAUT model, which suggests that social factors become less significant as users gain experience with technology. Venkatesh et al. (2012) have supported this, showing that personal experience and perceived efficacy are often more influential than peer pressure in decisions regarding technology adoption. However, the study also highlights the need for positive user experiences and the perceived benefits of the technology to ensure continued use of AI in education, despite initial encouragement from peers and authorities.

7.4 Facilitating conditions

This research’s positive assessment of facilitating conditions aligns with previous studies emphasising the importance of institutional support during technology adoption. Facilitating conditions were identified as critical determinants of technology acceptance by Venkatesh et al. (2003), and these findings underscored the importance of adequate infrastructure and resources in schools for successful AI integration. However, this study also identified a technological readiness gap that may impede the full realisation of AI’s potential in education, a concern previously raised by Parsons (2020), who noted that many educational settings lack the necessary support and resources for AI implementation.

7.5 Ethical considerations

The ethical issues identified in this research, particularly regarding data privacy, bias and transparency, are consistent with the broader literature on AI ethics in education. Studies by Bird et al. (2020) and the IEEE (2019) have emphasised the need for transparency and accountability in AI systems. This study’s findings underscore the importance of well-articulated ethical guidelines and frameworks to address these challenges. The high level of concern among participants about data privacy and the potential for AI bias echoes the findings of Parsons (2020) and Nguyen et al. (2022), who called for more robust data protection measures and ethical oversight in AI-enhanced learning environments.

8. Limitations and conclusion

8.1 Limitations

The following are the limitations that have been recognised, which need to be taken into consideration in future studies:

  • Sample Size: The small sample size of sixteen participants limits the ability to generalise the findings. This exploratory study highlights critical areas of concern that warrant further investigation with larger sample sizes.

  • Scope: The study focuses on a specific geographic region and educational context, which may not be applicable to other settings. Future research should consider a broader range of contexts to enhance the generalisability of the findings.

  • Data Collection Method: While surveys provide helpful quantitative data, they may not capture the full depth of participants’ experiences. Incorporating qualitative methods, such as interviews or focus groups, could provide richer insights.

8.2 Conclusion

In conclusion, this study investigated the ethical implications of AI in education and eLearning, examining how ethical concerns like data privacy and algorithmic bias counterbalance AI’s benefits in learning. Despite this study’s limitations, namely its small sample size, narrow geographic focus, and methodological constraints, the paper offers critical insights for Technology Enhanced Learning (TEL) researchers, merging advanced technology with pedagogical practices. Drawing on foundational TEL concepts (Bligh & Lee, 2020; Kirkwood & Price, 2014), the research underscores AI’s transformative potential in education and the concurrent ethical challenges it introduces.

The findings reveal significant optimism about AI’s potential to enhance educational experiences and substantial ethical concerns, particularly regarding data privacy, bias and transparency. These findings have significant implications for the future of AI in education. As AI continues to transform educational practices, addressing the ethical challenges identified in this study is crucial. Educational institutions should prioritise developing and implementing ethical guidelines that protect student privacy, ensure transparency, and mitigate biases within AI systems. Additionally, there is a need for ongoing research into the long-term effects of AI on educational equity and fairness.

This research highlights the prevailing positive sentiment and optimism regarding AI in education while emphasising the numerous areas of concern and challenges that demand attention to ensure the ethical implementation of AI in educational environments. Dedication to enhancing technological literacy is imperative, as is providing educators and learners with the necessary support to leverage AI effectively. Moreover, institutional readiness and investment in technology infrastructure are essential prerequisites for successfully integrating AI in educational settings. The study underscores the importance of ethical oversight, robust data governance, and the creation of inclusive, transparent educational AI systems. Future research should incorporate qualitative methods and a broader range of stakeholder perspectives to provide a more comprehensive understanding of AI’s ethical impact on education. By addressing these ethical considerations, stakeholders can better harness AI’s potential while safeguarding the integrity, privacy, and fairness of educational processes.

In addition, ethical considerations must occupy a central role in the deployment of AI in education. It is very important to address issues such as data privacy and security, biases embedded in AI algorithms, and obtaining informed consent, thereby ensuring transparency, trust and fairness. Prioritising these ethical considerations will foster an environment conducive to the responsible and credible use of AI in education.

Furthermore, future research should expand its methodological approach by incorporating qualitative methods, embracing diverse perspectives, and examining various contextual factors. Qualitative research, for instance, could illuminate individuals’ experiences with AI in education, uncovering their subjective interpretations and shedding light on the implications from a more holistic standpoint. In addition, incorporating a wide range of perspectives, including those of educators, students, parents, policymakers, and implementation experts, will yield a more comprehensive understanding of the ethical impacts of AI in education. By embracing these research advancements, the educational community will be better equipped to navigate the ethical complexities associated with AI integration. Ultimately, such progress will enable stakeholders to harness the full potential of AI while safeguarding the integrity, privacy, and fairness of educational processes.


About the author

Ziad Hani Rakya, Anwar Gargash Diplomatic Academy, Abu Dhabi, United Arab Emirates; and Department of Educational Research, Lancaster University, Lancaster, United Kingdom.

Ziad Hani Rakya

Eng. Ziad Hani is a computer engineer with twenty-five years of professional experience in the information technology industry. He holds a bachelor’s degree in computer engineering, a master’s degree in computer science and information security, and a master’s degree in management information systems.

Ziad has a special interest in Artificial Intelligence (AI). His journey in AI began in 2004 when he was a bachelor’s student, and since then, he has become a specialist in the field, contributing significantly through numerous papers.

This interest culminated in the pursuit of a doctorate in enhancing learning technology through Artificial Intelligence (AI), a testament to his commitment to advancing the intersection of education and technology and to creating innovative solutions that transform learning experiences. At the time of writing, Ziad is in the second year of his PhD programme in Technology Enhanced Learning (TEL) at the Department of Educational Research, Lancaster University.

Ziad is the Education Technology Manager at Anwar Gargash Diplomatic Academy in Abu Dhabi, United Arab Emirates. His work in AI has been marked by a dedication to exploring and solving ethical problems in AI, thereby pushing the boundaries of what technology can achieve.

Email: [email protected]

ORCID: 0009-0007-0004-0757

Article information

Article type: Full paper, double-blind peer review.

Publication history: Received: 27 July 2024. Revised: 25 September 2024. Accepted: 25 September 2024. Online: 28 October 2024.

Cover image: Badly Disguised Bligh via flickr.


References

Bird, E., Fox-Skelly, J., Jenner, N., Larbey, R., Weitkamp, E., & Winfield, A. (2020). The ethics of artificial intelligence: Issues and initiatives. Panel for the Future of Science and Technology, European Parliamentary Research Service, Scientific Foresight Unit (STOA). Available via the European Parliament Think Tank website (europa.eu).

Bligh, B., & Lee, K. (2020). Debating the status of ‘theory’ in technology enhanced learning research: Introduction to the Special Inaugural Issue. Studies in Technology Enhanced Learning, 1(1), 17-26. https://doi.org/10.21428/8c225f6e.dc494046

Bryman, A. (2012). Social Research Methods (4th ed.). Oxford: Oxford University Press.

Crotty, M. (1998). The Foundations of Social Research: Meaning and Perspective in the Research Process. London: Sage Publications.

Driscoll, M. (2000). Psychology of Learning for Instruction (3rd ed.). Harlow: Pearson.

Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of Qualitative Research (pp. 105–117). Thousand Oaks, CA: Sage Publications, Inc.

IEEE. (2019). Ethically Aligned Design: A Vision for Prioritising Human Well-being with Autonomous and Intelligent Systems. IEEE. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf

Kirkwood, A., & Price, L. (2014). Technology-enhanced learning and teaching in higher education: What is ‘enhanced’ and how do we know? A critical literature review. Learning, Media and Technology, 39(1), 6–36. https://doi.org/10.1080/17439884.2013.770404

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence Unleashed: An Argument for AI in Education. London: Pearson.

Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B.-P. T. (2022). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28(4), 4221–4241. https://doi.org/10.1007/s10639-022-11316-w

Parsons, T. D. (2020). Ethics and educational technologies. Educational Technology Research and Development, 69, 335–338. https://doi.org/10.1007/s11423-020-09846-6

Seo, K., Tang, J., Roll, I., Fels, S., & Yoon, D. (2021). The impact of artificial intelligence on learner–instructor interaction in online learning. International Journal of Educational Technology in Higher Education, 18, 54. https://doi.org/10.1186/s41239-021-00292-9

VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221. https://doi.org/10.1080/00461520.2011.611369

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540

Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. Available at SSRN: https://ssrn.com/abstract=2002388


Appendix A: Consent form

[Screenshot of the participant consent form, administered via Microsoft Forms.]

Appendix B: Survey protocol

The survey was administered using Microsoft Forms. The following core questions, reflecting the primary research inquiries, were asked; the response format is indicated alongside each question, and an illustrative sketch of how such responses can be tallied follows the list.

  1. On a scale of 1 to 5 (5 is high), how strongly do you believe integrating AI technologies in education/eLearning can enhance your overall learning experience?

  2. On a scale of 1 to 5 (5 is high), to what extent do you think AI-driven educational tools can improve your academic performance?

  3. How easy or difficult do you find it to use AI-powered educational platforms and tools? (Easy, Neutral, or Hard)

  4. Do you agree that using AI technologies in education/eLearning requires a significant amount of effort on your part? (Agree, Neutral, or Disagree)

  5. To what extent do your peers influence your decision to use AI-driven educational resources? (Never, Low, Neutral, or High)

  6. Have you ever used educational tools powered by AI because of recommendations from professors or classmates? (Yes/No)

  7. On a scale of 1 to 5 (5 is high), how do you rate your academic institution's readiness to support the integration of AI technologies in education?

  8. Are any technical limitations impeding your use of AI-driven learning resources? (Yes/No)

  9. On a scale of 1 to 5 (5 is high), how concerned are you about the privacy of your behavioural data when using educational tools powered by AI?

  10. Are you aware, or have you been informed, of the measures taken by your institution to protect your data in learning environments driven by AI? (Yes/No)

  11. Do you agree that your behavioural data should only be used for improving your learning experience and not for other purposes? (Agree, Neutral, or Disagree)

  12. On a scale of 1 to 5 (5 is high), how important is it that your academic institution is transparent about how your data is used in educational tools powered by AI?

  13. Are you concerned about the potential for AI technologies used in education/eLearning to introduce bias or discrimination in educational outcomes? (Yes/No)

  14. Do you agree that there should be guidelines in place to address bias and discrimination in education enhanced by AI? (Agree, Neutral, or Disagree)

  15. Do you agree that you are adequately informed about how AI technologies collect and use your behavioural data to improve learning experiences? (Yes/No)
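For readers interested in how responses of this kind can be summarised, the following is a minimal, illustrative sketch only, not the analysis pipeline used in this study. It assumes the Microsoft Forms responses have been exported to a CSV file; the file name survey_responses.csv and column names such as Q1 and Q9 are hypothetical, with one integer rating from 1 to 5 per scale question.

```python
# Minimal, illustrative sketch (not the study's actual analysis pipeline).
# Assumes responses exported from Microsoft Forms to "survey_responses.csv",
# with one integer column (1-5) per scale question, e.g. "Q1", "Q9".
# All file and column names here are hypothetical.
import csv

def share_at_or_above(rows, column, threshold=4):
    """Return the percentage of respondents rating `column` at or above `threshold`."""
    ratings = [int(row[column]) for row in rows if row.get(column)]
    if not ratings:
        return 0.0
    return 100 * sum(1 for r in ratings if r >= threshold) / len(ratings)

with open("survey_responses.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# e.g. share of participants optimistic about AI enhancing learning (Q1, ratings 4-5)
print(f"Q1 rated 4 or 5: {share_at_or_above(rows, 'Q1'):.0f}%")
# e.g. share reporting moderate-to-high privacy concern (Q9, ratings 3-5)
print(f"Q9 rated 3 to 5: {share_at_or_above(rows, 'Q9', threshold=3):.0f}%")
```

With n = 16, each respondent contributes 6.25 percentage points, so tallies of this kind map directly onto headline figures such as the share of participants selecting a 4 or 5.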

Appendix C: Survey results (n = 16 participants)
