
A narrative review of the potential use of generative artificial intelligence in educational research practices in higher education

Full paper

Published on Oct 28, 2024

Abstract

In recent years, generative artificial intelligence (GenAI) has been rapidly advancing and becoming increasingly prevalent in various fields, including academic research. This paper reviews the existing body of literature on the application of GenAI in contemporary educational research practices. It delves into the literature to understand the possibilities of GenAI in academic research and explores how researchers in higher education recognise its potential and use GenAI tools in their research practices. This narrative literature review addresses the usage of GenAI in scholarly practices and provides key insights into its uses, limitations, and ethical implications. The findings suggest that while researchers increasingly leverage GenAI tools to support various stages of the research process, the use of these tools is accompanied by a range of limitations and ethical considerations that require careful attention. The discussion emphasises the importance of using these tools in conjunction with human intelligence while exercising caution to maintain academic integrity and research ethics. This paper offers up-to-date insights on the potential uses of GenAI in an educational research context to scholars and researchers who are authors, reviewers, editors, and readers. More broadly, it intends to contribute to ongoing discussions within the research community on the potential application of GenAI tools, their associated implications, ethical challenges, and future considerations in a new era of AI-assisted research.

Keywords: generative artificial intelligence; ChatGPT; large language models; educational research; scholarly practices; research practices; academic publishing

Part of the Special Issue Generative AI and education

1. Introduction

With a new era of artificial intelligence upon us, it is apparent that generative artificial intelligence (GenAI) technologies have the potential to transform the tertiary educational landscape, with significant implications for all stakeholders involved in teaching, learning and research (Dwivedi et al., 2023; Lodge et al., 2023; Nguyen et al., 2023). Within the higher education sector, much discourse to date has focused on how GenAI is impacting teaching and learning practices, particularly in the context of assessment integrity (Holmes & Tuomi, 2022; Zawacki-Richter et al., 2019). It is also evident that multiple variations of AI models and applications (e.g., intelligent tutoring systems, chatbots, learning analytics dashboards, adaptive learning systems, AI-enabled assessments, and automated grading of assessments) are currently used across higher education to enhance teaching and learning (Baidoo-Anu & Owusu Ansah, 2023; Celik et al., 2022; Holmes & Tuomi, 2022). The potential implications for educators and learners are well documented, but this discussion has extended far less to researchers. The uses and impacts of GenAI in research practices have attracted less attention, remaining relatively under-explored until very recently. Attention to the effects of GenAI on academic research practices has escalated significantly alongside the popularisation of OpenAI's ChatGPT tool since November 2022. Whether ChatGPT, Gemini, Copilot, Bing AI, or equivalent tools, these large language model chatbots have attracted significant attention, and their integration into educational research practices has become more prevalent in the literature since the end of 2022 (Crawford et al., 2023; Kooli, 2023).

Given the potential of large language models, there is a pressing need for the research community to engage in a comprehensive debate on the potential uses and limitations of these seemingly inevitable tools in higher education research practices (Dergaa et al., 2023; Qasem, 2023). This review explores the role of GenAI tools in contemporary educational research practices. It seeks to understand how educational researchers can incorporate such tools into their professional lives by potentially using GenAI in the research process. The review also considers possible limitations (including bias, inaccuracies, lack of domain-specific expertise, limited contextual understanding and limited ability to generate original insights) and some ethical considerations for researchers (including academic integrity, plagiarism, privacy and security, and transparency of GenAI models) when they engage with such tools as part of their research practice.

1.1 Context

The author currently works in a research-intensive university within the Centre for Teaching and Learning, which has a remit to support the AI literacies of faculty and staff. The Centre hosts the Teaching and Learning Conversations Series initiative, which focuses on the theme of generative AI in education and research. Through this initiative, multiple discursive spaces, including an online community of practice, are offered to enable the academic community to engage in developmental conversations around integrating GenAI tools in teaching and research practices. In early community conversations, a desire for basic AI literacy, knowledge sharing and guidance on pedagogical applications and ethical implications was evident. In addition, many faculty expressed a keen interest in exploring whether GenAI tools could assist them with research. It is anticipated this review may be of particular relevance to the wider research community, as it provides up-to-date understandings to researchers and scholars who are authors, reviewers, editors, readers and, more generally, anyone undertaking research. It is intended to provide useful insights to assist researchers in understanding the potential of GenAI and how to use it effectively to enhance research practice. More broadly, the study intends to contribute to the ongoing discussions within the higher education sector on the capabilities and limitations of GenAI in research, associated implications, ethical challenges and future considerations. Based on the evidence from the existing literature, the paper provides a critical overview of GenAI in the context of educational research, providing insights into its use while highlighting the implications which must simultaneously be considered.

1.2 Research questions

This literature review seeks to answer the following research questions:

  • RQ1: How can GenAI tools be used by educational researchers to facilitate the research process? 

  • RQ2: What are the implications of using GenAI tools in the context of contemporary educational research practices?  

2. Methodology

A narrative literature overview was conducted as a means of synthesising and critically reviewing recent publications pertinent to the research questions outlined in Section 1.2; this method was selected as appropriate for comprehensively reviewing and providing a narrative synthesis of a previously published body of literature (Ferrari, 2015; Green et al., 2006; Knopf, 2006). To begin, a preliminary search in the Scopus and Web of Science databases was conducted to identify existing literature specifically addressing the uses and implications of GenAI tools in educational research practices. These databases were selected for their reliability and authority. This study used a combination of keywords to search for papers that specifically addressed the uses of GenAI in educational research: generative artificial intelligence; ChatGPT; large language models; educational research; scholarly practices; and research practices. The keywords and search string were iteratively refined throughout the research process to yield more precise results and identify the pertinent literature addressing one or both of the research questions. The search strings and selection process are presented in Table 1.

Scopus
  Search string: ( TITLE-ABS-KEY ( {generative artificial intelligence} OR "artificial intelligence" OR "large language models" OR "LLMs" OR {ChatGPT} ) AND TITLE-ABS-KEY ( "scholarly practices" OR "research processes" OR "academic research" ) )
  Search fields: in title, abstract or keywords (TITLE-ABS-KEY)
  Results of search: n = 40
  Inclusion criteria: limited to English (n = 39)
  Excluded: conference papers (n = 5), reviews (n = 2), letter (n = 1), note (n = 1)
  Articles to review: n = 30

Web of Science (WoS)
  Search string: TS=(("generative artificial intelligence") OR ("large language models" OR LLMs OR "ChatGPT" AND "research processes" OR "academic research"))
  Search fields: in the topic (title, abstract, author keywords)
  Results of search: n = 29
  Inclusion criteria: peer-reviewed journals and editorials (n = 23)
  Excluded: duplicates removed (n = 14)
  Articles to review: n = 9

Total articles for review: n = 39

Table 1: Literature search and selection process

To optimise for relevance, the screening process initially involved reviewing abstracts and including them if they met the following criteria: (1) discussed or evidenced how researchers are using GenAI tools; (2) discussed the benefits, challenges or limitations of GenAI in the context of educational research; and (3) addressed or demonstrated the use of AI in a higher education research context. Due to the rapid review process and scope of this review, some justified choices were made during the literature search. Firstly, only articles in English, as the most widely available and accessible language, were selected. Secondly, the focus was restricted to peer-reviewed journal articles in the higher education context, as identified through the citation databases. Studies selected for review (n = 39) were further analysed and interpreted using a qualitative approach. This involved a reflexive thematic analysis approach as described by Braun and Clarke (2006, 2019) to identify, critically analyse and evaluate the papers. Specific to the research questions, relevant data were identified, extracted, and coded, from which key themes were generated. These themes are detailed and discussed in the context of the research questions in Sections 3.1 and 3.2. Table 2 presents a condensed overview of the papers reviewed, their themes in the context of the research questions and implications for consideration.
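As a quick arithmetic check, the selection flow reported in Table 1 can be tallied in a few lines (a minimal sketch; every count is taken directly from the table):

```python
# Tally of the literature selection flow in Table 1.
scopus_results = 40
scopus_english = 39                      # after limiting to English
scopus_excluded = 5 + 2 + 1 + 1          # conference papers, reviews, letter, note
scopus_included = scopus_english - scopus_excluded

wos_results = 29
wos_peer_reviewed = 23                   # peer-reviewed journals and editorials
wos_included = wos_peer_reviewed - 14    # duplicate records removed (n = 14)

total_reviewed = scopus_included + wos_included
print(scopus_included, wos_included, total_reviewed)  # 30 9 39
```

These totals match the figures reported in Table 1 (30 Scopus articles, 9 WoS articles, 39 articles in total).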

Reference

Title

Theme relevant to RQs

Implications

1.

Alshater (2022)

Exploring the role of artificial intelligence in enhancing academic performance: A case study of ChatGPT

Explores the potential of GenAI, particularly natural language processing in enhancing academic performance

Reveals ChatGPT can assist researchers in several aspects of research but there are also multiple limitations to consider. Suggests it is important for researchers to use these tools in conjunction with human analysis and interpretation 

2.

Al-Zahrani (2023)

The impact of generative AI tools on research: implications for higher education

Readiness of a higher education community to integrate AI technologies in research

Higher education institutions should consider integrating ethical guidelines, policies into research practices and investing and integrating GenAI tools into research practices

3.

Anderson et al. (2023)

AI did not write the manuscript or did it? Can we trick the AI text detector into generated texts

Use of AI text generation tools to generate academic manuscripts

Editorial boards and publishers should consider the use of GenAI text tools and associated ethical, equity, accuracy and detection concerns and potential threats to scientific integrity

4.

Baidoo-Anu & Owusu Ansah (2023)

Education in the era of Generative Artificial Intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning

Exploratory study that synthesises literature to offer some potential benefits and drawbacks of ChatGPT in promoting teaching and learning

Universities should rethink their policies and practices to guide and support their students in using ChatGPT and other GenAI tools safely and constructively

5.

Berg (2023)

The case for generative AI in scholarly practice

The use of GenAI in scholarship argues for its legitimacy as a valuable tool for contemporary research practice

Presents three potential uses for AI models in research practice: AI as a mentor; AI as an analytic tool; and AI as a writing tool

6.

Bin-Nashwan et al. (2023)

Use of ChatGPT in academia: Academic integrity hangs in the balance

Provides insights into the adoption of ChatGPT by academics for academic and research practices

Stakeholders associated with research ethics should build sufficient guidelines for GenAI usage. Cooperation between AI programmers, academic institutions and publishers could curtail the spread of unethical behaviours, i.e., academic dishonesty, plagiarism

7.

Burger et al. (2023)

On the use of AI-based tools like ChatGPT to support management research

Explores relevance of GenAI in research, focusing on the practical case study of systematic literature reviews (SLRs)

Provides guidelines for the use of GenAI in literature reviews. The authors believe the instructions provided can be adjusted to many fields of research

8.

Cain et al. (2023)

Artificial intelligence and conversational agent evolution – a cautionary tale of the benefits and pitfalls of advanced technology in education, academic research, and practice

Demonstrates the ethical usage of chatbots and ethical concerns for academic researchers

Highlights the importance of ethical considerations for researchers and students when using AI. Measures to mitigate potential unethical use of this evolving technology are also discussed

9.

Carabantes et al. (2023)

ChatGPT could be the reviewer of your next scientific paper. Evidence on the limits of AI-assisted academic reviews

Considers if GenAI text-based models can carry out peer review of scientific articles for publication

Highlights the versatility of GPT models in peer review; but in their current stage there are many limitations meaning that they cannot yet holistically carry out peer review. There are also risks in terms of biases and ethics

10.

Casal & Kessler (2023)

Can linguists distinguish between ChatGPT/AI and human writing? A study of research ethics and academic publishing

Explores if humans can reliably distinguish AI-generated text, from that written by humans

Suggests experienced reviewers have limited ability to distinguish AI-generated from human-produced abstracts. GenAI tools may be useful for: (a) a facilitative tool to help researchers analyse/process data; and (b) aiding in the final writing stages of research

11.

Chan & Hu (2023)

Students’ voices on generative AI: perceptions, benefits, and challenges in higher education

Students’ perceptions of GenAI in higher education are explored, including familiarity, potential benefits, and challenges

Students showed understanding of capabilities and limitations of using GenAI but have concerns about the reliability, privacy, ethical issues, and uncertain policies associated with GenAI. Educators and policymakers can tailor GenAI technologies to address students’ concerns while promoting effective learning outcomes

12.

Checco et al. (2021)

AI-assisted peer review

Explores how some elements of the peer review process could be supported by AI-assisted tools

There are implications in terms of biases and ethics, thus suggesting these tools cannot yet holistically peer review; human reviewers are still required to make judgement calls

13.

Cotton et al. (2023) 

Chatting and cheating: Ensuring academic integrity in the era of ChatGPT

Explores opportunities and challenges of using ChatGPT in higher education and discusses the potential risks and rewards of these tools

Integrating GenAI into higher education offers advantages and disadvantages. Universities must tackle the concerns by adopting proactive and ethical approaches to using GenAI, provide training/support for students and faculty, and use methods to detect/prevent academic dishonesty

14.

Dalalah & Dalalah (2023)

The false positives and false negatives of generative AI detection tools in education and academic research: The case of ChatGPT

Focuses on false positives and false negatives detection of ChatGPT-generated text

ChatGPT is a promising tool, but ethical and responsible conduct is essential for its use. For accountable academic integrity and transparent research, researchers must also be open about how they use these tools and correctly credit sources

15.

Davison et al. (2023)

Pickled eggs: Generative AI as research assistant or co‐author?

Practicalities of applying GenAI in academic research

Despite the limitations of GenAI tools, they potentially have value as research assistants

16.

Dergaa et al. (2023) 

From human writing to artificial intelligence generated text: examining the prospects and potential threats of ChatGPT in academic writing

Investigates advantages and disadvantages of ChatGPT and other GenAI technologies in academic writing and research publications

ChatGPT holds promise in improving academic writing and research efficiency. The paper emphasises need for academics to approach their usage cautiously and maintain transparency. These tools should complement and not replace human intelligence and critical thinking in scholarly work

17.

Dowling & Lucey (2023)

ChatGPT for (finance) research: The Bananarama conjecture

Illustrates how ChatGPT can significantly assist with finance research results that should be generalisable across research domains

ChatGPT can provide significant help in generating high-quality manuscripts at an acceptable level for publication. There are clear advantages for idea generation and data identification; however, its capabilities are comparatively weaker when synthesising literature and creating suitable testing frameworks

18.

Dogru et al. (2023)

Generative artificial intelligence in the hospitality and tourism industry: Developing a framework for future research

Critically reviews the effect of GenAI tools on higher education and research and identifies capabilities and implications of these tools  

For academic research, GenAI tools may revolutionise data collection, analysis, and writing; however, there are multiple ethical and legal concerns associated with adoption that must be considered. Proposes questions for consideration for the adoption of GenAI tools in education and research

19.

Dwivedi et al. (2023)

“So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy

Based on perspectives of AI experts, the paper explores opportunities, challenges, implications of GenAI and important research questions

It is critical to identify and implement policies to protect against misuse and abuse of GenAI technologies; it also indicates a clear need for scientific research on various issues related to ChatGPT

20.

Farrelly & Baker (2023)

Generative artificial intelligence: Implications and considerations for higher education practice

Explores the multifaceted impact of GenAI in academic work focusing on student life and, in particular, the implications for international students

Highlights the difficulties in reliably detecting AI-generated content, discusses biases within AI models, emphasising the need for fairness and equity in AI-based assessments with a particular emphasis on the disproportionate impact of GenAI on international students. Emphasises the importance of AI literacy and ethical considerations

21.

Jarrad et al. (2023)

Using ChatGPT in academic writing is (not) a form of plagiarism: What does the literature say? 

Reviews the literature on using ChatGPT in academic writing and its implications regarding plagiarism

ChatGPT can be a valuable writing tool; however, it is crucial to uphold academic integrity and ensure ethical use. Attributing ChatGPT’s contribution is essential in recognising its role, preventing plagiarism, and upholding the principles of scholarly writing

22.

Karakose et al. (2023)

A Conversation with ChatGPT about Digital Leadership and Technology Integration: Comparative Analysis Based on Human–AI Collaboration

Evaluates the utility of ChatGPT in generating accurate, clear, concise, and unbiased information to support research

It remains crucial to continue to critically evaluate the accuracy and utility of information generated by GenAI tools as they still lack scientific reasoning and critical thinking, and have the potential to generate hallucinations

23.

Kooli (2023)

Chatbots in education and research: A critical examination of ethical implications and solutions

Explores the potential use of AI systems and chatbots in the academic field and their impact on research and education from an ethical perspective

Concludes that the potential benefits of AI systems and chatbots in academia are substantial. However, to fully realise the potential use of GenAI in research and education, it is important for researchers to critically evaluate the ethical and technical implications of AI systems and ensure that they are used in responsible and transparent ways

24.

Lin (2023)

Why and how to embrace AI such as ChatGPT in your academic life

Focuses on employing GenAI in academic settings, unique strengths, constraints and implications through the lens of philosophy, science and epistemology

The versatility of ChatGPT and similar tools makes it useful in multiple capacities, such as a coach, research assistant and co-writer but suggests that guidelines for using GenAI such as ChatGPT in academic research are urgently needed. Policing its usage in terms of plagiarism or AI-content detection is likely to be of limited use

25.

Lodge et al. (2023)

Mapping out a research agenda for generative artificial intelligence in tertiary education

Outlines key areas of tertiary education impacted by large language models that require re-thinking and research to address them

Proposes critical areas for future research are: explainable AI in the context of tertiary educators; assessment integrity; assessment redesign; and ethics and AI. The authors also propose GenAI tools, such as ChatGPT, should not be treated as an author or cited as an agent responsible for intellectual property

26.

Lund et al. (2023)

ChatGPT and a new academic reality: Artificial Intelligence‐written research papers and the ethics of the large language models in scholarly publishing

Focuses on how ChatGPT can be used in academia to create and write research and scholarly articles, and the associated ethical issues

ChatGPT and similar technologies have the potential to significantly impact academia, scholarly research and publishing. The paper advocates the importance of considering the ethical implications of these technologies and ensuring that they are used ethically and responsibly in scholarly research

27.

Morocco-Clarke et al. (2023)

The implications and effects of ChatGPT on academic scholarship and authorship: a death knell for original academic publications?

Examines ChatGPT and similar GenAI tools, their pros and cons, impact on research and possible intellectual property conflicts

This paper addresses issues raised particularly with regards to education and academic/scientific publishing and puts forward recommendations which will hopefully ensure that future uses of GenAI tools are carried out with the appropriate ethical considerations and responsibility

28.

Nakazawa et al. (2022)

Does the use of AI to create academic research papers undermine researcher originality?

Explores authorship in the context of writing support services using AI technology

Argues that use of GenAI support would not necessarily compromise researcher originality. Suggests AI may diminish the importance of the author’s role. Proposes researcher’s originality might be reconsidered

29.

Niszczota & Conway (2023)

Judgements of research co-created by Generative AI: Experimental evidence

Tests whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work

Suggests people have negative views of delegating any aspect of the research process to large language models such as ChatGPT compared to a junior human scientist. Suggests researchers should employ caution when considering whether to incorporate ChatGPT or other large language models into their research

30.

Ollivier et al. (2023)

A deeper dive into ChatGPT: history, use and future perspectives for orthopaedic research

Investigates ChatGPT, defines its limits and strengths and explores validity of outputs

Advocates that human users should critically evaluate AI-generated outputs to determine its scientific validity as they are not capable of independent scientific reasoning or experimentation

31.

Peres et al. (2023)

On ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice

Explores how ChatGPT, and other forms of GenAI, affect the way we have been conducting academic research

Research is needed to create more transparency around biases of AI models and an area for future investigation is also the development of skills to ask the right prompts of GenAI tools. Given that different prompting strategies exist, different outputs and hence potentially different conclusions on the part of the human researcher may be yielded

32.

Qasem (2023)

ChatGPT in scientific and academic research: future fears and reassurances

Explores negative and positive aspects of using ChatGPT in conducting research

ChatGPT has potential and is useful if exploited wisely and ethically at scientific and academic levels. Suggests more ethical control over ChatGPT and similar GenAI tools when used for writing research

33.

Rane et al. (2023)

ChatGPT is not capable of serving as an author: ethical concerns and challenges of large language models in education

Investigates how ChatGPT can be applied to scientific writing and publishing

Suggests that ChatGPT should be regarded as a supplementary tool rather than a substitute for human authors. Striking a balance between leveraging ChatGPT’s capabilities and maintaining the integrity and rigour of academic research is key

34.

Rusandi (2023)

No worries with ChatGPT: building bridges between artificial intelligence and education with critical thinking soft skills

Discusses the role of ChatGPT in education and research, focusing on developing critical thinking skills and maintaining academic integrity

Advocates that collaboration between AI and humans in learning and research will significantly benefit individuals and society. Emphasises the importance of developing critical thinking skills among students and researchers to effectively use AI and distinguish accurate information from misinformation 

35.

Tang et al. (2023)

The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing

Explores transparency of use of GenAI in nursing academic research journals

Declaring generative AI tool usage is crucial for maintaining transparency and credibility in academic writing. Emphasises the need for explicitly declaring the use of GenAI by authors in manuscripts and also highlights need for discussing standardisation of GenAI declaration in academic writing

36.

Thomas et al. (2023)

Impact and perceived value of the revolutionary advent of artificial intelligence in research and publishing among researchers

Explores perceptions of researchers, authors, editors, publishers of the role and impact of AI on academic publishing

Call to action to support and facilitate researchers’ understanding and usage of AI tools in practice. The need to provide insights into the implications GenAI will have for the publishing processes, and the ways in which its performance can be enhanced

37.

Tülübaş (2023)

An interview with ChatGPT on emergency remote teaching: A Comparative Analysis Based on Human–AI Collaboration

Conducted an AI-enabled research process using ChatGPT

Based on results, the author suggests that the cooperation of human and artificial intelligence is still warranted to ensure an accurate and reliable output from AI-based scientific queries

38.

Wang et al. (2023)

Can ChatGPT write a good boolean query for systematic review literature search?

Investigates effectiveness of ChatGPT in generating Boolean queries for systematic literature review

Offers various strategies for generating prompts for Boolean queries designed for systematic review literature search generated by ChatGPT. Highlights some issues that must be considered around variability, robustness and reproducibility

39.

Whitfield & Hofmann (2023)

Elicit: AI literature review research assistant

Explores Elicit, a literature review search tool that uses LLM to aid the research process

There is potential for Elicit as it develops further; it may not replace the scholarly activity of the literature review, but it may replace some of the more routine tasks involved with the literature review process

Table 2: Literature reviewed with themes and implications

3. Findings and discussion

3.1 Research question 1 

RQ1: How can generative AI tools be used by educational researchers to facilitate the research process? 

In the context of addressing this research question, the review revealed pertinent themes including: GenAI and educational research; applying GenAI to the research process; GenAI as a research assistant; and GenAI and authorship. These are outlined and discussed in the subsequent sub-sections.

3.1.1 GenAI and educational research 

While AI tools such as spell checkers and grammar-correcting software have been available for some time, GenAI represents a range of more powerful tools which can offer various supports across many educational and research endeavours (Lodge et al., 2023). GenAI is an advanced and versatile natural language processing tool that simulates human-like intelligence to generate multiple forms of output which closely resemble human-created content (Alshater, 2022; Dogru et al., 2023; Farrelly & Baker, 2023; Peres et al., 2023). These tools are referred to as ‘generative’ because of their capabilities to create a range of content, including text, audio, video, images and code, that mimics patterns and styles of the input data they are trained on (Berg, 2023; Dogru et al., 2023). One of the best-known examples of GenAI is the Generative Pre-trained Transformer (GPT) model, developed by OpenAI, which provides the basis of applications such as ChatGPT and Bing AI (Berg, 2023). GPT models are “based upon publicly available digital content data (natural language processing) to read and produce human-like text in several languages” (Baidoo-Anu & Owusu Ansah, 2023, p. 53). Although ChatGPT has dominated the early discourse about GenAI tools, it is important to acknowledge there are thousands of other AI-powered tools, some of which are equally capable (Farrelly & Baker, 2023). Other popular GenAI tools include Gemini (a chatbot and content generator developed by Google), Copilot (an AI chatbot and content generator developed by Microsoft, available via Microsoft 365 accounts), the Claude AI assistant (developed by Anthropic), Perplexity AI (an AI-powered search engine and chatbot), Jasper AI (an AI-powered copywriting assistant) and DALL·E 2 (image and art generation by OpenAI).

3.1.2 Applying GenAI to the research process  

It is evident from the literature that GenAI tools potentially offer a range of supports across the research process, such as generating ideas, searching and summarising the literature, providing writing aids and grammar checks, assisting with data collection and analysis, presenting findings, illustrating complex data, providing feedback on manuscripts and disseminating findings (Alshater, 2022; Berg, 2023; Cotton et al., 2024; Dalalah & Dalalah, 2023; Dergaa et al., 2023; Dogru et al., 2023; Dwivedi et al., 2023; Lin, 2023; Morocco-Clarke et al., 2023; Whitfield & Hofmann, 2023). Dwivedi et al. (2023) note that ChatGPT is not the first or only AI tool to alter and shape research practices. For instance, Grammarly is an example of a popular AI tool used for checking spelling and grammar in writing (Dwivedi et al., 2023). Research Rabbit is an AI tool for finding and organising research papers. Scite is an AI-powered research platform that analyses and provides citation context for scientific papers, potentially assisting researchers in evaluating the credibility and impact of scholarly articles. Consensus is an AI-powered search engine that answers questions based on peer-reviewed literature, which it scans and summarises. Elicit is an AI-powered search tool for scientific literature that provides credible sources and summaries for research queries. Indeed, according to Whitfield and Hofmann (2023), Elicit has the potential to make the literature review process more efficient, supplementing traditional library database searching, although it still has limitations in that the researcher needs to verify the accuracy of the returned results.

Stakeholders within the academic research ecosystem, such as publishers, editors, reviewers and authors, are continually experimenting with GenAI tools to streamline the workflow and efficiency of research processes (Thomas et al., 2023). Several studies report that ChatGPT and similar GenAI tools have the potential to enhance academic writing and research productivity by streamlining the literature search and citation processes, reducing data analysis time and supporting the reporting of findings (Al-Zahrani, 2023; Alshater, 2022; Burger et al., 2023; Dergaa et al., 2023; Lund et al., 2023; Qasem, 2023). The capacity of GenAI-enabled analytical tools to process vast amounts of data enables researchers to quickly gain insights and identify complicated patterns (Dogru et al., 2023). A case study exploring the potential of ChatGPT in economics and finance found that, although there are limitations, ChatGPT could practically assist researchers with data analysis and interpretation, scenario generation and communication of findings, and that these potential benefits outweighed the drawbacks (Alshater, 2022). Wang et al. (2023) reported that ChatGPT demonstrated potential in generating useful Boolean queries for systematic literature review searches, but with the caveat that it generated different queries even when the same prompt was used, which could have implications for effectiveness and reproducibility. Burger et al. (2023) demonstrated that GenAI tools can productively support research, specifically data analysis and systematic literature reviews, but again with a caveat that “the researcher is always fully responsible for the results they get from AI models” (Burger et al., 2023, p. 238). Students also perceive GenAI as a supportive tool in assisting them with research activities (Chan & Hu, 2023; Dalalah & Dalalah, 2023).
Like academics, students note that these tools can facilitate literature searching, summarising readings and generating hypotheses based on data analysis, but again the need for responsible and ethical use of GenAI is advised (Chan & Hu, 2023). In contrast, some do not recommend using ChatGPT in the literature review process as it can provide missing or fabricated references (Lund et al., 2023; Haman & Školník, 2023), while others stress the importance of verifying the credibility of sources (Kim & Wong, 2023).

In addition, some scholars have explored the capabilities of text-based GenAI tools to conduct peer reviews of manuscripts proposed for publication (Carabantes et al., 2023; Checco et al., 2021; Lund et al., 2023). These studies acknowledge the potential of AI-assisted peer review in some components of the process, for example, in completing repetitive and tedious tasks such as correcting grammatical errors and formatting citations (Lund et al., 2023). The literature notes that the application of GenAI tools is in its infancy and that there are risks in terms of biases and ethics, suggesting these tools cannot yet conduct a holistic peer review and that human reviewers are still required to make judgement calls (Carabantes et al., 2023; Checco et al., 2021; Lund et al., 2023). Given that GenAI has multiple obvious applications within the research process relevant to researchers, this raises the question of whether it has the potential to act as a research assistant.

3.1.3 GenAI as a research assistant

GenAI tools have many characteristics that make them valuable to researchers. For example, Lin (2023) highlights the versatility of ChatGPT in understanding and generating content across a broad spectrum of disciplines, as well as its proficiency in a wide range of human and computer languages, making it useful in various capacities, including that of a research assistant. Berg (2023) offers three distinct categories of use for GenAI tools in research: as a writing tool; as an analytic tool to analyse large volumes of data; and as a mentor acting as a critical reviewer of manuscripts. It is also reported in the literature that GenAI tools hold the potential to offer substantial benefits to researchers whose native language is not English, serving as useful tools for editing and refining text (Kim & Wong, 2023).

GenAI tools potentially have practical value as research assistants, as they can complement research processes and support human researchers in developing familiarity with a research domain. While they offer assistance with literature reviews, analysing data and transcribing text, human oversight is still required due to the various limitations and inaccuracies of current tools (Davison et al., 2023; Rusandi et al., 2023). For instance, Whitfield and Hofmann (2023) report that while the tool Elicit performs a literature search, the returned results may not always be accurate, thus requiring verification by the researcher. Kooli (2023, p. 8) acknowledges that although GenAI tools aid researchers in various ways, the ultimate responsibility for research remains with humans, suggesting that, like human research assistants, “these chatbots need close and continuous supervision to avoid derivations”.

3.1.4 GenAI and authorship  

In academia, peer-reviewed publications are an essential component of success and career advancement, and the race to ‘publish or perish’ holds sway (Morocco-Clarke et al., 2023; Thomas et al., 2023). While the potential benefits of GenAI tools in research are undeniable, key concerns pertinent to current discussions within the research community are authorship and the attribution of AI-generated content (Tang et al., 2023). In recent years, significant developments in AI technology-based support for writing academic papers mean that some researchers claim GenAI tools now have the capability to generate drafts of sections of papers, such as abstracts and introductions (Nakazawa et al., 2022). However, a key question arises regarding the appropriate use of GenAI in writing, particularly whether ChatGPT (or similar tools) should be allowed to co-author academic papers. The use of GenAI tools in academic writing challenges traditional notions of authorship and intellectual property, potentially undermining researcher originality (Dogru et al., 2023; Nakazawa et al., 2022). Additionally, it raises the question of who should be credited for the work: the GenAI tool, the individual who provided the input for the tool, or the individual whose work inspired the GenAI tool to generate the content (Dogru et al., 2023).

In an empirical study, Dowling and Lucey (2023) illustrated that ChatGPT can provide significant help in generating manuscripts of an acceptable standard for publication. However, Crawford et al. (2023, p. 4) propose that “artificial intelligence is not accountable for its research output and cannot be an author”. This perspective is shared by others who believe it is not appropriate to credit a chatbot like ChatGPT as the author of a research paper (Dalalah & Dalalah, 2023; Peres et al., 2023; Thorp, 2023). However, this position has been challenged by the fact that ChatGPT has been listed as a co-author on a published paper (O’Connor, 2022), which raised concerns because it challenges the core values of human-based authorship in academic publishing (Morocco-Clarke et al., 2023). If ChatGPT or similar tools are considered legal authors, they could potentially claim copyright ownership of the content they generate, with significant consequences for both researchers and publishers (Lund et al., 2023; Morocco-Clarke et al., 2023). As AI models like GPT are trained on a massive corpus of data, it is questionable whether they infringe copyright, and if authors replicate this content in scholarly publications, the infringement could be passed on to them (Lund et al., 2023). There is growing consensus within academic journals that authors must explicitly declare the use of GenAI to maintain the integrity, credibility and transparency of academic research writing (Tang et al., 2023).

Some publishers have articulated their view of chatbots as authors while others are less transparent (Crawford et al., 2023; Lodge et al., 2023). In some current authorship guidelines, AI text generation is implicitly excluded (Anderson et al., 2023), with updated editorial policies informing researchers that GenAI tools (including ChatGPT) cannot be an author or co-author on submitted papers (Morocco-Clarke et al., 2023). For example, the publishers of the scientific journals Science and Nature have both updated their editorial policies and guidelines to this effect: Science stipulates that text generated by GenAI tools cannot be used in submitted work, while Nature outlines that the use of GenAI must be acknowledged (Morocco-Clarke et al., 2023). The reasons cited for not recognising GenAI tools as authors are that they lack accountability and that work generated by a GenAI tool cannot be considered the original work of the submitting authors (Peres et al., 2023; Thorp, 2023).

3.1.5 Discussion

The proliferation of GenAI tools is viewed by some as a powerful force driving transformative changes in the field of research, with the potential to revolutionise research practices (Al-Zahrani, 2023; Alshater, 2022). The aforementioned GenAI tools (including ChatGPT, Gemini, Elicit, etc.) are just the tip of the iceberg, as several other forms of GenAI are continually emerging (Peres et al., 2023). To date, arguably, it is the text-generating capacity of GPT models which has attracted the most attention and which has particular relevance to research practices, given their ability to mimic human-like academic writing. It is evident that a range of GenAI tools can potentially offer many capabilities and supports across research processes. While researchers are experimenting with several GenAI tools (Thomas et al., 2023), there is an overwhelming dominance in the literature of studies which specifically focus on ChatGPT (Alshater, 2022; Burger et al., 2023; Dalalah & Dalalah, 2023; Haman & Školník, 2023; Lund et al., 2023; Karakose et al., 2023; Kim & Wong, 2023; Wang et al., 2023). This is perhaps reflective of the recent widespread global adoption of ChatGPT, which caught the attention of many scholars regardless of discipline (Dwivedi et al., 2023).

Although it is evident that researchers are increasingly leveraging GenAI tools to support them with various stages of the research process, such as idea generation, academic writing, literature searches, citation processes, data analysis and peer review (Dergaa et al., 2023; Whitfield & Hofmann, 2023; Burger et al., 2023; Carabantes et al., 2023; Checco et al., 2021; Lund et al., 2023), all papers identify various limitations of these same tools. There is consensus that caution must be exercised while utilising GenAI tools as research assistants. Several studies emphasise the importance of using these tools in research activities to complement human intelligence and critical thinking (Alshater, 2022; Dergaa et al., 2023; Kooli, 2023; Rusandi et al., 2023). The human researcher should remain at the forefront of research processes and is critical in ensuring reliability by interpreting and evaluating GenAI results and responses (Alshater, 2022; Lin, 2023). Rusandi et al. (2023, p. 602) reiterate that “AI like ChatGPT, is a tool designed to support human researchers and not to replace them”. As researchers grapple with the authorship and attribution of AI-generated content, it may be reasonable to expect publishers to continue updating their policies to ensure that AI tools are used responsibly and ethically and do not undermine the integrity and transparency of academic research (Anderson et al., 2023; Morocco-Clarke et al., 2023). Moreover, establishing guidelines or best practices for authors and reviewers to disclose their use of GenAI would prove advantageous, ensuring the consistent and responsible integration of GenAI technologies in scholarly research (Tang et al., 2023).

3.2 Research question 2 

RQ2: What are the implications of using generative AI tools in contemporary educational research practices?  

In addressing this research question, the review revealed several limitations and ethical considerations, presented in subsequent sub-sections.

3.2.1 Limitations of GenAI in research

GenAI has limitations in all domains, and within the context of research these limitations are notable. One limitation of GenAI technologies is a phenomenon known as ‘artificial hallucination’, whereby some GenAI tools fabricate facts to provide answers that seem plausible, including legitimate-looking citations which are inaccurate (Burger et al., 2023; Lin, 2023; Karakose et al., 2023; Tang et al., 2023; Tülübaş et al., 2023). Some researchers have reported that hallucination was particularly evident when ChatGPT was asked for literature references; in some instances it provided titles and authors which did not exist (Burger et al., 2023; Haman & Školník, 2023; Kim & Wong, 2023).

Another recurring limitation in the literature is that GenAI tools can be biased, often reflecting the biases of their trainers or the data they were trained on (Jarrah et al., 2023; Lund et al., 2023). Such biases could perpetuate existing inequalities or lead to unfair treatment based on gender, race, ethnicity or disability (Dogru et al., 2023). They could also lead to unintended consequences and discriminatory outcomes while compromising the validity of research findings (Davison et al., 2023; Kooli, 2023). Furthermore, when authors utilise GenAI tools in their research without disclosing it, there is a risk that these biases go unnoticed or unaddressed (Tang et al., 2023). Another consideration is the training cut-off date, beyond which GenAI models may lack information on emerging research or other topics not included in their training data (Burger et al., 2023; Cain et al., 2023). Some studies also highlight that chatbots lack empathy; they cannot understand emotion, human behaviour or context, all of which are important elements of educational research (Lin, 2023; Kooli, 2023).

3.2.2 Ethical implications of GenAI in research

In the research community, the use of GenAI tools is controversial and the source of ongoing debate, with much concern regarding the ethical implications of using these tools in research practices (Bin-Nashwan et al., 2023; Davison et al., 2023; Morocco-Clarke et al., 2023). Across the literature reviewed, ethical considerations emerge as a significant concern (Al-Zahrani, 2023; Bin-Nashwan et al., 2023; Morocco-Clarke et al., 2023), specifically those relating to academic integrity and plagiarism, privacy and security, and a lack of transparency regarding the use of GenAI technologies (Berg, 2023; Cotton et al., 2023; Dwivedi et al., 2023).

Academic integrity has long been a concern in higher education, and with the pervasive technological revolution of GenAI, it has become more critical than ever (Bin-Nashwan et al., 2023). An obvious implication is plagiarism of original content as researchers rely more on GenAI tools in educational research and academic writing (Anderson et al., 2023; Qasem, 2023). Another emerging perspective is that, as AI-generated content becomes more sophisticated, plagiarism is harder to detect (Dogru et al., 2023). Plagiarism detection is potentially flawed; traditional detection software struggles to recognise AI-generated content (Anderson et al., 2023; Dogru et al., 2023), as it can be increasingly challenging to differentiate between human and machine-generated writing (Al-Zahrani, 2023; Cotton et al., 2023; Dwivedi et al., 2023). For example, Morocco-Clarke et al. (2023) found ChatGPT capable of generating text nearly indistinguishable from that written by humans. Other recent studies also suggest that experienced reviewers have limited ability to distinguish AI-generated from human-produced abstracts (Casal & Kessler, 2023; Thorp, 2023).

Another ethical dilemma is privacy, as the collection and processing of data can risk exposing personal information (Dogru et al., 2023). It may be tempting for researchers to use GenAI tools to gather, store and analyse large amounts of data from study participants, but this could raise privacy and security issues (Cain et al., 2023; Kooli, 2023). Finally, the lack of transparency in GenAI models is another potential ethical dilemma for researchers. Some GenAI tools operate behind the scenes; it can be difficult to understand their sources of information and how these systems make decisions and generate answers and recommendations (Dwivedi et al., 2023; Kooli, 2023). This raises concerns about the transparency, accountability and reliability of their outputs, making it difficult for researchers to assess the rationale behind recommendations or to trust the outputs (Dwivedi et al., 2023; Cain et al., 2023; Kooli, 2023).

3.2.3 Discussion

It is evident from the literature that using GenAI tools in educational research practices requires consideration of several limitations, including inaccuracies; bias; lack of domain-specific expertise; and limited ability to understand context or to generate original insights (Alshater, 2022; Al-Zahrani, 2023; Davison et al., 2023). It is also important to acknowledge that some limitations are specific to particular GenAI tools. For example, hallucinations may be relevant to ChatGPT but not to other similar tools. Therefore, researchers need to continually experiment with these tools and keep informed of their latest capabilities within the context of their research practices (Jarrah et al., 2023). Furthermore, using GenAI in research raises ethical issues, such as academic integrity and plagiarism, data privacy and lack of transparency (Kooli, 2023). These ethical issues are significant as they highlight valid concerns about the impact on the authenticity, transparency and credibility of academic writing that uses text-based GenAI tools (Dergaa et al., 2023).

As an ever-increasing range of GenAI tools is used more frequently in academic writing, plagiarism may go undetected by traditional detection software, or reliable AI output detection may become like a ‘cat-and-mouse’ game (Peres et al., 2023). Manual checks with topic experts may be required to mitigate the potential threat to the integrity of published literature (Anderson et al., 2023). To navigate the lack of transparency of AI models, several studies advocate that researchers should strive to use only transparent, interpretable GenAI models (Al-Zahrani, 2023; Cain et al., 2023; Dwivedi et al., 2023). When dealing with AI-generated content containing sensitive data, authors must exercise caution, ensuring meticulous handling and the implementation of robust data privacy and security measures (Jarrah et al., 2023). Consistently, the literature advocates that GenAI tools should be used in conjunction with human intelligence because such tools are not capable of independent scientific reasoning (Davison et al., 2023; Dergaa et al., 2023; Ollivier et al., 2023).

Irrespective of these limitations and ethical implications, GenAI tools can undeniably contribute in various ways to research practices; however, it is essential to acknowledge that their role should be complementary rather than a substitute for human intelligence and creativity (Dergaa et al., 2023; Rane et al., 2023). To reinforce the safeguarding of research ethics, it is imperative to ensure that GenAI technologies are used ethically and integrated responsibly, with a strong emphasis on collaboration between humans and AI to enhance outcomes within the context of higher education research environments (Dogru et al., 2023; Dwivedi et al., 2023; Lund et al., 2023; Rane et al., 2023).

4. Conclusion

Although still in its early stages, the use of GenAI in educational research is well underway, with the current range of tools having great potential to aid and possibly transform research practices. Like a Pandora’s box for research that is already open (Ollivier et al., 2023), there is no doubt that GenAI tools will continue to bring more utility for researchers across disciplines, with many opportunities despite the associated challenges.

The primary objective of this paper was to conduct a literature review to consider how educational researchers can use GenAI tools to facilitate the research process and the associated implications. On the one hand, the findings support the argument that GenAI tools have great potential to assist educational researchers in enhancing efficiency and effectiveness at various stages of the research process when used in conjunction with human intelligence. On the other hand, there are risks relating to inaccuracy, bias, privacy and security, and a lack of contextual understanding, with GenAI tools still falling short of holistically completing research activities. The literature consistently suggests that it is the human researcher who has ultimate responsibility for their research. If researchers use GenAI tools, caution must be exercised, as human interpretation and evaluation of AI outputs remain critical. That being said, higher education researchers must be adept in the use and understanding of GenAI tools and their workings in the context of scholarly research. Given the context of this review and in the interests of transparency, it is important to disclose that the only GenAI tool used in this paper was Grammarly (basic/free version) to check grammar, spelling and punctuation.

It is anticipated that this review will help formulate an understanding of the current position of GenAI technologies in educational research and stimulate research studies that can assist in better understanding the implications of various GenAI technologies in educational research practices. Overall, this is an exciting area, and with the continued advancement of GenAI, there is an ongoing need for further empirical studies which look beyond popular tools like ChatGPT to better understand how to utilise the ever-increasing range of GenAI technologies across the spectrum of contemporary educational research practices while minimising potential negative implications and ethical concerns. By exploring these areas in more depth through further research, we can ensure that GenAI tools are used responsibly, effectively and ethically by educational researchers and students in their ever-evolving research environments in higher education.


About the author

Leone Gately, UCD Teaching and Learning, University College Dublin, Dublin, Ireland; and Department of Educational Research, Lancaster University, Lancaster, United Kingdom.

Leone Gately

Leone Gately is an experienced higher education professional who currently works as an Educational Technology Coordinator in UCD Teaching and Learning at University College Dublin (UCD), Ireland. Leone is also a doctoral researcher in e-Research and Technology-Enhanced Learning at the Department of Educational Research, Lancaster University. Leone is passionate about improving teaching and learning in higher education, particularly within digital learning contexts. Current research interests include online pedagogies, generative artificial intelligence in academic and research practice and higher education third space professionals.

Email: [email protected]

ORCID: 0009-0003-5105-8165

X: @leonegately

Article information

Article type: Full paper, double-blind peer review.

Publication history: Received: 16 August 2024. Revised: 25 September 2024. Accepted: 25 September 2024. Online: 28 October 2024.

Cover image: Badly Disguised Bligh via flickr.


References

Al-Zahrani, A. M. (2023). The impact of generative AI tools on researchers and research: Implications for academia in higher education. Innovations in Education and Teaching International, 1-15. https://doi.org/10.1080/14703297.2023.2271445

Alshater, M. M. (2022). Exploring the role of artificial intelligence in enhancing academic performance: A case study of ChatGPT. SSRN. https://dx.doi.org/10.2139/ssrn.4312358

Anderson, N., Belavy, D. L., Perle, S. M., Hendricks, S., Hespanhol, L., Verhagen, E., & Memon, A. R. (2023). AI did not write this manuscript, or did it? Can we trick the AI text detector into generated texts? The potential future of ChatGPT and AI in Sports & Exercise Medicine manuscript generation. BMJ Open Sport & Exercise Medicine, 9(1), e001568. https://doi.org/10.1136/bmjsem-2023-001568

Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. Journal of AI, 7(1), 52-62. https://doi.org/10.61969/jai.1337500

Berg, C. (2023). The case for generative AI in scholarly practice. SSRN. http://dx.doi.org/10.2139/ssrn.4407587

Burger, B., Kanbach, D. K., Kraus, S., Breier, M., & Corvello, V. (2023). On the use of AI-based tools like ChatGPT to support management research. European Journal of Innovation Management, 26(7), 233-241. https://doi.org/10.1108/EJIM-02-2023-0156

Carabantes, D., González-Geraldo, J. L., & Jover, G. (2023). ChatGPT could be the reviewer of your next scientific paper. Evidence on the limits of AI-assisted academic reviews. Profesional de la información/Information Professional, 32(5). https://doi.org/10.3145/epi.2023.sep.16

Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The Promises and Challenges of Artificial Intelligence for Teachers: a Systematic Review of Research. TechTrends, 66(4), 616-630. https://doi.org/10.1007/s11528-022-00715-y

Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. https://doi.org/10.1186/s41239-023-00411-8

Checco, A., Bracciale, L., Loreti, P., Pinfield, S., & Bianchi, G. (2021). AI-assisted peer review. Humanities and Social Sciences Communications, 8(1), 1-11. https://doi.org/10.1057/s41599-020-00703-8

Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228-239. https://doi.org/10.1080/14703297.2023.2190148

Crawford, J., Cowling, M., Ashton-Hay, S., Kelder, J.-A., Middleton, R., & Wilson, G. S. (2023). Artificial intelligence and authorship editor policy: ChatGPT, bard bing AI, and beyond. Journal of University Teaching & Learning Practice, 20(5), 1. https://doi.org/10.53761/1.20.5.01

Dalalah, D., & Dalalah, O. M. (2023). The false positives and false negatives of generative AI detection tools in education and academic research: The case of ChatGPT. The International Journal of Management Education, 21(2), 100822. https://doi.org/10.1016/j.ijme.2023.100822

Davison, R. M., Laumer, S., Tarafdar, M., & Wong, L. H. (2023). Pickled eggs: Generative AI as research assistant or co‐author? Information Systems Journal, 33(5), 989-994. https://doi.org/10.1111/isj.12455

Dergaa, I., Chamari, K., Zmijewski, P., & Saad, H. B. (2023). From human writing to artificial intelligence generated text: examining the prospects and potential threats of ChatGPT in academic writing. Biology of Sport, 40(2), 615-622. https://doi.org/10.5114/biolsport.2023.125623

Dogru, T., Line, N., Mody, M., Hanks, L., Abbott, J. A., Acikgoz, F., Assaf, A., Bakir, S., Berbekova, A., & Bilgihan, A. (2023). Generative artificial intelligence in the hospitality and tourism industry: Developing a framework for future research. Journal of Hospitality & Tourism Research, 10963480231188663. https://doi.org/10.1177/10963480231188663

Dowling, M., & Lucey, B. (2023). ChatGPT for (finance) research: The Bananarama conjecture. Finance Research Letters, 53, 103662. https://doi.org/10.1016/j.frl.2023.103662

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., . . . Wright, R. (2023). Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Farrelly, T., & Baker, N. (2023). Generative artificial intelligence: Implications and considerations for higher education practice. Education Sciences, 13(11), 1109. https://doi.org/10.3390/educsci13111109

Ferrari, R. (2015). Writing narrative style literature reviews. Medical Writing, 24(4), 230-235. https://doi.org/10.1179/2047480615Z.000000000329

Green, B. N., Johnson, C. D., & Adams, A. (2006). Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine, 5(3), 101-117. https://doi.org/10.1016/S0899-3467(07)60142-6

Haman, M., & Školník, M. (2023). Using ChatGPT to conduct a literature review. Accountability in Research, 1-3. https://doi.org/10.1080/08989621.2023.2185514

Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57(4), 542-570. https://doi.org/10.1111/ejed.12533

Kim, S.-K. A., & Wong, U.-H. (2023). ChatGPT impacts on academia. 2023 International Conference on System Science and Engineering (ICSSE). https://doi.org/10.1109/ICSSE58758.2023.10227188

Knopf, J. W. (2006). Doing a literature review. PS: Political Science & Politics, 39(1), 127-132. https://doi.org/10.1017/S1049096506060264

Kooli, C. (2023). Chatbots in education and research: A critical examination of ethical implications and solutions. Sustainability, 15(7), 5614. https://doi.org/10.3390/su15075614

Lin, Z. (2023). Why and how to embrace AI such as ChatGPT in your academic life. Royal Society Open Science, 10(8), 230658. https://doi.org/10.1098/rsos.230658

Lodge, J. M., Thompson, K., & Corrin, L. (2023). Mapping out a research agenda for generative artificial intelligence in tertiary education. Australasian Journal of Educational Technology, 39(1), 1-8. https://doi.org/10.14742/ajet.8695

Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial Intelligence‐written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570-581. https://doi.org/10.1002/asi.24750

Morocco-Clarke, A., Sodangi, F. A., & Momodu, F. (2023). The implications and effects of ChatGPT on academic scholarship and authorship: a death knell for original academic publications? Information & Communications Technology Law, 1-21. https://doi.org/10.1080/13600834.2023.2239623

Nakazawa, E., Udagawa, M., & Akabayashi, A. (2022). Does the use of AI to create academic research papers undermine researcher originality? AI, 3(3), 702-706. https://doi.org/10.3390/ai3030040

Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B.-P. T. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28(4), 4221-4241. https://doi.org/10.1007/s10639-022-11316-w

Niszczota, P., & Conway, P. (2023). Judgements of research co-created by Generative AI: Experimental evidence. Economics and Business Review, 9(2), 101-114. https://doi.org/10.18559/ebr.2023.2.744

O'Connor, S. (2022). Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Education in Practice, 66, 103537-103537. https://doi.org/10.1016/j.nepr.2022.103537

Peres, R., Schreier, M., Schweidel, D., & Sorescu, A. (2023). On ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice. International Journal of Research in Marketing, 40(2), 269-275. https://doi.org/10.1016/j.ijresmar.2023.03.001

Qasem, F. (2023). ChatGPT in scientific and academic research: future fears and reassurances. Library Hi Tech News, 40(3), 30-32. https://doi.org/10.1108/LHTN-03-2023-0043

Rane, N. L., Choudhary, S. P., Tawde, A., & Rane, J. (2023). ChatGPT is not capable of serving as an author: Ethical concerns and challenges of large language models in education. International Research Journal of Modernization in Engineering Technology and Science, 5(10), 851-874. https://doi.org/10.56726/IRJMETS45212

Rusandi, M. A., Ahman, Saripah, I., Khairun, D. Y., & Mutmainnah. (2023). No worries with ChatGPT: building bridges between artificial intelligence and education with critical thinking soft skills. Journal of Public Health, 45(3), e602-e603. https://doi.org/10.1093/pubmed/fdad049

Tang, A., Li, K. K., Kwok, K. O., Cao, L., Luong, S., & Tam, W. (2023). The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing. Journal of Nursing Scholarship, 56(2), 314-318. https://doi.org/10.1111/jnu.12938

Thomas, R., Bhosale, U., Shukla, K., & Kapadia, A. (2023). Impact and perceived value of the revolutionary advent of artificial intelligence in research and publishing among researchers: a survey-based descriptive study. Science Editing, 10(1), 27-34. https://doi.org/10.6087/kcse.294

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879

Wang, S., Scells, H., Koopman, B., & Zuccon, G. (2023). Can ChatGPT write a good boolean query for systematic review literature search? Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. https://doi.org/10.1145/3539618.3591703

Whitfield, S., & Hofmann, M. A. (2023). Elicit: AI literature review research assistant. Public Services Quarterly, 19(3), 201-207. https://doi.org/10.1080/15228959.2023.2224125

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1-27. https://doi.org/10.1186/s41239-019-0171-0
