
AI in Education – Promise, Peril and a Path Forward

Published online by Cambridge University Press:  28 February 2025


Abstract

LexisNexis’ Matthew Leopold explains how his team conducted a wide range of interviews with those involved in the legal education system – including librarians, academics, heads of law schools and university leaders – to gauge their views on the impact artificial intelligence (AI) will have, and is having, on the sector. Matthew then goes through the findings, which show a diverse set of views on AI.

Type
Main Features
Copyright
Copyright © The Author(s), 2025. Published by British and Irish Association of Law Librarians

INTRODUCTION

Artificial intelligence (AI) is rapidly transforming society, and education is no exception. From personalised learning pathways to automated grading systems, AI will change how students are taught and learn. However, as educational institutions begin to adopt AI, the impact on students, faculty, and the learning process is complex. We conducted interviews with 20 individuals across the legal education spectrum, including academics, heads of law schools, librarians, and university leaders from undergraduate, postgraduate, and professional qualification schools. These interviews and round tables, conducted under the Chatham House Rule, revealed diverse perspectives on AI's role in legal education.

With 20 contributors, there are 20 different approaches to AI. While some institutions enthusiastically embrace AI as a tool for innovation, others are more cautious, concerned about the ethical implications and long-term consequences. One of the most pressing concerns is the impact of AI on academic integrity and learning. As AI tools become increasingly sophisticated, the potential for misuse – such as evading plagiarism detection or automated essay writing – raises serious questions about the future of assessment and the steps needed to authenticate student work.

Academic faculty must be prepared for the AI transition. As AI tools become more prevalent, they must learn to use these technologies effectively while understanding the broader implications of their use in the classroom. This requires comprehensive training and support.

Regulatory guidance is essential, particularly when training for a regulated career in the law. Without a clear direction on regulators’ expectations, law schools will find it difficult to chart an appropriate course.

Finally, the future of assessment and employability in the AI age must be considered. How might AI reshape what it means to be ‘employable’ in the 21st century? As AI continues to evolve, the skills that are valued in the workplace are likely to change, prompting a rethink of educational priorities and assessment methods.

Underlying all these discussions is a focus on ethical considerations and the human element. While AI offers significant opportunities to enhance education, it also poses risks that must be carefully managed. The successful integration of AI in education will depend on maintaining a balance between technological innovation and the preservation of core educational values.

METHOD STATEMENT

This study examined how law schools are responding to generative AI, focusing on approaches to integrating AI into teaching, as well as its perceived impact on academic rigour and curriculum design. A qualitative methodology was used, employing semi-structured interviews to investigate these themes.

Participants were selected through a two-stage process. Initially, a call for interest was circulated via an academic mailing list targeting law school academics and librarians who had an interest in examining AI's role in education. After reviewing expressions of interest, academics from both large and small law schools were chosen to create a representative sample. Where gaps emerged, additional participants were identified through LinkedIn and academic profiles. A brief screening process refined the selection, with some candidates recommending colleagues who were more actively engaged in curriculum discussions.

The study used semi-structured group interviews, each lasting around 45 minutes and conducted on Microsoft Teams. Groups consisted of up to four participants and were led by the author, Matthew Leopold. A set of core questions guided each session, enabling participants to explore specific areas of interest. The overarching questions and discussion topics were developed with input from the LexisNexis academic team to ensure relevance. Follow-up one-on-one interviews were held with two participants to further investigate particular viewpoints.

All interviews were recorded with participants’ consent and transcribed using Microsoft Copilot. Transcripts were manually coded and analysed to identify recurring themes and insights, which were then categorised to provide a structured understanding of the various approaches and concerns regarding AI integration within legal education.
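For readers curious about the aggregation step, the sketch below shows in miniature how manually coded excerpts might be tallied into recurring themes. It is purely illustrative: the theme labels, participant IDs, and data structure are hypothetical assumptions, not the study's actual codebook.

```python
# Purely illustrative: the study's coding was done by hand. This sketch only
# shows how coded excerpts might be tallied into recurring themes afterwards.
# Theme labels and participant IDs are hypothetical, not the real codebook.
from collections import Counter

# (participant_id, assigned_code) pairs produced by a human coder
coded_excerpts = [
    ("P01", "academic_integrity"),
    ("P01", "assessment_redesign"),
    ("P02", "academic_integrity"),
    ("P03", "regulatory_uncertainty"),
    ("P04", "academic_integrity"),
    ("P04", "faculty_training"),
]

theme_counts = Counter(code for _, code in coded_excerpts)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} excerpt(s)")
```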

Throughout the study, RELX's code of conduct and confidentiality guidelines were strictly followed. Participants were informed at the outset of each session that interviews would be recorded for transcription purposes. To preserve anonymity, no direct quotes or attributions are included in the final report, and the identities of participants and their institutions have been kept confidential. Participants were also reminded to respect the confidentiality of their peers’ contributions.

TWENTY LAW SCHOOLS, TWENTY STRATEGIES

The integration of AI into education represents one of the most significant shifts the sector has seen in recent history. However, the adoption of AI across educational institutions is uneven. While some universities pioneer AI-driven initiatives, others are approaching the technology with caution and scepticism.

From our research, we developed the Academic AI Adoption Framework, classifying institutions into one of three categories.

Table 1 Academic AI Adoption Framework

  • Proactive Adopters – embrace AI fully, integrating it across curricula, assessments and administrative processes

  • Cautious Integrators – adopt AI in specific, non-critical areas while carefully monitoring its impact

  • Resistant Sceptics – delay or resist AI adoption, limiting it to non-academic functions

PROACTIVE ADOPTERS

These institutions have fully embraced AI, integrating it across their curricula, assessments, and often their administrative processes. They view AI as a transformative force that can enhance the learning experience, improve operational efficiency, and prepare students for a future in which AI will play an increasingly dominant role.

For example, some universities plan to build AI-powered personalised learning platforms that adapt to individual students' needs, offering customised learning paths. Others aim to use AI-driven assessment tools to provide real-time feedback, allowing students to track their progress more effectively and enabling academics to focus on more complex aspects of teaching. Administrative processes will also benefit from AI, with automated systems managing everything from student enrolment to predictive analytics for student retention.
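As a rough illustration of the adaptive idea, the sketch below assumes a very simple mastery model; the topic names, smoothing factor, and "serve the weakest topic next" rule are assumptions for illustration, not a description of any institution's planned platform.

```python
# Minimal sketch of an adaptive learning path, under assumed mechanics:
# mastery per topic is an exponential moving average of quiz scores, and
# the next topic served is the one with the lowest current estimate.
# Topic names and the smoothing factor ALPHA are illustrative assumptions.

ALPHA = 0.4  # weight given to the most recent quiz score

mastery = {"contract": 0.7, "tort": 0.4, "equity": 0.55}

def record_quiz(topic: str, score: float) -> None:
    """Update the running mastery estimate for a topic (score in [0, 1])."""
    mastery[topic] = (1 - ALPHA) * mastery[topic] + ALPHA * score

def next_topic() -> str:
    """Serve the topic where the student is currently weakest."""
    return min(mastery, key=mastery.get)

record_quiz("tort", 0.8)
print(next_topic())  # the weakest remaining topic after the update
```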

The rationale behind proactive adoption is varied. For some institutions, the drive to stay competitive in a technology-oriented market is a significant motivator. Others see AI as a way to innovate teaching methods. Resource availability is another critical factor; well-funded institutions with access to cutting-edge technology and expertise are better positioned to adopt AI.

CAUTIOUS INTEGRATORS

These institutions are adopting AI more cautiously, integrating it into specific areas while carefully monitoring its impact. They are not opposed to AI but are wary of potential challenges and disruptions. As a result, AI implementation tends to focus on non-critical areas, such as administrative support or supplemental learning tools, rather than core academic functions.

For instance, some universities use AI for data analysis in admissions and enrolment or as a supplement to traditional teaching methods. However, they are adamant that core teaching and assessment must remain in the hands of humans – at least for now.

This cautious approach often stems from concerns about a potential loss of academic integrity or AI's broader ethical implications in education. These universities strive to balance innovation with the need to ensure that the human element of teaching is not overshadowed by technology. Limited resources may also contribute to a more cautious or fragmented approach.

RESISTANT SCEPTICS

Some institutions are resistant or sceptical about AI, delaying its adoption or actively pushing back against its use. This resistance often stems from a commitment to traditional educational methods and ethical concerns.

These universities may limit AI's role to non-academic functions, such as basic administrative tasks, deliberately avoiding its use in teaching, learning, or student assessment. Some may also engage in public discourse about AI's potential dangers, such as the dehumanisation of education and the erosion of essential skills.

This resistance is often philosophical or ethical. These institutions may argue that education is inherently a human endeavour, requiring nuance, empathy, and creativity that AI cannot replicate. They also express concerns about data privacy and the potential for bias in AI algorithms.

IMPLICATIONS OF THE ACADEMIC AI ADOPTION FRAMEWORK

The varying approaches across the Academic AI Adoption Framework may have significant implications for the future of the education sector. Institutions that are early adopters of AI could set the standard for future educational practices, influencing how AI is integrated into teaching, learning, and administration. These institutions may also gain a competitive edge by offering innovative and personalised learning experiences.

However, the more measured approach to AI raises important questions about the role of technology in education. Preserving traditional educational values, such as critical thinking, creativity, and human interaction, may require a detailed review of AI's role in education. As AI technology evolves and regulatory frameworks catch up, it remains to be seen if institutions will converge towards a more unified approach.

ACADEMIC INTEGRITY

A common theme across all institutions was the importance of maintaining academic integrity, regardless of technology. The ICAI (International Center for Academic Integrity, 2021) defines academic integrity as a commitment to six key values: trust, honesty, fairness, respect, responsibility and courage. While AI offers tools that can uphold integrity, such as plagiarism detection or exam monitoring, it also presents ethical dilemmas and risks that could undermine this integrity.

USING AI TO UPHOLD ACADEMIC INTEGRITY

Plagiarism detection tools like Turnitin are now baseline tools for identifying unoriginal content (Graham-Matheson and Starr, 2013). These tools often use AI to compare texts against vast databases, flagging potential plagiarism for human review.
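The comparison step can be illustrated in miniature. The sketch below reduces it to word-shingle overlap scored with Jaccard similarity; production tools like Turnitin use far larger databases and more sophisticated matching, and the threshold here is an illustrative assumption.

```python
# Hedged sketch of the comparison idea behind plagiarism detectors, reduced
# to word-shingle Jaccard overlap. Real systems match against vast corpora
# with far more sophisticated techniques; the threshold is an assumption.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Break text into overlapping n-word shingles for comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between two documents' shingle sets."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def flag_for_review(submission: str, corpus: list[str],
                    threshold: float = 0.15) -> list[str]:
    """Return corpus documents similar enough to warrant human review."""
    return [doc for doc in corpus if similarity(submission, doc) >= threshold]
```

The final human review is the important design choice: the score only queues a match for a person to judge, mirroring how the tools flag rather than decide.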

AI is also likely to play an increasingly important role in invigilation. During and following the Covid-19 pandemic, many academic institutions embraced online exams, with software monitoring students via webcams to detect potential cheating. Some participants suggested this technology could also be used in physical exam halls.

AI's analytical capabilities can spot patterns in student performance and behaviour, potentially identifying students who may be tempted to engage in dishonest practices. As de Jager and Brown (2010) note, however, plagiarism has many levels: it can arise from deliberate acts, negligent behaviour, or simple ignorance of what plagiarism is. Early identification of potential plagiarism can allow institutions to intervene with support mechanisms that reduce the likelihood of misconduct.
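A hedged sketch of that early-warning idea follows, assuming marks on a 0–100 scale and a z-score cut-off chosen purely for illustration: a submission that deviates sharply from a student's own history is flagged for a supportive conversation, not treated as evidence of misconduct.

```python
# Illustrative sketch: flag a mark that deviates sharply from the student's
# own history as a prompt for human follow-up and support, not an accusation.
# The z-score threshold and the 0-100 marking scale are assumptions.
from statistics import mean, stdev

def flag_unusual(history: list[float], new_mark: float, z: float = 2.0) -> bool:
    """True if the new mark sits more than z standard deviations from the
    student's historical average (requires at least 3 prior marks)."""
    if len(history) < 3:
        return False  # too little history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_mark != mu
    return abs(new_mark - mu) / sigma > z

print(flag_unusual([52, 55, 58, 54], 78))  # True: an abrupt jump worth a conversation
```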

However, while AI-driven invigilation might ensure fairness in assessments, it also raises concerns about privacy and the potential for over-surveillance. Faculty universally expressed discomfort with the intrusive nature of monitoring, which can feel like an invasion of personal space. Balancing the need for academic integrity with respect for privacy requires careful navigation.

One participant explained how their institution successfully integrated AI analytical monitors and plagiarism detectors to uphold integrity without compromising student privacy. They established clear policies and guidance on the acceptable use of AI by students and staff. The contributor believed this technology allowed them to maintain high standards of academic honesty while fostering a culture of trust and respect.

AI THAT UNDERMINES ACADEMIC INTEGRITY

Large Language Models can produce essays that are often indistinguishable from those written by students. Differentiating between genuine student work and AI-generated content complicates academic assessment. This was, by far, the biggest concern among our contributors. It is not a new challenge for academics: Seirup and Pedhazur Schmelkin (2003) note that academic dishonesty is “ubiquitous” due to the diversity of values held by students.

While the risk of cheating is at the heart of the issue, our contributors outlined broader concerns. As generative AI simplifies writing, students may begin to see assignments as tasks to be completed with minimal effort rather than opportunities to develop critical thinking and problem-solving skills. Resistant Sceptics were particularly worried that over-reliance on AI could create technically proficient graduates who lack the deeper understanding and cognitive abilities that higher education is meant to cultivate.

Faculty face the challenge of integrating AI in a way that enhances learning without compromising its authenticity. This requires a careful balance, where AI is used as a tool to support the learning process. For example, Proactive Adopters advocated using AI to provide personalised feedback on drafts, helping students improve their work and engage meaningfully with their studies.

One contributor articulated the challenges of AI misuse, noting that widespread AI-generated content in assessments has led to a crisis in academic standards at their university. To prevent the issues from escalating, they have restricted AI use and trained faculty on recognising and managing AI's potential misuse.

RECOMMENDATIONS

The impact of AI on academic integrity will continue to grow in relevance and importance. To safeguard academic integrity, contributors identified several recommendations:

  • Regularly review and update policies to address AI-related issues, ensuring they remain relevant and effective

  • Train faculty on AI tools and the ethical considerations to empower them to guide students responsibly and recognise challenges

  • Promote transparency in AI use, ensuring students are fully informed of the opportunities, risks, and impact on them

  • Seek to offer equitable access to AI technology, minimising disparities that could impact fairness in education

CRITICAL THINKING IN THE AI AGE

The rise of AI presents a paradox: it holds the potential to enhance critical thinking by providing new tools and perspectives, yet it also risks undermining this skill by simplifying complex tasks and decisions.

Critical thinking is a complex discipline, with a range of different definitions – both broad and discipline-specific. Pithers and Soden (2000) describe critical thinking as “being able to identify questions worth pursuing, being able to pursue one's questions through self-directed search and interrogation of knowledge, a sense that knowledge is contestable and being able to present evidence to support one's arguments”. Behar-Horenstein and Niu (2011) describe it as “intellectually engaged, skilful and responsible thinking that facilitates good judgment”.

It was clear from conversations with contributors that these broad definitions remain true but must evolve in the context of AI. Critical thinking must now include the ability to critically assess AI-generated content, understand the limitations and biases inherent in AI systems, and make informed decisions about when and how to rely on AI tools. This requires an understanding of the technology, its implications, and its potential to both aid and mislead. Academics must help students develop these skills.

AI HELPING WITH CRITICAL THINKING SKILLS

Ironically, AI can be harnessed as a teaching aid for this evolved definition of critical thinking. AI can prompt students to ask deeper questions, engage in complex simulations, or explore perspectives they might not otherwise encounter. Personalised learning experiences offer bespoke solutions that meet the precise learning needs of individual students. AI can provide instant feedback on students' reasoning processes, helping them refine their skills.

However, to leverage AI's potential in this area, universities should design learning experiences that encourage active engagement rather than passive consumption.

AI UNDERMINING CRITICAL THINKING

Interviewees repeatedly mentioned the risk that AI use may lead to superficial engagement with material. Students may focus on obtaining the correct answer rather than understanding the underlying concepts, and may begin to accept AI-generated content without scrutiny.

AI systems are not infallible. They can include biases, make errors, and generate outputs that appear detailed but lack the nuance of human reasoning. If students are not taught to critically evaluate AI's outputs, they risk becoming passive recipients of information rather than active thinkers.

As AI tools become more sophisticated, there is a growing concern that students might rely on AI to perform tasks intended to develop their critical thinking skills, diluting the learning process. Proactive Adopters are seeking new methods of assessment and teaching to address this issue, while Resistant Sceptics are restricting access to maintain the status quo.

Students should be encouraged to question AI, explore its limitations, and engage in critical dialogue about its outputs.

HOW TO TEACH CRITICAL THINKING

One approach advocated by a Cautious Integrator is to promote metacognition, or “thinking about thinking” (Flavell, 1979, p. 906). Kuhn and Dean (2004) argue that metacognition allows students to take learnings from one context and apply them to another. This can help students become more aware of how AI influences their reasoning and decision-making.

Another strategy, endorsed by all three categories of our model, is to teach students how to critically evaluate AI-generated content. This involves understanding the principles of AI, recognising potential biases, and learning how to assess the reliability and accuracy of AI outputs.

Case-based learning is another approach used by both Proactive Adopters and Cautious Integrators. Students are presented with real-world scenarios where they must collaboratively discuss and debate issues related to AI outputs.

Finally, integrating discussions on the ethics of AI into the curriculum is crucial. As AI becomes more prevalent, understanding its societal implications and ethical considerations is an essential part of critical thinking. Universities should encourage students to engage critically with AI's broader impacts, fostering a holistic understanding of its role in society.

PREPARING FACULTY FOR AI

While educational theory has evolved, there have been few step changes in delivery. However, with AI, faculty are now expected to consider how it should be taught and used in education. A common theme across all contributors was that the successful implementation of AI in legal education hinges on one crucial factor: the preparation and training of faculty staff. Without the necessary skills, knowledge, or mindset, faculty may struggle to teach AI's opportunities and risks adequately.

Even Resistant Sceptics recognised that some form of AI literacy is now a foundational competency. AI literacy is more than just a basic understanding of AI technologies; it requires an understanding of how the tools function, their potential applications in law, and the ethical considerations they present. Faculty need to understand how AI can enhance and challenge traditional teaching methods and legal practice.

TRAINING THE FACULTY

The consequences of ignoring AI literacy are significant. Faculty who lack a deep understanding of AI risk misapplying these tools, leading to ineffective teaching, perpetuating biases, or even a decline in student engagement.

Effective AI training programmes for legal faculty should include several key components:

  • Providing faculty with a basic understanding of how legal AI works and how it can be applied in both educational settings and practice

  • Ensuring faculty are aware of and can explain ethical issues critical to legal practice, such as data privacy and bias

  • Helping faculty adapt their teaching methods to incorporate AI in ways that enhance learning outcomes

  • Giving faculty hands-on experience with legal AI tools in real-life use cases so they can confidently train and advocate appropriate use of the technology

Institutional support is vital for the success of these training programmes. Access to resources and willingness to provide support predictably follow the Academic AI Adoption Framework. Proactive Adopters are often most generous in providing the necessary resources, time, and encouragement for AI training.

CHANGING DEPARTMENTAL MINDSETS

Views within individual departments varied. Even when a faculty member is excited by the technology and the opportunities it presents, they may face a department that views AI with suspicion or fear. Some express concern that AI technologies may replace them or diminish their role in the student experience. Addressing these fears is crucial to fostering a positive attitude towards AI.

One strategy suggested by a Cautious Integrator for overcoming resistance is to emphasise AI as a tool designed to enhance rather than replace the teacher's role. By automating routine tasks, AI can help academics save time and focus on more creative and human aspects of teaching, such as tutorials. For example, a Proactive Adopter envisioned an AI-driven marking system that would allow more time for personalised feedback and support.

Fostering a growth mindset among academics is also helpful. Rolley (2020) argued that this mindset is directly linked to the ability to learn and adapt to new technology. If the AI transition is seen as an opportunity for professional growth and improved teaching outcomes, it is more likely to succeed. This shift in perspective could be encouraged by highlighting success stories and innovative practices where AI has improved the learning experience. Additionally, fostering collaboration among faculty can help build confidence and competence in using AI.

THE ROLE OF REGULATORS AND THE NEED FOR GUIDANCE

Effective regulation can help safeguard against the misuse of AI, protect the integrity of educational systems, and ensure that AI's benefits are distributed equitably. However, our panellists believe the current regulatory environment is fragmented and inconsistent. Participants flagged significant gaps that need to be addressed regarding AI's use in legal practice and legal education.

Most contributors highlighted a significant lack of guidance from legal market regulators, such as the Solicitors Regulation Authority (SRA) and the Bar Standards Board (BSB), on the acceptable use of AI in legal education. This lack of direction causes hesitation in universities about how much to integrate AI into their curricula. The absence of clear regulatory guidance is holding back some Proactive Adopters eager to embrace AI. They fear teaching skills that might not be accepted or could jeopardise students' future opportunities.

Several participants noted that regulators have retained a cautious approach to examinations and assessment – insisting, in one case, on retaining traditional exam methods. This makes it difficult for universities to integrate new teaching techniques and technologies.

Due to regulatory uncertainties, many universities, particularly Cautious Integrators and Resistant Sceptics, are adopting a defensive stance. They have invested time and effort in AI-proofing assessments to prevent academic misconduct – rather than integrating AI into learning and teaching processes.

ASSESSING EMPLOYABILITY

Assessments have long been a cornerstone of measuring student achievement, certifying knowledge, and signalling employability to future employers. For decades, these assessments have relied on standardised tests, essays and examinations to gauge students’ understanding and readiness for the workforce. However, as AI and other technological advancements reshape society, traditional methods of assessment are facing disruption.

SHIFT FROM KNOWLEDGE TO SKILLS

Traditionally, educational assessments have focused heavily on knowledge acquisition and the ability to recall information. This approach, while valuable, often falls short in evaluating the practical skills and competencies increasingly valued in today's job market. The future of assessment is shifting towards a greater emphasis on evaluating skills such as knowledge application, problem-solving, creativity, and collaboration. Project-based and experiential learning assessments are becoming more common across academia, allowing students to demonstrate their abilities in real-world contexts rather than through abstract tests.

As AI continues to automate routine tasks, the demand for human skills that cannot be easily replicated by machines is expected to grow.

CONTINUOUS AND FORMATIVE ASSESSMENTS

AI's capacity for continuous monitoring and real-time feedback is likely to transform assessment. Already embraced by Proactive Adopters, continuous assessment allows faculty to track student progress throughout a course, providing ongoing feedback rather than relying solely on high-stakes exams at the end of a term. This approach not only reduces pressure on students but also creates a more dynamic and responsive learning environment.

Formative assessment, which focuses on providing feedback to improve learning rather than simply measuring it, is another area where Proactive Adopters are keen for AI to make an impact. By identifying areas where students are struggling in real time, AI tools could help faculty tailor their teaching strategies to meet individual needs. This kind of personalised learning experience is increasingly seen as a way to enhance student engagement and improve educational outcomes.

IMPACT ON EMPLOYABILITY

While the legal profession requires a professional qualification to practise, degrees remain the primary indicators of a candidate's readiness. However, firms are increasingly looking beyond traditional qualifications, seeking evidence of practical skills and competencies that align with their organisational needs. This shift is driven in part by AI, which enables more sophisticated assessments of candidates’ abilities during the recruitment process.

AI-powered tools are likely to be integrated into recruitment processes, offering new ways to assess potential employees. Video interview analysis, for example, uses AI to evaluate not just what a candidate says, but also how they say it – their tone, body language, and facial expressions. Gamified assessments and skill-based testing platforms are also becoming more common, allowing firms to gauge a candidate's problem-solving abilities, creativity and teamwork skills in a more engaging and dynamic way.

For students, this means preparation for the job market now requires more than just academic achievement or traditional career advice. They must be ready to navigate AI-driven recruitment processes that assess a broader range of skills and attributes. This shift places a premium on adaptability, as students must demonstrate their readiness for a job market where AI plays a significant role in hiring decisions.

As traditional degrees become less central to employability, alternative forms of certification and evidence of skills are gaining importance. Skills-based hiring is becoming more prevalent, with firms using AI-driven assessments to evaluate a candidate's actual capabilities rather than relying solely on their educational credentials.

Micro-credentials and digital badges are also emerging as new forms of certification that can demonstrate specific skills. These credentials are often awarded for completing short courses or achieving proficiency in particular areas, providing a flexible, modular way for students to build and showcase their expertise. As these alternatives to traditional degrees gain traction, they are likely to play an increasingly important role in the future of employability – something Proactive Adopters have already started to implement.

CONCLUSION

The potential benefits of AI – from personalised learning experiences to enhanced administrative efficiencies – are immense. However, as this article has explored, integrating AI into education is a complex process that brings with it a host of challenges and ethical considerations.

The divergent approaches taken by educational institutions reflect broader uncertainties surrounding AI's role in the classroom. Some universities are leading the charge, embracing AI as a tool for transformation, while others are more reticent, wary of the implications for academic integrity and the authenticity of the educational experience. These varied perspectives underscore the need for ongoing dialogue and reflection as AI becomes more entrenched in educational practices.

One of the most pressing issues highlighted in this piece is the impact of AI on academic integrity and learning. The ease with which AI can automate tasks traditionally performed by students raises serious questions about the nature of learning and the future of assessment. Faculty must grapple with the challenge of fostering critical thinking and ensuring that students develop the deep, analytical skills that AI cannot replicate.

Preparing academics for this transition is another crucial aspect of the AI revolution in education. As AI tools become more prevalent, faculty will need not only technical training but also a broader understanding of how these tools can enhance or hinder the educational experience. The role of the teacher is evolving, and institutions must provide the support necessary to help teachers navigate this new terrain.

Regulators also have a pivotal role in shaping the future of AI in education. Clear, thoughtful guidelines are essential to ensure that AI is used ethically and that its integration does not exacerbate existing inequalities. Without such guidance, the risks associated with AI could overshadow its potential benefits.

The future of assessment and employability in the AI age presents another area of significant change. As AI transforms the skills required in the workforce, educational institutions will need to rethink how they prepare students for this new reality. This will involve revising curricula and developing new assessment methods that reflect the changing nature of work.

Finally, at the heart of the AI revolution in education lies a fundamental question: how do we ensure that the human element is not lost? While AI can enhance and streamline many aspects of education, it cannot replace the human connection essential to the learning process. As we move forward, we must balance technological innovation with the preservation of the core values that define education.

In conclusion, integrating AI in education is a journey filled with both promise and peril. By carefully considering the divergent approaches, the impact on academic integrity, the preparation of faculty, the role of regulators, and the future of assessment, we can navigate this journey thoughtfully and responsibly. The key to success will be maintaining a focus on the human element and ensuring that AI serves as a tool to enhance, rather than diminish, the educational experience. We must approach the future with both optimism and caution, always keeping in mind the ultimate goal of education: to empower learners to reach their full potential.

References

Behar-Horenstein, L. S. & Niu, L., 2011. ‘Teaching Critical Thinking Skills In Higher Education: A Review of the Literature’. Journal of College Teaching & Learning, 8(2), pp. 25–42.
de Jager, K. & Brown, C., 2010. ‘The tangled web: investigating academics’ views of plagiarism at the University of Cape Town’. Studies in Higher Education, 35(5), pp. 513–528.
Flavell, J. H., 1979. ‘Metacognition and cognitive monitoring: A new area of cognitive developmental inquiry’. American Psychologist, 34(10), pp. 906–911.
Graham-Matheson, L. & Starr, S., 2013. ‘Is it cheating – or learning the craft of writing? Using Turnitin to help students avoid plagiarism’. Research in Learning Technology, Volume 21.
International Center for Academic Integrity, 2021. The Fundamental Values of Academic Integrity. [Online] Available at <https://academicintegrity.org/images/pdfs/20019_ICAI-Fundamental-Values_R12.pdf> [Accessed 7 November 2024].
Kuhn, D. & Dean, D., 2004. ‘A bridge between cognitive psychology and educational practice’. Theory into Practice, 43(4), pp. 268–273.
Pithers, R. & Soden, R., 2000. ‘Critical thinking in education: A review’. Educational Research, 42(3), pp. 237–249.
Rolley, T. A., 2020. Faculty mindset and the adoption of technology for online instruction. Dissertation: Grand Canyon University.
Seirup, H. & Pedhazur Schmelkin, L., 2003. ‘Faculty Perceptions of Academic Dishonesty: A Multidimensional Scaling Analysis’. The Journal of Higher Education, 74(2), pp. 196–209.