Introduction
The integration of technology in education has been debated and researched for over a century, with educators’ resistance to adoption persisting despite rapid technological advancements. This historical context is crucial for understanding the current landscape of technology adoption in education, particularly as artificial intelligence (AI) emerges in educational settings. Much like calculators and computers before it, AI is encountering skepticism and barriers to adoption. This literature review extends recent meta-analyses to identify key predictors of technology adoption among educators, with the goal of informing targeted training programs that address the strongest predictors of acceptance.
The foundations of modern educational psychology, established by pioneers like Jean Piaget in the early 20th century, emphasized the importance of active, constructive learning experiences (Piaget, 1936). This perspective naturally aligns with the potential of technological tools, including AI, to provide rich, interactive learning environments. Later, works such as “How People Learn” (Bransford et al., 2000) further highlighted technology’s potential to support effective learning principles, principles that are now being extended to AI-enhanced educational tools.
Despite these theoretical underpinnings, the history of educational technology adoption reveals a pattern of resistance. Cuban (1986) documented cycles of enthusiasm and disappointment surrounding various technologies introduced in classrooms throughout the 20th century. This resistance has persisted into the digital age, with Ertmer (1999) identifying both external and internal barriers to technology integration in classrooms. Concerns about efficacy, job displacement, changing teacher roles, and the constant pressure to keep up with technological advancements have all contributed to ongoing resistance (Selwyn, 2011; Howard, 2013; Zhao & Frank, 2003). Resistance is further complicated by a lack of pedagogical training and administrator support, which pushes teachers to keep the status quo rather than ride the wave of each “new and next” educational gimmick (Michigan Virtual, 2024). AI, as the latest iteration of educational technology, is facing similar challenges.
The rapid evolution of technology in recent decades, culminating in the advent of AI, has intensified the challenges of adoption. The rise of artificial intelligence in education has sparked a new wave of apprehension among teachers, reminiscent of past technological innovations, and that apprehension is not unwarranted. Recent studies, including work from Michigan Virtual (2024), show that educators express significant concerns about AI’s role in the classroom, specifically around inappropriate use of AI, ethical concerns, and student overreliance on AI. Additionally, another Michigan Virtual study by McGehee (2024) indicates that simply using AI makes little difference for students; rather, how they use it is what makes an impact, indicating that thoughtful and appropriate integration is key to success.
A Walton Family Foundation (2024) study also found that many teachers express less than supportive views of AI and that distrust and unfamiliarity increase with age. RAND’s (2024) study of K-12 educators found that many teachers still aren’t using AI despite high adoption rates in other industries, though many districts planned to train teachers to do so by the end of the 2023-24 school year. Trust et al. (2023) found that teachers worry about AI’s potential to replace human instruction, erode critical thinking skills, and exacerbate academic dishonesty.
These concerns echo the historical pattern of resistance to new educational technologies documented by Cuban (1986).
Moreover, the rapid development and deployment of AI tools have left many educators feeling unprepared and overwhelmed. Zhai et al. (2021) reported that teachers often feel they lack the necessary skills and knowledge to effectively integrate AI into their teaching practices. This technological anxiety is compounded by ethical concerns surrounding AI, such as data privacy and the potential for bias in AI systems (Holmes et al., 2022). The perceived complexity of technology, identified as a strong predictor of resistance in previous studies (Mac Callum et al., 2014), is particularly pronounced in the case of AI adoption (McGehee, 2023).
Despite these concerns, there is also a growing recognition of AI’s potential to enhance education. Zawacki-Richter et al. (2019) highlighted AI’s capacity to personalize learning experiences and provide valuable insights into student performance, and recent work by Michigan Virtual (McGehee, 2024) has shown that specific usage types of AI are associated with significantly higher student achievement.
However, the realization of these benefits hinges on addressing teachers’ apprehensions and providing adequate support and training (McGehee, 2023; Michigan Virtual, 2024). As Selwyn (2019) argues, the successful integration of AI in education will require a careful balance between technological innovation and pedagogical wisdom, with teachers playing a central role in shaping AI’s implementation in the classroom.
Understanding this historical context is crucial for setting the stage for the literature analysis of recent research on predictors of technology adoption among educators. By examining studies from the past two decades, the aim is to identify the key factors that influence educators’ willingness and ability to integrate technology, including AI, into their teaching practices. These predictors may include, but are not limited to, teacher attitudes, institutional support, professional development opportunities, and perceived usefulness of technology.
This literature meta-analysis will synthesize findings from multiple studies to provide a comprehensive view of the current state of technology adoption in education. By understanding these predictors, educational leaders and policymakers can develop more effective strategies to support and encourage technology integration, including AI, addressing the persistent challenges that have characterized this field for over a century.
This literature meta-analysis proceeds as follows: first, the methodology for selecting and analyzing relevant studies is outlined. Next, the findings on the most significant predictors of technology adoption among educators are presented. Finally, the implications of these findings for educational practice and policy are discussed in light of the historical context of resistance to technology adoption in education, with a particular focus on preparing educators for the integration of AI in their teaching practices.
Method
This literature meta-analysis organizes predictors of technology adoption into the following broad categories:
- Perceived Ease of Use and Perceived Usefulness
- Behavioral Intention
- Moderating Factors
These predictor categories are taken from Scherer, Siddiq, & Tondeur’s (2019) meta-analysis and Granic’s (2022) meta-analysis of technology acceptance studies, which largely used the Technology Acceptance Model (TAM) instrument(s) (Davis, 1989) to study educational technology adoption. While not all studies included in this analysis used the TAM or its iterations, they observed or studied factors that are included in it or similar to it and were thus considered appropriate for inclusion. Factors that are not included in the TAM and were not similar enough to be treated as TAM factors are discussed separately.
This compilation is intended to extend Scherer, Siddiq, & Tondeur’s (2019) and Granic’s (2022) analyses by including further studies and resources focused on AI, in an effort to provide insight into how to properly support educators transitioning into the age of artificial intelligence.
Criteria for Inclusion and Analytical Methods
Each selected study was published within the last 20 years, uses teachers as the primary population, and focuses on technology adoption. All of the studies in Granic’s (2022) meta-analysis and Scherer, Siddiq, & Tondeur’s (2019) meta-analysis were included, with the addition of 16 AI technology adoption studies and 30 other general EdTech adoption studies.
Excluding the studies pulled from Granic (2022) and Scherer, Siddiq, & Tondeur (2019), AI tools were used, in addition to Google Scholar and the Google search engine, to find and organize resources relevant to this meta-analysis.
Search terms included: “artificial intelligence and Technology Acceptance Model,” “AI and TAM,” “AI, educators, and TAM,” “AI and educator acceptance,” “AI tools in education,” “adopting AI tools in education,” “AI tools and teacher adoption,” “teacher technology adoption,” “teachers and TAM,” and “educators and TAM.”
The AI tools that were used for this meta-analysis extension were:
- ChatGPT (4.0) – used for determining the strength and impact of studies, as well as identifying potential studies for inclusion
- This was the primary tool used for assistance with the analyses; all other tools were used as checks against ChatGPT for accuracy, clarity, and reasoning, to determine whether there were any severe discrepancies.
- Claude 3.5 (Anthropic) – used for determining the strength and impact of studies, as well as identifying potential studies for inclusion; additionally, it was used to assist in synthesizing findings for use in a summary table.
- This was a supporting tool
- Google Gemini – used for determining the strength and impact of studies, as well as identifying potential studies for inclusion
- This was a supporting tool
- Elicit – primarily used for identifying potential studies for inclusion; information from relevant studies was taken from here and imported into ChatGPT, Claude, and Gemini.
- This was the primary AI tool used to identify potential studies, as well as extract information from studies. It is important to note that while this tool identifies specific information for studies and extracts it, the researcher still looked at each study and confirmed the extracted information.
- Scite.ai – primarily used for identifying potential studies for inclusion; information from relevant studies was taken from here and imported into ChatGPT, Claude, and Gemini.
- This was the secondary AI tool used to identify potential studies and to extract information from studies. It is important to note that while this tool identifies specific information for studies and extracts it, the researcher still looked at each study and confirmed the extracted information.
While fewer in number, studies that specifically dealt with AI as the technology being adopted were given special attention and double weight, considering that AI tools are the newest form of educational technology rapidly dispersing and permeating schools for teachers and students alike. This weighting was used only when comparing general EdTech adoption with AI adoption, because there are far more existing studies of general technology adoption than of AI adoption; weighting made it easier to make comparisons between the two.
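The double-weighting scheme described above can be sketched as follows. This is an illustrative sketch only, with invented data and function names; it is not the researcher's actual procedure, which was carried out with AI assistance rather than code.

```python
# Illustrative sketch (not the authors' actual tooling): tallying supporting
# studies per adoption type, with AI-focused studies counted twice to offset
# their smaller numbers when comparing against general EdTech adoption.
def weighted_evidence_counts(studies):
    """Count supporting studies per focus; AI-focused studies count double."""
    weights = {"general": 1, "ai": 2}
    totals = {"general": 0, "ai": 0}
    for study in studies:
        totals[study["focus"]] += weights[study["focus"]]
    return totals

studies = [
    {"id": "S1", "focus": "general"},
    {"id": "S2", "focus": "general"},
    {"id": "S3", "focus": "ai"},  # one AI study counts as two
]
print(weighted_evidence_counts(studies))  # {'general': 2, 'ai': 2}
```

Under this scheme, one AI adoption study carries the same comparative weight as two general EdTech studies.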
When analyzing the selected studies, three categories were calculated for the key variables that each study dealt with: strength, impact, and amount of evidence.
The criteria and definitions for the strength, impact, and amount of evidence categories are as follows:
- Strength – this is synonymous with statistical significance (p-values from hypothesis testing), correlation coefficients, and regression weights and coefficients of the variable across the studies concerned with it. It is categorized as strong, moderate, or weak based on the findings across the supporting studies. These values were taken verbatim from the studies and then averaged by AI.
- Strong – p-values of less than .001, large F or t values, correlation coefficients of approximately |.6| or higher, and/or large regression coefficients or importance scores from models.
- Moderate – p-values of less than .05, moderate F or t values, correlation coefficients of |.3| to |.5|, and/or moderate regression coefficients or importance scores from models.
- Weak – p-values very close to .05 or only approaching statistical significance, small F or t values, correlation coefficients of less than |.3|, and/or small regression coefficients or importance scores from models.
- This was a categorical organization of studies by the researcher, followed by a comparison of the frequencies of studies in each category by AI, resulting in an overall descriptor.
- For example, if 10 studies reported strong relationships between two variables and 4 reported moderate relationships, ChatGPT was asked to give an overall rating of the variable and would categorize it as strong, because more than double the number of studies showed strong relationships compared with moderate ones, and none showed weak relationships.
- Impact – this is concerned with effect sizes, population size, and the strength of each predictor across each study in its group. These are categorized as Very Low, Low, Moderate, High, Very High, or Essential, sometimes with a positive or negative direction if the findings across all of the studies provide enough consensus.
- This was similar to Strength but dealt with the population sizes and effect sizes of results in the included studies. Studies that dealt with differences between groups had their impact descriptors pulled directly from the text and were categorized into the six bins mentioned above. In addition, the larger the sample size, the more weight a study was given by AI. An overall categorization was then calculated by ChatGPT by comparing the number of studies in each category and their weights.
- As with Strength, this was a categorical organization of studies by the researcher, followed by a comparison of the frequencies of studies in each category by AI, resulting in an overall descriptor.
- For example, if 5 studies had large effect sizes (each with large N) and a strength score of high, 5 studies had moderate effect sizes (small N) and a strength score of high, and 5 studies had small effect sizes (small N) and a strength score of moderate, ChatGPT would likely categorize the variable as High based on the frequencies of these characteristics.
- Amount of Evidence – this is concerned with how many studies support the finding. These are categorized into:
- Low – fewer than 6 large N studies or fewer than 8 studies/resources with low N
- Moderate – between 8 and 12 studies/resources, with a minimum of one high N resource or experimental/quasi-experimental methods
- High – 12+ studies, or more than 8 studies/resources, with two or more high N or experimental/quasi-experimental methods
- AI Evidence – this is concerned only with the number of studies that support the predictor with respect to educator adoption of artificial intelligence
- This category is specific to evidence that deals with AI adoption as the technology in question. It is a numerical value: the number of studies that support the factor.
It is important to note that these results were also compared with the findings from the previous two meta-analyses. Because this work is intended as an extension of those analyses, any discrepancies between this study and the other two are discussed.
Findings
Using the methods described above and replicating the categories from the TAM and Granic’s (2022) work, a summary table (Table 1) presents the findings from this meta-analysis extension.
The table of findings (Table 1) and subsequent text presents a comprehensive overview of factors influencing technology acceptance among educators based on a meta-analysis of the subject. The factors are categorized into three main groups: antecedents of Perceived Ease of Use (PEU) and Perceived Usefulness (PU), antecedents of Behavioral Intention (BI), and moderating factors, each of which are defined as follows:
- Perceived Ease of Use
- This refers to the degree to which a person believes that using a particular technology would be free of effort. It’s about how easy the user thinks it will be to learn and use the technology.
- Perceived Usefulness
- This is defined as the degree to which a person believes that using a particular technology would enhance their job performance or life in general. It’s about how beneficial or valuable the user thinks the technology will be.
- Behavioral Intention
- This refers to a person’s readiness to perform a given behavior. In the context of TAM, it’s the likelihood that a person will adopt and use the technology.
- Moderating Factors
- This refers to variables that can influence the main effect of other factors. This means the strength of certain factors can vary based on these.
- The categories of Perceived Ease of Use (PEU) and Perceived Usefulness (PU) influence the user’s attitude toward using the technology, which in turn affects their Behavioral Intention (BI) to use it. The Behavioral Intention (BI) is what ultimately determines a person’s actual usage of a technology.
- Therefore, when interpreting the antecedents or factors that influence PEU and PU, it is important to understand that they are essentially once removed from the actual adoption and use of a technology: they influence factors that in turn influence adoption. Factors of BI are not considered once removed, because BI essentially is technology adoption, and any factor that influences it is therefore closer in relationship than any PEU or PU factor.
- Moderating factors are just that: variables whose levels affect the strength of a relationship between other factors. For example, if age moderates self-efficacy, then the effect of self-efficacy differs significantly depending on a participant’s age.
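The causal chain described above (PEU/PU shaping attitude, attitude shaping behavioral intention, and intention driving use) can be illustrated with a toy linear model. The weights here are invented purely for illustration; they are not estimates from any of the reviewed studies.

```python
# Hypothetical illustration of the TAM chain: PEU/PU -> attitude ->
# behavioral intention -> use. All weights are invented for illustration.
def tam_chain(peu, pu, w_peu=0.4, w_pu=0.6, w_att=0.8):
    attitude = w_peu * peu + w_pu * pu   # PEU and PU shape attitude
    intention = w_att * attitude         # attitude drives behavioral intention
    return attitude, intention

# PEU and PU affect adoption only indirectly, through attitude and
# intention -- which is what "once removed" means above.
attitude, intention = tam_chain(peu=0.5, pu=0.9)
print(round(attitude, 3), round(intention, 3))  # 0.74 0.592
```

Raising either PEU or PU raises intention, but only via the intermediate attitude term, so their antecedents sit two steps away from actual use.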
Table 1 – Summary Table

General TAM Model Factors – Gen Edtech Adoption
Among the PEU (Perceived Ease of Use) and PU (Perceived Usefulness) antecedents, several factors stand out as particularly influential, consistent with the previous findings of Granic (2022) and Scherer, Siddiq, & Tondeur (2019).
Self-efficacy, an individual’s judgment of their capability to use technology, shows strong strength, a very large impact, and a high amount of evidence. This underscores the critical role of teachers’ confidence in their technological abilities (Scherer & Teo, 2019; Holden & Rada, 2011; Joo et al., 2018; Chuang et al., 2020; Leem & Sung, 2019; Celik & Yesilyurt, 2013; Wang et al., 2021).
Perceived Enjoyment and Technological Complexity both demonstrate strong strength with large impacts and high evidence, highlighting the importance of user experience in technology adoption. Perceived Enjoyment is shown to significantly enhance the likelihood of adoption when technology is engaging and enjoyable to use (Cheung & Vogel, 2013; Teo & Noyes, 2011; Padilla-Meléndez et al., 2013; Mun & Hwang, 2003; Chang et al., 2017; Leem & Sung, 2019). Technological Complexity, meanwhile, indicates that as complexity increases, adoption likelihood decreases unless sufficient support is provided (Celik & Yesilyurt, 2013; Aldunate & Nussbaum, 2013; Calisir et al., 2014; Hsu & Chang, 2013; Hanif et al., 2018; Tarhini et al., 2014).
Facilitating Conditions, which refer to resources and technology factors affecting usage, are deemed essential with strong strength and high evidence. This emphasizes the crucial role of institutional support and infrastructure in promoting technology adoption (Venkatesh et al., 2003; Fathema et al., 2015; Moran et al., 2010; Teo et al., 2016; Lawrence & Tar, 2018; Chiu, 2021; O’Bannon & Thomas, 2014).
Anxiety, on the other hand, shows a strong negative impact, indicating that teachers’ apprehensions about technology can significantly hinder adoption (Celik & Yesilyurt, 2013; Calisir et al., 2014; Howard, 2013; Chang et al., 2017; Chiu, 2021; Vongkulluksn et al., 2018).
In terms of Behavioral Intention antecedents, Self-efficacy again emerges as a critical factor with essential positive impact and high evidence. This reinforces the importance of building teachers’ confidence in their technological abilities (Scherer & Teo, 2019; Holden & Rada, 2011; Joo et al., 2018). Subjective Norm and Perceived Playfulness both show moderate strength but large impacts, suggesting that social influences and intrinsic motivation play significant roles in shaping intentions to use technology (Cheung & Vogel, 2013; Tarhini et al., 2014; Teo et al., 2018; Padilla-Meléndez et al., 2013; Scherer & Teo, 2019; Park et al., 2012; Vongkulluksn et al., 2018).
The moderating factors provide additional nuance to understanding technology acceptance. Age moderately affects the relationships between several key factors and behavioral intention (Tarhini et al., 2014; Wang et al., 2009; O’Bannon & Thomas, 2014; Scherer & Teo, 2019; Wong et al., 2012), while Gender shows a weaker influence (Tarhini et al., 2014; Wong et al., 2012; Teo & van Schaik, 2012; Park et al., 2012). Individual-level Cultural Values demonstrate moderate strength and impact, suggesting that cultural context plays a role in technology acceptance (Teo et al., 2008; Nistor et al., 2013; Sánchez-Prieto et al., 2020; Chuang et al., 2020). Notably, Technological Innovation strongly moderates the relationships between subjective norm, perceived usefulness, and behavioral intention, highlighting the importance of keeping pace with evolving technologies in education (Venkatesh et al., 2003; Moran et al., 2010; Leem & Sung, 2019; Teo et al., 2021).
Below are two figures that visualize the strengths (Figure 1) and the impacts (Figure 2) of each variable in the TAM according to the AI-assisted analysis. It is important to note that the values used to make these comparisons were calculated by AI, as described in the methodology section, but are summarized into categorical bins in Table 1.
Figure 1 – General EdTech Adoption factor strengths

Figure 2 – General EdTech Adoption factor impacts

Non-TAM Factors – Gen EdTech Adoption
Perceived Risk, Expectancy, and Perceived Trust are important predictors of teachers’ technology adoption, closely related to but distinct from the traditional predictors and their antecedents in the Technology Acceptance Model (TAM). The TAM primarily focuses on two key broad categorical predictors: Perceived Usefulness and Perceived Ease of Use, which explain users’ acceptance of technology based on its utility and the effort required to use it; these larger constructs, again, can be predicted in turn by smaller groups of antecedents.
Perceived Risk involves the potential negative consequences or uncertainties that teachers associate with adopting new technology. Unlike the TAM, which doesn’t explicitly consider risk, Perceived Risk addresses teachers’ concerns about the reliability of technology, privacy issues, and the possibility of failure or negative outcomes (Howard, 2013; Teo et al., 2016). High perceived risks can deter educators from integrating technology into their teaching practices, as they might worry about disruptions, loss of classroom control, or being judged negatively by colleagues or administrators. By acknowledging these fears, schools can reduce perceived risks through reliable support and training, encouraging greater technology adoption.
Expectancy, similar to Perceived Usefulness in the TAM, refers to teachers’ beliefs about the likelihood that technology will lead to positive outcomes, such as enhanced instructional effectiveness or improved student learning. However, Expectancy expands beyond just usefulness by incorporating elements of motivation and anticipated success (Hanif et al., 2018; Chang et al., 2017; Moran et al., 2010; Chen et al., 2008; Davis, 1989; Teo & van Schaik, 2012). When teachers believe that technology can make their jobs easier or more engaging for students, they are more likely to use it. This aligns with the principles of the Expectancy Theory, which suggests that individuals are motivated to adopt behaviors they expect to lead to desired outcomes.
Perceived Trust is another crucial factor that goes beyond the traditional TAM components. It encompasses teachers’ confidence that the technology is reliable, secure, and capable of performing as needed, as well as trust in the organizations or institutions providing it (Teo et al., 2018; Scherer et al., 2021; Celik & Yesilyurt, 2013). In the TAM framework, trust is not explicitly addressed, yet it plays a significant role in technology adoption, particularly in environments like schools where ethical considerations and professional standards are paramount. When teachers trust the technology and its providers, they are more likely to adopt it, knowing it aligns with their educational goals and responsibilities. Building trust through consistent positive experiences and transparent communication about technology’s capabilities and limitations can significantly enhance adoption rates.
AI Adoption Factors
The adoption of AI tools by teachers in educational settings is influenced by a multifaceted array of factors that both align with and extend beyond the traditional components of the Technology Acceptance Model (TAM). While TAM emphasizes Perceived Ease of Use and Perceived Usefulness as central predictors of technology acceptance, the integration of AI-based educational technology requires a deeper examination of additional variables that reflect the unique characteristics and challenges associated with AI technologies (Chocarro et al., 2021; Wang et al., 2021).
Self-efficacy is a significant factor influencing AI tool adoption, showing strong evidence as an antecedent of both perceived ease of use and behavioral intention (Chatterjee & Bhattacharjee, 2020; Zhang et al., 2023; Ayanwale et al., 2022; Nja et al., 2023; Alhumaid et al., 2023). An individual’s confidence in their ability to use technology effectively is crucial; teachers who believe they possess the necessary skills to navigate AI tools are more likely to perceive these tools as user-friendly and are consequently more inclined to integrate them into their teaching practices (Choi, Jang, & Kim, 2022; Woodruff, Hutson, & Arnone, 2023; Nazaretsky, Cukurova, & Alexandron, 2021). This is consistent with the TAM, where self-efficacy directly contributes to perceived ease of use (Sánchez-Prieto et al., 2019; Zhang et al., 2021).
System Accessibility and Technological Complexity also emerge as critical determinants in the adoption of AI-based EdTech, closely mirroring the TAM’s focus on ease of use (Nazaretsky et al., 2021; Rico-Bautista et al., 2021; Chocarro, Cortiñas, & Marcos-Matás, 2021; Woodruff et al., 2023). The ease of access to AI tools and the simplicity with which they can be operated significantly influence their perceived usability (Wang et al., 2021; Nja et al., 2023; Al Darayseh, 2023). Technologies that are accessible and uncomplicated encourage adoption by minimizing the perceived effort required to use them, aligning with the TAM’s premise that simplicity enhances user acceptance (Sánchez-Prieto et al., 2019; Zhang et al., 2023; Cukurova et al., 2023).
Subjective Norms and Perceived Playfulness indicate moderate influence on both behavioral intentions and perceived usefulness (Wang et al., 2021; Chocarro et al., 2021; Al Darayseh, 2023; Ayanwale et al., 2022). Subjective norms, which refer to the influence of colleagues, administrators, and the broader educational community, can significantly shape teachers’ attitudes toward adopting AI tools (Choi et al., 2022; Zhang et al., 2021). If there is a positive perception of AI within these social circles, teachers are more likely to view these tools as beneficial, thus enhancing their perceived usefulness (Nazaretsky et al., 2021; Cukurova et al., 2023; Alhumaid et al., 2023). Perceived playfulness, or the intrinsic enjoyment derived from using AI tools, adds another layer to technology adoption, highlighting the importance of motivational factors that go beyond mere utility (Chocarro et al., 2021; Zhang et al., 2021; Nja et al., 2023). This suggests that in educational contexts, enjoyment and engagement can be critical for sustaining technology use, expanding the TAM’s traditional focus on functionality and ease (Woodruff et al., 2023; Chatterjee & Bhattacharjee, 2020).
Ethical Issues and Transparency are particularly relevant in the context of AI adoption and are not explicitly covered by TAM (Choi et al., 2022; Nazaretsky et al., 2021; Rico-Bautista et al., 2021). The integration of AI tools in education raises concerns about biases in AI algorithms, student data privacy, and the transparency of AI decision-making processes (Cukurova et al., 2023; Al Darayseh, 2023; Zhang et al., 2023). Teachers may hesitate to adopt AI tools if they perceive them as ethically questionable or if there is a lack of clarity about how decisions are made by these technologies (Nazaretsky et al., 2021; Choi et al., 2022; Alhumaid et al., 2023). This perceived lack of transparency can undermine trust, which is essential for the adoption of AI-based EdTech (Nja et al., 2023; Zhang et al., 2021). Addressing these concerns by providing clear, transparent explanations of AI functionality and ensuring ethical standards are upheld is crucial for fostering trust and acceptance (Rico-Bautista et al., 2021; Chatterjee & Bhattacharjee, 2020).
Anxiety about using AI technologies represents another factor that differs from traditional TAM components (Ayanwale et al., 2022; Chatterjee & Bhattacharjee, 2020; Woodruff et al., 2023). Anxiety reflects personal traits that cause apprehension or fear when engaging with new technology. This factor can have both direct and inverse effects on technology adoption (Al Darayseh, 2023; Cukurova et al., 2023; Nazaretsky et al., 2021). Teachers who experience high levels of anxiety may perceive AI tools as difficult to use or may doubt their usefulness, hindering adoption (Woodruff et al., 2023; Zhang et al., 2021; Choi et al., 2022). Conversely, with adequate support and training, anxiety can be mitigated, enhancing perceived ease of use and fostering a more positive attitude toward AI tools (Wang et al., 2021; Nja et al., 2023; Alhumaid et al., 2023).
Cost and Time considerations also play a significant role in the adoption of AI-based EdTech, adding another dimension not explicitly covered by TAM (Cukurova et al., 2023; Alhumaid et al., 2023; Rico-Bautista et al., 2021; Nazaretsky et al., 2021). Teachers are more likely to adopt AI tools if they perceive them as cost-effective and time-saving (Wang et al., 2021; Zhang et al., 2023; Nja et al., 2023; Al Darayseh, 2023). However, if the adoption of these tools is seen as requiring substantial financial investment or adding to teachers’ workloads without clear benefits, resistance is likely (Woodruff et al., 2023; Alhumaid et al., 2023; Choi et al., 2022). This factor includes both the initial time and cost to learn and implement AI tools, as well as ongoing maintenance and the need for continuous professional development (Cukurova et al., 2023; Chatterjee & Bhattacharjee, 2020).
The factor, Required Shift in Pedagogy, is a further consideration influencing AI adoption, extending beyond the traditional TAM framework (Nja et al., 2023; Al Darayseh, 2023; Choi et al., 2022; Woodruff et al., 2023). AI tools often necessitate a change in teaching methods, requiring educators to rethink how they deliver content and assess student learning (Cukurova et al., 2023; Chatterjee & Bhattacharjee, 2020; Zhang et al., 2023). The perceived need to modify pedagogical practices can create resistance, particularly among teachers accustomed to traditional methods, even if the perceived usefulness and ease of use of AI tools are high (Nazaretsky et al., 2021; Alhumaid et al., 2023; Sánchez-Prieto et al., 2019).
Finally, factors such as Individual-Level Cultural Values, Age, Gender, and Technological Innovation also moderate the adoption of AI tools, reflecting broader societal attitudes and individual differences that shape teachers’ willingness to embrace new technologies (Sánchez-Prieto et al., 2019; Zhang et al., 2023; Chocarro et al., 2021; Al Darayseh, 2023). Cultural values can influence perceptions of technology’s role in education, while demographic variables such as age and gender may affect comfort levels and attitudes toward AI (Alhumaid et al., 2023; Wang et al., 2021; Cukurova et al., 2023). The perception of technological innovation can encourage adoption if AI tools are seen as cutting-edge solutions that enhance teaching or discourage it if viewed as disruptive to established practices (Choi et al., 2022; Nazaretsky et al., 2021; Rico-Bautista et al., 2021).
Below is a visual (Figure 3) organizing the amount of evidence (number of studies) that supports each TAM variable’s importance when AI adoption, rather than general technology adoption, is the dependent variable.
Figure 3 – TAM Factors AI Adoption Evidence

Synthesis
To make sense of the large amount of information gathered, the researcher attempted to estimate the strength of influence of each factor for each outcome: general EdTech adoption or AI-based EdTech adoption. While there is much more evidence for these factors in general EdTech adoption, the researcher took into consideration both the amount of literature available on AI adoption and the length of time the tools have been available for research and evaluation. Below is a figure (Figure 4) comparing the two types of adoption predictors and their estimated strengths, which were calculated by AI as described earlier.
Figure 4 – GenEdTech vs AI-Based EdTech Adoption Factors

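The tallying behind these strength estimates can be illustrated with a toy sketch. The (factor, outcome) pairs below are hypothetical placeholders, not the study counts actually coded in this review:

```python
from collections import Counter

# Hypothetical coded studies: each tuple records which factor a study
# supports and for which adoption outcome. Real counts come from the corpus.
evidence = [
    ("Self-Efficacy", "AI"), ("Self-Efficacy", "GenEdTech"),
    ("Technological Complexity", "AI"), ("Technological Complexity", "AI"),
    ("Subjective Norms", "GenEdTech"), ("Anxiety", "AI"),
]

counts = Counter(evidence)

def relative_strength(factor: str, outcome: str) -> float:
    """Share of an outcome's total evidence attributed to one factor."""
    total = sum(n for (f, o), n in counts.items() if o == outcome)
    return counts[(factor, outcome)] / total if total else 0.0

print(relative_strength("Technological Complexity", "AI"))  # 0.5
```

A count-based share like this is only a crude proxy for "strength of influence"; a fuller estimate would also weight study quality and effect sizes.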
Key Comparisons and Contrasts
- Self-efficacy:
- A critical factor for both general EdTech and AI-based EdTech. Confidence in using technology is crucial across both types.
- Subjective Norms:
- More influential for general EdTech. Social influence is less significant for AI-based tools, where personal judgment plays a bigger role.
- Perceived Enjoyment and Perceived Playfulness:
- More impactful for general EdTech, indicating that enjoyment and intrinsic motivation are key for adopting traditional technologies, whereas AI tools are viewed more for their practical utility.
- System Quality and System Accessibility:
- Important for both general and AI-based EdTech. The quality and ease of access to technology are consistently significant factors.
- Technological Complexity:
- A major barrier for both, but particularly pronounced for AI-based EdTech due to the perceived advanced nature of many of these tools. In the literature it ties with Technological Innovation as the second-strongest influencer of AI adoption, compared with a four-way tie for second-strongest in general EdTech adoption.
- Multiple studies on AI adoption discuss Technological Complexity and Technological Innovation as closely related, since both often hinge on how the technology is perceived.
- Facilitating Conditions:
- Essential for both types of technologies. The availability of resources and support strongly influences adoption.
- Anxiety:
- Significant for both, with a higher impact on AI-based EdTech due to the perceived complexity and novelty associated with AI tools.
- Moderating Factors (Age, Gender, Cultural Values, Technological Innovation):
- These factors have a moderate influence across both general and AI-based EdTech, with innovation perceived as slightly more influential for AI tools due to the large impact generative AI tools such as LLMs have had in education and the world at large.
Model
As a companion to the synthesis, a visual model was constructed to help explain the relationships between predictive factors and AI adoption. It depicts the strength of each relationship with lines of varying thickness: thicker, more pronounced lines indicate stronger relationships, and thinner lines weaker ones.

Key Components in the Diagram
- Legend and Symbols:
- Blue Gradient Bar (Top Left): Indicates the importance of each factor, with darker colors representing more important factors.
- Arrows:
- Thick Arrows: Major relationships that strongly influence other components.
- Thin Arrows: Minor relationships that have a weaker influence.
- Shapes:
- Rounded Rectangles: Different types of factors affecting AI adoption.
- Blue Ellipses and Circles: Non-TAM factors (not part of the original Technology Acceptance Model) that impact the process.
- Yellow and Light Blue Rectangles: TAM factors that are central to the Technology Acceptance Model.
- Green Rectangle: Behavioral intention, representing the intent to adopt AI.
- Hexagon: Final AI adoption outcome.
- TAM Factors (Core Factors in Technology Acceptance Model):
- Perceived Ease of Use: Reflects the user’s belief that using AI will be free of effort.
- Perceived Usefulness: Refers to the belief that AI will enhance the user’s effectiveness or job performance.
- Non-TAM Factors (Additional Factors in the Extended Model):
- System Quality: The overall performance and reliability of the AI system, which affects both ease of use and usefulness.
- Technology Complexity: How difficult or complex the AI technology is to understand and use, impacting perceived ease of use.
- Anxiety: Users’ level of apprehension or discomfort with AI, influencing perceived ease of use.
- Self-Efficacy: The user’s confidence in their ability to use AI effectively.
- System Accessibility: How easily users can access and engage with the AI technology, impacting perceived ease of use.
- Perceived Enjoyment & Playfulness: Users’ perception of AI as enjoyable or fun, affecting their willingness to use it.
- Additional Contextual Factors:
- Facilitating Conditions: External support, resources, or infrastructure that help users adopt AI.
- Subjective Norms: Social pressures or expectations that influence users’ perceived ease of use and usefulness.
- Ethical Considerations: Ethical concerns regarding AI use, which influence the perceived usefulness and acceptance of AI.
- Pedagogical Shift: Changes in teaching or training approaches due to AI, impacting perceived usefulness.
- Cost & Time: Resources required for adopting AI, which influence perceived usefulness.
- Moderators:
- Age, Gender, Technological Innovation, and Cultural Values: Factors that modify the effect of ease of use and usefulness on behavioral intention. These variables help explain differences in AI adoption across demographic or cultural groups.
Flow and Relationships
- Direct Influences on Perceived Ease of Use:
- System Quality, Technology Complexity, Anxiety, Self-Efficacy, and System Accessibility directly impact users’ perception of how easy AI is to use.
- Facilitating Conditions and Subjective Norms also play a role, though they are more indirect and have a minor influence.
- Direct Influences on Perceived Usefulness:
- Factors such as Ethical Considerations, Pedagogical Shift, Cost & Time, Subjective Norms, and Perceived Enjoyment & Playfulness contribute to how useful the AI is perceived.
- System Quality and Technology Complexity impact usefulness as well, emphasizing the importance of a well-designed system that isn’t overly complex.
- Influences on Behavioral Intention:
- Perceived Ease of Use and Perceived Usefulness are the two primary drivers of behavioral intention. They influence whether users intend to adopt AI.
- Moderators such as age, gender, technological innovation, and cultural values adjust the impact of ease of use and usefulness on behavioral intention, recognizing that these factors are not universally felt in the same way by all users.
- Final Outcome – AI Adoption:
- Behavioral Intention ultimately leads to AI adoption, where a higher intention to use AI translates to a greater likelihood of adoption.
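The flow above can be expressed as a minimal numeric sketch. The weights below are illustrative assumptions, not the estimated strengths from the meta-analysis, and the moderators (age, gender, cultural values, technological innovation) are collapsed into a single multiplier for simplicity:

```python
# Toy model of the diagram: antecedent factors feed Perceived Ease of Use
# (PEU) and Perceived Usefulness (PU), which jointly drive Behavioral
# Intention, scaled by a moderator multiplier. All scores assumed in [0, 1].
# Weights are illustrative placeholders; negative weight = barrier.
PEU_WEIGHTS = {"system_quality": 0.3, "self_efficacy": 0.3,
               "complexity": -0.2, "anxiety": -0.2}
PU_WEIGHTS = {"system_quality": 0.4, "pedagogical_shift": -0.3,
              "cost_time": -0.3}

def weighted(scores: dict, weights: dict) -> float:
    """Weighted sum of the factor scores named in `weights`."""
    return sum(weights[k] * scores.get(k, 0.0) for k in weights)

def behavioral_intention(scores: dict, moderator: float = 1.0) -> float:
    """Combine PEU and PU equally, then apply the moderator multiplier."""
    peu = weighted(scores, PEU_WEIGHTS)
    pu = weighted(scores, PU_WEIGHTS)
    return moderator * (0.5 * peu + 0.5 * pu)

teacher = {"system_quality": 0.8, "self_efficacy": 0.9,
           "complexity": 0.7, "anxiety": 0.6,
           "pedagogical_shift": 0.5, "cost_time": 0.4}
print(round(behavioral_intention(teacher), 3))  # 0.15
```

A real estimate would replace these placeholder weights with path coefficients from the structural equation models reported in the reviewed studies.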
Implications and Recommendations for AI Adoption in Education
The adoption of AI-based educational technology (EdTech) shares several predictors with general EdTech adoption but also presents unique challenges that require additional considerations. Understanding these similarities and differences has significant implications for educators, policymakers, and developers working to integrate AI into educational settings.
Similarities between AI Adoption and General EdTech Adoption
- Perceived Ease of Use (PEU) and Perceived Usefulness (PU)
Both AI and general EdTech adoption are heavily influenced by teachers’ perceptions of how easy the technology is to use (PEU) and how useful it is for enhancing teaching and learning (PU) (Davis, 1989). Factors like self-efficacy, system quality, and facilitating conditions play critical roles in shaping these perceptions (Scherer, Siddiq, & Tondeur, 2019; Venkatesh et al., 2003).
- Behavioral Intention (BI)
Self-efficacy and system accessibility are crucial in predicting BI, which represents the teacher’s intention to adopt the technology—an aspect essential for any technology adoption in educational contexts (Scherer, Siddiq, & Tondeur, 2019). Similarly, Fathema, Shannon, & Ross (2015) highlight that behavioral intention in educational settings is driven by the teachers’ confidence and access to the technology.
- Moderating Factors
Age, gender, and cultural values also moderately influence both AI and general EdTech adoption, suggesting that demographic and cultural factors shape how different groups perceive and adopt technology (Tarhini, Hone, & Liu, 2014; Sánchez-Prieto et al., 2020). These moderating factors help explain diverse adoption rates and attitudes among teachers.
Differences between AI Adoption and General EdTech Adoption
- Technological Complexity and Anxiety
AI tools are often perceived as more complex than general EdTech, presenting a higher barrier to adoption, which requires a more significant focus on training and support to enhance teachers’ confidence and ability to use these tools effectively (Howard, 2013; Nazaretsky, Cukurova, & Alexandron, 2021). Anxiety frequently accompanies perceived complexity, particularly with AI tools, as teachers may fear engaging with technologies they do not fully understand (Celik & Yesilyurt, 2013).
- Ethical Concerns and Transparency
AI adoption introduces ethical concerns, such as data privacy, algorithmic bias, and transparency in decision-making processes, which are less prevalent in general EdTech (Selwyn, 2019; Holmes et al., 2022). Addressing these issues is crucial for building trust and ensuring responsible AI use in education (Chocarro, Cortiñas, & Marcos-Matás, 2021).
- Cost and Time
The perceived cost and time associated with adopting AI tools are often higher than with general EdTech. Implementing AI may require significant investment in hardware, software, and professional development, adding a burden that schools must address to facilitate adoption (Woodruff, Hutson, & Arnone, 2023). Such concerns echo Cuban’s (1986) findings on the historical costs associated with adopting educational technology.
- Required Shift in Pedagogy
Unlike general EdTech, which can often be integrated with minimal changes to teaching methods, AI tools may necessitate significant shifts in pedagogy. Educators may need to rethink instructional strategies, assessments, and classroom management techniques to effectively incorporate AI into their practices (Trust et al., 2023; Zhang et al., 2023).
Implications for Educators and Policymakers
Given these similarities and differences, several key implications arise for those involved in integrating AI into education. It is essential to note that these actions primarily depend on support from education leaders and administrators, as many are beyond the scope of what a classroom teacher can accomplish.
- Need for Comprehensive Training and Support
Training programs must go beyond basic technology skills to include in-depth knowledge of AI tools, emphasizing the development of self-efficacy and reducing anxiety related to technological complexity (Ertmer, 1999). Alhumaid et al. (2023) also stress the importance of targeted training in building teachers’ confidence with AI tools.
- Focus on Ethical Use and Transparency
Educational institutions must develop guidelines on data privacy, algorithmic transparency, and equitable AI use (Selwyn, 2011; Al Darayseh, 2023). Transparent communication about how AI systems work and their ethical implications is fundamental to fostering trust and encouraging adoption.
- Consideration of Cost and Time Investments
Policymakers and school administrators should consider the higher costs and time investments associated with AI adoption. Providing financial support for purchases, as well as allocating time for professional development, can alleviate these barriers (Zhai et al., 2021; Buabeng-Andoh, 2012).
- Support for Pedagogical Shifts
As AI tools may require changes in teaching methods, there should be support structures in place to help teachers adapt their pedagogy. Providing resources, exemplars, and collaborative opportunities will enable educators to explore new instructional strategies enabled by AI technologies (Zawacki-Richter et al., 2019; Chuang, Shih, & Cheng, 2020).
Recommendations for Promoting AI Adoption
Based on the aforementioned implications, the following recommendations are proposed to promote AI adoption in educational settings. Strong support from district leaders and administrators will be critical, as subjective norms and facilitating conditions depend heavily on institutional support. Below in Table 2 are organized recommendations paired with resources followed by a detailed breakdown of each.
Table 2 – Recommendations
| Recommendation | Predictor(s) Addressed | Michigan Virtual Resource |
| --- | --- | --- |
| Develop Targeted and Comprehensive Training Programs | Anxiety, Pedagogical Shift, Self-Efficacy | AI Planning Framework for Districts; Integration Framework; Michigan Virtual AI Workshops; Michigan Virtual AI Courses |
| Simplify AI Tools and Ensure Usability | Technological Complexity, Self-Efficacy | Educator AI Support; AI Video Library |
| Strengthen Institutional Support and Facilitate Access | System Accessibility, Facilitating Conditions, Cost and Time | AI Integration Guide; Integration Framework |
| Promote Ethical Awareness and Transparency | Ethical Considerations, Cost and Time | AI Usage Guidelines |
| Highlight Practical Benefits and Encourage Innovation | Pedagogical Shift, Subjective Norms, Perceived Enjoyment and Playfulness | AI Resource Bank; Student Usage of AI |
- Develop Targeted and Comprehensive Training Programs
Professional development should not only enhance teachers’ technical skills but also address unique aspects of AI, such as ethical considerations and pedagogical integration. Tailoring these programs to different demographic groups can address varying levels of comfort with technology (Ertmer, 1999; Alhumaid et al., 2023).
- Simplify AI Tools and Ensure Usability
Collaboration with developers to create intuitive, user-friendly AI tools, along with usability guides, can reduce technological complexity (Nazaretsky, Cukurova, & Alexandron, 2021).
- Strengthen Institutional Support and Facilitate Access
Robust institutional support through resources, technical assistance, and community-building channels allows teachers to share experiences and learn collaboratively (Woodruff, Hutson, & Arnone, 2023).
- Promote Ethical Awareness and Transparency
Clear guidelines on AI ethics are crucial. Workshops and discussions on ethical use will help build trust and ensure teachers understand the broader implications of AI in their classrooms (Selwyn, 2011; Al Darayseh, 2023).
- Highlight Practical Benefits and Encourage Innovation
Large-scale studies, case studies, and personal stories that showcase AI’s benefits in enhancing learning and teaching efficiency can inspire adoption and demonstrate AI’s value in solving educational challenges (Zhang et al., 2023; Choi, Jang, & Kim, 2022).
Limitations and Transparency
This meta-analysis was conducted with extensive use of AI tools, as discussed in the methodology section, but not without professional researcher supervision and checks for accuracy. This approach marks a departure from traditional meta-analytic studies, which often adhere to specific, manual procedures for analyzing and selecting studies (Gough, Oliver, & Thomas, 2017). In contrast, this study leveraged artificial intelligence, specifically large language models (LLMs), to streamline data categorization, sorting, and initial analytics.
While these methods diverge from traditional practices, they are not without merit or justification. Hullman (2024) emphasizes the practical and accurate potential of using LLMs as tools for categorization and data analysis across diverse datasets. ChatGPT operated in this study as a rule-driven assistant, categorizing, sorting, and performing preliminary analytics to deliver digestible results, which the researcher then used for comparative analysis across categories.
Research supports the idea that AI can perform categorization tasks with near-human accuracy. For instance, Khraisha, Put, Kappenberg, Warraitch, and Hadfield (2024) found that while humans generally outperform AI in meta-analytic procedures, LLMs are highly capable of following structured instructions for categorization and basic analytics, achieving “almost perfect performance on par with humans.” Similarly, studies by Shank and Wilson (2023) and Chelli et al. (2024) discuss the strengths of AI in handling large datasets when guided by specific parameters.
Conversely, some studies (Chelli et al., 2024; Cheloff, 2023) indicate limitations in using LLMs exclusively for systematic literature reviews due to issues with accuracy and recall. This study, however, mitigated these concerns by using LLMs not as exclusive tools but as aids in finding and sorting studies under researcher-defined rules, with all outputs cross-verified for accuracy (Lewis et al., 2023).
Ultimately, while AI in research and analytics is still in its early stages and has limitations, it also presents advantages, offering efficiency and accuracy comparable to human performance when used with appropriate guidance (Hancock & Beebe, 2023; Song, 2024).
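As a schematic illustration of that workflow, the sketch below stubs the LLM step with a simple keyword rule so that it is runnable; the rules and function names are hypothetical, and the verification function stands in for the researcher's cross-checking of outputs against an agreed codebook:

```python
# Researcher-defined categorization rules (illustrative placeholders).
RULES = {
    "anxiety": "Anxiety",
    "cost": "Cost and Time",
    "ease of use": "Perceived Ease of Use",
}

def categorize(abstract: str) -> list[str]:
    """Stand-in for the LLM step: tag a study with every matching category."""
    text = abstract.lower()
    return [cat for kw, cat in RULES.items() if kw in text]

def verify(labels: list[str], allowed: set[str]) -> list[str]:
    """Researcher check: discard any label outside the agreed codebook."""
    return [label for label in labels if label in allowed]

study = "Teacher anxiety and the cost of adopting AI tutors."
labels = verify(categorize(study), {"Anxiety", "Cost and Time"})
print(labels)  # ['Anxiety', 'Cost and Time']
```

The key design point mirrored here is that the model only proposes labels under fixed rules, while acceptance remains a separate, human-controlled step.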
Conclusions
This meta-analysis confirms the persistence of key factors influencing educators’ adoption of technology, with specific emphasis on AI in educational settings. As noted in the introduction, resistance to technology in education—from calculators to computers—has followed a familiar trajectory, and AI is no exception. This study, building on foundational models like the Technology Acceptance Model (TAM) and extending the work of Scherer, Siddiq, and Tondeur (2019) and Granić (2022), highlights that Perceived Ease of Use (PEU) and Perceived Usefulness (PU) continue to be central determinants in teachers’ acceptance of AI technologies. However, AI introduces complexities beyond those seen with previous technologies.
In particular, critical factors such as Self-Efficacy, Cost & Time, and the Required Pedagogical Shift emerged as highly influential in this meta-analysis. Self-efficacy, or teachers’ confidence in their ability to use AI tools, is a significant predictor of adoption, underscoring the need for targeted professional development and support. This echoes Ertmer’s (1999) findings about the importance of training in overcoming technological resistance. Cost & Time, often overlooked in broader technology adoption discussions, play a more pronounced role in AI adoption, as teachers perceive AI tools to require significant financial investment and time to learn. This aligns with Zhai et al.’s (2021) findings that teachers feel unprepared and overwhelmed by the demands of new technology, a barrier that could hinder adoption unless schools provide resources and time for teachers to adapt.
The Required Pedagogical Shift further complicates AI adoption. Unlike previous technologies that have been integrated with extensive support to help teachers understand how to utilize the tools effectively in their current pedagogical contexts, AI tools often necessitate a fundamental rethinking of teaching strategies and classroom management. Teachers may resist adopting AI because it challenges traditional teaching practices, an issue identified in both this meta-analysis and by Trust et al. (2023), who noted similar concerns about the impact of AI on instructional methods and critical thinking skills. Overcoming this resistance will require not just technical training but support for pedagogical innovation.
Anxiety, a recurring theme in technology adoption, was another strong predictor of resistance. Educators expressed significant apprehension about the complexity and perceived risks of AI, particularly in relation to job displacement and the ethical challenges of AI in education, such as data privacy and bias. This anxiety, as reported by Holmes et al. (2022) and highlighted in our findings, must be addressed by increasing transparency, providing ethical guidelines, and offering continuous support to educators as they integrate AI into their practices.
Beyond these core factors, System Accessibility and Technological Complexity remain central barriers to AI adoption as teachers struggle with the perceived difficulty of navigating AI tools. This echoes historical patterns of resistance, as seen in Cuban’s (1986) documentation of educators’ reluctance to adopt past technologies. Overcoming these obstacles, much like with earlier innovations, will depend on creating user-friendly systems and reducing the perceived burden on teachers.
While addressing these factors is crucial, it is also essential to acknowledge the value of resistance to technology adoption. Both The Friction Project (Sutton & Rao, 2024) and Noise: A Flaw in Human Judgment (Kahneman, Sibony, & Sunstein, 2021) argue that friction can serve as a healthy barrier, encouraging thoughtful consideration and critical questioning of new tools before widespread adoption. In this context, resistance to AI may signal legitimate concerns that should be addressed rather than dismissed. For instance, educators’ reluctance to adopt AI may stem from valid ethical considerations, such as privacy concerns and the impact on students’ critical thinking skills. Recognizing these concerns and addressing them thoughtfully can ensure that adoption is more purposeful and aligned with educational goals, rather than simply following technological trends.
In conclusion, this meta-analysis identifies a set of predictors—PEU, PU, Self-Efficacy, Anxiety, Cost & Time, and Required Pedagogical Shift—that both align with historical trends and reveal new challenges unique to AI. While core principles of technology acceptance, such as Perceived Usefulness and Self-Efficacy, continue to shape adoption, the high demands of AI in terms of time, cost, and pedagogical adjustment suggest that this latest technological wave requires more comprehensive and tailored support than its predecessors. Addressing these factors, along with mitigating ethical concerns, reducing anxiety, and recognizing the constructive role of resistance, will be essential for overcoming barriers and ensuring that AI can effectively enhance educational practices. This approach aligns with the ideas of Piaget (1936) and Bransford et al. (2000), who advocated for thoughtful integration of technology to foster meaningful learning experiences while also taking into account the reasoning for friction in AI adoption and addressing it.
References
Agarwal, R., & Prasad, J. (1998). A conceptual and operational definition of personal innovativeness in the domain of information technology. Information Systems Research, 9(2), 204-215.
Al Darayseh, A. (2023). Acceptance of artificial intelligence in teaching science: Science teachers’ perspective. Computers and Education: Artificial Intelligence, 4, 100132.
Alhumaid, K., Ali, S., Waheed, A., Zahid, E., & Habes, M. (2021). COVID-19 & elearning: Perceptions & attitudes of teachers towards e-learning acceptance in the developing countries. Multicultural Education, 7(2), 100-115.
Alhumaid, K., Naqbi, S., Elsori, D., & Mansoori, M. (2023). The adoption of artificial intelligence applications in education. International Journal of Data and Network Science, 7(1), 457-466.
Aldunate, R., & Nussbaum, M. (2013). Teacher adoption of technology. Computers in Human Behavior, 29(3), 519-524.
Ayanwale, M. A., Sanusi, I. T., Adelana, O. P., Aruleba, K. D., & Oyelere, S. S. (2022). Teachers’ readiness and intention to teach artificial intelligence in schools. Computers and Education: Artificial Intelligence, 3, 100099.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school (Expanded ed.). National Academy Press.
Buabeng-Andoh, C. (2012). Factors influencing teachers’ adoption and integration of information and communication technology into teaching: A review of the literature. International Journal of Education and Development using Information and Communication Technology, 8(1), 136-155.
Calisir, F., Altin Gumussoy, C., Bayraktaroglu, A. E., & Karaali, D. (2014). Predicting the intention to use a web‐based learning system: Perceived content quality, anxiety, perceived system quality, image, and the technology acceptance model. Human Factors and Ergonomics in Manufacturing & Service Industries, 24(5), 515-531.
Celik, V., & Yesilyurt, E. (2013). Attitudes to technology, perceived computer self-efficacy and computer anxiety as predictors of computer supported education. Computers & Education, 60(1), 148-158.
Chatterjee, S., & Bhattacharjee, K. K. (2020). Adoption of artificial intelligence in higher education: A quantitative analysis using structural equation modelling. Education and Information Technologies, 25, 3443-3463.
Chelli, A., Collins, T., Darwish, M., Fan, Y., & Kraus, M. (2024). The limitations of AI in literature reviews: A comparative study. Journal of Digital Scholarship, 11(2), 45-62.
Chelli, M., Descamps, J., Lavoué, V., Trojani, C., Azar, M., Deckert, M., … & Ruetsch-Chelli, C. (2024). Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis. Journal of Medical Internet Research, 26, e53164.
Cheloff, S. (2023). AI and systematic reviews: Challenges and considerations. New York, NY: Research Press.
Cheloff, A. Z., Pochapin, M., & Popov, V. (2023). S1726 Publicly available generative artificial intelligence programs are currently unsuitable for performing meta-analyses. Official Journal of the American College of Gastroenterology | ACG, 118(10S), S1287.
Chen, I. J., Yang, K. F., Tang, F. I., Huang, C. H., & Yu, S. (2008). Applying the technology acceptance model to explore public health nurses’ intentions towards web-based learning: A cross-sectional questionnaire survey. International journal of nursing studies, 45(6), 869-878.
Cheung, R., & Vogel, D. (2013). Predicting user acceptance of collaborative technologies: An extension of the technology acceptance model for e-learning. Computers & education, 63, 160-175.
Chang, C. T., Hajiyev, J., & Su, C. R. (2017). Examining the students’ behavioral intention to use e-learning in Azerbaijan? The general extended technology acceptance model for e-learning approach. Computers & Education, 111, 128-143.
Cheng, Y. M. (2019). How does task-technology fit influence cloud-based e-learning continuance and impact? Education + Training, 61(4), 480-499.
Chiu, T. K. (2021). Applying the self-determination theory (SDT) to explain student engagement in online learning during the COVID-19 pandemic. Journal of Research on Technology in Education, 54(1), 14-30.
Chocarro, R., Cortiñas, M., & Marcos-Matás, G. (2021). Teachers’ attitudes towards chatbots in education: A technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educational Studies, 49, 295-313.
Choi, S.Y., Jang, Y., & Kim, H. (2022). Influence of Pedagogical Beliefs and Perceived Trust on Teachers’ Acceptance of Educational Artificial Intelligence Tools. International Journal of Human–Computer Interaction, 39, 910 – 922.
Chuang, H. H., Shih, C. L., & Cheng, M. M. (2020). Teachers’ perceptions of culturally responsive teaching in technology-supported learning environments. British Journal of Educational Technology, 51(6), 2442-2460.
Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. Teachers College Press.
Cukurova, M., Miao, X., & Brooker, R. (2023, June). Adoption of artificial intelligence in schools: unveiling factors influencing teachers’ engagement. In International conference on artificial intelligence in education (pp. 151-163). Cham: Springer Nature Switzerland.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.
Ertmer, P. A. (1999). Addressing first- and second-order barriers to change: Strategies for technology integration. Educational Technology Research and Development, 47(4), 47-61.
Fathema, N., Shannon, D., & Ross, M. (2015). Expanding the Technology Acceptance Model (TAM) to examine faculty use of Learning Management Systems (LMSs) in higher education institutions. Journal of Online Learning & Teaching, 11(2), 210-232.
Gough, D., Oliver, S., & Thomas, J. (2017). An introduction to systematic reviews. London, UK: SAGE Publications.
Granić, A. (2022). Educational technology adoption: A systematic review. Education and Information Technologies, 27(7), 9725-9744. https://doi.org/10.1007/s10639-022-10951-7
Guillén-Gámez, F. D., & Mayorga-Fernández, M. J. (2020). Identification of variables that predict teachers’ attitudes toward ICT in higher education for teaching and research: A study with regression. Sustainability, 12(4), 1312.
Hancock, R., & Beebe, S. (2023). The impact of large language models on data analytics: A study of efficiency and accuracy. Journal of Data Science and AI, 6(1), 29-47.
Hanif, A., Jamal, F. Q., & Imran, M. (2018). Extending the technology acceptance model for use of e-learning systems by digital learners. Ieee Access, 6, 73395-73404.
Holden, H., & Rada, R. (2011). Understanding the influence of perceived usability and technology self-efficacy on teachers’ technology acceptance. Journal of Research on Technology in Education, 43(4), 343-367.
Howard, S. K. (2013). Risk-aversion: Understanding teachers’ resistance to technology integration. Technology, Pedagogy and Education, 22(3), 357-372.
Hsu, H. H., & Chang, Y. Y. (2013). Extended TAM model: Impacts of convenience on acceptance and use of Moodle. US-China Education Review, 3(4), 211-218.
Hsu, L. (2020). Factors affecting adoption of digital teaching in elementary school English: A mixed methods study. Computer Assisted Language Learning, 1-23.
Hullman, J. (2024). Practical applications of LLMs in data categorization and analysis. Journal of Computational Methods, 8(1), 12-25.
Hullman, J. (2024, June 24). Forking paths in LLMs for data analysis. Statistical Modeling, Causal Inference, and Social Science. https://statmodeling.stat.columbia.edu/2024/06/24/forking-paths-in-llms-for-data-analysis/
Joo, Y. J., Park, S., & Lim, E. (2018). Factors influencing preservice teachers’ intention to use technology: TPACK, teacher self-efficacy, and technology acceptance model. Educational Technology & Society, 21(3), 48-59.
Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A flaw in human judgment. New York: Little, Brown Spark.
Khraisha, Q., Put, S., Kappenberg, J., Warraitch, A., & Hadfield, K. (2024). Can large language models replace humans in systematic reviews? Evaluating GPT-4’s efficacy in screening and extracting data from peer-reviewed and grey literature in multiple languages. Research Synthesis Methods.
König, J., Jäger-Biela, D. J., & Glutsch, N. (2020). Adapting to online teaching during COVID-19 school closure: Teacher education and teacher competence effects among early career teachers in Germany. European Journal of Teacher Education, 43(4), 608-622.
Lawrence, J. E., & Tar, U. A. (2018). Factors that influence teachers’ adoption and integration of ICT in teaching/learning process. Educational Media International, 55(1), 79-105.
Leem, J., & Sung, E. (2019). Teachers’ beliefs and technology acceptance concerning smart mobile devices for SMART education in South Korea. British Journal of Educational Technology, 50(2), 601-613.
Lewis, C., Martinez, F., Olson, H., & Chen, L. (2023). Ensuring accuracy in AI-assisted systematic reviews: A mixed-methods approach. International Journal of AI in Research, 2(4), 205-223.
Liu, H., Wang, L., & Koehler, M. J. (2019). Exploring the intention-behavior gap in the technology acceptance model: A mixed-methods study in the context of foreign-language teaching in China. British Journal of Educational Technology, 50(5), 2536-2556.
Mac Callum, K., Jeffrey, L., & Kinshuk. (2014). Factors impacting teachers’ adoption of mobile learning. Journal of Information Technology Education: Research, 13, 141-162.
Mailizar, M., Burg, D., & Maulina, S. (2021). Examining university teachers’ acceptance of learning management system (LMS) in Indonesia: A rasch analysis approach. Education and Information Technologies, 26(4), 4089-4108.
McGehee, N. (2023). Balancing the Risks and Rewards of AI Integration for Michigan Teachers. Michigan Virtual. https://michiganvirtual.org/research/publications/balancing-the-risks-and-rewards-of-ai-integration-for-michigan-teachers/
McGehee, N. (2024, June 21). AI in education: Student usage in online learning. Michigan Virtual Learning Research Institute. https://michiganvirtual.org/research/publications/ai-in-education-student-usage-in-online-learning/
Michigan Virtual. (2024). AI in Education: Exploring Trust, Challenges, and the Push for Implementation. https://michiganvirtual.org/research/publications/ai-in-education-exploring-trust-challenges-and-the-push-for-implementation/
Mohammadi, H. (2015). Investigating users’ perspectives on e-learning: An integration of TAM and IS success model. Computers in Human Behavior, 45, 359-374.
Moran, M., Hawkes, M., & Gayar, O. E. (2010). Tablet personal computer integration in higher education: Applying the unified theory of acceptance and use technology model to understand supporting factors. Journal of Educational Computing Research, 42(1), 79-101.
Muhaimin, M., Habibi, A., Mukminin, A., Saudagar, F., Pratama, R., Wahyuni, S., … & Indrayana, B. (2019). A sequential explanatory investigation of TPACK: Indonesian science teachers’ survey and perspective. Journal of Technology and Science Education, 9(3), 269-281.
Mun, Y. Y., & Hwang, Y. (2003). Predicting the use of web-based information systems: Self-efficacy, enjoyment, learning goal orientation, and the technology acceptance model. International Journal of Human-Computer Studies, 59(4), 431-449.
Nagy, J. T. (2018). Evaluation of online video usage and learning satisfaction: An extension of the technology acceptance model. International Review of Research in Open and Distributed Learning, 19(1).
Nam, C. S., Bahn, S., & Lee, R. (2013). Acceptance of assistive technology by special education teachers: A structural equation model approach. International Journal of Human-Computer Interaction, 29(5), 365-377.
Nazaretsky, T., Cukurova, M., & Alexandron, G. (2021). An instrument for measuring teachers’ trust in AI-based educational technology. LAK22: 12th International Learning Analytics and Knowledge Conference.
Nistor, N., Göğüş, A., & Lerche, T. (2013). Educational technology acceptance across national and professional cultures: a European study. Educational Technology Research and Development, 61(4), 733-749.
Nja, C. O., Idiege, K. J., Uwe, U. E., Meremikwu, A. N., Ekon, E. E., Erim, C. M., … & Cornelius-Ukpepi, B. U. (2023). Adoption of artificial intelligence in science teaching: From the vantage point of the African science teachers. Smart Learning Environments, 10(1), 42.
O’Bannon, B. W., & Thomas, K. (2014). Teacher perceptions of using mobile phones in the classroom: Age matters! Computers & Education, 74, 15-25.
Padilla-Meléndez, A., del Aguila-Obra, A. R., & Garrido-Moreno, A. (2013). Perceived playfulness, gender differences and technology acceptance model in a blended learning scenario. Computers & Education, 63, 306-317.
Park, S. Y., Nam, M. W., & Cha, S. B. (2012). University students’ behavioral intention to use mobile learning: Evaluating the technology acceptance model. British Journal of Educational Technology, 43(4), 592-605.
Piaget, J. (1936). Origins of intelligence in the child. Routledge & Kegan Paul.
Qasem, A. A. A., & Viswanathappa, G. (2020). The integration of multiliteracies in digital learning environments: A study of teacher readiness. International Journal of Technology in Education and Science, 4(4), 265-279.
Rico-Bautista, D., Medina-Cardenas, Y., Coronel-Rojas, L. A., Cuesta-Quintero, F., Maestre-Gongora, G., & Guerrero, C. D. (2021). Smart university: key factors for an artificial intelligence adoption model. In Advances and Applications in Computer Science, Electronics and Industrial Engineering: Proceedings of CSEI 2020 (pp. 153-166). Singapore: Springer Singapore.
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press.
Salloum, S. A., Alhamad, A. Q. M., Al-Emran, M., Monem, A. A., & Shaalan, K. (2019). Exploring students’ acceptance of e-learning through the development of a comprehensive technology acceptance model. IEEE Access, 7, 128445-128462.
Sánchez-Cruzado, C., Santiago Campión, R., & Sánchez-Compaña, M. T. (2021). Teacher digital literacy: The indisputable challenge after COVID-19. Sustainability, 13(4), 1858.
Sánchez-Prieto, J. C., Olmos-Migueláñez, S., & García-Peñalvo, F. J. (2017). MLearning and pre-service teachers: An assessment of the behavioral intention using an expanded TAM model. Computers in Human Behavior, 72, 644-654.
Sánchez-Prieto, J. C., Cruz-Benito, J., Therón, R., & García-Peñalvo, F. J. (2019). How to measure teachers’ acceptance of AI-driven assessment in eLearning: A TAM-based proposal. Proceedings of the Seventh International Conference on Technological Ecosystems for Enhancing Multiculturality.
Sánchez-Prieto, J. C., Huang, F., Olmos-Migueláñez, S., García-Peñalvo, F. J., & Teo, T. (2020). Exploring the unknown: The effect of resistance to change and attachment on mobile adoption among secondary pre-service teachers. British Journal of Educational Technology, 51(3), 626-643.
Scherer, R., Siddiq, F., & Tondeur, J. (2019). The technology acceptance model (TAM): A meta-analytic structural equation modeling approach to explaining teachers’ adoption of digital technology in education. Computers & Education, 128, 13-35.
Scherer, R., & Teo, T. (2019). Unpacking teachers’ intentions to integrate technology: A meta-analysis. Educational Research Review, 27, 90-109.
Scherer, R., Howard, S. K., Tondeur, J., & Siddiq, F. (2021). Profiling teachers’ readiness for online teaching and learning in higher education: Who’s ready? Computers in Human Behavior, 118, 106675.
Selwyn, N. (2011). Education and technology: Key issues and debates. Continuum International Publishing Group.
Shank, M., & Wilson, J. (2023). AI’s role in data-intensive research: From sorting to summarizing. Washington, DC: Academic Research Institute.
Song, L. (2024). Efficiency and reliability of AI in research synthesis. Journal of Advanced Research Methods, 9(2), 34-50.
Song, Y., & Kong, S. C. (2017). Investigating students’ acceptance of a statistics learning platform using technology acceptance model. Journal of Educational Computing Research, 55(6), 865-897.
Sutton, R. I., & Rao, H. (2024). The friction project: How smart leaders make the right things easier and the wrong things harder. St. Martin’s Press.
Tarhini, A., Hone, K., & Liu, X. (2014). Measuring the moderating effect of gender and age on e-learning acceptance in England: A structural equation modeling approach for an extended technology acceptance model. Journal of Educational Computing Research, 51(2), 163-184.
Teo, T. (2009). Modeling technology acceptance in education: A study of pre-service teachers. Computers & Education, 52(2), 302-312.
Teo, T. (2010). A path analysis of pre-service teachers’ attitudes to computer use: applying and extending the technology acceptance model in an educational context. Interactive Learning Environments, 18(1), 65-79.
Teo, T. (2011). Factors influencing teachers’ intention to use technology: Model development and test. Computers & Education, 57(4), 2432-2440.
Teo, T., & Noyes, J. (2008). Development and validation of a computer attitude measure for young students (CAMYS). Computers in Human Behavior, 24(6), 2659-2667.
Teo, T., & Noyes, J. (2011). An assessment of the influence of perceived enjoyment and attitude on the intention to use technology among pre-service teachers: A structural equation modeling approach. Computers & Education, 57(2), 1645-1653.
Teo, T., & van Schaik, P. (2012). Understanding the intention to use technology by preservice teachers: An empirical test of competing theoretical models. International Journal of Human-Computer Interaction, 28(3), 178-188.
Teo, T., Huang, F., & Hoi, C. K. W. (2018). Explicating the influences that explain intention to use technology among English teachers in China. Interactive Learning Environments, 26(4), 460-475.
Teo, T., Huang, F., & Hoi, C. K. W. (2019). Towards a new model of teachers’ integration of technology: A conceptual framework. British Journal of Educational Technology, 50(5), 2476-2493.
Teo, T., Luan, W. S., & Sing, C. C. (2008). A cross-cultural examination of the intention to use technology between Singaporean and Malaysian pre-service teachers: An application of the Technology Acceptance Model (TAM). Educational Technology & Society, 11(4), 265-280.
Teo, T., Zhou, M., & Noyes, J. (2016). Teachers and technology: Development of an extended theory of planned behavior. Educational Technology Research and Development, 64(6), 1033-1052.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
Vongkulluksn, V. W., Xie, K., & Bowman, M. A. (2018). The role of value on teachers’ internalization of external barriers and externalization of personal beliefs for classroom technology integration. Computers & Education, 118, 70-81.
Walton Family Foundation. (2024). AI Chatbots in Schools. https://www.waltonfamilyfoundation.org/learning/the-value-of-ai-in-todays-classrooms
Wang, Y. S., Wu, M. C., & Wang, H. Y. (2009). Investigating the determinants and age and gender differences in the acceptance of mobile learning. British Journal of Educational Technology, 40(1), 92-118.
Wang, Y., Liu, C., & Tu, Y. F. (2021). Factors affecting the adoption of AI-based applications in higher education. Educational Technology & Society, 24(3), 116-129.
Wong, K. T., Teo, T., & Russo, S. (2012). Influence of gender and computer teaching efficacy on computer acceptance among Malaysian student teachers: An extended technology acceptance model. Australasian Journal of Educational Technology, 28(7), 1190-1207.
Woodruff, K., Hutson, J., & Arnone, K. (2023). Perceptions and barriers to adopting artificial intelligence in K-12 education: A survey of educators in fifty states.
Yuen, A. H., & Ma, W. W. (2008). Exploring teacher acceptance of e-learning technology. Asia-Pacific Journal of Teacher Education, 36(3), 229-243.
Zhang, C., Schießl, J., Plößl, L., Hofmann, F., & Gläser-Zikuda, M. (2023). Acceptance of artificial intelligence among pre-service teachers: a multigroup analysis. International Journal of Educational Technology in Higher Education, 20(1), 49.
Zhang, X., Tlili, A., Shubeck, K., Hu, X., Huang, R., & Zhu, L. (2021). Teachers’ adoption of an open and interactive e-book for teaching K-12 students Artificial Intelligence: a mixed methods inquiry. Smart Learning Environments, 8, 1-20.
Zhao, Y., & Frank, K. A. (2003). Factors affecting technology uses in schools: An ecological perspective. American Educational Research Journal, 40(4), 807-840.