Abstract
E-learning readiness (ELR) is critical for implementing digital education strategies, particularly in developing countries where online learning faces unique challenges. This study aims to provide a concise and actionable framework for assessing and predicting ELR in Algerian universities by combining the ADKAR model with advanced machine learning algorithms and Shapley Additive Explanations (SHAP). Data were collected through semi-structured interviews and questionnaires from 530 students and professors across Algerian universities, focusing on the five ADKAR factors: Awareness, Desire, Knowledge, Ability, and Reinforcement. Eight machine learning models were employed to analyze the data, selected for their ability to manage complex, high-dimensional datasets and capture non-linear relationships: k-nearest neighbors, support vector machines, partial least squares, random forests, gradient boosting, decision trees, CatBoost, and XGBoost. Results show that the ensemble methods CatBoost and XGBoost achieved the best predictive performance (\(R^{2}\) = 0.811), reflecting their ability to explain the variance in ELR. SHAP analysis identified “Ability” as the most influential factor, followed by “Desire” and “Reinforcement.” This combination of SHAP and ADKAR provides novel insights by highlighting critical areas for intervention, such as enhancing digital skills and promoting knowledge acquisition. The use of SHAP in this study enhances model interpretability, enabling stakeholders to identify the most impactful factors and implement targeted, effective interventions to address key barriers to e-learning readiness.
1 Introduction
Higher education worldwide, including in Algeria, faced significant disruption during the COVID-19 pandemic, necessitating a rapid shift from in-person to e-learning. This transition highlighted the unpreparedness of Algerian universities, which struggled to deliver quality online education due to limited infrastructure, lack of digital skills, and inadequate support systems. Similar challenges were observed across Africa, emphasizing the critical need for a comprehensive assessment of e-learning readiness (ELR) before implementing digital education strategies (Nwagwu, 2020). Even as universities return to traditional learning models, questions about their preparedness for future disruptions persist. In Algeria, the lack of systematic readiness for e-learning has created barriers to integrating digital learning effectively, particularly in fostering interaction, engagement, and accessibility (Abdelaliem and Elzohairy, 2023).
Integrating e-learning into Algerian higher education reveals specific individual and institutional readiness challenges. While technology and infrastructure are essential, success ultimately depends on individuals’ preparedness to embrace this model. For instance, fostering interaction and reducing isolation among students remain significant hurdles (Zarei and Mohammadi, 2022). In Algerian universities, many students and educators lack the digital skills, training, and support needed for effective participation in e-learning (Sarker et al., 2019). Additionally, technical disruptions during online sessions exacerbate these issues, further diminishing the quality of education (Zine et al., 2023). Students may struggle to meet their educational goals without adequate preparation, as evidenced in studies from other contexts, such as Turkey, where ELR has been strongly linked to academic success and satisfaction (Yavuzalp and Bahcivan, 2021). Addressing these gaps in readiness at both individual and institutional levels is crucial for building a sustainable e-learning ecosystem in Algerian universities.
Change management and integration remain critical challenges for educational institutions, particularly during transitions to e-learning. Effective change management requires structured models guiding institutions through each process phase, from raising awareness and fostering motivation to providing the necessary knowledge and skills to implement change and sustain outcomes (Haffar et al., 2023). While models like ADKAR provide a robust framework for managing change, existing studies primarily focus on organizational-level dynamics and general frameworks without fully exploring the connection between individual readiness factors and institutional strategies for e-learning adoption (Ngang Tang, 2019; Mohammadi et al., 2021). This gap is particularly evident in Algerian universities, where unpreparedness for digital transformation has hindered the effective integration of e-learning, highlighting the need for a more targeted approach that bridges institutional and individual readiness.
The ADKAR model (Awareness, Desire, Knowledge, Ability, and Reinforcement), developed by Jeff Hiatt in 2006, offers a structured method for assessing readiness to adopt e-learning in higher education (Faishol and Subriadi, 2022; Ruele, 2019). The model’s sequential stages, from raising awareness to reinforcing change, are essential for identifying barriers and tailoring interventions to enhance acceptance and sustainability (Faishol and Subriadi, 2022; Adelman-Mullally et al., 2023; Kaminski, 2022). While previous research has demonstrated ADKAR’s value in managing transitions, such as the integration of e-learning during crises like the COVID-19 pandemic (Chipamaunga et al., 2023), its application in academic contexts remains underexplored, particularly in connection with predictive tools like machine learning. This study aims to address this gap by applying the ADKAR model within Algerian universities, where e-learning adoption faces unique challenges. By integrating ADKAR with advanced machine learning techniques, we provide a dual perspective: assessing readiness and predicting ELR while identifying the key factors influencing adoption. This approach connects the theoretical underpinnings of change management with practical solutions and offers actionable insights for addressing specific challenges in Algerian higher education. The main contributions of this study are summarized below.
- Firstly, the research combines the desirable features of the ADKAR model with advanced machine learning techniques, developing a comprehensive approach to predict ELR and identify key influencing factors. Machine learning models, known for their ability to handle complex, non-linear relationships in data, offer significant advantages over traditional methods by providing more accurate predictions and deeper insights into the variables affecting ELR.
- Secondly, a robust predictive framework is proposed, incorporating multiple machine learning models and SHAP (SHapley Additive exPlanations) analysis to determine the most influential ADKAR factors. SHAP enhances the interpretability of complex models, allowing stakeholders to understand how each factor contributes to ELR. This approach provides actionable insights, guiding targeted interventions and strategic planning to improve e-learning adoption in higher education.
- Thirdly, the study evaluates the performance of various machine learning models, including k-Nearest Neighbors (kNN), Support Vector Machine (SVM), Random Forest (RF), Gradient Boosting (GB), Decision Trees (DT), CatBoost, XGBoost, and Partial Least Squares (PLS). These models were selected for their strengths in predictive analytics: kNN offers simplicity and effectiveness in regression tasks, SVM is adept at handling high-dimensional data with non-linear relationships, and RF provides robustness against overfitting with its ensemble approach. GB and its advanced versions, CatBoost and XGBoost, are known for their superior predictive accuracy, scalability, and ability to model complex non-linear interactions. PLS is employed for its capability to handle multicollinearity and reduce dimensionality in predictive modeling. This evaluation aims to identify the most effective model for predicting ELR with high accuracy, providing a data-driven basis for targeted interventions in higher education.
- Finally, the study ensures the reliability and validity of the measurement tool through expert reviews, pilot testing, and statistical validation techniques. This meticulous approach strengthens the credibility of the findings and sets a standard for future studies on ELR assessment. By pinpointing key factors that influence ELR, this study provides actionable insights for university administrators and educators. The findings can guide targeted interventions to enhance e-learning strategies, ensuring a smoother transition to digital education platforms in Algerian universities and beyond.
These contributions collectively advance the understanding of ELR, offering a robust framework that blends theoretical insights with practical applications in the evolving landscape of digital education. The paper is structured as follows: Section 2 reviews recent studies on ELR assessments in academic settings. Section 3 presents the proposed framework and SHAP analysis for feature importance. Section 4 discusses the case study results from Tlemcen University in Algeria, and Section 5 concludes the study.
2 Related works
In this section, we will explore previous studies on ELR, focusing on the factors influencing its successful adoption in higher education, and examine the application of the ADKAR model in managing educational transformations, particularly in the context of e-learning integration.
2.1 E-learning readiness
ELR is essential for the success of digital education, involving technological, individual, and institutional factors, as explored in studies across diverse educational contexts. Mirabolghasemi et al. (2019) demonstrated the significance of university support and effective computer use in enhancing students’ readiness for e-learning in Iran, highlighting the role of institutions in creating a conducive environment. Similarly, Al-Araibi et al.’s study in Malaysia emphasized the importance of technological infrastructure, including software, hardware, and faculty skills, while noting the limited impact of cloud computing and the need to improve academics’ technological capabilities (Al-araibi et al., 2019). In Russia, Vershitskaya et al. identified obstacles to effective e-learning and stressed the need for solid institutional support to overcome these challenges (Vershitskaya et al., 2020). Giray’s (2021) study in Turkey emphasized the role of video-based educational content and direct teacher support in enhancing students’ e-learning experiences and improving engagement and active participation. In Ghana, Yalley’s study highlighted the importance of resource commitment and student support, noting that cultural context significantly shapes the e-learning experience (Yalley, 2022). Similarly, Wagiran et al. in Indonesia underscored the need for technological equipment, technical competence, and user satisfaction, demonstrating that intrinsic and technological motivation are pivotal for effective e-learning, particularly after the COVID-19 pandemic (Wagiran et al., 2022). In China, He et al. (2023) combined social support and technology acceptance theories, showing that educational and emotional support significantly increase students’ acceptance of e-learning, stressing the importance of a supportive and inclusive environment.
Finally, Mulu and Nyoni’s (2023) study on nursing education in low- and middle-income countries found that continuous improvement and adherence to rigorous standards are essential to align e-learning with diverse social and economic challenges, thereby enhancing educational quality and access (Mulu and Nyoni, 2023).
While these studies emphasize institutional and technological factors, individual readiness for change remains crucial to the effectiveness of e-learning. Adapting to technological changes and adopting new learning strategies largely depends on an individual’s preparedness, which includes flexibility in learning, openness to using new tools and technologies, and resilience in overcoming challenges within the e-learning environment. Strengthening these personal traits can significantly enhance the e-learning experience by fostering deeper engagement with digital platforms and encouraging innovative learning approaches. Therefore, focusing on individual readiness for change is essential for advancing e-learning in the future.
2.2 ADKAR Model
The ADKAR model is crucial for guiding organizations through sustainable change and transformational initiatives. In today’s rapidly evolving environment, maintaining organizational resilience requires comprehensive transformations across missions, visions, strategies, structures, systems, processes, and behaviors, all key components of effective change management (Helmold, 2023). Figure 1 displays the key elements of the ADKAR model.
Recently, several studies have explored the ADKAR model. For instance, in Antoniades et al. (2022), Antoniades et al. emphasize the significance of knowledge, awareness, and reinforcement in political marketing, particularly within democratic governance, as these elements positively influence the dynamics of change. Similarly, in Vázquez (2022), Vázquez highlights how the ADKAR model can enhance change efforts, specifically in improving access to web portals in higher education institutions, thereby facilitating the successful adoption of new technologies and practices. In Mudjisusatyo et al. (2024), Mudjisusatyo et al. explored the role of change management competency (CMC) in managing independent campus programs at private universities in North Sumatra, Indonesia. By utilizing the ADKAR model and surveying 65 program heads, they underscored its pivotal role in defining the CMC profile essential for effective organizational change. The study concluded with a recommendation to develop a sustainable CMC framework that integrates the ADKAR model with a systems approach to better equip program managers with crucial change management skills. In Al-Alawi et al. (2019), Al-Alawi et al. explored change management challenges in educational institutions across Gulf Cooperation Council countries, using the ADKAR model to identify failure points and enhance change processes. In Glegg et al. (2019), Glegg et al. applied the ADKAR model to develop a tool for managing child healthcare programs and found the model to be highly effective for planning and evaluation. Siriporananon and Visuthismajarn (2018) examined teachers’ readiness for change in Bangkok with the ADKAR model, observing reduced resistance and improved strategic management in urban schools. Finally, Arthur-Nyarko et al. (2020) reviewed students’ readiness for digital tools in distance education in Ghana, revealing that while students were willing to use these tools, they encountered challenges accessing digital learning materials.
Table 1 presents a summary of recent studies that apply the ADKAR model across various domains.
2.3 Machine learning models
In the context of predicting ELR, several machine learning models are explored for their ability to capture complex patterns in data and make accurate predictions. These models are selected for their flexibility, robustness, and ability to handle diverse data types, which are critical when predicting ELR, where relationships between predictors (such as individual readiness factors, organizational environment, and training programs) can be intricate and non-linear.
1. K-Nearest Neighbors (KNN): KNN is chosen for its simplicity and adaptability in predicting ELR. As a non-parametric algorithm, it does not assume any specific data distribution and is effective for small to medium-sized datasets (Kramer, 2011). Its robustness to outliers and straightforward implementation (Harrou et al., 2020, 2019; Hu et al., 2014) make it an ideal candidate for predicting ELR, especially when relationships between predictors are less clear and require localized predictions based on similar instances.
2. Decision Tree Regression (DTR): DTR is valued for its interpretability and capacity to model non-linear relationships (Loh, 2011). It recursively splits data based on input features to create a tree-like structure, which can help in identifying important factors influencing ELR. Its suitability lies in its ability to handle both categorical and continuous variables and produce transparent models. However, care must be taken to prevent overfitting, which can be mitigated by pruning or regularization techniques.
3. Partial Least Squares Regression (PLSR): PLSR combines principal component analysis and linear regression, making it effective for handling high-dimensional, collinear data (Geladi and Kowalski, 1986). This model is particularly useful in ELR prediction when there is a large number of predictors, and reducing dimensionality is necessary to preserve relevant information. PLSR handles multicollinearity well but may require careful interpretation of latent variables, making it less transparent compared to simpler models (Harrou et al., 2020).
4. Support Vector Regression (SVR): SVR excels in capturing non-linear relationships between features and target variables, which is crucial in ELR prediction, where interactions can be complex (Smola and Schölkopf, 2004). SVR uses kernel functions to map data to higher-dimensional spaces for better separation, making it effective in scenarios where traditional regression models may fail. Its robustness to outliers is another advantage when dealing with noisy data.
5. Random Forest: As an ensemble method, Random Forest builds multiple decision trees to improve model robustness and accuracy (Breiman, 2001). It is particularly effective in handling large, high-dimensional datasets and can capture complex interactions between features. For ELR prediction, Random Forest can identify the most influential factors, offering insights into feature importance. While it provides high performance, it can be computationally intensive and less interpretable than individual decision trees.
6. Gradient Boosting and CatBoost: These models build sequential decision trees where each tree corrects the errors of the previous one, improving predictive accuracy over time (Friedman, 2001; Prokhorenkova et al., 2018). Gradient Boosting and its variant, CatBoost, are selected for their ability to model complex, non-linear relationships. CatBoost, in particular, efficiently handles categorical data and reduces the need for extensive preprocessing. Both models are highly effective but require careful tuning and can be computationally expensive.
7. XGBoost: XGBoost is an advanced gradient boosting algorithm known for its speed and accuracy (Chen and Guestrin, 2016). It constructs an ensemble of decision trees and uses regularization to prevent overfitting, making it ideal for handling large datasets with many features. Its flexibility with hyperparameters and loss functions, along with its efficiency, makes it particularly suitable for ELR prediction, where the relationships among features may vary widely. In education, XGBoost has been used to predict student performance with high accuracy through real-time data analysis using the Modified XGBoost (MXGB) model (Nadar, 2023).
Table 2 provides an overview of the fundamental concepts, advantages, and drawbacks of the studied machine learning models.
In summary, these AI models are chosen for their ability to handle diverse and complex data structures, capture non-linear relationships, and improve prediction accuracy. Each model offers unique strengths, such as interpretability, robustness to outliers, and the ability to handle high-dimensional data, making them valuable tools in predicting ELR outcomes and providing insights into the factors influencing ELR. Current research increasingly employs machine learning to advance educational readiness assessments (Tiwari et al., 2021; Demircioglu Diren and Horzum, 2022; Lu et al., 2020). However, this study distinctively integrates explainable machine learning and the ADKAR model, offering predictive accuracy and actionable insights. SHAP analysis enhances interpretability by identifying critical ADKAR dimensions influencing ELR and guiding targeted interventions. This framework bridges gaps in prior research, providing a comprehensive, data-driven approach to improving ELR and shaping policy in higher education.
3 The proposed framework for ELR prediction and assessment
This section introduces the proposed framework for predicting and assessing ELR and describes the application of SHAP for feature importance analysis.
3.1 The framework for ELR prediction and assessment
The proposed approach for predicting ELR adopts a systematic framework to ensure comprehensive data collection, validation, and analysis. This framework combines the strengths of the ADKAR model, advanced machine learning algorithms, and SHAP for feature importance analysis, offering an in-depth assessment of ELR in Algerian universities. Beyond accurate ELR prediction, this approach provides actionable insights into the critical factors influencing learning readiness. Figure 2 illustrates the key steps of the framework, encompassing data acquisition, preprocessing, model training, evaluation, and interpretability.
The framework begins with the application of the ADKAR model, a widely recognized approach that identifies the key factors influencing ELR. The model focuses on five core components: Awareness, Desire, Knowledge, Ability, and Reinforcement. These factors are systematically measured through a structured questionnaire, ensuring accuracy and reliability through rigorous validation techniques. This foundational step ensures that the data used for subsequent modeling accurately reflects the core elements influencing ELR.
Machine learning models are then employed to predict ELR based on the factors derived from the ADKAR model. The selection of these models is driven by their ability to handle the complexity of the data, including the potential non-linearity and interactions among the factors. A variety of models, including ensemble methods such as Random Forest and Gradient Boosting, are used to optimize predictive performance. These models are well-suited for capturing complex, non-linear relationships in the data. Random Forest, for example, benefits from its robustness to overfitting and its ability to model intricate interactions between features. Gradient Boosting, on the other hand, iteratively refines predictions to correct errors from previous trees, making it effective for high accuracy and fine-tuned predictions. Additionally, hyperparameter tuning and cross-validation techniques are applied to ensure the models are well-calibrated, preventing underfitting or overfitting.
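This training protocol can be sketched as follows. The data, feature weights, and hyperparameter grid below are synthetic and purely illustrative, not the study's actual dataset or settings; the sketch only shows the 80/20 split, five-fold cross-validation, and grid-based tuning described above:

```python
# Hedged sketch of the training protocol: 80/20 split, five-fold CV,
# and a small grid search. Data and grids are synthetic/illustrative.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(530, 5))   # five factor scores on a 1-5 scale
# Synthetic target: a weighted combination of the factors plus noise.
y = X @ np.array([0.10, 0.25, 0.15, 0.30, 0.20]) + rng.normal(0, 0.2, 530)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [50, 150], "max_depth": [3, None]},
    cv=5,                # five-fold cross-validation on the training set
    scoring="r2",
)
search.fit(X_tr, y_tr)
r2_test = search.score(X_te, y_te)   # held-out R^2 of the best model
```

The same loop applies unchanged to the other estimators; only `param_grid` differs per model.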
One of the critical challenges in ELR prediction is understanding which factors most influence the learning readiness of individuals. SHAP (Shapley Additive Explanations) is employed to address this challenge by providing a clear explanation of the feature importance within the model. SHAP values allow us to quantify how much each ADKAR factor contributes to the prediction of ELR, offering stakeholders a transparent view of which variables should be prioritized. This analysis is essential for enhancing the interpretability of the machine learning models and ensuring that predictions can be aligned with real-world interventions. By integrating the ADKAR model, machine learning algorithms, and SHAP analysis, the framework provides a powerful, data-driven methodology for assessing and improving ELR in Algerian universities. The combination of these components not only predicts ELR but also provides valuable insights into the key factors that need attention for improving learning readiness. This is particularly relevant in higher education, where interventions targeting the right areas can significantly enhance the effectiveness of training programs and student outcomes. Below is a detailed outline of the process.
1. Initial Data Collection:
   - Semi-Structured Interviews: Semi-structured interviews were conducted with 15 randomly chosen participants from the university community. These interviews were designed to gather in-depth insights into the factors influencing ELR and to identify key variables and themes relevant to the study.
2.
Questionnaire Design:
-
Questionnaire Structure: Based on the insights from the interviews, a questionnaire was developed. The questionnaire was organized into two primary sections:
-
Demographic Information: The first section gathered demographic information about the respondents, such as gender, educational level, and academic background.
-
Study Variables: The second section included 33 questions covering the study variables, focusing on factors related to ELR.
-
-
-
3.
Data Validation:
-
The questionnaire underwent rigorous validation procedures, including checking internal validity, item structure, readability, and clarity of questions, with input from seven experts specializing in methodological standards and quality measurement.
-
-
4.
Pilot Study:
-
The questionnaire was pilot-tested with 49 respondents, comprising both students and professors from the University Center of Maghnia, to assess and ensure its reliability and validity.
-
-
5.
Data Collection:
-
The validated questionnaire was distributed to 530 participants, including professors and students from various Algerian universities, in both paper and electronic formats to ensure wide accessibility and accurate responses.
-
-
6.
Model Training and Evaluation:
-
Training Process: Eight machine learning models were evaluated for predicting ELR using ADKAR factors. The data was split into a training set (80%) and a testing set (20%).
-
Cross-Validation: Five-fold cross-validation was used during training to improve generalization and prevent overfitting.
-
Hyperparameter Tuning: Hyperparameters of each model were optimized during training to achieve good predictive accuracy.
-
Evaluation Metrics: After training, model performance was evaluated using the testing data. The performance metrics used included:
-
Mean Squared Error (MSE):
$$\begin{aligned} \text {MSE} = \frac{1}{n} \sum _{i=1}^{n} (y_i - \hat{y}_i)^2, \end{aligned}$$(1) -
Root Mean Squared Error (RMSE):
$$\begin{aligned} \text {RMSE} = \sqrt{\frac{1}{n} \sum _{i=1}^{n} (y_i - \hat{y}_i)^2}, \end{aligned}$$(2) -
Mean Absolute Error (MAE):
$$\begin{aligned} \text {MAE} = \frac{1}{n} \sum _{i=1}^{n} |y_i - \hat{y}_i|, \end{aligned}$$(3) -
Coefficient of Determination (\(R^2\)):
$$\begin{aligned} R^2 = 1 - \frac{\sum _{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum _{i=1}^{n} (y_i - \bar{y})^2}, \end{aligned}$$(4)
-
-
SHAP Analysis: SHAP was employed to determine the most significant ADKAR factors impacting ELR, providing interpretability and insights into the model’s predictions.
-
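The four evaluation metrics of Eqs. (1)-(4) can be computed directly with NumPy; the sketch below applies them to a small illustrative pair of vectors:

```python
# Minimal implementation of the evaluation metrics in Eqs. (1)-(4).
import numpy as np

def regression_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)                                # Eq. (1)
    rmse = np.sqrt(mse)                                    # Eq. (2)
    mae = np.mean(np.abs(err))                             # Eq. (3)
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                             # Eq. (4)
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2}

# Illustrative call on made-up values:
metrics = regression_metrics([3.0, 4.0, 5.0, 2.0], [2.8, 4.1, 4.7, 2.4])
```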
3.2 Feature impact analysis using SHapley Additive exPlanations
This study employs SHAP to quantify the influence of features on ELR predictions, offering insights into feature significance, aiding model interpretation, and enhancing assessment capabilities (Lundberg et al., 2020). By applying SHAP with the XGBoost model, this study evaluates the importance of different factors, including ADKAR elements, in predicting and assessing ELR. SHAP values are based on cooperative game theory, specifically the Shapley value, which allocates the total value among players according to their marginal contributions (Shapley, 1953). In machine learning, features act as players, and the goal is to fairly allocate the model’s prediction to each feature. SHAP values offer a unified measure of feature importance, satisfying properties such as consistency, symmetry, and linearity by considering all possible feature combinations and averaging their marginal contributions (Lundberg et al., 2020; Al Saleem et al., 2024). The computation of the Shapley value for a feature \(i\) is described by Roth (1988); Nohara et al. (2022):
$$\begin{aligned} \phi _i(v) = \sum _{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \left[ v(S \cup \{i\}) - v(S) \right] , \end{aligned}$$
where \(N\) denotes the set of features, \(S\) represents a subset of these features, and \(v(S)\) indicates the model’s prediction using the features in \(S\). This formula evaluates the average marginal contribution of feature \(i\) by analyzing every possible subset of features.
For a given prediction, the SHAP value for feature \(i\) is defined as:
$$\begin{aligned} \phi _i = \sum _{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \left[ f(S \cup \{i\}) - f(S) \right] , \end{aligned}$$
where \(f(S)\) is the model’s prediction with the features in subset \(S\). This computation aggregates the marginal contributions of feature \(i\) across all possible combinations of feature subsets.
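To make the subset-averaging computation concrete, the following toy example evaluates exact Shapley values by brute force for a hypothetical three-feature value function. The feature names and the values of \(v(S)\) are invented for illustration only:

```python
# Brute-force Shapley values for a toy 3-feature value function,
# following the subset-averaging formula above. Purely illustrative.
from itertools import combinations
from math import factorial

N = ["Awareness", "Desire", "Ability"]
# Hypothetical value function v(S): "model output" with feature set S.
v = {
    frozenset(): 0.0,
    frozenset({"Awareness"}): 0.2,
    frozenset({"Desire"}): 0.3,
    frozenset({"Ability"}): 0.4,
    frozenset({"Awareness", "Desire"}): 0.5,
    frozenset({"Awareness", "Ability"}): 0.6,
    frozenset({"Desire", "Ability"}): 0.8,
    frozenset(N): 1.0,
}

def shapley(i):
    """Average marginal contribution of feature i over all subsets S."""
    n = len(N)
    others = [j for j in N if j != i]
    total = 0.0
    for r in range(n):
        for S in combinations(others, r):
            S = frozenset(S)
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (v[S | {i}] - v[S])
    return total

phi = {i: shapley(i) for i in N}
```

A useful sanity check is the efficiency property: the values sum to \(v(N) - v(\emptyset)\), i.e. the full prediction is exactly distributed across the features.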
Understanding SHAP values is crucial for interpreting model predictions. Positive SHAP values reflect a feature’s constructive influence on the model’s output, while negative SHAP values denote a negative impact. SHAP insights improve the interpretability of machine learning models and empower decision-makers, including policymakers and university administrators. By revealing which features most significantly affect ELR, SHAP provides actionable intelligence for designing targeted interventions, allocating resources efficiently, and prioritizing initiatives to enhance student and faculty readiness. For instance, if ‘Awareness’ is identified as a critical factor influencing readiness, universities can develop programs or campaigns specifically to raise awareness. Similarly, understanding the role of ‘Ability’ could guide the implementation of skill-building workshops or faculty training initiatives.
The integration of SHAP with our predictive models enables a nuanced understanding of feature importance and interaction effects. This interpretability is crucial for decision-makers in several ways:
- SHAP values reveal the most impactful factors affecting ELR, such as technological infrastructure, faculty readiness, and student engagement levels. Understanding these determinants allows policymakers to prioritize resource allocation effectively.
- By quantifying the contribution of each feature, SHAP insights facilitate the design of targeted interventions. For instance, if faculty readiness emerges as a significant barrier, professional development programs can be prioritized to enhance digital competencies.
- The transparency provided by SHAP values supports evidence-based policy formulation. Policymakers can develop strategies grounded in empirical data, increasing the likelihood of successful e-learning implementation.
Analyzing SHAP values helps identify which factors most significantly influence ELR, guiding targeted improvements and interventions.
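As a small illustration of how per-sample SHAP values translate into a global factor ranking: for a linear model \(f(x) = w \cdot x + b\) with independent features, SHAP values take the closed form \(\phi_i = w_i (x_i - E[x_i])\), and the usual global importance is the mean absolute SHAP value per feature. The weights and scores below are synthetic, chosen only to illustrate the mechanics:

```python
# Closed-form SHAP values for a linear model and the resulting global
# ranking by mean |phi_i|. Weights and data are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
factors = ["Awareness", "Desire", "Knowledge", "Ability", "Reinforcement"]
w = np.array([0.10, 0.25, 0.10, 0.35, 0.20])   # illustrative weights
X = rng.uniform(1, 5, size=(530, 5))           # synthetic Likert-style scores

phi = w * (X - X.mean(axis=0))                 # per-sample SHAP values
mean_abs = np.abs(phi).mean(axis=0)            # global feature importance
ranking = [factors[i] for i in np.argsort(mean_abs)[::-1]]
```

By the efficiency property, each row of `phi` sums to \(f(x) - E[f(X)]\), so the attributions exactly decompose each prediction's deviation from the average.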
4 Results and discussion
4.1 Research model and hypotheses
The primary aim of this research is to assess the effectiveness of the ADKAR model in analyzing the adoption of e-learning within the context of academic professionals in Higher Education Institutions (HEIs). Additionally, the study incorporates variables such as gender and scientific level to enrich the analysis. The research hypotheses, one per ADKAR component, are delineated as follows (Hiatt, 2006):
H1: Awareness
Awareness represents an individual’s perception and comprehension of the nature of change, its underlying causes, and the repercussions linked to its absence. It also involves recognizing the intricacies of internal and external factors and their benefits in fostering a desire for change toward digital education. Consequently, it is hypothesized that awareness of change will exert a statistically significant positive influence on the readiness to embrace digital education systems.
H2: Desire
Desire reflects an individual’s motivation and willingness to embrace change, influenced by personal attributes and motivations. A strong desire for change is expected to significantly and positively impact the readiness to adopt digital education systems.
H3: Knowledge
Adequate knowledge about the change process, including skills, tools, systems, and methods, is expected to positively impact the readiness to adopt digital education systems.
H4: Ability
Within the ADKAR model, ability denotes the capacity to translate information into actionable change. Higher ability levels are expected to have a statistically significant positive impact on digital education system readiness.
H5: Reinforcement
In the ADKAR model, reinforcement is pivotal, relying on strong leadership competencies to guide individuals toward achieving the desired change. Hence, reinforcing change is expected to have a statistically significant positive impact on digital education system readiness.
4.2 Study design and data collection
A multi-faceted approach was employed to investigate the status of digital and online education in Algerian universities. This included semi-structured interviews with a randomly selected group of 15 participants from the university community and an extensive review of relevant literature, drawing on recent studies and articles from respected international journals on the research theme. Building on this foundation, a well-structured questionnaire consisting of two sections was developed. The first section gathered demographic information, such as gender and educational level, while the second contained items measuring the five ADKAR factors. To maximize accessibility, the questionnaire was provided in Arabic, in both paper and electronic formats. The survey was administered to 530 participants, encompassing both professors and students from several Algerian universities. It utilized a five-point Likert scale, ranging from “1” (strongly disagree) to “5” (strongly agree), enabling respondents to convey a broad spectrum of opinions.
This study was conducted in strict adherence to established ethical research guidelines. Informed consent was obtained from all participants, who were fully briefed on the study’s objectives, procedures, and their rights, including the option to withdraw from the study at any point without any negative consequences. Participation was entirely voluntary, and robust measures were taken to ensure data confidentiality, including the anonymization of all responses. Ethical approval was secured by complying with both institutional and national standards, ensuring rigorous adherence to all ethical and procedural requirements.
4.3 Demographic profiles of respondents
The study sample comprises 530 participants, with a notable gender and academic level disparity. Males make up 63.4% (336 participants), while females account for 36.6% (194 participants), indicating a male predominance in the sample, possibly due to differing participation rates across the included educational institutions. Regarding academic distribution, first-year students constitute 4.9% (26 participants), second-year students account for 47% (249 participants), and Master’s degree holders form the largest group at 47.2% (250 participants), indicating strong engagement with e-learning at advanced stages of university education and suggesting that many participants possess advanced academic levels. Professors are underrepresented, with only 0.9% (5 participants), reflecting a focus on students rather than faculty. Overall, this demographic distribution emphasizes the study’s broad representation of undergraduate and graduate students, with limited professor participation. The detailed demographic characteristics of the respondents are summarized in Table 3.
The higher proportion of male participants in this study may be attributed to cultural and social factors influencing survey participation in Algeria. Research suggests that men are more likely to engage in field surveys due to societal norms and expectations. Additionally, the labor force participation rate in Algeria shows a significant gender gap, with higher male participation, which may be reflected in the survey demographics.
The underrepresentation of professors (0.9%) is primarily due to the study’s focus on assessing student readiness for e-learning, as students are the primary users of these platforms. Furthermore, the data collection period coincided with intensive work schedules for professors, limiting their availability to participate. Studies indicate that faculty members often face professional demands that reduce their ability to engage in additional activities like surveys.
4.4 E-learning readiness frequencies
The ELR data, as summarized in Table 4, indicate that most participants exhibit a high level of readiness for e-learning. Specifically, 73.86% (390 participants) were classified at level 4, showing significant preparedness to engage with e-learning systems. This is followed by 15.34% (81 participants) at level 5, the highest readiness level, demonstrating a strong potential for utilizing technology in education. Conversely, lower levels of readiness were less represented, with no participants at level 1 (0%), only 2.85% (17 participants) at level 2, and 7.95% (42 participants) at level 3. These findings suggest that the majority of participants possess adequate readiness to adapt to e-learning, with the highest concentrations at advanced readiness levels.
4.5 Sampling and sample size
The sample size in research is a critical determinant of the accuracy and validity of the study’s findings. In this study, a sample of 530 respondents was carefully selected from a population of approximately 1.7 million students across various Algerian universities. This sample size was calculated using Cochran’s formula, which takes into account factors such as the total population size, desired precision level, and confidence level. The formula is represented as follows:
\[ n = \frac{N Z^{2} p q}{d^{2}(N - 1) + Z^{2} p q} \]
where \( Z \) represents the z-score corresponding to the chosen confidence level, such as 1.96 for a 95% confidence interval. The parameter \( N \) denotes the total population size, while \( p \) is the estimated proportion of the population possessing the characteristic of interest; in this study, \( p \) is set to 0.5 to maximize the sample size. The complement of \( p \) is \( q \), calculated as \( q = 1 - p \). Finally, \( d \) is the margin of error, which is specified as 0.03 in this case.
This careful calculation confirmed that a sample size of 530 provides a strong statistical representation of the target population, ensuring reliable and valid insights into their perspectives. It strikes a balance between achieving accurate results and maintaining the feasibility of data collection and analysis, thereby enhancing the study’s inferential power and contributing to a deeper understanding of the experiences and challenges faced by Algerian university students.
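As an illustrative sketch, Cochran's formula with the finite-population correction can be implemented directly; the margin of error used below is illustrative, not the study's value:

```python
import math

def cochran_sample_size(z: float, p: float, d: float, N: int) -> int:
    """Cochran's sample size with the finite-population correction."""
    q = 1 - p
    n0 = (z ** 2) * p * q / (d ** 2)   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / N)        # finite-population correction
    return math.ceil(n)

# 95% confidence (z = 1.96), maximal variance (p = 0.5),
# illustrative 5% margin of error, population of 1.7 million
size = cochran_sample_size(z=1.96, p=0.5, d=0.05, N=1_700_000)
print(size)  # 385
```

Note how a smaller margin of error `d` sharply increases the required sample size, which is why the choice of precision level dominates the calculation.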
4.6 Instrument reliability and validity
Ensuring the reliability and validity of the measurement instrument is essential for guaranteeing the quality of the collected data and the robustness of the study findings. This was achieved through a comprehensive assessment conducted in several stages. Firstly, the questionnaire was subjected to a rigorous validation process, which involved assessing internal consistency, item structure, readability, and clarity of questions. This validation was conducted with the participation of seven experts specializing in methodological standards and quality measurement. To ensure data completeness and avoid missing responses, all questions were mandatory. A pilot survey was then administered to 49 participants, including students and professors from the University Center of Maghnia. This preliminary step was essential for confirming the accuracy and reliability of the collected data.
Secondly, the reliability of the questionnaire was quantitatively assessed to ensure its alignment with study objectives. Cronbach’s alpha was used to evaluate the internal consistency of each scale, indicating the extent to which scale items are correlated. Table 5 shows that the overall Cronbach’s alpha value for the scales was 0.913, exceeding the recommended range of 0.67 to 0.85 suggested by Hiatt (2006). This result confirms that the items in the measurement tool demonstrate a high level of internal consistency and reliability.
The validity of the measurement scales was assessed through both convergent and discriminant validity. Convergent validity was determined by calculating the Average Variance Extracted (AVE), in accordance with the guidelines of Hair Jr et al. (2019). An AVE value of at least 0.50 is typically required to establish convergent validity, indicating that the constructs measure what they are intended to measure. Table 5 highlights that all constructs achieve AVE values surpassing the threshold of 0.50 and Composite Reliability (CR) values exceeding 0.70. These results underscore the robustness of the measurement scales, confirming their reliability and the presence of convergent validity. Such metrics validate that the constructs accurately measure their intended dimensions, thereby reinforcing the methodological soundness of the instrument.
To assess discriminant validity, the criterion proposed by Henseler et al. (2015) was applied, which involves comparing the square root of the AVE for each construct with the inter-construct correlations. Discriminant validity is established when the square root of AVE is greater than the highest correlation between any pair of constructs. The findings of this study confirm that all constructs meet the criterion for discriminant validity, further reinforcing the reliability of the measurement scales used, as recommended by Hair Jr et al. (2021).
The reported reliability and validity metrics, such as Cronbach’s alpha and AVE, are essential in confirming the robustness of the questionnaire. High Cronbach’s alpha values, well above the recommended threshold, validate the internal consistency of the scales, ensuring that the items cohesively measure their intended constructs. Similarly, AVE values exceeding 0.50 and CR values above 0.70 provide strong evidence of convergent validity, demonstrating that the constructs are accurately captured. Discriminant validity further ensures that each construct is empirically distinct, reducing measurement overlap and increasing the credibility of the findings. Together, these metrics enhance the overall validity of the study, providing a reliable foundation for meaningful analysis and robust conclusions.
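As a minimal sketch, Cronbach's alpha can be computed from a respondent-by-item score matrix using the standard formula; the data below are synthetic, not the study's responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Synthetic scale: four items driven by one common factor plus small noise,
# so the items are highly intercorrelated and alpha approaches 1
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
scores = base + 0.1 * rng.normal(size=(100, 4))
print(round(cronbach_alpha(scores), 3))
```

Higher inter-item correlation pushes the ratio of summed item variances to total-score variance down, which is exactly what the alpha statistic rewards.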
4.7 Exploratory factor analysis and data analysis
4.8 Exploratory factor analysis
Exploratory Factor Analysis (EFA) was conducted to assess the suitability of the data for Confirmatory Factor Analysis (CFA) and to explore the underlying factor structure of the dataset (Hair et al., 2010). EFA identified latent constructs and ensured that observed variables aligned with theoretical constructs.
Correlation coefficients between scale items ranged from 0.40 to 0.70, indicating adequate interrelationships to support the factor structure. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy yielded a value of 0.85, suggesting a strong fit for factor analysis (Kaiser, 1974). Bartlett’s test of sphericity was significant (\( p < 0.001 \)), confirming the data’s suitability for factor analysis.
Factor loadings, which reflect the correlation between observed variables and latent factors, were generally above 0.5, indicating that the variables effectively measured their respective constructs. Table 6 presents the factor loading statistics, demonstrating the reliability and validity of the measurement instrument.
Based on these criteria (adequate correlation coefficients, an acceptable KMO value, significant Bartlett’s test results, and satisfactory factor loadings), the data were deemed appropriate to proceed with Confirmatory Factor Analysis (CFA). This step was crucial to validate the underlying structure of the constructs and ensure the robustness of the measurement model.
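Bartlett's test of sphericity used above can be sketched from the correlation matrix of the items; the data here are synthetic and the function is an illustrative implementation of the standard formula:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test of sphericity on an (n, p) data matrix.

    Tests whether the correlation matrix differs from identity,
    i.e., whether the items are correlated enough for factor analysis.
    """
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)   # statistic, p-value

# Synthetic correlated items: a shared factor plus noise
rng = np.random.default_rng(1)
base = rng.normal(size=(200, 1))
data = base + 0.5 * rng.normal(size=(200, 5))
chi2, pval = bartlett_sphericity(data)
print(pval < 0.001)  # True: sphericity clearly rejected for correlated items
```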
4.9 Confirmatory factor analysis
A Confirmatory Factor Analysis (CFA) was conducted to validate the measurement model comprising the ADKAR constructs and their associated reflective indicators. The primary goal of the CFA was to verify how well the hypothesized model, based on theoretical expectations, aligns with the actual data collected in the study. This process is essential for assessing whether the proposed factor structure accurately represents the observed variables and if the constructs are measured reliably.
CFA was utilized to confirm the factor structure and to identify any necessary modifications for accurate estimation of the measurement parameters. Several goodness-of-fit indices were calculated to evaluate the overall fit of the measurement model, as detailed in Table 7. These indices included the Chi-square to degrees of freedom ratio (CMIN/df), Goodness-of-Fit Index (GFI), Incremental Fit Index (IFI), Adjusted Goodness-of-Fit Index (AGFI), Comparative Fit Index (CFI), and the Root Mean Square Error of Approximation (RMSEA).
The fit indices were compared against the recommended thresholds suggested by Hair Jr et al. (2021) to determine the adequacy of the model fit. The results showed that all the fit indices met or exceeded the recommended values, indicating a satisfactory model fit. Specifically, the CMIN/df ratio was 1.42, well within the acceptable range of \(\le 3.00\), and the GFI, IFI, and CFI values were all above 0.90, demonstrating strong model performance. Additionally, the AGFI value was 0.87, above the acceptable threshold of 0.80, and the RMSEA was 0.03, significantly below the maximum recommended value of 0.08.
These results confirm that the measurement model is robust and adequately fits the data, thereby supporting the validity and reliability of the constructs used. As such, the CFA findings validate that the measurement model is appropriate for further hypothesis testing within the structural model.
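Under common formulations, two of these indices can be derived directly from the model chi-square. The sketch below uses hypothetical chi-square and degrees-of-freedom values (chosen only so that CMIN/df matches the reported 1.42; they are not taken from the paper):

```python
import math

def fit_indices(chi2: float, df: int, n: int):
    """CMIN/df and RMSEA from a model chi-square (a common formulation)."""
    cmin_df = chi2 / df
    rmsea = math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return cmin_df, rmsea

# Hypothetical values: chi2 = 142, df = 100, sample size n = 530
cmin_df, rmsea = fit_indices(chi2=142.0, df=100, n=530)
print(round(cmin_df, 2), round(rmsea, 3))  # 1.42 0.028
```

A chi-square close to its degrees of freedom drives RMSEA toward zero, which is why well-fitting models report values far below the 0.08 ceiling.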
4.10 Correlations between readiness and ADKAR factors
To better understand the relationships between ELR and the ADKAR factors, we conducted a correlation analysis. This analysis aims to uncover how each ADKAR factor (Awareness, Desire, Knowledge, Ability, and Reinforcement) relates to ELR, and to identify the strength and direction of these associations. The Pearson correlation coefficient \( r \) quantifies the linear relationship between two variables: a value of 1 indicates a perfect positive linear correlation, -1 a perfect negative linear correlation, and 0 the absence of any linear correlation. Figure 3 presents a heatmap illustrating the pairwise correlation coefficients between ELR and the ADKAR factors. This visual analysis facilitates a deeper understanding of the interrelationships among these factors and their respective contributions to ELR.
From Fig. 3, the following key observations can be made.
-
The highest correlation is observed between Ability and ELR (\(r = 0.692\)), emphasizing that an individual’s capacity to effectively apply knowledge and skills is pivotal to their readiness for E-learning. This underscores the critical role of enhancing learners’ abilities to improve their engagement with online learning platforms.
-
Knowledge demonstrates a significant positive correlation with readiness (\(r = 0.662\)). This finding highlights that individuals’ understanding and familiarity with E-learning concepts substantially influence their readiness. Consequently, providing comprehensive knowledge resources is vital for improving readiness levels.
-
Desire shows a moderate correlation with readiness (\(r = 0.616\)). This result underscores the importance of motivation and willingness to foster readiness for E-learning. Efforts to cultivate intrinsic motivation and enthusiasm for learning could significantly enhance ELR.
-
Awareness exhibits a moderate positive correlation with readiness (\(r = 0.534\)). This suggests that individuals’ awareness of the benefits and processes of E-learning moderately affects their readiness. Strategies aimed at raising awareness could, therefore, contribute to improved E-learning readiness.
-
Reinforcement shows the weakest positive correlation with ELR (\(r = 0.372\)). While the relationship is less pronounced, this factor, which involves the support and encouragement provided to learners, remains beneficial. Enhancing reinforcement mechanisms could still yield positive effects, albeit to a lesser extent than other factors.
It is crucial to recognize that the Pearson correlation coefficient assesses only linear relationships between variables. Therefore, while these correlations provide valuable insights into the linear associations among the ADKAR factors and ELR, they may not fully capture non-linear relationships or interactions between the variables. To gain a more comprehensive understanding of the relative importance of each ADKAR factor in predicting ELR and to account for potential non-linear relationships, we will next employ advanced machine-learning techniques. Specifically, we will use RF and XGBoost algorithms to identify and assess the importance of each ADKAR factor in ELR prediction. These methods will provide a more nuanced analysis and help us develop more effective strategies for enhancing ELR.
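A pairwise Pearson correlation matrix of this kind can be sketched with pandas; the data below are synthetic stand-ins for the Likert-scale ADKAR scores and ELR outcome, not the study's dataset:

```python
import numpy as np
import pandas as pd

# Synthetic Likert-style (1 to 5) ADKAR scores for 530 respondents
rng = np.random.default_rng(42)
n = 530
df = pd.DataFrame({f: rng.integers(1, 6, size=n).astype(float)
                   for f in ["Awareness", "Desire", "Knowledge",
                             "Ability", "Reinforcement"]})
# Hypothetical outcome in which Ability contributes most strongly
df["ELR"] = (0.4 * df["Ability"] + 0.2 * df["Desire"]
             + rng.normal(scale=0.5, size=n))

corr = df.corr(method="pearson")         # pairwise Pearson r matrix
print(corr["ELR"].drop("ELR").round(2))  # each factor's correlation with ELR
```

In practice the same matrix feeds the heatmap in Fig. 3 (e.g., via a plotting library such as seaborn).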
Identifying variable importance is essential for understanding the contributions of factors in machine learning models. This process reveals significant predictors, offering insights to guide decision-making, resource allocation, and targeted strategies. For ELR, identifying key ADKAR factors helps refine strategies for effective e-learning adoption. Machine learning algorithms like Random Forest (RF) and XGBoost are highly effective in analyzing variable importance, utilizing their ability to model complex, nonlinear relationships. These methods rank predictors, providing clarity on the dynamics influencing ELR. RF evaluates variable importance by analyzing predictors’ contributions to model accuracy using metrics like impurity reduction. XGBoost provides information through metrics such as gain and feature frequency. Figure 4 displays the results of applying these two algorithms to identify the importance of ADKAR factors in the prediction of ELR.
Figure 4 shows that both Random Forest and XGBoost identified “Ability” as the most critical factor influencing E-learning Readiness. This underscores the pivotal role of individual skills and competencies in facilitating e-learning adoption. The significance of “Ability” suggests prioritizing initiatives to enhance digital skills and capabilities. Additionally, “Knowledge” and “Reinforcement” emerged as important factors, highlighting the necessity for familiarity with e-learning systems and the value of structured feedback and support mechanisms. The models highlighted “Knowledge” as the second most influential factor, emphasizing the need for well-informed learners to navigate e-learning platforms effectively. “Reinforcement” was also significant in sustaining motivation through consistent support. “Desire” and “Awareness” demonstrated moderate importance, reflecting learners’ intrinsic motivation and understanding of digital transition needs. Although secondary, these factors contribute meaningfully to readiness. These findings indicate that while motivation and awareness are valuable, the primary focus should remain on developing skills and knowledge.
Overall, “Ability” emerges as the primary driver of ELR, followed by “Knowledge” and “Reinforcement.” These findings advocate for targeted efforts to develop skills, improve e-learning familiarity, and enhance support systems, effectively addressing barriers to adoption.
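The impurity-based importance ranking described above can be sketched with scikit-learn's Random Forest; the forest settings mirror those reported later (50 trees, minimum split of 5), but the data are synthetic, with Ability deliberately given the largest weight:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the ADKAR-to-ELR data (not the study's dataset)
rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(530, 5))
factors = ["Awareness", "Desire", "Knowledge", "Ability", "Reinforcement"]
y = 0.6 * X[:, 3] + 0.3 * X[:, 2] + rng.normal(scale=0.3, size=530)

rf = RandomForestRegressor(n_estimators=50, min_samples_split=5,
                           random_state=0).fit(X, y)
ranking = sorted(zip(factors, rf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
for name, imp in ranking:
    print(f"{name}: {imp:.3f}")  # impurity-based importances, summing to 1
```

XGBoost exposes analogous rankings (gain, frequency) through its own `feature_importances_` attribute, so the two can be compared side by side as in Fig. 4.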
4.11 Readiness prediction
To predict ELR using ADKAR factors, we utilized 80% of the dataset for model training, with the remaining 20% reserved for testing and evaluation purposes. During the training phase, a 5-fold cross-validation approach was employed to ensure that the models were robust and capable of generalizing well to new data. The hyperparameters of each model were tuned during the training process to achieve optimal accuracy.
The predictive performance of machine learning models is significantly influenced by the selection and tuning of their hyperparameters. For this study, each model’s hyperparameters were carefully optimized during training to enhance accuracy and model robustness. Below are the details of the hyperparameters used for each model:
-
The kNN model was optimized with 2 neighbors, utilizing the Chebyshev distance metric. The Chebyshev metric, which measures the greatest absolute distance along any coordinate dimension, helps in handling varied distributions and makes the model sensitive to local variations.
-
The SVR model used a Radial Basis Function (RBF) kernel, which is well-suited for non-linear regression problems. The cost parameter \( C \) was set to 1, which controls the trade-off between model complexity and prediction error. The regression loss parameter \( \epsilon \) was set to 0.1, defining the margin of tolerance within which no penalty is associated with the training loss function.
-
The PLS model retained 3 components, which balance the trade-off between the variance explained in the independent and dependent variables. This reduction helps manage multicollinearity and improves model interpretability.
-
The RF model was configured with 50 trees, balancing computational efficiency and prediction accuracy. The model was set to avoid splitting subsets smaller than 5 instances, reducing the risk of overfitting and ensuring meaningful splits.
-
The GB model employed 100 trees with a learning rate of 0.1, controlling the contribution of each tree to the final model, which helps in fine-tuning and reducing overfitting. The maximum depth of individual trees was limited to 3, ensuring that each tree remained simple. Additionally, subsets smaller than 2 instances were not split to avoid excessively small leaves.
-
The Decision Tree model was configured with a minimum of 2 instances required in the leaf nodes, and it avoided splitting subsets smaller than 5 instances to prevent overly complex splits. The maximum depth of the tree was capped at 1000, allowing for deep trees when necessary but still maintaining a limit to avoid excessive branching.
-
CatBoost was set with 1000 trees and a learning rate of 0.3, making it highly responsive to changes in data patterns while retaining efficiency. The depth of individual trees was limited to 6, and a regularization parameter \( \lambda \) of 3 was used to prevent overfitting by penalizing overly complex models.
-
XGBoost utilized 100 trees, with a learning rate of 0.3 to balance the impact of each tree. A regularization parameter \( \lambda \) of 1 was employed to manage model complexity. The depth of individual trees was limited to 6, which allowed the model to capture non-linear interactions without becoming too complex.
The tuning of hyperparameters played a crucial role in optimizing the performance of the models. For instance, ensemble methods such as CatBoost and XGBoost benefited significantly from the depth and learning rate settings, allowing them to capture complex patterns without overfitting. Similarly, simpler models like kNN and PLS were fine-tuned to balance the sensitivity to local data variations and component retention, respectively. Each model’s configuration reflects a strategic choice aimed at maximizing predictive power while minimizing generalization errors, contributing to the overall effectiveness of the ELR prediction.
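The overall training protocol (an 80/20 split with 5-fold cross-validated tuning on the training portion) can be sketched with scikit-learn; the data are synthetic and the small grid is illustrative, using the Gradient Boosting settings listed above:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the ADKAR features and ELR target
rng = np.random.default_rng(7)
X = rng.uniform(1, 5, size=(530, 5))
y = 0.5 * X[:, 3] + 0.2 * X[:, 1] + rng.normal(scale=0.3, size=530)

# 80/20 split, then 5-fold CV grid search on the training portion only
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=7)
search = GridSearchCV(
    GradientBoostingRegressor(random_state=7),
    param_grid={"n_estimators": [100], "learning_rate": [0.05, 0.1],
                "max_depth": [3]},
    cv=5, scoring="r2",
).fit(X_tr, y_tr)
print(round(search.score(X_te, y_te), 3))  # held-out R^2
```

Keeping the test fold untouched during tuning is what allows the held-out metrics in Table 8 to be read as estimates of generalization performance.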
Following training, we assessed model performance on the 20% test dataset using MSE, RMSE, MAE, and \(R^2\). The metrics, presented in Table 8, offer a detailed assessment of accuracy and variance explanation.
The performance metrics presented in Table 8 offer a detailed assessment of how well each model predicts ELR. The results indicate that CatBoost and XGBoost consistently outperform other models across various evaluation metrics. Both models achieve the lowest MSE and RMSE values of 0.060 and 0.245, respectively, highlighting their superior accuracy in predicting ELR. They also exhibit the lowest MAE of 0.102, suggesting their predictions are closest to actual values. Additionally, CatBoost and XGBoost achieve the highest \(R^2\) value of 0.811, demonstrating their strong ability to explain the variance in ELR, reinforcing their overall effectiveness and reliability in modeling. By contrast, models such as kNN and SVR show higher MSE and RMSE values (0.120 and 0.347 for kNN, and 0.111 and 0.334 for SVR), as well as higher MAE (0.111 and 0.221, respectively). Their lower \(R^2\) values of 0.620 and 0.649 indicate limited explanatory power and weaker performance compared to the gradient-boosting methods.

The results underscore the importance of advanced algorithms like CatBoost and XGBoost in capturing non-linear interactions and complex patterns, which are vital in predicting ELR. Overall, CatBoost and XGBoost exhibit the highest performance across all metrics, making them the most effective models for predicting ELR in this study. Their strength lies in handling complex, non-linear interactions among the ADKAR factors. Accurate predictions of readiness enable institutions to design targeted training programs, address learning gaps, and enhance e-learning effectiveness. These insights also support personalized learning by identifying students needing extra support and improving engagement and outcomes. Additionally, readiness predictions aid in optimizing resource allocation, guiding strategic decisions for technology investments, and scaling e-learning programs successfully.
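The four evaluation metrics can be reproduced with scikit-learn on a small hypothetical example (the values below are illustrative, not the study's results):

```python
import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             r2_score)

y_true = np.array([4.0, 3.5, 5.0, 2.0, 4.5])  # hypothetical ELR scores
y_pred = np.array([3.8, 3.6, 4.7, 2.3, 4.4])  # hypothetical predictions

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)                 # same units as the target
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)       # share of variance explained
print(round(mse, 3), round(rmse, 3), round(mae, 3), round(r2, 3))
# 0.048 0.219 0.2 0.955
```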
In evaluating machine learning models, it is crucial to determine whether one model statistically outperforms the others. This is accomplished through pairwise comparisons based on performance metrics such as the R\(^{2}\) score, which measures a model’s ability to explain the variance in the data. Table 9 presents a detailed pairwise comparison of the performance of several models. It shows the probability that the score for the model in the row is higher than that of the model in the column. A small probability indicates that the difference between models is negligible, while larger probabilities suggest that one model is consistently superior to another. This methodology relies on the Bayesian interpretation of the t-test, which provides a rigorous statistical approach to ascertain model performance differences (Corani and Benavoli, 2015).
Table 9 provides a comprehensive pairwise comparison of model performance based on the R\(^{2}\) score. This comparison is essential for understanding whether a model statistically outperforms the others. From the results, XGBoost stands out as one of the top-performing models, demonstrating a high probability of outperforming kNN (0.856), SVR (0.97), DTR (0.73), PLSR (0.986), RF (0.834), and GB (0.967). The probability scores suggest that XGBoost consistently performs better than these models, highlighting its robustness and effectiveness in handling complex, non-linear relationships within the data.
Interestingly, the comparison between XGBoost and CatBoost reveals a relatively lower probability score (0.255), indicating that these two models are closely matched in performance (Table 9). Both models excel in managing categorical data and complex interactions, which explains their similar performance levels. This close performance match suggests that XGBoost and CatBoost are highly competitive, with neither model showing a significant advantage over the other. Overall, the pairwise comparisons underscore the importance of evaluating multiple models, as XGBoost and CatBoost consistently demonstrate strong performance compared to traditional models. This analysis helps understand which models are statistically superior, ensuring more informed decisions in selecting the optimal model for specific applications.
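The key ingredient of such comparisons is a variance correction for the overlap between cross-validation folds. The sketch below implements the Nadeau-Bengio corrected t-test, a closely related frequentist analogue of the Bayesian correlated t-test cited above (Corani and Benavoli, 2015), which uses the same correlation-corrected variance; the per-fold scores are hypothetical, and the train/test sizes correspond to an 80/20 split of 530 observations:

```python
import numpy as np
from scipy import stats

def corrected_ttest_prob(a, b, n_train, n_test):
    """Probability that model A outperforms model B across CV folds,
    using the Nadeau-Bengio correction for fold overlap."""
    d = np.asarray(a) - np.asarray(b)   # per-fold score differences
    k = len(d)
    var = d.var(ddof=1) * (1 / k + n_test / n_train)  # corrected variance
    t_stat = d.mean() / np.sqrt(var)
    return stats.t.cdf(t_stat, df=k - 1)   # approx. P(A better than B)

# Hypothetical per-fold R^2 scores for two models over 5 folds
a = [0.82, 0.80, 0.81, 0.83, 0.79]
b = [0.74, 0.76, 0.73, 0.77, 0.75]
print(round(corrected_ttest_prob(a, b, n_train=424, n_test=106), 3))
```

Values near 1 mean A consistently beats B; values near 0.5 mean the two are effectively tied, as reported for XGBoost versus CatBoost.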
4.12 Important ADKAR factors identification
Figure 5 presents SHAP values for the ADKAR factors (Ability, Desire, Reinforcement, Knowledge, and Awareness) when using the XGBoost model to assess ELR among participants. The SHAP values indicate the impact of each feature on the model’s output, with positive values showing a positive influence on readiness and negative values indicating a negative effect. Indeed, SHAP values provide insights into how each feature (or factor) contributes to the prediction of ELR. Each dot represents an individual data point, with color indicating the feature value (blue for low and red for high). Dots positioned farther from zero signify a stronger impact on the prediction.
From Fig. 5, we can see that high Ability values (in red) are strongly associated with ELR. This means that participants with higher skills are more likely to be prepared for e-learning, making Ability the most influential factor, with most points clustering on the positive side of the axis. Next, Desire shows a moderate impact, suggesting that the willingness to engage in e-learning plays a moderate role in influencing readiness. Meanwhile, Reinforcement also has a noticeable influence, indicating that feedback and encouragement are important in supporting readiness. Furthermore, Knowledge demonstrates a substantial positive impact, particularly at higher values, which suggests that well-informed participants are better equipped for e-learning. Lastly, Awareness has some impact, although it is less significant than other factors. This indicates that while understanding the need for e-learning is helpful, it is not a critical component on its own.
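In practice such plots are produced with the `shap` library; to illustrate the underlying attribution principle without that dependency, the sketch below computes exact Shapley values by brute force for a small hypothetical "ELR model" over the five factors (the weights, background, and respondent vector are all assumptions):

```python
from itertools import combinations
from math import comb

import numpy as np

def exact_shapley(model_fn, x, background):
    """Exact Shapley values for one instance: features absent from a
    coalition are replaced by the background average."""
    n = len(x)
    phi = np.zeros(n)

    def value(subset):
        z = background.mean(axis=0).copy()
        z[list(subset)] = x[list(subset)]
        return model_fn(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = 1 / (n * comb(n - 1, size))   # Shapley coalition weight
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

# Hypothetical linear ELR model over the five ADKAR factors
weights = np.array([0.1, 0.2, 0.15, 0.4, 0.15])  # Ability weighted highest
model = lambda z: float(weights @ z)
background = np.full((10, 5), 3.0)            # average Likert response
x = np.array([3.0, 4.0, 3.0, 5.0, 2.0])       # one hypothetical respondent

phi = exact_shapley(model, x, background)
print(np.round(phi, 3))  # contributions sum to f(x) minus the baseline
```

For a linear model each Shapley value reduces to the weight times the feature's deviation from the background, which makes the efficiency property (contributions summing to the prediction minus the baseline) easy to verify by hand.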
4.13 Actionable strategies for improving readiness
Improving e-learning readiness (ELR) in Algerian universities requires targeted, data-driven strategies that address both individual and systemic challenges. Predictive models like CatBoost and XGBoost, enhanced with SHAP (SHapley Additive exPlanations) analysis, provide clear insights into the key drivers of readiness. By identifying how ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement) factors influence student readiness, universities can implement precise and impactful interventions. For instance, if “Knowledge” is identified as a limiting factor, institutions can provide open-access resources, interactive tutorials, and online lectures to address these gaps. Similarly, if “Reinforcement” emerges as critical, peer mentoring programs, recognition systems, and feedback loops can encourage consistent engagement and strengthen learning outcomes. Addressing systemic barriers such as limited internet access, inadequate technological infrastructure, and institutional support gaps is equally essential. Universities must advocate for better internet infrastructure, provide subsidized or free e-learning tools, and improve digital literacy among students and educators. Expanding internet access in rural and underserved areas is particularly urgent, as inconsistent connectivity undermines even the best-designed interventions.
Investing in technology and fostering specific behaviors to enhance “Ability,” “Knowledge,” and “Reinforcement” are critical components of improving readiness. Hands-on training in learning management systems builds technical proficiency, while open-access educational resources address knowledge gaps. Reinforcement strategies, such as peer mentoring and recognition programs, encourage sustained engagement. Continuous feedback mechanisms, powered by SHAP-driven insights, enable students to adjust their learning strategies proactively. Coupled with academic counseling, these mechanisms empower learners to navigate external challenges like familial responsibilities. Predictive tools with SHAP analysis ensure these initiatives are targeted and effective. By implementing these strategies, Algerian universities can build an equitable and inclusive e-learning ecosystem, addressing systemic barriers while leveraging data-driven insights to enhance engagement and outcomes for all students.
5 Conclusion
This study introduces a data-driven framework for predicting ELR in Algerian universities by integrating the ADKAR model with advanced machine learning techniques and SHAP analysis. The findings underscore the importance of accurately assessing ELR to enhance educational outcomes and optimize digital learning initiatives. Among the machine learning models tested, CatBoost and XGBoost demonstrated superior predictive performance, effectively capturing the complex interplay of factors influencing ELR. The inclusion of SHAP analysis enriched the framework by identifying key contributors, enabling decision-makers to implement targeted strategies addressing critical areas such as “Ability,” “Knowledge,” and “Reinforcement.” This integration of behavioral insights with data-driven techniques offers a comprehensive and practical approach to improving digital education readiness. Furthermore, the framework’s ability to bridge technical and behavioral dimensions provides a robust foundation for addressing readiness challenges in a structured manner, paving the way for strategic improvements in e-learning systems.
Despite these important findings, the study faced challenges such as the limited representation of some demographic groups (e.g., professors and female participants), which may affect the generalizability of the results. Moreover, the rapidly evolving nature of digital education requires continuous adaptation of the framework to remain relevant. Because the methodology is data-driven and leverages machine learning, however, the framework is adaptable beyond Algerian universities; future work should collect data from diverse regions to validate this adaptability and generalizability. Applying the approach in varied settings will yield insights relevant to local needs and confirm the framework's robustness across different educational environments. Additionally, external factors such as internet access, institutional support, and infrastructure could significantly influence ELR and may need to be incorporated into future iterations of the framework to provide a more comprehensive understanding of readiness and its drivers. Addressing these areas will refine and extend the study's findings, increasing their impact on digital education systems.
Data availability
Data available on request from the authors.
References
Abdallah, M., & Mohammad, M. M. (2016). Proceedings of the academic conference of Assiut University College of Education: Educational views for developing the pre-university education system. ERIC.
Abdelaliem, S. M. F., & Elzohairy, M. H. S. (2023). The relationship between nursing students’ readiness and attitudes for e-learning: The mediating role of self-leadership: An online survey (comparative study). Journal of Professional Nursing, 46, 77.
Adelman-Mullally, T., Nielsen, S., & Chung, S. Y. (2023). Planned change in modern hierarchical organizations: A three-step model. Journal of Professional Nursing, 46, 1.
Al Saleem, M., Harrou, F., & Sun, Y. (2024). Explainable machine learning methods for predicting water treatment plant features under varying weather conditions. Results in Engineering, 21, 101930.
Al-Alawi, A. I., Abdulmohsen, M., Al-Malki, F. M., & Mehrotra, A. (2019). Investigating the barriers to change management in public sector educational institutions. International Journal of Educational Management, 33(1), 112.
Al-araibi, A. A. M., Naz’ri bin Mahrin, M., Yusoff, R. C. M., & Chuprat, S. B. (2019). A model for technological aspect of e-learning readiness in higher education. Education and Information Technologies, 24(2), 1395.
Ali, M. A., Zafar, U., Mahmood, A., & Nazim, M. (2021). The power of ADKAR change model in innovative technology acceptance under the moderating effect of culture and open innovation. LogForum, 17(4).
Antoniades, N., Constantinou, C., Allayioti, M., & Biska, A. (2022). Lasting political change performance: Knowledge, awareness, and reinforcement (KARE). SN Business & Economics, 2(2), 14.
Arbaein, T. J., Alharbi, K. K., Alzhrani, A. A., Monshi, S. S., Alzahrani, A. M., & Alsadi, T. M. (2024). The assessment of readiness to change among head managers of primary healthcare centers in Makkah, KSA. Journal of Taibah University Medical Sciences, 19(2), 453.
Arthur-Nyarko, E., Agyei, D. D., & Armah, J. K. (2020). Digitizing distance learning materials: Measuring students’ readiness and intended challenges. Education and Information Technologies, 25(4), 2987.
Bahamdan, M. A., & Al-Subaie, O. A. (2021). Change management and its obstacles in light of “ADKAR model” dimensions from female teachers’ perspective in secondary schools in Dammam in Saudi Arabia. Ilkogretim Online, 20, 2475.
Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5.
Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785–794).
Chipamaunga, S., Nyoni, C. N., Kagawa, M. N., Wessels, Q., Kafumukache, E., Gwini, R., Kandawasvika, G., Katowa-Mukwato, P., Masanganise, R., Nyamakura, R., et al. (2023). Response to the impact of COVID-19 by health professions education institutions in Africa: A case study on preparedness for remote learning and teaching. Smart Learning Environments, 10(1), 31.
Corani, G., & Benavoli, A. (2015). A Bayesian approach for comparing cross-validated algorithms on multiple data sets. Machine Learning, 100(2–3), 285.
Moraes, C. R. d., & Cunha, P. R. (2023). Enterprise servitization: Practical guidelines for culture transformation management. Sustainability, 15(1). https://doi.org/10.3390/su15010705
Demircioglu Diren, D., & Horzum, M. B. (2022). In Artificial intelligence education in the context of work (pp. 139–154). Springer.
Faishol, O. K. L., & Subriadi, A. P. (2022). Change management scenario to improve webometrics ranking. Procedia Computer Science, 197, 557.
Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5), 1189–1232.
Geladi, P., & Kowalski, B. R. (1986). Partial least-squares regression: a tutorial. Analytica Chimica Acta, 185, 1.
Giray, G. (2021). An assessment of student satisfaction with e-learning: An empirical study with computer and software engineering undergraduate students in Turkey under pandemic conditions. Education and Information Technologies, 26(6), 6651.
Glegg, S. M., Ryce, A., & Brownlee, K. (2019). A visual management tool for program planning, project management and evaluation in paediatric health care. Evaluation and Program Planning, 72, 16.
Haffar, M., Al-Karaghouli, W., Djebarni, R., Al-Hyari, K., Gbadamosi, G., Oster, F., Alaya, A., & Ahmed, A. (2023). Organizational culture and affective commitment to e-learning changes during COVID-19 pandemic: The underlying effects of readiness for change. Journal of Business Research, 155, 113396.
Hair Jr, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2021). A primer on partial least squares structural equation modeling (PLS-SEM). Sage Publications.
Hair Jr, J., Page, M., & Brunsveld, N. (2019). Essentials of business research methods. Routledge.
Hair, J., Black, W., Babin, B., & Anderson, R. (2010). Multivariate data analysis. Pearson: London, UK.
Harrou, F., Sun, Y., Hering, A. S., Madakyaru, M., et al. (2020). Statistical process monitoring using advanced data-driven and deep learning approaches: Theory and practical applications. Elsevier.
Harrou, F., Taghezouit, B., & Sun, Y. (2019). Improved kNN-based monitoring schemes for detecting faults in PV systems. IEEE Journal of Photovoltaics, 9(3), 811.
Harrou, F., Zeroual, A., & Sun, Y. (2020). Traffic congestion monitoring using an improved kNN strategy. Measurement, 156, 107534.
He, S., Jiang, S., Zhu, R., & Hu, X. (2023). The influence of educational and emotional support on e-learning acceptance: An integration of social support theory and TAM. Education and Information Technologies, 28(9), 11145.
Helmold, M. (2023). Virtual and innovative quality management across the value chain. Management for Professionals. Springer.
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43, 115.
Hiatt, J. (2006). ADKAR: A model for change in business, government, and our community. Prosci.
Hu, C., Jain, G., Zhang, P., Schmidt, C., Gomadam, P., & Gorka, T. (2014). Data-driven method based on particle swarm optimization and k-nearest neighbor regression for estimating capacity of lithium-ion battery. Applied Energy, 129, 49.
Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39(1), 31.
Kaminski, J. (2022). Theory applied to informatics: The Prosci ADKAR model (editorial). Canadian Journal of Nursing Informatics, 17(2).
Kramer, O. (2011). Unsupervised k-nearest neighbor regression. arXiv preprint arXiv:1107.3600
Latifah, I. N., Suhendra, A. A., & Mufidah, I. (2024). Factors affecting job satisfaction and employee performance: A case study of Indonesian sharia property companies. International Journal of Productivity and Performance Management, 73(3), 719.
Loh, W. Y. (2011). Classification and regression trees. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1), 14.
Lu, D. N., Le, H. Q., & Vu, T. H. (2020). The factors affecting acceptance of e-learning: A machine learning algorithm approach. Education Sciences, 10(10), 270.
Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., & Lee, S. I. (2020). From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2(1), 56.
Mirabolghasemi, M., Choshaly, S. H., & Iahad, N. A. (2019). Using the HOT-fit model to predict the determinants of e-learning readiness in higher education: A developing country’s perspective. Education and Information Technologies, 24, 3555.
Mohammadi, M. K., Mohibbi, A. A., & Hedayati, M. H. (2021). Investigating the challenges and factors influencing the use of the learning management system during the COVID-19 pandemic in Afghanistan. Education and Information Technologies, 26, 5165.
Mouazen, A. M., & Hernández-Lara, A. B. (2023). In The International Research & Innovation Forum (pp. 27–39). Springer.
Mudjisusatyo, Y., Darwin, D., & Kisno, K. (2024). Change management in independent campus program: Application of the ADKAR model as a change management competency constructor. Cogent Education, 11(1), 2381892.
Mulu, M. M., & Nyoni, C. N. (2023). Standards for evaluating the quality of undergraduate nursing e-learning programme in low- and middle-income countries: A modified Delphi study. BMC Nursing, 22(1), 73.
Nadar, N. (2023). Enhancing student performance prediction through stream-based analysis dataset using modified XGBoost algorithm. International Journal on Information Technologies & Security, 15(2).
Ngang Tang, K. (2019). Leadership and change management. Springer.
Nohara, Y., Matsumoto, K., Soejima, H., & Nakashima, N. (2022). Explanation of machine learning models using shapley additive explanation and application for real data in hospital. Computer Methods and Programs in Biomedicine, 214, 106584.
Nwagwu, W. E. (2020). E-learning readiness of universities in Nigeria: What are the opinions of the academic staff of Nigeria’s premier university? Education and Information Technologies, 25(2), 1343.
Pillai, S., Rohani, K., Macdonald, M. E., Al-Hamed, F. S., & Tikhonova, S. (2024). Integration of an evidence-based caries management approach in dental education: The perspectives of dental instructors. Journal of Dental Education, 88(1), 69.
Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., & Gulin, A. (2018). CatBoost: Unbiased boosting with categorical features. Advances in Neural Information Processing Systems, 31.
Prunuske, A. J., Evans-Anderson, H. J., Furniss, K. L., Goller, C. C., Mirowsky, J. E., Moore, M. E., Raut, S. A., Swamy, U., Wick, S., & Wolyniak, M. J. (2022). Using personas and the ADKAR framework to evaluate a network designed to facilitate sustained change toward active learning in the undergraduate classroom. Discover Education, 1(1), 22.
Roth, A. E. (1988). The Shapley value: Essays in honor of Lloyd S. Shapley. Cambridge University Press.
Ruele, V. (2019). The localisation of technology education curriculum in Botswana. In Explorations in technology education research: Helping teachers develop research informed practice (pp. 33–43).
Sarker, M. F. H., Mahmud, R. A., Islam, M. S., & Islam, M. K. (2019). Use of e-learning at higher educational institutions in Bangladesh: Opportunities and challenges. Journal of Applied Research in Higher Education, 11(2), 210.
Shapley, L. S. (1953). A value for n-person games.
Siriporananon, S., & Visuthismajarn, P. (2018). Key success factors of disaster management policy: A case study of the Asian Cities Climate Change Resilience Network in Hat Yai City, Thailand. Kasetsart Journal of Social Sciences, 39(2), 269.
Smola, A. J., & Schölkopf, B. (2004). A tutorial on support vector regression. Statistics and Computing, 14, 199.
Tiwari, S., Srivastava, S. K., & Upadhyay, S. (2021). In 2021 2nd International Conference on Intelligent Engineering and Management (ICIEM) (pp. 504–509). IEEE.
Vázquez, S. R. (2022). In Proceedings of the 19th International Web for All Conference (pp. 1–5).
Vershitskaya, E. R., Mikhaylova, A. V., Gilmanshina, S. I., Dorozhkin, E. M., & Epaneshnikov, V. V. (2020). Present-day management of universities in russia: Prospects and challenges of e-learning. Education and Information Technologies, 25, 611.
Wagiran, W., Suharjana, S., Nurtanto, M., & Mutohhari, F. (2022). Determining the e-learning readiness of higher education students: A study during the COVID-19 pandemic. Heliyon, 8(10).
Yalley, A. A. (2022). Student readiness for e-learning co-production in developing countries higher education institutions. Education and Information Technologies, 27(9), 12421.
Yavuzalp, N., & Bahcivan, E. (2021). A structural equation modeling analysis of relationships among university students’ readiness for e-learning, self-regulation skills, satisfaction, and academic achievement. Research and Practice in Technology Enhanced Learning, 16(1), 15.
Zarei, S., & Mohammadi, S. (2022). Challenges of higher education related to e-learning in developing countries during COVID-19 spread: A review of the perspectives of students, instructors, policymakers, and ICT experts. Environmental Science and Pollution Research, 29(57), 85562.
Zine, M., Harrou, F., Terbeche, M., Bellahcene, M., Dairi, A., & Sun, Y. (2023). E-learning readiness assessment using machine learning methods. Sustainability, 15(11), 8924.
Funding
Open access publishing provided by King Abdullah University of Science and Technology (KAUST). This research received no external funding.
Author information
Contributions
All authors have contributed equally to the work. All authors read and approved the manuscript.
Ethics declarations
Conflict of interest/Competing interests
The authors have no competing interests to declare that are relevant to the content of this article.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
Cite this article
Zine, M., Harrou, F., Terbeche, M. et al. Evaluating e-learning readiness using explainable machine learning and key organizational change factors in higher education. Educ Inf Technol 30, 12905–12937 (2025). https://doi.org/10.1007/s10639-025-13335-9