Abstract
This paper describes an ontology-oriented framework for structuring and adapting digital content through automatic enrichment of a subject ontology using large language models. The work addresses the problem of converting unstructured text into formal knowledge representations that can be used in adaptive information systems. The framework combines natural language processing, language model–based extraction, and ontology engineering within a single processing pipeline. Textual materials and user responses are analyzed to identify domain concepts, relations, and error categories. The extracted information is represented in a structured format and transformed into ontology classes, properties, and instances using OWL-based tools. This approach reduces reliance on manual annotation and predefined extraction rules. The enriched ontology is applied to content organization, task selection, and automated assessment. The framework is implemented as a software system using Python and OWLready2, with asynchronous interaction with an external language model API. System performance is evaluated using classification metrics and experimental data collected in a digital textbook environment. The results show improvements in content adaptation and consistency of automated analysis compared to non-ontology-based solutions. Although the framework is evaluated in an educational setting, the proposed methods are not domain-specific and can be applied to other areas that require structured knowledge extraction, ontology enrichment, and adaptive content management.
Introduction
The use of artificial intelligence tools in digital educational environments is widely regarded as a promising direction for the development of automated learning support. International reports emphasize that AI technologies can be used to analyze educational data, generate recommendations, and support students throughout the learning process. However, UNESCO and OECD documents indicate that the practical implementation of these opportunities remains limited, in particular due to the lack of sustainable mechanisms for individualizing learning trajectories and scalable analysis of academic work (UNESCO 2024; [25]). According to the World Bank, these limitations are associated with persistent differences in educational outcomes and the problem of learning poverty [33]. Particular attention has been paid to the interpretability and regulatory compliance of artificial intelligence systems in education. Reviews in the field of explainable artificial intelligence point to the need for transparency in model outputs and the ability to relate automated analysis results to learning objectives [14]. The European Commission's regulatory and ethical guidelines require that AI outputs be linked to formal learning outcomes and pedagogical goals, particularly when such systems are used for student assessment and support [12]. This underscores the need to integrate AI with formal knowledge representation models. One approach to addressing these problems is the use of ontologies. Ontological models allow for the formalization of educational concepts, skills, and the relationships between them, ensuring a consistent representation of educational content. Several studies show that ontologies can serve as a basis for personalizing learning and adapting educational materials (Villegas-Ch and García-Ortiz 2023). In recent years, there has been growing interest in applying large language models to ontology engineering tasks, including automatic knowledge extraction and structuring [19].
Some works consider methods for extracting linguistic and conceptual information from texts with subsequent inclusion of the obtained data in domain ontologies (Mukanova et al. 2024; Mukanova et al. 2025). At the same time, the field of personalized learning using artificial intelligence is developing. Approaches to adapting learning tasks and recommendations based on individual learner characteristics are being explored, including the use of large-scale language models [19]. A number of studies examine methods for automated generation of data-based pedagogical interventions [17], models for assessing and supporting learners in AI-supported educational environments [24], and learners' perceptions of personalized solutions [15]. Analytical reviews document a persistent interest in personalized learning technologies and emphasize the need for their further methodological development ([1]; [4]; Zengeya and Fonou-Dombeu 2024). Despite existing studies on personalized learning, feedback generation, and ontological modeling, the integration of text processing results from large language models into domain-specific ontologies remains insufficiently addressed. In many existing systems, feedback is provided in textual form and is not converted into a structured representation aligned with learning concepts, skills, and error categories. This limits the applicability of such solutions in adaptive learning environments that require a formal connection between student responses and personalization models. To address this gap, this paper proposes an approach based on the use of large language models for the automatic enrichment of a personalized learning ontology. The study implements a procedure for transforming the analysis results of educational materials and student responses into structured data integrated into a domain ontology.
Based on the extended ontological model, an ontology-driven personalized learning system is developed to support the formation of individual learning trajectories, task adaptation, and feedback aligned with the learner’s level of preparation. This study contributes to applied artificial intelligence research by presenting a framework for ontology enrichment that combines large language models with formal knowledge representation methods. The contribution focuses on a transformation pipeline that converts unstructured educational texts and learner responses into structured ontological elements. The pipeline uses language model outputs to extract domain concepts, relationships, error categories, and semantic attributes, which are subsequently formalized as ontology classes, properties, and instances using OWL-based representations. This approach reduces reliance on manual annotation and rule-based extraction procedures commonly used in ontology engineering. The framework defines a structured interaction scheme between language model outputs and ontology constraints. The use of predefined output formats enables consistent mapping of extracted information into formal ontology components. This design allows probabilistic text analysis results to be integrated into symbolic knowledge structures and supports traceable processing of extracted knowledge. From a system perspective, the study describes an architecture that includes:
natural language processing for multilingual text analysis;
semantic extraction using large language models;
ontology construction and enrichment through formal representation tools;
classification of learner responses and error types.
The performance of the system is evaluated using standard classification metrics, including precision, recall, and F1-score, as well as statistical analysis of learning outcomes. The results indicate that the ontology-based processing of model outputs supports consistent categorization of learner errors and structured feedback generation. The framework is evaluated in a personalized learning context; however, education is treated as an application domain. The proposed methods for ontology enrichment, symbolic representation of extracted knowledge, and response classification can be adapted to other domains that require structured knowledge extraction and rule-based decision support. The considerations above and the results of the theoretical review lead to the following research questions:
RQ1. How is a subject ontology, automatically enriched using large language models, used to form the structure of a digital textbook (curricular sections, concepts, tasks, and assessment elements)?
RQ2. To what extent does an ontologically oriented digital textbook ensure adaptation of educational content and tasks in accordance with the level of preparation of students?
RQ3. How do ontology and automated assessment and feedback mechanisms influence the formation of individual learning trajectories and learning outcomes?
Literature review
Recent research shows that large-scale language models (LLMs) are increasingly being used in educational contexts to analyze learner responses, generate feedback, and support the learning process. Approaches based on sequential query generation improve the manageability and predictability of model output, making them suitable for error analysis and structured feedback generation (Wu et al. 2022). Work focused on learners considers the use of generative models to support independent learning, writing assignments, and problem solving in various subject areas ([8]; Rachha and Seyam 2023; [6]; [5]). However, a number of authors emphasize the need for interpretability and validation of LLM outputs, especially in educational scenarios where automated feedback can influence learning outcomes (Rachha and Seyam 2023; [9]). Research in the field of adaptive and AI-supported learning environments shows that automated systems can be used to monitor learning activity and support pedagogical decisions (Wang et al. 2024; [15]). LLM tools are considered as auxiliary means in language teaching and assessment, where they are used to generate formative feedback and analyze student work ([3]; [7]). Some studies describe the use of generative AI in engineering and medical education, while pointing to the need to control the accuracy and pedagogical consistency of the results obtained ([9]; Abolkasim and Hasan 2024). In parallel, the field of personalized learning is developing, focusing on adapting assignments, feedback, and pedagogical interventions to the individual characteristics of students. Approaches are being proposed for the automated generation of personalized pedagogical interventions based on data and models of intelligent learning systems ([21]; Alghazo et al. 2025). Learners' perceptions of personalized solutions in AI-supported environments are also being studied, demonstrating that the adoption of such technologies depends on their usefulness and usability [31].
Contemporary reviews confirm the sustained interest in personalized learning technologies and consider generative AI as one of the components for the further development of adaptive educational systems (Zohuri and Mossavar-Rahmani 2024; [10]). A separate area of research is related to the ontological representation of knowledge in education. Ontologies are used to formalize educational concepts, skills, and the relationships between them, providing a structured basis for personalizing learning and adapting educational content (Zualkernan 2025). In recent years, there has been growing interest in the application of LLMs in ontology engineering, including the automatic extraction of linguistic and conceptual information from texts and the subsequent enrichment of ontological models (Nadăș et al. 2025; [30]; [11]; Wang et al. 2024). A number of studies show that the combination of ontological models, linguistic algorithms, and LLMs allows obtaining structured knowledge representations suitable for further processing in intelligent systems (Mukanova et al. 2024; Mukanova et al. 2025). Research in the field of automated assessment and analysis of educational data shows that such approaches typically fail to link text processing results to structured learning models that describe concepts, skills, and error types of learners ([13]; [26]). This limits the application of LLMs in adaptive learning environments where feedback needs to be aligned with performance indicators and individual learning trajectories [29]. A related strand of research focuses on learners' perceptions of AI-supported systems. It has been shown that the use of such solutions requires transparency and understandability of the recommendation mechanisms, as well as their alignment with learning goals and learners' expectations [16].
Review papers on generative artificial intelligence in education also point to the lack of formalized mechanisms for integrating LLM results with learning models, which complicates their use in personalized and adaptive educational systems ([28]; [32]). This paper proposes an approach that combines automated ontology enrichment using large-scale language models and ontology-driven personalized learning. The proposed solution is based on transforming the results of analysis of educational materials and student responses into structured ontological data. This data is used to generate feedback and select learning tasks in accordance with the current level of preparation of the student in the digital educational environment. This study also includes a structured review of systems using LLM to generate personalized feedback in adaptive learning environments. The review followed the PRISMA 2020 guidelines, which govern the procedures for identifying, selecting, and analyzing relevant studies [22]. In accordance with the PRISMA protocol, Figure 1 presents a diagram of the publication selection process, reflecting the transition from the initial array of sources to the final sample of 38 included studies.

Bibliometric data analysis confirms growing interest in this topic among the scientific community. Figure 2 presents a summary of Scopus publications from 2021–2025 on personalized learning and LLMs in Computer Science. The sample included 622 documents published in 135 scientific publications. The average annual growth rate of the number of papers was 129.87%, reflecting the rapid development of this field. The corpus comprised 1,904 authors, with an average of 3.81 co-authors per article, indicating a high degree of scholarly collaboration. The presence of 1,543 unique author keywords demonstrates the thematic diversity of the research, and the average number of citations (4.875 per article) indicates sustained interest in LLM applications in education.

The keyword frequency map (Figure 3) reveals the structural elements of the scientific field. The central keywords include terms such as “personalized learning,” “artificial intelligence,” “students,” “teaching,” “language model,” and “large language models,” demonstrating the integration of AI technologies with personalized learning objectives. Several clusters form around this central core: machine learning methods, natural language technologies, and pedagogical aspects. Keywords related to ethics and data protection form a separate cluster, underscoring the relevance of security issues when implementing AI technologies in educational processes.

Overall, a literature review shows that, despite the rapid development of artificial intelligence technologies, research on the systematic and effective use of large language models to generate personalized learning feedback remains insufficient. Existing studies examine individual LLM functions and capabilities, but comprehensive solutions aimed at improving the quality of feedback and supporting individual learning trajectories are rare.
Methodology
This study is based on an experimental-analytical approach. The work considers the use of automatically supplemented subject ontology in the structure of digital textbooks and in the process of individualizing learning. The main focus is on the development of learning content, adaptation of tasks, and analysis of the operation of the automatic assessment system.
The study was conducted in several stages. First, a subject ontology was created and then enriched using language models. Then, based on this ontology, the structure of the digital textbook was formed and educational materials were systematized. At the final stage, adaptation of learning content, automatic assessment, and feedback mechanisms were introduced, and their impact on the individual learning trajectories of students was analyzed. During the preparatory stage, the subject ontology was created using the Protégé editor. At this stage, the hierarchy of key concepts was determined, and their properties and interrelationships were identified. The ontology was used to structure educational materials and mark them up semantically.
After the ontology was ready, educational materials were loaded into the system and automatically processed. Content elements were linked to ontological objects. As a result, theoretical sections, a terminological dictionary, interactive tasks, and test elements were formed. Such a structure allows for the organized presentation of learning information. The system architecture uses a natural language processing module for analyzing texts and student questions in the Kazakh language. The module performs morphological and semantic analysis and links the queries to the corresponding learning content in the ontology. The automatic assessment subsystem processes student responses, records errors, and determines the level of mastery. Based on these data, the adaptive learning module selects tasks of the appropriate level and forms an individual learning path. All of the above processes were implemented in a software system. The main component is based on Python 3.9. It collects subject web resources, builds queries from the text, sends them to the language model via an external API, and converts the resulting structured results into ontology objects using the OWLready2 library (Figure 4).

The system enables structured and adaptive use of educational content.
HTTP-Based Data Retrieval
In the experiment, data were retrieved from web pages using the HTTP protocol. Subject matter experts first selected web resources containing information relevant to the subject domain. Subsequently, the knowledge engineer transformed the collected data into an ontological representation. This separation of roles simplified the data collection and modeling process (Mukanova et al. 2024; Mukanova et al. 2025). As an illustrative example, a Wikipedia article was used to refine the ontology structure. The procedure for retrieving and preprocessing textual content is illustrated in Figure 5. The extracted page title and main textual body are combined to form natural language input for subsequent processing stages. Although the example focuses on a specific web resource, the proposed procedure can be applied to other sources with minor configuration adjustments.

The extracted textual content is used as input for subsequent ontology generation.
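This retrieval and preprocessing step can be sketched with the Python standard library alone. The parsing rules below (page title plus paragraph text, joined into one natural-language input) are a simplification of the actual preprocessing, and the class and function names are illustrative:

```python
from html.parser import HTMLParser


class PageTextExtractor(HTMLParser):
    """Collects the <title> text and the text of <p> elements from an HTML page."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.paragraphs = []
        self._in_title = False
        self._in_p = False
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "p":
            self._in_p = True
            self._buf = []

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag == "p":
            self._in_p = False
            # Normalize internal whitespace and keep non-empty paragraphs only.
            text = " ".join("".join(self._buf).split())
            if text:
                self.paragraphs.append(text)

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif self._in_p:
            self._buf.append(data)


def page_to_input(html: str) -> str:
    """Combine page title and paragraph text into one natural-language input."""
    parser = PageTextExtractor()
    parser.feed(html)
    return parser.title.strip() + "\n\n" + "\n".join(parser.paragraphs)
```

In a full pipeline the HTML string would first be fetched over HTTP (e.g. with `urllib.request` or `requests`); keeping the parsing step pure makes it independent of the network layer.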
LLM input preparation
The query sent to the model is written in natural language and includes the processed web page text, a description of the required information, and the expected output format. The model returns results in a Python-compatible structured format, which are subsequently used in further processing stages (Mukanova et al. 2024; Mukanova et al. 2025). As the model produces probabilistic outputs, the same query may generate different responses. To reduce this variability, a stable query format was defined through repeated testing. An example of this procedure is shown in Figure 6; the full web page content is omitted due to its size.

Figure 6. Procedure for extracting and structuring linguistic elements from Kazakh-language text.
The query defines the format for extracting linguistic elements from text and representing them in a structured form with explicit relations.
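A minimal sketch of building such a query and parsing the model's Python-compatible reply is shown below. The template wording and the reply schema are illustrative assumptions, not the exact prompt of Figure 6:

```python
import ast

# Hypothetical prompt template; the real prompt (Figure 6) also embeds
# a description of the required linguistic elements and their relations.
PROMPT_TEMPLATE = """Extract linguistic elements from the Kazakh text below.
Return ONLY a Python literal with this exact shape:
{{"parts_of_speech": [...], "grammatical_categories": [...],
  "syntactic_units": [...], "examples": [...]}}

Text:
{text}
"""


def build_query(page_text: str) -> str:
    """Combine the processed web page text with the fixed output-format spec."""
    return PROMPT_TEMPLATE.format(text=page_text)


def parse_model_reply(reply: str) -> dict:
    """Parse the model's Python-literal reply; reject anything malformed.

    ast.literal_eval accepts only literals, so arbitrary code in the
    reply cannot be executed.
    """
    data = ast.literal_eval(reply.strip())
    if not isinstance(data, dict):
        raise ValueError("expected a dict literal")
    return data
```

Fixing the output shape in the prompt and validating it on parse is one way to obtain the stable query format the text describes; replies that fail `parse_model_reply` can simply be re-requested.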
Sending a request to the OpenAI API
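A hedged sketch of the asynchronous request step is shown below. The retry-with-backoff policy is an assumption; the `send` callable is injected so that, in the real system, it could wrap a client call such as `AsyncOpenAI().chat.completions.create(...)` while keeping the control flow testable without network access:

```python
import asyncio


async def query_llm(send, prompt: str, retries: int = 3, delay: float = 1.0) -> str:
    """Send a prompt via the injected async `send` callable, retrying on failure.

    `send` abstracts the concrete API client; with the OpenAI SDK it could be
    a coroutine wrapping chat.completions.create and returning the reply text.
    Failed attempts are retried with exponential backoff.
    """
    last_exc = None
    for attempt in range(retries):
        try:
            return await send(prompt)
        except Exception as exc:  # in practice: catch the client's specific errors
            last_exc = exc
            await asyncio.sleep(delay * (2 ** attempt))
    raise RuntimeError("LLM request failed after retries") from last_exc
```

Injecting the transport also makes it easy to cache replies during development, which helps when iterating on the prompt format.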


At the next stage, the generated ontology is enriched with data extracted from Kazakh-language texts and represented as Python data structures for further processing. This process is implemented in program code; the corresponding procedure is illustrated in Figure 9. Ontology interaction is performed using the OWLready2 library. For each created entity (parts of speech, grammatical categories, syntactic units, and usage examples), labels are automatically generated in three languages.

Ontology interaction is implemented using OWLready2 classes. For each created entity (part of speech, grammatical category, syntactic unit, or example), multilingual labels are automatically generated (see Figure 10).

The figure presents the hierarchy of grammatical and morphological categories (e.g., case, number, person) and sample instances. Each element includes semantic annotations and is linked to the corresponding linguistic categories.
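The mapping from extracted data to ontology entities can be sketched as follows. The input schema and class names are assumptions mirroring the entity types named above (parts of speech, grammatical categories, syntactic units, usage examples); the comment shows how each resulting triple would become an OWLready2 instance with trilingual labels:

```python
def llm_output_to_entities(data: dict) -> list:
    """Convert a parsed LLM reply into (class_name, instance_name, labels) triples.

    Each triple would then be materialized with OWLready2, roughly:
        from owlready2 import locstr
        cls = onto[class_name]
        inst = cls(instance_name)
        inst.label = [locstr(text, lang=code) for code, text in labels.items()]
    """
    # Assumed mapping from reply keys to ontology class names.
    mapping = {
        "parts_of_speech": "PartOfSpeech",
        "grammatical_categories": "GrammaticalCategory",
        "syntactic_units": "SyntacticUnit",
        "examples": "UsageExample",
    }
    triples = []
    for key, class_name in mapping.items():
        for item in data.get(key, []):
            # Each item is assumed to carry an identifier and a
            # language-code -> label mapping (kk / ru / en).
            triples.append((class_name, item["name"], item["labels"]))
    return triples
```

Keeping this transformation pure (no ontology side effects) allows the extraction output to be validated before any OWL individuals are created.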
Experimental methods of pedagogical measurement
The pedagogical experiment involved 65 students assigned to control and experimental groups. The control group used standard digital learning materials, while the experimental group studied with a digital textbook based on an automatically enriched ontology and adaptive feedback.
Both groups completed a pre-test to assess their initial knowledge and a post-test at the end of the study period. The tests reflected the structure of the learning materials and covered key concepts and tasks.
Learning outcomes were evaluated using task accuracy, number of errors, time required to master the material, and quality of error classification. Precision, recall, and F1-score were used to assess the automated analysis system. Group differences were analyzed by comparing mean values, with normality checked before applying parametric tests.
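As an illustration of the effect-size computation used in such group comparisons, Cohen's d with a pooled standard deviation can be sketched as follows (the sample values in the usage note are illustrative, not the study's data; in practice normality could be checked with `scipy.stats.shapiro` and means compared with `scipy.stats.ttest_ind`):

```python
from statistics import mean, stdev


def cohens_d(a: list, b: list) -> float:
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled
```

For example, `cohens_d([1, 2, 3], [3, 4, 5])` yields −2.0: a mean difference of 2 against a pooled standard deviation of 1.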
Data collection and analysis
Data were collected automatically during students’ interaction with the digital platform. Recorded data included task results, error types, learning sequence, and time metrics. Data from the control group were obtained using standard digital assessment tools.
The analysis followed a comparative approach to identify differences in learning dynamics between the groups. Special attention was given to learning trajectory adaptation, repeated use of materials, and changes in error patterns. These data formed the basis for the results presented in the next section, which examines the effect of the ontology-based digital textbook and automated feedback on student performance.
Results
As described above, the system sequentially constructs a subject ontology, extracts data from web sources, generates structured linguistic elements, and integrates them into the ontological model using the OWLready2 library. The resulting data are then used to organize the operation of the digital textbook. The Digital Smart Education Platform presents educational materials in Kazakh-language subjects (Kazakh language, Mathematics, and Computer Science) in digital form. Its functionality is based on an ontological model that formalizes learning concepts, their properties, and relationships, enabling semantic structuring of content. Educational materials include theoretical sections, interactive tasks, questions and answers, and test modules derived from officially approved school textbooks. All content elements are linked to ontology objects, which supports content consistency and adaptation to students’ levels of preparation. The user interface visualizes the structure of learning materials, supports navigation between topics, and enables personalized learning paths. The adaptation mechanism analyzes student performance and selects tasks with appropriate difficulty levels. Figures 11 and 12 present examples of interface elements and the structure of the Grade 6 digital textbook for the Kazakh language, illustrating the organization of content and its alignment with the underlying ontological model.


A fragment of the theoretical material for Lesson 1, Part 1 is shown in Figure 13. Interactive tasks, a terminology thesaurus, and quizzes are organized according to the lesson content. The materials focus on basic concepts and rules, while the tasks support skill practice.

Figure 13. Excerpt from the theoretical material of Lesson 1, Section 1.
The associated thesaurus (Figure 14) shows semantic links between key terms and supports vocabulary development.

Question–answer test items are used to assess understanding and mastery of lesson content; an example is shown in Figure 15.

The assessment system includes automated evaluation, allowing students to receive immediate results and review their mistakes. This supports timely feedback and learning improvement. The Digital Smart Education Platform organizes the learning process in a digital environment and enables personalized interaction with educational materials. The digital textbook supports natural language interaction, individualized learning paths, interactive tasks, and automated assessment. Educational content is structured using semantic representations. Ontological links between concepts, tasks, and assessment elements support navigation, task selection, and content adaptation within the digital textbook. The use of such digital platforms contributes to the integration of digital technologies into Kazakh-language education and supports the development of the national digital educational environment.
Experimental Evaluation of Effectiveness
To assess the effectiveness of the LLM-based personalized feedback system, a pedagogical experiment was conducted. Learning outcomes were compared before and after system use, as well as between control and experimental groups.
Study design. Sixth-grade students participated in the study and were assigned to two groups. The control group used a standard digital textbook without LLM support, while the experimental group studied with a platform providing automated feedback, explanations, and personalized task recommendations. Both groups completed the same learning module consisting of ten assignments. In the experimental group, student responses were analyzed automatically, errors were identified, and follow-up tasks were selected based on individual performance.
Assessment methods. Learning performance was evaluated using task accuracy, number of repeated errors, and time required to master the topic. The quality of automatic error classification was assessed using precision, recall, F1-score, and overall accuracy. The results are summarized in Tables 1–3.
Learning outcomes. Table 1 shows learning outcomes before and after the introduction of the LLM-based feedback system. Accuracy increased from 64.1% to 82.5%, repeated errors decreased by 51.2%, and the time required to master the material was reduced by 35.3%. All changes were statistically significant (p < 0.001) with large effect sizes.
| Indicator | Before | After | Change | t-value | p-value | Cohen’s d | Interpretation |
|---|---|---|---|---|---|---|---|
| Accuracy (%) | 64.1 | 82.5 | +18.4 | –8.72 | < 0.001 | 1.78 | Very large effect |
| Repeated errors | 4.3 | 2.1 | –51.2% | 6.14 | < 0.001 | 1.25 | Large effect |
| Time to learn (sessions) | 3.4 | 2.2 | –35.3% | 5.65 | < 0.001 | 1.12 | Large effect |
Table 2 compares the control and experimental groups. The experimental group demonstrated higher accuracy (+19.4%), fewer repeated errors (–53.6%), and shorter learning time (–34.3%) than the control group. Independent t-tests confirmed that all differences were statistically significant (p < 0.001), with large or very large effect sizes (Cohen’s d = 1.50–2.30).
| Indicator | Control | Experimental | Difference | t-value | p-value | Cohen’s d | Interpretation |
|---|---|---|---|---|---|---|---|
| Accuracy (%) | 62.3 | 81.7 | +19.4 | –7.95 | < 0.001 | 2.30 | Very large effect |
| Repeated errors | 4.1 | 1.9 | –53.6% | 6.38 | < 0.001 | 1.85 | Large effect |
| Time to learn (sessions) | 3.2 | 2.1 | –34.3% | 5.10 | < 0.001 | 1.50 | Large effect |
Error classification performance. Table 3 presents the quality of automatic error classification. The system achieved a precision of 0.89, recall of 0.83, F1-score of 0.86, and overall accuracy of 0.88, indicating stable identification of error types in student responses.
| Metric | Value |
|---|---|
| Precision | 0.89 |
| Recall | 0.83 |
| F1-score | 0.86 |
| Overall accuracy | 0.88 |
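The metrics in Table 3 follow the standard confusion-matrix definitions; a minimal sketch is given below (the counts in the usage example are illustrative, not the study's data):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Precision, recall, F1-score, and accuracy from binary confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}
```

For multi-class error categorization, the same formulas would be applied per error type and then averaged (macro or weighted) to obtain the aggregate figures.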
Overall, the results demonstrate that the LLM-based personalized feedback system improves learning accuracy, reduces repeated errors, and shortens learning time. The classification metrics confirm that the system provides reliable and pedagogically relevant feedback.
Discussion
This study analyzes an ontology-based digital textbook in which a subject ontology is automatically enriched using large language models and applied to content organization, assessment, and feedback. The discussion interprets the results in relation to the research questions and positions them within current work on personalized and ontology-driven learning systems.
For RQ1, the results show that the automatically enriched ontology forms the structural basis of the digital textbook. It defines curricular sections and formalizes links between learning materials, tasks, and assessment elements. This structure supports consistent organization of content across the textbook, which is consistent with previous studies on ontology-based knowledge representation in education.
Regarding RQ2, the findings indicate that the ontology-oriented textbook supports content and task adaptation according to students’ proficiency levels. Differences between the control and experimental groups in accuracy, repeated errors, and learning time suggest that ontological links between concepts, tasks, and error categories guide the selection of appropriate learning materials. Unlike approaches based mainly on textual feedback, the proposed method connects student response analysis with a formal knowledge model, addressing limitations noted in earlier LLM-based studies.
With respect to RQ3, the results show that combining ontological representation with automated assessment and feedback supports the formation of individual learning paths. In the experimental group, task sequencing and repeated engagement with materials changed in response to feedback derived from ontological descriptions of concepts and error types. This extends existing personalized learning approaches by introducing a formal link between learner analysis and subject-domain ontologies. The evaluation of automatic error classification demonstrates that integrating natural language processing with ontological constraints enables consistent mapping of learner responses to predefined error categories and learning concepts. Precision, recall, and F1-score values indicate stable performance within the scope of the experiment. Several limitations should be noted. The study involved a limited sample size and focused on a single subject and grade level. Long-term learning effects and sustained pedagogical support were not examined and remain topics for future research.
Overall, the results indicate that automatic ontology enrichment can be effectively integrated into a digital textbook to support structured content organization, learning adaptation, and automated feedback. These findings provide a foundation for further research on ontology-oriented educational systems that use large language models to transform natural language data into structured representations for personalization.
Conclusion
This study proposes an ontology-oriented approach to personalized learning based on the automatic enrichment of a subject ontology using large language models. The results show that integrating the analysis of educational materials and student responses into a formal ontological model supports structured organization of learning content, consistency of digital textbook components, and the implementation of adaptive learning and automated feedback. Experimental findings indicate improved learning outcomes compared to traditional digital materials. The study also highlights limitations of existing work in this area. Many current LLM-based educational solutions focus on generating textual feedback without formal integration into learning models, which limits interpretability and adaptability. In addition, the present experiment was conducted with a limited sample size, within a single subject area, and over a short time period.
Future research should extend the experimental scope, include longitudinal studies, and examine the scalability of automatic ontology enrichment across different subjects and educational levels. Further investigation is also needed to improve the interpretability of language model outputs when applied in ontology-based educational systems. From a practical perspective, the proposed approach can be applied to the development of intelligent educational platforms and digital textbooks that support personalized learning. At the policy level, the findings suggest that interpretable, ontology-based AI solutions can be integrated into digital transformation strategies for educational systems.
Funding
This research has been funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP19678613).
Institutional Review Board Statement
The study was conducted in accordance with institutional ethical standards. All participants were informed about the purpose of the study, and participation was voluntary. No personal or sensitive data were collected.
Data Availability Statement
The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
Informed Consent Statement
Informed consent for participation was obtained from all students and faculty members involved in the study. The survey was conducted through the official university platform, and all participants provided their consent prior to participation. The collected data were used exclusively for research purposes in accordance with institutional ethical guidelines.
Ethics Statement
The studies involving human participants were reviewed and approved by the Digital Resource Center and the relevant Ethics Committee at L.N. Gumilyov Eurasian National University within the framework of the Smart.EDU project. All participants were informed about the purpose of the study, participation was voluntary, and no personal, sensitive, or identifiable data were collected.
References
- Abbes, Feriel, Souha Bennani, and Ahmed Maalel. "Generative AI and gamification for personalized learning: Literature review and future challenges." SN Computer Science 5, no. 8 (2024): 1154. DOI: 10.1007/s42979-024-03491-z
- Abolkasim, Entisar, and Manal Hasan. "Integrating ChatGPT in education and learning: A case study on Libyan universities." Journal of Pure & Applied Sciences 23, no. 2 (2024): 19-24. DOI: 10.51984/jopas.v23i2.3082
- Ahmed, Mahdi A. "ChatGPT and the EFL classroom: Supplement or substitute in Saudi Arabia’s eastern region." Information Sciences Letters 12, no. 7 (2023): 2727-2734. DOI: 10.18576/isl/120704
- Alghazo, Mohannad, Vian Ahmed, and Zied Bahroun. "Exploring the applications of artificial intelligence in mechanical engineering education." In Frontiers in Education, vol. 9, p. 1492308. Frontiers Media SA, 2025.
- Almogren, Abeer S., Waleed Mugahed Al-Rahmi, and Nisar Ahmed Dahri. "Integrated technological approaches to academic success: Mobile learning, social media, and AI in higher education." IEEE Access (2024). DOI: 10.1109/access.2024.3498047
- Athanassopoulos, Stavros, Polyxeni Manoli, Maria Gouvi, Konstantinos Lavidas, and Vassilis Komis. "The use of ChatGPT as a learning tool to improve foreign language writing in a multilingual and multicultural classroom." Advances in Mobile Learning Educational Research 3, no. 2 (2023): 818-824. DOI: 10.25082/amler.2023.02.009
- Bernabei, Margherita, Silvia Colabianchi, Andrea Falegnami, and Francesco Costantino. "Students’ use of large language models in engineering education: A case study on technology acceptance, perceptions, efficacy, and detection chances." Computers and Education: Artificial Intelligence 5 (2023): 100172. DOI: 10.1016/j.caeai.2023.100172
- Chaudhry, Iffat Sabir, Sayed Ahmad M. Sarwary, Ghaleb A. El Refae, and Habib Chabchoub. "Time to revisit existing student’s performance evaluation approach in higher education sector in a new era of ChatGPT-a case study." Cogent Education 10, no. 1 (2023): 2210461. DOI: 10.1080/2331186x.2023.2210461
- Dai, Yun, Ziyan Lin, and Ang Liu. "Facilitating Students’ Adaptive Help-seeking and Peer Interactions through an Analytics-enhanced Forum in Engineering Design Education." Procedia CIRP 128 (2024): 292-297. DOI: 10.1016/j.procir.2024.06.024
- Divito, Christopher B., Bryan M. Katchikian, Jenna E. Gruenwald, and Jennifer M. Burgoon. "The tools of the future are the challenges of today: The use of ChatGPT in problem-based learning medical education." Medical Teacher 46, no. 3 (2024): 320-322. DOI: 10.1080/0142159x.2023.2290997
- Doo, Min Young, Curtis Bonk, and Heeok Heo. "A meta-analysis of scaffolding effects in online learning in higher education." International Review of Research in Open and Distributed Learning 21, no. 3 (2020): 60-80. DOI: 10.19173/irrodl.v21i3.4638
- Dwivedi, Yogesh K., Nir Kshetri, Laurie Hughes, Emma Louise Slade, Anand Jeyaraj, Arpan Kumar Kar, Abdullah M. Baabdullah et al. "Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy." International Journal of Information Management 71 (2023): 102642. DOI: 10.1016/j.ijinfomgt.2023.102642
- European Commission. 2023. Ethical Guidelines on the Use of Artificial Intelligence and Data in Teaching and Learning for Educators. Brussels: European Commission. Accessed December 13, 2025.
- Hooda, Monika, Chhavi Rana, Omdev Dahiya, Jayashree Premkumar Shet, and Bhupesh Kumar Singh. "Integrating LA and EDM for improving students Success in higher Education using FCN algorithm." Mathematical Problems in Engineering 2022, no. 1 (2022): 7690103. DOI: 10.1155/2022/7690103
- Islam, Mir Riyanul, Mobyen Uddin Ahmed, Shaibal Barua, and Shahina Begum. "A systematic review of explainable artificial intelligence in terms of different application domains and tasks." Applied Sciences 12, no. 3 (2022): 1353. DOI: 10.3390/app12031353
- Kohnke, Lucas, Di Zou, Amy Wanyu Ou, and Michelle Mingyue Gu. "Preparing future educators for AI-enhanced classrooms: Insights into AI literacy and integration." Computers and Education: Artificial Intelligence 8 (2025): 100398. DOI: 10.1016/j.caeai.2025.100398
- Kim, Jihyun, Kelly Merrill, Kun Xu, and Deanna D. Sellnow. "My teacher is a machine: Understanding students’ perceptions of AI teaching assistants in online education." International Journal of Human–Computer Interaction 36, no. 20 (2020): 1902-1911. DOI: 10.1080/10447318.2020.1801227
- Kochmar, Ekaterina, Dung Do Vu, Robert Belfer, Varun Gupta, Iulian Vlad Serban, and Joelle Pineau. "Automated data-driven generation of personalized pedagogical interventions in intelligent tutoring systems." International Journal of Artificial Intelligence in Education 32, no. 2 (2022): 323-349. DOI: 10.1007/s40593-021-00267-x
- Kohnke, Lucas, Di Zou, and Fan Su. "Exploring the potential of GenAI for personalised English teaching: Learners' experiences and perceptions." Computers and Education: Artificial Intelligence 8 (2025): 100371. DOI: 10.1016/j.caeai.2025.100371
- Li, Chong, Xin Lee, and Xiaojin Wu. "Provide personalized programming learning for individuals based on large language models." Alexandria Engineering Journal 132 (2025): 396-406. DOI: 10.1016/j.aej.2025.10.026
- Li, Jiayi, Daniel Garijo, and María Poveda-Villalón. "Large Language Models for Ontology Engineering: A Systematic Literature Review." (2025).
- Malyshev, Viktor, Angelina Gab, Dmytro Shakhnin, Yurii Lipskyi, Viktoriia Kovalenko, and Olga Orel. "Research of the global higher education market and of the use of artificial intelligence in this field." EUREKA: Social and Humanities 6 (2024): 28-41. DOI: 10.21303/2504-5571.2024.003663
- Moher, David, Alessandro Liberati, Jennifer Tetzlaff, and Douglas G. Altman. "Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement." BMJ 339 (2009). DOI: 10.1136/bmj.b2535
- Nadăș, Mihai, Laura Dioșan, and Andreea Tomescu. "Synthetic data generation using large language models: Advances in text and code." IEEE Access (2025). DOI: 10.1109/access.2025.3589503
- Ocumpaugh, Jaclyn, Rod D. Roscoe, Ryan S. Baker, Stephen Hutt, and Stephen J. Aguilar. "Toward asset-based instruction and assessment in artificial intelligence in education." International Journal of Artificial Intelligence in Education 34, no. 4 (2024): 1559-1598. DOI: 10.1007/s40593-023-00382-x
- OECD. 2025. Artificial Intelligence and Education and Skills. Paris: OECD. Accessed December 13, 2025. DOI: 10.1787/d5241a23-en
- Osakwe, Ikenna, Guanliang Chen, Alex Whitelock-Wainwright, Dragan Gašević, Anderson Pinheiro Cavalcanti, and Rafael Ferreira Mello. "Towards automated content analysis of educational feedback: A multi-language study." Computers and Education: Artificial Intelligence 3 (2022): 100059. DOI: 10.1016/j.caeai.2022.100059
- Rachha, Ashwin, and Mohammed Seyam. "Explainable AI in education: Current trends, challenges, and opportunities." SoutheastCon 2023 (2023): 232-239. DOI: 10.1109/southeastcon51012.2023.10115140
- Samala, Agariadne Dwinggo, Soha Rawas, Tianchong Wang, Janet Marie Reed, Jinhee Kim, Natalie-Jane Howard, and Myriam Ertz. "Unveiling the landscape of generative artificial intelligence in education: a comprehensive taxonomy of applications, challenges, and future prospects." Education and Information Technologies 30, no. 3 (2025): 3239-3278. DOI: 10.1007/s10639-024-12936-0
- Shafiq, Dalia Abdulkareem, Mohsen Marjani, Riyaz Ahamed Ariyaluran Habeeb, and David Asirvatham. "Student retention using educational data mining and predictive analytics: a systematic literature review." IEEE Access 10 (2022): 72480-72503. DOI: 10.1109/access.2022.3188767
- Sharma, Sahil, Puneet Mittal, Mukesh Kumar, and Vivek Bhardwaj. "The role of large language models in personalized learning: a systematic review of educational impact." Discover Sustainability 6, no. 1 (2025): 1-24. DOI: 10.1007/s43621-025-01094-z
- Snyder, Hannah. "Literature review as a research methodology: An overview and guidelines." Journal of Business Research 104 (2019): 333-339. DOI: 10.1016/j.jbusres.2019.07.039
- Williams, Tim G., Daniel G. Brown, Seth D. Guikema, N. R. Magliocca, Birgit Müller, C. E. Steger, and Thomas Logan. "Integrating equity considerations into agent-based modeling: A conceptual framework and practical guidance." (2022). DOI: 10.18564/jasss.4816
- World Bank. 2022. The State of Global Learning Poverty: 2022 Update. Washington, DC: World Bank Group. Accessed December 13, 2025.
- Zengeya, Tsitsi, and Jean Vincent Fonou-Dombeu. "A review of state of the art deep learning models for ontology construction." IEEE Access 12 (2024): 82354-82383. DOI: 10.1109/access.2024.3406426
- Zohuri, Bahman, and Farhang Mossavar-Rahmani. "Revolutionizing education: The dynamic synergy of personalized learning and artificial intelligence." International Journal of Advanced Engineering and Management Research 9, no. 1 (2024): 143-153.
- Zualkernan, Imran. "Adoption of Generative AI and Large Language Models in Education: A Short Review." In 2025 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1-5. IEEE, 2025. DOI: 10.1109/iceic64972.2025.10879632