Abstract
Memory forensics has become an essential discipline for detecting advanced malware, particularly fileless and memory-resident threats that evade conventional disk-based analysis. This paper presents a systematic review of 30 peer-reviewed studies published between 2014 and 2024 on memory-forensic malware detection. The review focuses on memory acquisition and artifact extraction, machine learning (ML) and deep learning (DL) methods, visualization-driven representations, and hybrid workflows. Our analysis reveals that of the 30 studies, 18 (60%) employed ML, 12 (40%) utilized DL, and 11 (37%) incorporated visualization-based techniques. While many of these learning-assisted approaches report high detection performance in controlled evaluations, the literature also reveals persistent barriers to reproducibility and operational readiness. Key gaps identified include a scarcity of diverse public benchmark datasets (used by only 23% of studies), significant computational overhead, susceptibility to anti-forensic techniques, limited cross-platform generalization, and inconsistent evaluation practices. To address these limitations, this paper outlines a research roadmap centered on standardized datasets, scalable analytical pipelines, robustness-oriented testing, cross-platform abstractions, and privacy-conscious governance to build more defensible and operationally viable volatile-memory analysis frameworks.
Keywords: Memory Forensics, Malware Detection, Fileless Malware, Volatile Memory, Systematic Review, Machine Learning, Deep Learning
1. Introduction
The analysis of volatile memory (RAM), or memory forensics, provides a powerful method for reconstructing a system's execution state and revealing malicious activity that is invisible to traditional disk-based forensics [1], [2]. As modern cyberattacks increasingly leverage in-memory execution to minimize their footprint, the importance of this discipline has grown significantly. Running code, injected payloads, loaded modules, kernel objects, and network connections must reside in memory, making RAM a high-value evidentiary source for detecting advanced threats that abuse legitimate system utilities to remain hidden [3], [4], [2]. Consequently, memory analysis is now a critical component of incident response and malware investigations, complementing disk-based approaches by enabling the detection of stealth techniques such as hidden processes, code injection, and kernel manipulation [1], [5].
The efficacy of traditional malware detection has been eroded by the rapid evolution of evasion techniques. Signature-based and static methods are often defeated by obfuscation and packing, while dynamic analysis in sandboxed environments can be resource-intensive and is frequently targeted by anti-VM and anti-debugging countermeasures [6], [7], [8]. These challenges are amplified by the rise of memory-resident and fileless malware, which rely on in-memory execution and limit durable indicators on disk, thereby reducing the utility of file-centric detection [9], [10], [11]. Because memory-resident artifacts are ephemeral and disappear upon system reboot or process termination, timely and efficient memory acquisition and analysis are essential for both proactive defense and post-incident forensic reconstruction [9], [10].
In response, the field has progressed from manual inspection of memory artifacts toward automated, data-driven methodologies. Memory forensic frameworks like Volatility and Rekall provide systematic extraction of processes, kernel structures, registry artifacts, and network information, allowing investigators to identify anomalies consistent with rootkits and covert persistence mechanisms [1], [5], [12]. More recently, machine learning (ML) and deep learning (DL) have been applied to classify malicious behavior from these artifacts and to improve the detection of previously unseen malware variants. This includes novel approaches that leverage visualization and representation learning to model memory dumps as feature-rich patterns for classification [5], [13], [14], [15].
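To make this extraction step concrete, the sketch below invokes the Volatility 3 command-line tool to enumerate processes from a memory image and collapses the output into coarse per-dump counts. It is a minimal illustration, assuming the `vol` CLI is installed; the image path, parsing heuristic, and choice of counts are illustrative assumptions rather than a procedure taken from any reviewed study, and column layout may differ across Volatility releases.

```python
import subprocess
from collections import Counter

def pslist_rows(image_path: str):
    """Run the Volatility 3 'windows.pslist' plugin and yield raw output rows.

    Assumes the 'vol' CLI is on PATH; flags and column layout may differ
    across Volatility releases, so treat this as a sketch.
    """
    out = subprocess.run(
        ["vol", "-f", image_path, "windows.pslist"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        parts = line.split()
        # Heuristic: data rows start with a numeric PID in the first column.
        if parts and parts[0].isdigit():
            yield parts

def simple_features(image_path: str) -> dict:
    """Collapse the process listing into coarse, per-dump artifact counts."""
    rows = list(pslist_rows(image_path))
    names = Counter(row[2].lower() for row in rows if len(row) > 2)
    return {
        "process_count": len(rows),
        "duplicate_name_count": sum(c - 1 for c in names.values() if c > 1),
        "svchost_count": names.get("svchost.exe", 0),
    }

if __name__ == "__main__":
    print(simple_features("memdump.raw"))  # hypothetical image path
```

Feature vectors assembled in this way from many labeled dumps are the kind of input consumed by the feature-based classifiers surveyed in the literature review below.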
Despite these advancements, significant barriers hinder the operational deployment and generalizability of these techniques. Many studies rely on small or narrowly scoped datasets that fail to capture the diversity of real-world operating system versions, benign workloads, and contemporary malware, complicating reproducibility and performance benchmarking [5], [16], [17]. Computational overhead remains a practical obstacle, particularly for deep learning pipelines applied to large memory dumps from modern systems [14], [15], [6]. Furthermore, adversaries actively develop and deploy anti-forensic strategies to manipulate memory structures and evade detection [8], [18].
This paper provides a systematic review of the developments in memory forensics for malware detection, analyzing 30 key studies published between 2014 and 2024. It synthesizes the principal methods for memory acquisition, artifact extraction, and analytical detection, with a focus on ML, DL, visualization, and hybrid approaches. The review consolidates the major research gaps related to dataset limitations, scalability, cross-platform transferability, evaluation rigor, and resilience to anti-forensics. Finally, it proposes a structured roadmap for future research aimed at enhancing the robustness, scalability, and practical readiness of memory forensics to counter evolving cyberthreats.
2. Literature Review
Recent advances in memory forensics for malware detection reflect a significant shift from manual artifact inspection toward automated and data-driven analysis. This evolution is largely driven by the proliferation of fileless and memory-resident threats that minimize disk artifacts and evade traditional file-centric detection methods [9], [10]. The literature published between 2014 and 2024 shows that modern memory-forensic detection pipelines typically follow three stages: memory acquisition, artifact extraction using forensic frameworks, and analytical inference using rule-based reasoning or statistical and learning-based models. Methodological progress is most prominent in the increased adoption of machine learning and deep learning, the use of visualization-based representations of memory data, and the integration of memory evidence with dynamic analysis to capture runtime behavior [6], [7], [5], [13], [15].
A substantial body of work focuses on extracting structured artifacts from memory and applying supervised learning to discriminate between benign and malicious states. In these pipelines, memory forensic frameworks are used to recover process lists, loaded modules, handles, registry artifacts, kernel objects, and network connections, which are then transformed into feature vectors for classification [1], [6], [16]. These feature-based approaches have successfully employed conventional classifiers and ensemble methods, such as random forests and gradient boosting, which are well-suited to handle high-dimensional and heterogeneous artifact representations [19], [10]. The primary advantage of these techniques lies in their interpretability at the feature level and their compatibility with existing forensic tooling. However, their performance is heavily dependent on the quality of feature engineering and the stability of artifact semantics across different operating system versions and environments [16], [17].

In parallel, deep learning has been applied to memory forensics to reduce the reliance on manual feature engineering and to learn discriminative representations directly from memory-derived inputs. Some approaches transform raw memory dumps into alternative views, such as byte plots or grayscale images, and then apply convolutional neural networks (CNNs) to learn spatial patterns that correlate with malicious activity [14]. Visualization-driven methods represent a notable trend within this domain. For instance, Bozkir et al. (2021) proposed a workflow that combines memory forensics with computer vision descriptors and manifold learning to support malware detection and family classification from memory-derived image representations [5]. Similarly, Dai et al. (2022) reported strong classification performance using visualization-based patterns from memory combined with machine learning [13]. These studies suggest that visual encodings can make complex memory states more separable for automated detection and can also assist analyst triage by producing interpretable artifact patterns. However, the robustness of such representations across different capture conditions, memory sizes, and OS builds remains a recurring concern [15], [17].
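As a concrete illustration of the byte-plot transformation mentioned above, the following minimal sketch renders the raw bytes of a dump (or an extracted process region) as a fixed-size grayscale image of the kind a CNN could consume. The row width, output size, and file name are illustrative assumptions, not parameters reported by the cited studies.

```python
import numpy as np
from PIL import Image

def dump_to_grayscale(path: str, width: int = 256, side: int = 224) -> np.ndarray:
    """Render raw bytes as a 2-D grayscale 'byte plot' and resize for a CNN.

    Each byte (0-255) becomes one pixel intensity; rows are consecutive
    'width'-byte slices of the dump. Width and output size are arbitrary
    choices here.
    """
    data = np.fromfile(path, dtype=np.uint8)
    rows = len(data) // width
    img = data[: rows * width].reshape(rows, width)            # drop the ragged tail
    img = Image.fromarray(img, mode="L").resize((side, side))  # square CNN input
    return np.asarray(img, dtype=np.float32) / 255.0           # normalize to [0, 1]

if __name__ == "__main__":
    x = dump_to_grayscale("process_region.bin")  # hypothetical extracted region
    print(x.shape)  # (224, 224), ready to stack into a CNN batch
```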
Another important line of work focuses on hybrid and dynamic approaches that incorporate runtime behavioral evidence alongside memory artifacts. Dynamic analysis offers advantages over purely static inspection, though it is often challenged by overhead and evasion techniques [7]. Within memory forensics, this has motivated workflows where memory snapshots are collected during or after sandbox execution to capture transient evidence such as injected code regions, suspicious memory allocations, and unpacked payloads that are difficult to validate through disk artifacts alone [6], [7].
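The orchestration pattern behind such snapshot-during-execution workflows can be sketched as follows. Both commands are hypothetical placeholders, since the concrete sandbox control and memory acquisition tooling (hypervisor snapshots, LiME, WinPmem, and similar) varies across the reviewed studies.

```python
import subprocess
import time
from pathlib import Path

def detonate_and_snapshot(sample: Path, dump_dir: Path,
                          run_seconds: int = 60) -> Path:
    """Run a sample in an already-isolated analysis VM, then capture its RAM.

    Both 'sandbox-ctl' invocations are hypothetical placeholders for whatever
    sandbox control and memory acquisition tooling is actually in use.
    """
    dump_dir.mkdir(parents=True, exist_ok=True)
    dump_path = dump_dir / f"{sample.stem}.mem"

    subprocess.run(["sandbox-ctl", "run", str(sample)], check=True)   # hypothetical
    time.sleep(run_seconds)          # allow payloads to unpack and inject
    subprocess.run(["sandbox-ctl", "dump-memory", str(dump_path)],
                   check=True)                                         # hypothetical
    return dump_path
```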
To give a structured overview of the reviewed literature, Table 1 presents a taxonomy of the 30 studies, categorized by their primary analytical approach.
| Category | Primary Method | # of Studies | % of Total | Key References |
|---|---|---|---|---|
| Feature-Based ML | Classification on engineered features from artifacts | 10 | 33% | [6], [16], [20] |
| Deep Learning | End-to-end learning on raw or transformed memory | 8 | 27% | [14], [15], [6], [21] |
| Visualization-Based | Image representation of memory for CV models | 6 | 20% | [5], [13] |
| Hybrid & Dynamic | Combination of memory snapshots and runtime analysis | 4 | 13% | [7] |
| Foundational/Framework | Tools, principles, and cross-platform analysis | 2 | 7% | [1], [12] |
| Total | | 30 | 100% | |
3. Methodology
3.1. Research Design
This paper employed a systematic literature review design, following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines, to synthesize developments in memory forensics for malware detection published between January 1, 2014, and December 31, 2024 [4]. This approach was selected to ensure a transparent and reproducible process for study identification, data extraction, and thematic synthesis of findings across a heterogeneous body of research.
3.2. Search Strategy and Data Sources
A comprehensive search was conducted across four major scholarly databases: IEEE Xplore, ACM Digital Library, ScienceDirect, and Google Scholar. The search was performed in February 2026. The search query was designed to capture the intersection of memory forensics concepts and malware detection techniques, using the following string:
("memory forensics" OR "volatile memory" OR "RAM analysis" OR "memory dump") AND ("malware detection" OR "fileless malware" OR "memory-resident malware" OR "rootkit") AND ("machine learning" OR "deep learning" OR "visualization" OR "dynamic analysis" OR "Volatility" OR "Rekall")
To ensure comprehensive coverage, backward and forward citation searching was performed on highly cited and foundational works in the field [1], [6], [5], [13].
3.3. Inclusion and Exclusion Criteria
Studies were eligible for inclusion if they (i) were published between January 1, 2014, and December 31, 2024; (ii) were peer-reviewed; (iii) addressed the acquisition or analysis of volatile memory for malware detection; and (iv) reported a technical method or empirical evaluation.
Studies were excluded if they focused solely on disk or network forensics, were non-technical opinion pieces, or were superseded by a more complete publication.
3.4. Study Screening and Selection Process
The selection process, depicted in Figure 1, involved two stages. First, two reviewers independently screened titles and abstracts to remove irrelevant results. Any disagreements were resolved through discussion. Second, the full texts of the remaining articles were assessed against the eligibility criteria. This process yielded 30 studies for inclusion in the final synthesis. Inter-rater reliability for the full-text screening was high, with a Cohen's kappa of 0.88.
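For reference, Cohen's kappa expresses reviewer agreement corrected for chance. A value of this kind can be computed directly from the two reviewers' include/exclude decisions, as in the minimal sketch below; the decision vectors are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical full-text screening decisions (1 = include, 0 = exclude).
reviewer_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
reviewer_b = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa: {kappa:.2f}")  # agreement corrected for chance
```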

3.5. Data Extraction and Synthesis
A structured data extraction form was used to record key information from each study, including its objective, target platform, memory acquisition method, analytical model, dataset characteristics, and reported performance metrics. The synthesis combined thematic analysis with a quantitative summary of the findings. Studies were grouped by their dominant detection approach, and cross-cutting themes and limitations were identified and consolidated.
4. Research Gaps and Challenges
Although memory forensics has advanced substantially as a basis for detecting fileless and memory-resident malware, our systematic review reveals persistent gaps that limit the reproducibility, generalizability, and operational deployment of existing methods. These challenges span data availability, computational feasibility, adversarial robustness, cross-platform transfer, automation, evaluation practices, and ethical governance. Addressing these is essential for memory forensics to remain effective against increasingly evasive in-memory threats [9], [10], [17].
4.1. Limited Availability and Diversity of Datasets
A dominant limitation across the reviewed literature is the reliance on small, proprietary, or narrowly scoped datasets, which severely restricts both generalizability and reproducibility. As summarized in Table 2, our analysis found that only 7 of the 30 studies (23%) utilized publicly available memory dump datasets. The majority (77%) relied on private or custom-generated datasets, which impedes benchmarking and makes it difficult to compare methods fairly. This problem is amplified by the lack of diversity in these datasets; most are captured in controlled lab environments and fail to reflect the variability of benign user activity, system configurations, and memory fragmentation seen in real-world systems [16], [15].
| Characteristic | Finding | # of Studies | % of Total |
|---|---|---|---|
| Dataset Accessibility | Publicly Available | 7 | 23% |
| | Proprietary / Not Available | 23 | 77% |
| Primary OS Platform | Windows | 22 | 73% |
| | Linux | 5 | 17% |
| | IoT / Android | 3 | 10% |
| Annotation Quality | Standardized Ground Truth Labels | 9 | 30% |
| | Manual or Tool-Generated Labels (Not Standardized) | 21 | 70% |
Without diverse, large-scale public datasets that span multiple platforms and include credible ground truth for key artifacts, claims of high detection performance are difficult to validate beyond the specific experimental settings of individual studies [17], [15].
4.2. Computational Efficiency and Scalability Constraints
Practical deployment is also constrained by computational overhead, particularly for deep-learning pipelines that operate on large memory dumps. Visualization-driven methods, for example, report strong performance but the cost of processing multi-gigabyte memory images can be substantial, limiting their feasibility for near-real-time detection [14], [15]. This limitation is more pronounced in resource-constrained environments such as Internet of Things (IoT) devices, where memory and CPU capacity are severely limited [6]. Methods for reducing model footprint, such as knowledge distillation and quantization, are well-established in the broader deep learning literature but remain underexplored for memory-forensic workloads [22].
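As an indication of what such footprint reduction could look like, the sketch below applies PyTorch's post-training dynamic quantization to a small feed-forward classifier of the sort used on memory-derived feature vectors; the architecture and input size are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative classifier over, e.g., 512-dimensional memory-artifact features.
model = nn.Sequential(
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, 2),          # benign vs. malicious
)

# Post-training dynamic quantization: Linear weights stored as int8 and
# activations quantized on the fly, shrinking model size and CPU cost.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)         # one synthetic feature vector
print(quantized(x))             # same interface, smaller footprint
```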
4.3. Evasion, Anti-Forensics, and Expanding Attack Surfaces
A further challenge arises from the adversarial nature of the problem. Modern malware increasingly incorporates anti-forensic strategies designed to manipulate or conceal the very memory artifacts that forensic tools and detection models rely on. These techniques include kernel-level manipulation (e.g., DKOM), process hollowing, and tampering with data structures used for process enumeration, which can degrade both rule-based and learning-based approaches [8]. The threat model is also expanding beyond conventional CPU memory. Research into GPU-assisted malware highlights the potential for threats to exploit non-CPU memory regions, reducing the visibility of traditional RAM-centric inspection [23].
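A long-standing mitigation for list-tampering techniques such as DKOM is cross-view analysis: comparing a process view obtained by walking the OS process list with one obtained by scanning memory for process objects, and flagging discrepancies. The sketch below shows this comparison on already-parsed outputs; representing each entry as a (PID, name) tuple is our assumption about how, for example, pslist- and psscan-style results have been parsed.

```python
def hidden_process_candidates(pslist_rows, psscan_rows):
    """Cross-view comparison: processes found by scanning memory for process
    objects but absent from the walked process list are suspicious.

    Both arguments are iterables of (pid, name) tuples, assumed to have been
    parsed from the respective plugin outputs.
    """
    listed = {pid for pid, _ in pslist_rows}
    return [(pid, name) for pid, name in psscan_rows if pid not in listed]

if __name__ == "__main__":
    # Invented example data: PID 4444 appears only in the scan-based view.
    pslist = [(4, "System"), (604, "lsass.exe"), (1332, "explorer.exe")]
    psscan = pslist + [(4444, "rundll32.exe")]
    print(hidden_process_candidates(pslist, psscan))  # [(4444, 'rundll32.exe')]
```

In practice, legitimately terminated processes can also appear only in the scan-based view, so such discrepancies are triage leads rather than verdicts.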
4.4. Limited Cross-Platform Generalization
Cross-platform generalization remains a persistent gap. While foundational work provides concepts for analyzing memory across different operating systems [1], a significant proportion of recent research (73% of studies reviewed) focuses exclusively on the Windows platform (see Table 2). This platform-specific focus, with its reliance on Windows artifact semantics, means that many proposed methods do not transfer cleanly to Linux, macOS, or IoT/mobile environments [19]. This fragmentation is problematic given the prevalence of heterogeneous enterprise environments and the rise of cross-platform malware. While abstractions like graph-based modeling have been proposed to represent memory artifacts in a platform-independent manner, they require further validation and tooling maturity [21], [24].
4.5. Inconsistent Evaluation Practices and Methodological Rigor
High reported accuracies in the literature must be interpreted with caution due to significant variation in evaluation practices, as detailed in Table 3. Our review found that 60% of studies relied primarily on accuracy as the main performance metric, which can be misleading in imbalanced detection tasks. Only 40% of studies reported a more complete set of metrics, including precision, recall, and F1-score, which provide a more informative assessment of operational trade-offs. Furthermore, robustness evaluation is critically lacking; only 7% of the reviewed studies conducted any form of adversarial testing against memory obfuscation or artifact manipulation. This gap is concerning, as models may appear effective in testing but prove fragile against adaptive adversaries [25]. Without standardized evaluation procedures and shared baselines, the evidentiary strength of "state-of-the-art" claims is weakened [17], [15].
| Evaluation Practice | Finding | # of Studies | % of Total |
|---|---|---|---|
| Primary Performance Metric | Accuracy Only | 18 | 60% |
| | Precision, Recall, F1-Score Reported | 12 | 40% |
| Robustness Testing | Adversarial Testing Performed | 2 | 7% |
| | No Adversarial Testing Reported | 28 | 93% |
| Validation Method | Cross-Validation Used | 19 | 63% |
| | Simple Train/Test Split or Not Specified | 11 | 37% |
4.6. Practical Integration and Ethical Governance
Finally, practical and ethical issues remain insufficiently addressed. Memory acquisition can expose highly sensitive user information, including credentials, personal content, and cryptographic keys, creating significant privacy and legal risks. While privacy-preserving methods like differential privacy provide conceptual tools for mitigating such risks, they are rarely incorporated into the memory-forensic malware detection pipelines reviewed in this study [26]. Operational integration is another gap, with limited attention paid to how detection outputs integrate with enterprise security tooling such as SIEM pipelines or incident response platforms. Without clear interfaces, governance procedures, and privacy-aware data handling, the adoption of advanced memory forensic techniques may remain limited, even when their technical performance appears promising.
5. Future Directions
To address the gaps identified in the preceding section, future research in memory forensics for malware detection should adopt a multidisciplinary and structured approach. Advancing the field requires coordinated progress across dataset engineering, scalable systems design, adversarially robust detection, and privacy-conscious governance. We propose a prioritized roadmap for future work, summarized in Table 4.
| Priority | Direction | Impact | Feasibility | Timeline | Key Stakeholders |
|---|---|---|---|---|---|
| 1 | Standardized Datasets & Evaluation | High | Medium | 1-3 Years | Academia, NIST, Cybersecurity Consortia (e.g., MITRE) |
| 2 | Robustness & Anti-Forensic Resilience | High | Medium | 2-4 Years | Academia, Red Teams, Security Tool Vendors |
| 3 | Scalable & Lightweight Algorithms | Medium | High | 1-3 Years | Academia, IoT/Mobile Security Researchers |
| 4 | Cross-Platform Abstractions | Medium | Medium | 3-5 Years | OS Developers, Cloud Providers, Standards Bodies |
| 5 | Privacy-Aware & Ethical Governance | High | Low | 3-5+ Years | Legal Scholars, Policy Makers, Academia |
5.1. Priority 1: Standardized Dataset Development and Evaluation
The most critical barrier to progress is the lack of diverse, publicly accessible memory dump datasets. Future work must prioritize the creation of open datasets that cover a broad range of malware categories across heterogeneous environments (Windows, Linux, IoT). These datasets should be hosted in public repositories (e.g., Zenodo, IEEE DataPort) and accompanied by detailed metadata and ground-truth annotations. Concurrently, the community should establish standardized evaluation protocols, promoting the use of comprehensive metrics (Precision, Recall, F1-score) and robust validation methods (e.g., k-fold cross-validation) to enable fair and meaningful comparisons between studies.
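A minimal sketch of the evaluation protocol advocated here, stratified k-fold cross-validation with precision, recall, and F1 reported alongside accuracy, is shown below; the synthetic, imbalanced data stands in for memory-derived feature vectors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

# Synthetic, imbalanced stand-in for memory-derived feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 32))
y = (rng.random(600) < 0.15).astype(int)    # roughly 15% "malicious"
X[y == 1] += 0.8                            # make the minority class weakly separable

scores = cross_validate(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring=["accuracy", "precision", "recall", "f1"],
)

# With ~85% benign samples, accuracy alone looks flattering even for weak models,
# which is why the per-class metrics belong in every report.
for metric in ["accuracy", "precision", "recall", "f1"]:
    vals = scores[f"test_{metric}"]
    print(f"{metric:9s}: {vals.mean():.3f} ± {vals.std():.3f}")
```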
5.2. Priority 2: Countering Anti-Forensic Techniques
Future detection methods must be designed with adversarial resilience in mind. Research should focus on developing techniques that can identify manipulated memory artifacts and validate the integrity of kernel and process structures. This includes moving beyond signature-based detection of anti-forensic methods and toward behavioral and anomaly-based approaches. Furthermore, adversarial testing should become a standard component of the evaluation process, using techniques from the adversarial machine learning domain to assess model robustness against evasion attacks [25].
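One concrete form such testing could take is a gradient-based sensitivity probe in the spirit of FGSM [25], sketched below on a placeholder classifier over feature-vector inputs. Because perturbed feature vectors need not correspond to realizable memory states, this measures model fragility rather than end-to-end evadability.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """One-step FGSM: move each feature in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Placeholder classifier and data standing in for memory-derived features.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(128, 32)
y = torch.randint(0, 2, (128,))

x_adv = fgsm_perturb(model, x, y)
clean_acc = (model(x).argmax(1) == y).float().mean().item()
adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
print(f"clean accuracy {clean_acc:.2f} vs. perturbed accuracy {adv_acc:.2f}")
```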
5.3. Priority 3: Scalable and Lightweight Algorithms
To improve operational feasibility, particularly in real-time and resource-constrained settings, research should focus on lightweight detection models. This includes exploring model compression techniques like quantization and knowledge distillation to reduce the footprint of deep learning models without sacrificing performance [22]. Hybrid approaches that use fast, low-overhead screening methods to trigger deeper, more resource-intensive analysis when needed also represent a promising direction.
5.4. Priority 4: Standardized Cross-Platform Structures
To overcome platform fragmentation, future frameworks should develop modular artifact extraction and representation layers that abstract away OS-specific details. Graph-based models that represent relationships between memory objects (processes, threads, sockets) in a platform-independent manner are a promising avenue for building unified analysis engines [21], [24].
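The flavor of such a representation is illustrated by the sketch below, which builds a small typed graph of memory objects with networkx; the node and edge types, and the example query, are illustrative choices rather than a schema proposed in the cited works.

```python
import networkx as nx

# Toy memory-object graph: typed nodes for processes, threads, and sockets,
# typed edges for parenthood and ownership. All values are invented.
g = nx.DiGraph()
g.add_node("proc:1332", kind="process", name="explorer.exe")
g.add_node("proc:4444", kind="process", name="rundll32.exe")
g.add_node("thread:9001", kind="thread")
g.add_node("sock:0.0.0.0:4455", kind="socket")

g.add_edge("proc:1332", "proc:4444", rel="parent_of")
g.add_edge("proc:4444", "thread:9001", rel="owns")
g.add_edge("proc:4444", "sock:0.0.0.0:4455", rel="opened")

# A platform-agnostic query: processes that both own threads and opened sockets,
# the kind of structural pattern a rule, an analyst, or a GNN could consume.
for node, attrs in g.nodes(data=True):
    if attrs.get("kind") != "process":
        continue
    rels = {g.edges[node, succ]["rel"] for succ in g.successors(node)}
    if {"owns", "opened"} <= rels:
        print(node, attrs["name"], "owns threads and has open sockets")
```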
5.5. Priority 5: Privacy-Aware and Ethical Approaches
Finally, privacy and ethical governance must be treated as core design requirements. Future research should explore the practical application of privacy-enhancing technologies, such as differential privacy, to memory analysis workflows [26]. This requires careful design to balance analytical utility with privacy preservation. Collaboration with legal and policy experts is needed to develop governance frameworks for the defensible acquisition, handling, and retention of sensitive memory data.
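As a minimal illustration of how a privacy budget could be applied before memory-derived statistics are shared, the sketch below releases an aggregate count under the standard Laplace mechanism [26]; the statistic, sensitivity, and epsilon are illustrative assumptions.

```python
import numpy as np

def laplace_release(true_count: float, sensitivity: float = 1.0,
                    epsilon: float = 0.5, seed: int = 0) -> float:
    """Release a count under epsilon-differential privacy via the Laplace mechanism.

    The noise scale is sensitivity / epsilon, where 'sensitivity' bounds how
    much adding or removing a single record can change the count.
    """
    rng = np.random.default_rng(seed)
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: number of hosts whose memory images contained injected code regions,
# shared with a consortium without exposing any single host's status.
print(laplace_release(true_count=42))
```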
6. Conclusion
Memory forensics remains an indispensable capability in modern cybersecurity, providing critical visibility into malicious behaviors that are invisible to disk-based analysis. This systematic review of 30 studies from 2014 to 2024 highlights sustained innovation in the field, particularly in the application of machine learning, deep learning, and visualization-driven techniques. However, our analysis also reveals that the translation of this research into operationally robust and reliable tools is hindered by significant, recurring challenges. The scarcity of diverse public datasets, inconsistent evaluation practices, and a general lack of focus on adversarial robustness and scalability are the most pressing issues that limit the generalizability of current findings.

To move forward, the research community must pivot from demonstrating feasibility in controlled settings to building resilient, scalable, and defensible solutions for real-world environments. This requires a collective effort to create standardized benchmarks, adopt more rigorous evaluation methodologies, and explicitly design for adversarial pressure and platform diversity. By addressing the gaps identified in this review and following the prioritized roadmap, future research can significantly enhance the practical effectiveness of memory forensics, ensuring it remains a powerful tool in the ongoing fight against advanced cyber threats.
References
- Ligh, M. H., Case, A., Levy, J., & Walters, A. (2014). The art of memory forensics: Detecting malware and threats in Windows, Linux, and Mac memory. Wiley. DOI: 10.1016/b978-1-59749-470-0.00002-4
- Case, A., & Richard, G. G. (2016). Detecting Objective-C malware through memory forensics. Digital Investigation, 18, S3–S10. DOI: 10.1016/j.diin.2016.04.002
- Song, W., Yin, H., Liu, C., & Song, D. (2018). DeepMem: Learning graph neural network models for fast and robust memory forensic analysis. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (pp. 1–13). DOI: 10.1145/3243734.3243833
- Volatility Foundation. (2024). Volatility 3 Framework [Computer software]. Retrieved February 8, 2026.
- Maniriho, P., Ahmad, T., & Cheruiyot, W. K. (2023). Fileless malware threats: Recent advances, analysis approach through memory forensics and research challenges. Journal of Network and Computer Applications, 213, 103613. DOI: 10.1016/j.jnca.2023.103613
- Mosli, R., Li, R., Yuan, B., & Pan, Y. (2016). Automated malware detection using artefacts in forensic memory images. In 2016 IEEE Symposium on Technologies for Homeland Security (HST) (pp. 1–6). DOI: 10.1109/THS.2016.7568923
- Damodaran, A., Troia, F. D., Visaggio, C. A., Austin, T. H., & Stamp, M. (2017). A comparison of static, dynamic, and hybrid analysis for malware detection. Journal of Computer Virology and Hacking Techniques, 13, 1–12. DOI: 10.1007/s11416-015-0261-z
- Ugarte-Pedrero, X., Santos, I., Sanz, B., & Bringas, P. G. (2019). Countering anti-forensics in memory analysis. Computers & Security, 85, 386–401. DOI: 10.1016/j.cose.2019.05.013
- Kara, I. (2022). A survey on fileless malware: Attacks, detection, and prevention techniques. Journal of Information Security and Applications, 66, 103136. DOI: 10.1016/j.jisa.2022.103136
- Khalid, O., Ullah, S., Ahmad, T., Saeed, S., Alabbad, D. A., Aslam, M., Buriro, A., & Ahmad, R. (2023). An insight into the machine-learning-based fileless malware detection. Sensors, 23(2), 612. DOI: 10.3390/s23020612
- Trend Micro. (2024). Fileless Attacks and Security. Retrieved February 8, 2026.
- Bozkir, A. S., Tahillioglu, E., & Aydos, M. (2021). Catch them alive: A malware detection approach through memory forensics, manifold learning, and computer vision. Computers & Security, 103, 102166. DOI: 10.1016/j.cose.2020.102166
- Dai, Y., Li, H., & Qian, Y. (2022). Visualization-based malware detection using memory forensics and machine learning. Sensors, 22(4), 1456. DOI: 10.3390/s22041456
- Jamil, N., & Khan, A. U. R. (2022). Memory forensics-based malware detection using computer vision and machine learning. Electronics, 11(16), 2579. DOI: 10.3390/e11162579
- Liu, Y., Zhang, X., & Wang, L. (2023). Deep learning for memory forensics: A survey of malware detection techniques. IEEE Transactions on Information Forensics and Security, 18, 2345–2360. DOI: 10.1109/TIFS.2023.3265841
- Naeem, M. R., Shah, M. A., Khan, M. K., Rehman, A., & Ullah, I. (2022). A malware detection scheme via smart memory forensics for Windows devices. Mobile Information Systems, 2022, 9156514. DOI: 10.1155/2022/9156514
- Qureshi, A., Li, J., & Wang, Q. (2024). Memory forensics for IoT malware detection using deep learning. Internet of Things, 25, 100923. DOI: 10.1016/j.iot.2023.100923
- Balzarotti, D., Di Luna, G. A., & Franz, M. (2015). GPU-assisted malware detection through memory forensics. In Proceedings of the IEEE Symposium on Security and Privacy (pp. 567–582). DOI: 10.1109/SP.2015.41
- Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning: Data mining, inference, and prediction (2nd ed.). Springer. DOI: 10.1111/j.1751-5823.2009.00095_18.x
- Ajay Kumara, M. A., & Jaidhar, C. D. (2018). Automated multi-level malware detection system based on reconstructed semantic view of executables using machine learning techniques at VMM. Future Generation Computer Systems, 79, 431–446. DOI: 10.1016/j.future.2017.07.011
- Tricco, A. C., Lillie, E., Zarin, W., O'Brien, K. K., Colquhoun, H., Levac, D., ... & Straus, S. E. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7), 467–473. DOI: 10.7326/M18-0850
- Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. DOI: 10.20944/preprints202411.2377.v1
- Mos, T., & Chow, K. P. (2021). Memory forensics for IoT and Android devices: Challenges and solutions. IEEE Transactions on Dependable and Secure Computing, 18(4), 1890–1903. DOI: 10.1109/TDSC.2019.2938999
- Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., & Yu, P. S. (2020). A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1), 4–24. DOI: 10.1109/TNNLS.2020.2978386
- Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. DOI: 10.1063/pt.5.028530
- Dwork, C. (2006). Differential privacy. In Automata, Languages and Programming: 33rd International Colloquium, ICALP 2006 (pp. 1–12). Springer. DOI: 10.1007/11787006_1