Optimizing Recurrent Neural Network with Bayesian Algorithm for Behavioural Authentication System
Current studies on behavioural biometric authentication have focused on the use of deep learning and keystroke dynamics, but the deliberate optimization of the underlying algorithm to obtain the best possible outcome has received little attention. This study incorporated a Bayesian optimization algorithm into a Recurrent Neural Network (RNN) to build a Keystroke Behavioural Biometric (KBB) authentication model intended to counter social engineering attacks. The workflow begins with importing a keylogging dataset, followed by data pre-processing and feature extraction, after which an RNN is used to build the KBB model. Hyperparameter tuning was then performed to achieve optimal results. A conventional optimizer, Adaptive Moment Estimation (Adam), was also trained and evaluated in order to estimate the impact of the optimization technique on model inference. The RNN model tuned with Bayesian optimization outperformed the RNN model trained with the Adam optimizer alone. The purpose of incorporating and evaluating the best optimization technique is to produce an effective and accurate behavioural biometric authentication model that can effectively mitigate social engineering attacks.
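The abstract describes the pipeline only at a high level. The listing below is a minimal, illustrative sketch of that setup, assuming TensorFlow/Keras with the KerasTuner library; the dataset shapes, search space, trial counts, and all variable names are placeholder assumptions rather than the configuration or data used in the study.

# Minimal sketch: Bayesian hyperparameter search for an RNN keystroke model,
# compared against a fixed-Adam baseline. Shapes, names, and the search space
# are illustrative assumptions, not the authors' exact configuration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import keras_tuner as kt

TIMESTEPS, FEATURES = 20, 4          # e.g. hold/flight times per keystroke (assumed)
X = np.random.rand(500, TIMESTEPS, FEATURES).astype("float32")   # placeholder data
y = np.random.randint(0, 2, size=(500,))                         # 1 = genuine user

def build_model(hp):
    """Build an RNN whose width, dropout, and learning rate are tunable."""
    model = tf.keras.Sequential([
        layers.Input(shape=(TIMESTEPS, FEATURES)),
        layers.SimpleRNN(hp.Int("units", 32, 128, step=32)),
        layers.Dropout(hp.Float("dropout", 0.0, 0.5, step=0.1)),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            learning_rate=hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Bayesian optimization over the hyperparameter space (Gaussian-process surrogate).
tuner = kt.BayesianOptimization(
    build_model,
    objective="val_accuracy",
    max_trials=10,
    overwrite=True,
    directory="kbb_tuning",
    project_name="rnn_bayes",
)
tuner.search(X, y, epochs=5, validation_split=0.2, verbose=0)
best_model = tuner.get_best_models(num_models=1)[0]

# Fixed-Adam baseline: same architecture, search-space default hyperparameters.
baseline = build_model(kt.HyperParameters())
baseline.fit(X, y, epochs=5, validation_split=0.2, verbose=0)

In this sketch the baseline reuses the same architecture with the search-space defaults, so any gap in validation accuracy between the two models reflects the Bayesian search over the hyperparameters rather than a change of architecture.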