Abstract
Aim: The aim of this paper is to present the new opportunities that the rise of artificial intelligence offers for social engineering defence strategies.
Methodology: By searching, summarising and analysing the relevant literature, the authors present the methods of social engineering, the milestones of artificial intelligence, and the expected role of artificial intelligence in cyber attacks.
Findings: Artificial intelligence already plays a key role in defending against increasingly sophisticated attacks, but defence is also a race in which researchers and developers should ideally stay one step ahead of cybercriminals in prevention. Experience shows that incorporating artificial intelligence into security protocols significantly increases the effectiveness of defences, yet continuous human monitoring remains a cornerstone of any cyber defence strategy.
Value: This paper describes the different types of social engineering crimes; the evolution of artificial intelligence, including its history, its eras and its language-processing models; and the role these play in the perpetration of crimes. The potential defensive uses and the software currently available are explored, with special attention given to a recent study in which researchers created the first generative worm. Finally, we address the inevitable question of what role humans retain in cyber defence once artificial intelligence is present.
References
Aliman, N. M., Kester, L. J. H. M., Werkhoven, P., & Yampolskiy, R. (2019). Orthogonality-based disentanglement of responsibilities for ethical intelligent systems. In P. Hammer, B. Goertzel, & M. Iklé (Eds.), Artificial general intelligence (Vol. 11654, pp. 3–13). Springer.
Bányász, P. (2018). Social engineering and social media. Nemzetbiztonsági Szemle, 5(1), 59–77.
Bostrom, N. (2015). Szuperintelligencia – Utak, veszélyek, stratégiák (Á. Szalay, Trans.) [Original work published 2014]. Ad Astra Kiadó.
Butz, M. V. (2021). Toward strong AI. KI – Künstliche Intelligenz, 35(4), 391–399.
Chaturvedi, S., Patvardhan, C., & Lakshmi, C. V. (2023). AI value alignment problem: The clear and present danger. In 2023 6th International Conference on Information Systems and Computer Networks (ISCON) (pp. 1–6). IEEE. https://doi.org/10.1109/ISCON57294.2023.10112100
Dub, M. (2021). A social engineering támadások megelőzésének lehetőségei. Hadmérnök, 16(3), 137–187. https://doi.org/10.32567/hm.2021.3.10
Falade, P. V. (2023). Decoding the threat landscape: ChatGPT, FraudGPT, and WormGPT in social engineering attacks. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 9(5), 132–133.
Florêncio, D., & Herley, C. (2019). The economics of phishing and malicious email. In Proceedings of the 28th USENIX Security Symposium (pp. 963–980). USENIX Association.
Ghosh, A., Sufian, A., Sultana, F., Chakrabarti, A., & De, D. (2020). Fundamental concepts of convolutional neural network. In V. Balas, R. Kumar, & R. Srivastava (Eds.), Recent trends and advances in artificial intelligence and internet of things (Intelligent Systems Reference Library, Vol. 172, pp. 519–567). Springer. https://doi.org/10.1007/978-3-030-32644-9_36
Hadnagy, C. (2018). Social engineering: The science of human hacking. John Wiley & Sons Inc.
Hassabis, D. (2017). Artificial intelligence: Chess match of the century. Nature, 544(7651), 413. https://doi.org/10.1038/544413a
Jagodics, I., & Kollár, Cs. (2023). 21. századi social engineering támadások, védekezés és szervezeti hatások Európában. Belügyi Szemle, 71(1), 113–126. https://doi.org/10.38146/BSZ.2023.1.6
Kelemen, R., & Németh, R. (2019). A kibertér alanyai és sebezhetősége. Szakmai Szemle: A Katonai Nemzetbiztonsági Szolgálat tudományos-szakmai folyóirata, (3), 95–118.
Klein, T., & Szabó, A. (2018). A cybercrime, mint infokommunikációs jogi probléma. In G. Polyák & D. Lévai (Eds.), Tanulmányok a technológia- és cyberjog néhány aktuális kérdéséről (pp. 123–132). Médiatudományi Intézet.
Kristóf, T. (2002). A mesterséges neurális hálók a jövőkutatás szolgálatában. In É. Hideg (Ed.), Jövőelméletek sorozat (pp. 23–26). BGE Jövőkutatási Kutatóközpont.
Leung, A. C. M., & Bose, I. (2008). Indirect financial loss of phishing to global market. In ICIS 2008 Proceedings (Paper 5). Association for Information Systems. https://aisel.aisnet.org/icis2008/5
Lumacad, G. S., & Namoco, R. A. (2023). Multilayer perceptron neural network approach to classifying learning modalities under the new normal. IEEE Transactions on Computational Social Systems, 1(99), 1–13. https://doi.org/10.1109/TCSS.2023.3251566
Minsky, M. (1961). Steps toward artificial intelligence. Proceedings of the IRE, 49, 8–30. https://doi.org/10.1109/JRPROC.1961.287775
Mirsky, Y., & Lee, W. (2021). The creation and detection of deepfakes: A survey. ACM Computing Surveys, 54(1), 1–41. https://doi.org/10.1145/3425780
Mitnick, K. D., & Simon, W. L. (2002). A legendás hacker – A megtévesztés művészete. Perfact-Pro Kft.
Mühlhoff, R. (2019). Human aided artificial intelligence: Or, how to run large computations in human brains? Toward a media sociology of machine learning. New Media & Society, 22(10), 1868–1884. https://doi.org/10.1177/1461444819885334
Nassi, B., Cohen, S., & Bitton, R. (2024). Here comes the AI Worm: Unleashing zero click worms that target GenAI powered applications. Retrieved from https://sites.google.com/view/compromptmized
Németh, R. (2019). Kibertámadások gazdasági vonatkozásai a vállalati szférában. In A. Dernóczy-Polyák (Ed.), Kutatási jelentés (Vol. 1, pp. 307–325).
Oroszi, E. (2019). Az információbiztonság lélektana. Nemzeti Kibervédelmi Intézet.
Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., & Nießner, M. (2019). FaceForensics++: Learning to detect manipulated facial images. arXiv. https://arxiv.org/abs/1901.08971
Saygin, A., Cicekli, I., & Akman, V. (2000). Turing test: 50 years later. Minds and Machines, 10(4), 463–518.
Searle, J. R. (1996). Az elme, az agy és a programok világa. In Cs. Pléh (Ed.), Kognitív tudomány (pp. 136–151). Osiris Kiadó.
Sörös, T., & Váczi, D. (2013). Social engineering a biztonságtechnika tükrében. XXXI. Országos Tudományos Diákköri Konferencia, Had- és Rendészettudományi Szekció, Budapest.
Takale, G., Wattamwar, A., Saipatwar, S., Saindane, H., & Tushar, P. (2024). Comparative analysis of LSTM, RNN, CNN and MLP machine learning algorithms for stock value prediction. Journal of Firewall Software and Networking, 2(1).
Youvan, D. (2024). Self-improving AI and the ungodly basis for intelligent design: Implications for the future of intelligence, creation, and a simulated universe. https://doi.org/10.13140/RG.2.2.26336.90881

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright (c) 2025 Academic Journal of Internal Affairs