Questions of protection of automatic biometric identity verification systems - Presentation attacks, deception
PDF (Hungarian)

Keywords

biometrics, remote biometric identification, presentation attack

How to Cite

Questions of protection of automatic biometric identity verification systems - Presentation attacks, deception. (2025). Academic Journal of Internal Affairs, 73(3), 477-494. https://doi.org/10.38146/bsz-ajia.2025.v73.i3.pp477-494

Abstract

Aim: The aim of the study is to present the currently known challenges posed by presentation attacks against biometric systems, together with their law enforcement and national security implications; to draw attention to the importance, possibilities and limitations of solutions and devices that provide different levels of security in system design and biometric identity verification; and to highlight the need for regular, target-oriented risk analysis.

Methodology: The study synthesizes the latest research on presentation attacks by processing professional publications, studies, test reports and reports of leading international organizations, and by reviewing the solutions offered by existing standards; on this basis it describes attacks against biometric systems and draws conclusions.

Findings: Biometric systems rely on artificial intelligence technologies, which bring enormous advantages and opportunities but, in parallel, also challenges and tasks to be solved. Threats arising from the malicious use of the technology, in particular the deception of biometric systems, are a real and growing problem, and recognizing and preventing forgery and deception remains difficult because of the complexity of the task. Extensive research is under way in this area, but no generally applicable solution, tool or application yet exists that can be integrated into systems to protect against such threats. Technical solutions and tools help, but only a risk-based approach, with defined risk levels, corresponding system design and well-considered security policy measures, makes it possible to recognize and prevent false data communications and attacks that deceive biometric systems, or to mitigate the damage they cause.

Value: Knowledge of the challenges of automatic identity verification technology and of current methods for deceiving biometric systems is the basis for planning defensive methods and procedures, and is also in the interest of law enforcement and national security. The article aims to support this through its analysis.


References

Akhtar, Z., Dasgupta, D., & Banerjee, B. (2019). Face authenticity: An overview of face manipulation generation, detection and recognition. Proceedings of the International Conference on Communication and Information Processing (ICCIP-2019), pp. 1–9. Elsevier-SSRN. https://www.researchgate.net/publication/333984240

Dong, F., Zou, X., Wang, J., & Liu, X. (2023). Contrastive learning-based general Deepfake detection with multi-scale RGB frequency clues. Journal of King Saud University – Computer and Information Sciences, 35(4), pp. 90–99. https://doi.org/10.1016/j.jksuci.2023.03.005

ENISA. (2022). Remote identity proofing: Attacks & countermeasures. In V. Paggio, E. Nikolouzou, & M. Dekker (Eds.), ENISA report. Publications Office of the European Union. ISBN: 978-92-9204-549-4. https://www.facetec.com/wp-content/uploads/2022/01/ENISA-Report-Remote-Identity-Proofing-Attacks-Countermeasures-1.pdf

ENISA. (2024). Remote ID proofing good practices. In E. Nikolouzou & R. Naydenov (Eds.), ENISA report. Publications Office of the European Union. ISBN: 978-92-9204-661-3. https://www.enisa.europa.eu/publications/remote-id-proofing-good-practices

Europol. (2022). Facing reality? Law enforcement and the challenge of deepfakes. Publications Office of the European Union. ISBN: 978-92-95236-23-3. https://op.europa.eu/s/z12G

Garrido, P., Valgaerts, L., Rehmsen, O., Thormählen, T., Perez, P., & Theobalt, C. (2014). Automatic face reenactment. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014). IEEE. ISBN: 978-1-4799-5118-5. http://dx.doi.org/10.1109/CVPR.2014.537

Grand View Research. (2020). Passwordless authentication market size, share & trends analysis report by component, by product type (face recognition, fingerprint, iris), by authentication type, by portability, by end user, by region, and segment forecasts. Market Analysis Report 2022–2030. GVR-4-68039-996-6. https://www.grandviewresearch.com/industry-analysis/passwordless-authentication-market-report

Hernandez-Ortega, J., Fierrez, J., Morales, A., & Galbally, J. (2019). Introduction to face presentation attack detection. In S. Marcel, J. Fierrez, & N. Evans (Eds.), Handbook of biometric anti-spoofing (pp. 203–230). Springer. https://doi.org/10.1007/978-3-319-92627-8_9

Kramer, R. S. S., Mireku, M. O., Flack, T. R., & Kay, L. R. (2019). Face morphing attacks: Investigating detection with humans and computers. Cognitive Research: Principles and Implications, 4(28). https://doi.org/10.1186/s41235-019-0181-4

Ngan, M., Grother, P., & Hanaoka, K. (2022). NIST IR 8331 DRAFT SUPPLEMENT: Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms. National Institute of Standards and Technology (NIST). https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt-ongoing

Ngan, M., Grother, P., & Hom, A. (2023). NIST Internal Report NIST IR 8491: Face analysis technology evaluation (FATE), Part 10: Performance of passive, software-based presentation attack detection (PAD) algorithms. National Institute of Standards and Technology (NIST). https://doi.org/10.6028/NIST.IR.8491

Ngan, M., Grother, P., Hanaoka, K., & Kuo, J. (2024). NIST IR 8292 DRAFT SUPPLEMENT: Face analysis technology evaluation (FATE), Part 4: MORPH - Performance of automated face morph detection. National Institute of Standards and Technology (NIST). https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt-ongoing

Qin, L., Peng, F., Long, M., Ramachandra, R., & Busch, C. (2021). Vulnerabilities of unattended face verification systems to facial components-based presentation attacks: An empirical study. ACM Transactions on Privacy and Security, 25(1), Article 4, pp. 1–28. https://doi.org/10.1145/3491199

Rathgeb, C., Tolosana, R., Vera-Rodriguez, R., & Busch, C. (Eds.). (2024). Handbook of digital face manipulation and detection: From DeepFakes to morphing attacks. Springer. https://doi.org/10.1007/978-3-030-87664-7

Venkatesh, S., Ramachandra, R., Raja, K., & Busch, C. (2021). Face morphing attack generation and detection: A comprehensive survey. IEEE Transactions on Technology and Society, 2(3), pp. 128–145. https://ieeexplore.ieee.org/document/9380153

Yan, Z., Zhang, Y., Yuan, X., Lyu, S., & Wu, B. (2023). DeepfakeBench: A comprehensive benchmark of deepfake detection. Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023), Track on Datasets and Benchmarks. https://arxiv.org/pdf/2307.01426

Yu, Z., Qin, Y., Li, X., Zhao, C., Lei, Z., & Zhao, G. (2023). Deep learning for face anti-spoofing: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5), pp. 5609–5631. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9925105

Zhao, Z., Wang, P., & Lu, W. (2021). Multi-layer fusion neural network for deepfake detection. International Journal of Digital Crime and Forensics, 13(4), pp. 26–39. http://dx.doi.org/10.4018/IJDCF.20210701.oa3


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Copyright (c) 2025 Academic Journal of Internal Affairs
