2025 IEEE International Conference on Cyber Security and Resilience

Full Program

Summary:

This research investigates the security risks of AI-generated JavaScript code by analyzing vulnerabilities in code produced by six widely used Large Language Models (LLMs). Using identical prompts for each model, we conducted a two-layered security assessment combining static application security testing (SAST) tools and manual code reviews, categorizing each vulnerability by its Common Weakness Enumeration (CWE) class and severity.
Our research demonstrates that a significant portion of AI-generated JavaScript code (45.7%) contains security vulnerabilities. It further reveals notable variations in CWE frequency and severity across LLMs, suggesting that some models are more prone to generating particular types of vulnerabilities.
Lastly, to fairly compare LLM security risks, we introduced Vulnerabilities per Line of Code (V/LoC) and Weighted Security Risk per Line of Code (WSR/LoC) as new evaluation metrics, enabling a standardized assessment across LLMs. Our findings highlight the importance of ensuring AI-generated code meets security standards.
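As an illustration of how such metrics can be computed, the TypeScript sketch below assumes the natural reading that V/LoC is the number of findings divided by the lines of generated code, and that WSR/LoC is a severity-weighted sum of findings divided by the lines of code. The severity weights, type names, and example values are illustrative assumptions, not definitions taken from the paper.

// Minimal sketch of the two metrics described in the summary.
// Assumption: WSR/LoC weights each finding by severity before
// normalizing by lines of code; the weights below are illustrative.

type Severity = "low" | "medium" | "high" | "critical";

interface Finding {
  cwe: string;      // e.g. "CWE-79"
  severity: Severity;
}

// Hypothetical severity weights, not taken from the paper.
const severityWeights: Record<Severity, number> = {
  low: 1,
  medium: 2,
  high: 3,
  critical: 4,
};

// Vulnerabilities per Line of Code: raw count normalized by code size.
function vulnerabilitiesPerLoc(findings: Finding[], linesOfCode: number): number {
  return findings.length / linesOfCode;
}

// Weighted Security Risk per Line of Code: severity-weighted sum
// normalized by code size, so severe findings contribute more.
function weightedSecurityRiskPerLoc(findings: Finding[], linesOfCode: number): number {
  const totalRisk = findings.reduce(
    (sum, f) => sum + severityWeights[f.severity],
    0,
  );
  return totalRisk / linesOfCode;
}

// Example: findings from one hypothetical LLM output of 120 lines.
const modelAFindings: Finding[] = [
  { cwe: "CWE-79", severity: "high" },
  { cwe: "CWE-798", severity: "critical" },
];
console.log(vulnerabilitiesPerLoc(modelAFindings, 120));      // ~0.0167
console.log(weightedSecurityRiskPerLoc(modelAFindings, 120)); // ~0.0583

Normalizing by lines of code lets models that generate programs of different lengths be compared on a common scale, which is the standardization the summary refers to.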

Author(s):

Deniz Aydın    
Istanbul Technical University
Turkey

2022-2025 Siemens Cybersecurity Expert
2023-2025 Istanbul Technical University Cybersecurity Engineering and Cryptology MSc
2018-2022 Istanbul Technical University Computer Engineering BSc

Şerif Bahtiyar    
Istanbul Technical University
Turkey

2019-2025 Istanbul Technical University Computer Engineering Associate Professor
2017-2019 Istanbul Technical University Computer Engineering Lecturer
2014-2016 Mastercard Product Manager
2013-2014 Provus Expert Researcher
2012-2013 Technische Universitaet Berlin Computer Engineering Post-Doc
2004-2011 Bogazici University Computer Engineering PhD
2001-2004 Istanbul Technical University Computer Engineering MSc
1996-2001 Istanbul Technical University Computer Engineering BSc

 

