Full Program
Summary:
Deep learning models are increasingly employed in mission-critical security systems, yet they remain vulnerable to adversarial attacks caused by out-of-distribution (OOD) inference biases. These biases arise from spurious correlations in the training data, an issue exacerbated by the use of synthetic datasets. To address these concerns, we propose an algorithm that leverages causal inference to identify and transform spurious features. We apply our algorithm to two deep-learning-based IDSs and experimentally demonstrate improvements in both model robustness and performance against highly aggressive adversarial attacks. All data and algorithms presented in this paper are available in the replication package: <link>.

Author(s):
Marin François
LAMSADE, UMR CNRS 7243, Université Paris-Dauphine PSL
France
Pierre-Emmanuel Arduin
DRM, UMR CNRS 7088, Université Paris-Dauphine PSL
France
Myriam Merad
LAMSADE, UMR CNRS 7243, Université Paris-Dauphine PSL
France