CSR RAI 2021

2021 IEEE CSR Workshop on Resilient Artificial Intelligence (RAI)

July 28, 2021


Advances in Artificial Intelligence (AI) technology have opened up new markets and opportunities for progress in critical areas such as systems and network resiliency, health, education, energy, economic inclusion, and social welfare. AI is expected to play an increasing role in both defensive and offensive measures, enabling rapid responses to an evolving threat landscape.

Traditionally, cyber-physical systems (CPS) security has focused on detecting attacks on CPS and has been investigated from the perspective of preventing intruders from gaining access to the system through cryptography and other access control techniques. However, as adversaries grow in number and capability, it is becoming increasingly difficult to prevent attacks on CPS entirely, hence the need to focus on making CPS resilient.

A resilient CPS is designed to withstand disruptions and remain functional despite the actions of adversaries. One of the dominant methodologies explored for building resilient CPS relies on machine learning (ML) algorithms. However, recent research in adversarial ML shows that the ML algorithms used to secure CPS must themselves be resilient.

The Resilient Artificial Intelligence (RAI) workshop aims to collect the most recent research trends, advances, and promising future research directions from the international research community in order to take comprehensive stock of AI-driven security and resilience approaches. This includes the interplay between resilient CPS and ML/DL techniques, resilient AI-based approaches applied to CPS, recent advances in securing AI for CPS and the corresponding countermeasures, and research trends in this active area, including the exploration of Adversarial Resilience Learning, one of the most challenging issues recently investigated by several cyber-security communities.

Topics of Interest

Prospective authors are encouraged to submit previously unpublished contributions from a broad range of topics, which include but are not limited to the following:

› Artificial Intelligence-Driven Resilience
› Resilient Machine and Deep Learning
› Explainable Artificial Intelligence for Resilience
› Metrics for Resilience in Artificial Intelligence
› Adversarial Resilience
› AI Approaches to Trust and Reputation Inference
› Safety and Security in the Future of AI
› Security of Deep Learning Systems
› Robust Decision Making for Security
› Robust Statistics
› Robust Training Methods
› Resilient Distributed
› Secure Federated Learning
› White-Box and Oracle AI Attacks
› AI-Based Cyber Threats
› Malicious AI

Important Dates

Paper submission deadline: May 10, 2021 AoE (firm; extended from April 19)
Authors’ notification: May 31, 2021 AoE (extended from May 3)
Camera-ready submission: June 7, 2021 AoE (extended from May 10)
Early registration deadline: June 14, 2021 AoE
Workshop date: July 28, 2021

Submission Guidelines

The workshop’s proceedings will be published by IEEE and included in IEEE Xplore. The author instructions, manuscript preparation guidelines, and policies of the IEEE CSR conference apply to the RAI 2021 workshop; please visit the authors’ instructions page for more details. When submitting your manuscript via the conference management system, please make sure that the workshop’s track 2T9 RAI is selected in the Topic Areas drop-down list.

Workshop chairs

Fiammetta Marulli, University of Campania (IT)
Francesco Mercaldo, University of Molise (IT)

Organizing committee

Lelio Campanile, University of Campania (IT)

Publicity chairs

Lelio Campanile, University of Campania (IT)
Laura Verde, National Research Council of Italy CNR-ICAR (IT)

Contact us

fiammetta.marulli@unicampania.it
francesco.mercaldo@unimol.it

Program committee

Nicole Bussola, FBK (IT)
Pasquale Cantiello, INGV (IT)
Sabatino Carbone, University of Napoli “Federico II” (IT)
Rosangela Casolare, University of Molise (IT)
Mariangela Graziano, University of Campania (IT)
Mauro Iacono, University of Campania (IT)
Giacomo Iadarola, National Research Council of Italy CNR-IIT (IT)
Michele Mastroianni, University of Campania (IT)
Giovanni Paragliola, National Research Council of Italy CNR-ICAR (IT)
Salvatore Russo, Meridiana Italia (IT)
Carlo Sanghez, GAM (IT)
Laura Verde, National Research Council of Italy CNR-ICAR (IT)

Program Information

All sessions are held in the Nefeli room (July 28, 2021).

Technical session WS-RAI1

Chair: Fiammetta Marulli, University of Campania (IT)

10:00–10:20 CET

Welcome from the RAI workshop chairs

F. Marulli and F. Mercaldo

10:20–10:40 CET

Towards resilient artificial intelligence: survey and research issues

O. Eigner, S. Eresheim, P. Kieseberg, L. D. Klausner, F. Marulli, F. Mercaldo, M. Pirker, T. Priebe, and S. Tjoa

10:40–11:00 CET

Assessing adversarial training effect on IDSs and GANs

H. Chaitou, T. Robert, J. Leneutre, and L. Pautet

11:00–11:20 CET

Defending against model inversion attack by adversarial examples

J. Wen, S.-M. Yiu, and L. C. K. Hui

Coffee break

Technical session WS-RAI2

Chair: Francesco Mercaldo, University of Molise (IT)

11:40–12:00 CET

X-BaD: A flexible tool for explanation-based bias detection

M. Pacini, F. Nesti, A. Biondi, and G. Buttazzo

12:00–12:20 CET

Improving classification trustworthiness in random forests

S. Marrone, M. S. de Biase, F. Marulli, and L. Verde

See also the conference’s overall program.

Accepted papers

Assessing adversarial training effect on IDSs and GANs
H. Chaitou, T. Robert, J. Leneutre, and L. Pautet

Defending against model inversion attack by adversarial examples
J. Wen, S.-M. Yiu, and L. C. K. Hui

Improving classification trustworthiness in random forests
S. Marrone, M. S. de Biase, F. Marulli, and L. Verde

Towards resilient artificial intelligence: survey and research issues
O. Eigner, S. Eresheim, P. Kieseberg, L. D. Klausner, F. Marulli, F. Mercaldo, M. Pirker, T. Priebe, and S. Tjoa

X-BaD: A flexible tool for explanation-based bias detection
M. Pacini, F. Nesti, A. Biondi, and G. Buttazzo

See also the conference’s overall list of accepted papers.