Chania, Crete, Greece (in-person event) • August 5, 2025
The CSR CRE workshop explores foundational and applied advances in cyber resiliency strategies, policies, and technologies that shift the balance in favor of the defender, ensure that critical processes continue to operate in the face of a successful cyber-attack, and identify and quantify the effect economic realities have on decision processes. At the top level, national and organizational strategies and policies are needed to define what is to be achieved and the resources to be made available to protect critical resources and infrastructures. These strategies and policies must in turn be supported by security and resiliency technologies. In addition to exploring various strategies, the workshop will therefore seek to understand the capabilities, strengths, weaknesses, and benefits of various technologies, whether existing or in research. This includes the incorporation of new technologies that are not resilience-focused but still have a significant impact on a system’s ability to continue to operate under attack; examples include artificial intelligence and machine learning, which can be used by both defenders and attackers to shift the asymmetric balance.
The workshop focuses on the parameters needed to accurately quantify asymmetric imbalance from the offensive and defensive perspectives; examines technical and non-technical approaches to shifting that balance, including the full range of costs and benefits of each approach; and explores and evaluates a range of options for defining and achieving optimality. It will bring together a diverse group of experts from multiple fields to advance these concepts.
The CSR CRE workshop will accept high-quality research papers presenting strong theoretical contributions, applied research and innovation results obtained from funded cyber-security and resilience projects, and industrial papers that promote contributions on technology development and contemporary implementations.
Prospective authors are encouraged to submit previously unpublished contributions from a broad range of topics, which include but are not limited to the following:
› Cyber resiliency strategies and policies
› Cyber resilience research focused on OT/IT solutions
› Alignment of technical and mission resilience
› Cyber resilience in OT environments: strengths and weaknesses
› Metrics and measurements for resilience
› Resilience and security mutual support
› Development of true pre-emptive cyber resilience
› AI/ML and its effects on the defender and the adversary as related to cyber resilience
› Economics of cyber resiliency including examining economic models to determine return on investments
› Technical/economic barriers to implementation
Paper submission deadline: May 5, 2025 (extended from April 14; firm)
Authors’ notification: May 26, 2025 (extended from May 5)
Camera-ready submission: June 16, 2025 (extended from May 26)
Registration deadline (authors): June 16, 2025 (extended from May 26)
Workshop date: August 5, 2025
Submitted manuscripts should not exceed 6 pages (plus up to 2 extra pages, subject to overlength page charges) and should be of sufficient detail to be evaluated by expert reviewers in the field. The workshop’s proceedings will be published by IEEE and will be included in IEEE Xplore, subject to meeting IEEE Xplore’s scope and quality requirements.
The guidelines for authors, manuscript preparation guidelines, and policies of the IEEE CSR conference are applicable to the CRE workshop. Please visit the authors’ instructions page for more details. When submitting your manuscript via the conference management system, please make sure that the workshop’s track 2T2 CRE is selected in the Topic Areas drop-down list.
Volkmar Lotz, SAP Labs (FR)
Kelly McSweeney, MITRE Corporation (US)
Elena Peterson, Pacific Northwest National Lab (US)
George Sharkov, European Software Inst CEE (BG)
Michael Atighetchi, Raytheon BBN (US)
Tom Carroll, Pacific Northwest National Lab (US)
Yung Ryn Choe, Sandia National Laboratory (US)
Erich Devendorf, Air Force Research Laboratory (US)
Ilir Gashi, University of London (UK)
Doug Jacobson, Iowa State University (US)
Dong Seong Kim, University of Queensland (AU)
Gargi Mitra, University of British Columbia (CA)
Nicholas C. Multari, University of Nevada Las Vegas (US)
Nuno Neves, University of Lisbon (PT)
Craig Rieger, Idaho National Laboratory (US)
Luigi Romano, University of Naples (IT)
Meghan Sahakian, Sandia National Laboratory (US)
Reginald Sawilla, Government of Canada (CA)
O Sami Saydjari, Cyber Defense Agency (US)
Neeraj Suri, University of Lancaster (UK)
Marco Vieira, University of North Carolina Charlotte (US)
Chris Walter, WW Technologies (US)
Mark T. Maybury, Lockheed Martin (US)
Vice President of Commercialization, Technology, and Strategic Innovation
Advances in Generative Artificial Intelligence (GAI) promise to enhance productivity across engineering, manufacturing, the enterprise, and the edge. Transformational use cases now exist across nearly all aspects of life, from learning to creating to living to sustaining, and are forecast to add trillions of dollars annually to the global economy. Following videos of current operational AI and generative AI in customer service, autonomous helicopters, and firefighting intelligence, this presentation characterizes how GAI suffers from being biased, brittle, and baroque. To ensure its full benefits, it is essential to mitigate vulnerabilities to the confidentiality, integrity, and availability of large language models (LLMs) and to ensure safety, privacy, and security for all. Adversary attacks against LLMs include exploits such as leaking private or security information, poisoning training data, exploiting inference weaknesses, or holding training data or models for ransom. In addition, adversaries can turn LLMs against us: undermining confidentiality by employing LLMs to learn vulnerabilities at speed and scale, corrupting integrity through high-fidelity but erroneous synthetic training data generation, and denying availability by flooding LLMs with high-fidelity, human-like access. Root causes and associated harms of biased, brittle, and baroque LLMs are exemplified together with countermeasures to enhance resilience (e.g., diversifying training data and teams, guardrails, knowledge graphs, certainty management, and explanation). Together, these methods promise to enhance the correctness, coherence, and clarity of LLMs.
Mark Maybury is the vice president of Commercialization, Technology and Strategic Innovation for Lockheed Martin, responsible for leading efforts to commercialize dual-use products and services from research and development (R&D) to accelerate growth in strategic commercial and defense sectors. He works with Lockheed Martin’s business areas and functions, Corporate Technology Office, LM Ventures, LMEvolve, FEASIC and AstrisAI Boards to strengthen commercial partnerships and licensing and to explore innovative business models that accelerate and scale new products and services in core and near adjacent markets.
Previously, he was the first chief technology officer for Stanley Black & Decker, growing $1B in annual new product revenue from R&D. He served on the STANLEY Ventures Board, as executive champion for STANLEY X, which incubated 7 startups, and as Executive Sponsor of STANLEY Techstars, accelerating 40 disruptive companies targeting VC and PE investment. His previous roles include Chief Scientist of the U.S. Air Force, Chief Technology Officer and Chief Security Officer at MITRE, and Director of the National Cybersecurity FFRDC. As MITRE CTO, he guided the spin-out of 5 commercial technologies/startups. He serves as a special government employee for the Defense Science Board, providing strategy and technology advice to the Office of the Secretary of Defense. He serves on several commercial boards governing innovation and growth, including ISI, Halo.Energy, and Nano surfaces. His past board service includes READY Robotics, Object Management Group, the Air Force Scientific Advisory Board, the Intelligence Science Board, and the Homeland Security S&T Advisory Committee.
He is a Fellow of both the Institute of Electrical and Electronics Engineers (IEEE) and the Association for the Advancement of Artificial Intelligence. He received his BA in mathematics from College of the Holy Cross and MBA from Rensselaer Polytechnic Institute. He also holds a Master of Philosophy in Computer Speech and Language Processing and Ph.D. in Generative AI, both from the University of Cambridge, United Kingdom.
George Sharkov, European Software Institute (BG)
Director
Modern critical infrastructure operates within a complex web of dependencies extending far beyond traditional supply chains, where seemingly innocuous external services like DNS providers or time-series databases can trigger catastrophic cascade failures. This presentation introduces a novel three-layer ontology implemented as a knowledge graph that captures not only direct software dependencies (SBOM) and system-of-systems relationships but also includes often-overlooked “enabler” services that create hidden attack paths. Integrating AI/ML into critical infrastructure has created unprecedented efficiency gains, but also introduced novel attack surfaces where compromised training data or external AI services can trigger catastrophic physical failures. The proposed three-layer ontology is further extended to incorporate AI-specific elements, including model dependencies (AIBOM), training data lineage, and external ML services that create hidden attack paths. Through real-world scenarios, we demonstrate how a compromised time-series database can poison ML models over 90 days, ultimately causing grid-wide blackouts when AI-driven decisions go catastrophically wrong. Our knowledge graph approach enables novel risk assessments that consider model drift, data poisoning vulnerabilities, and the cascading effects of AI decision failures. The framework fundamentally changes how we assess infrastructure resilience in the age of AI, where the supply chain extends from silicon to datasets, and where trust in machine learning can become our greatest vulnerability.
George Sharkov is Associate Professor at the Institute of Information and Communication Technologies, Bulgarian Academy of Sciences. Since 2003, he has been the Director of the European Software Institute (Sofia) and head of the Cybersecurity Lab at Sofia Tech Park. He was a cyber defence advisor to the Bulgarian MoD (2014-2021) and National Cybersecurity Coordinator for the Bulgarian Government. He was a member of the EU High-Level Expert Group on AI and represents European Digital SMEs and SBS in the ETSI Technical Committees CYBER and Securing AI, as well as in several ENISA working groups. He has more than 30 years of experience in developing complex software systems-of-systems, software process quality (CMMI), cyber security and resilience (CERT RMM), and trustworthy AI. He holds a PhD in AI/expert systems and lectures at three leading universities in Bulgaria (software quality, cybersecurity, security for AI).
Gabriel Raicu, Constanta Maritime University (RO)
Department of Navigation
This lecture explores the transformative potential of artificial intelligence (AI) in strengthening the cybersecurity posture of the maritime sector, a critical domain increasingly exposed to hybrid threats, digital vulnerabilities, and evolving operational complexity. Drawing from recent European and NATO-backed initiatives, the presentation highlights how adaptive AI-driven models can enhance threat detection, anomaly response, and cyber resilience across maritime port operations and logistics chains. Key themes include the convergence of AI with Operational Technology (OT), the integration of AI in maritime Cyber Threat Intelligence (CTI) pipelines, and the challenges of explainability and compliance in critical infrastructure environments. Real-world examples from EU projects such as ECYBRIDGE and CYRESRANGE will illustrate collaborative pathways for advancing situational awareness, decision support, and AI-based early warning systems in a dynamically shifting threat landscape. The session advocates for a pragmatic and ethically grounded approach to embedding AI in maritime cybersecurity, one that aligns technical innovation with operational safety, international regulatory frameworks, and long-term resilience planning.
Gabriel Raicu is the Rector of the Maritime University of Constanța (CMU) and Director of the Center for Excellence in Maritime Cyber Security (MARCYSCOE). He is the founder and coordinator of the Black Sea Cybersecurity Conference series, now in its ninth edition and organized in collaboration with the European Security and Defence College (ESDC). He holds a degree in maritime engineering and a PhD in cybernetics, with a research portfolio spanning early warning systems for cyber threats, the protection of critical maritime infrastructures, and the design of cybersecurity frameworks for the energy and logistics sectors.
He also served as President of the Cyber Security Cluster of Excellence (CYSCOE), a collaborative platform uniting academia, public institutions, and industry to foster the integration of cybersecurity solutions across society. He is deeply committed to the view that artificial intelligence-driven initiatives represent a major frontier for future research and innovation. At the same time, he emphasizes the necessity of a pragmatic and robust approach to cybersecurity, given the substantial risks posed by the ongoing shift toward emerging digital technologies and the post–Industry 4.0 landscape.
Volkmar Lotz, SAP Security Research (FR)
Senior Manager
Requirements on cryptographic functions and their usage are continuously evolving, following compliance needs and technological developments (such as quantum computing). As a company serving multiple industries and regions, each with their respective regulations, SAP needs to deliver to customers exactly the cryptography they need at reasonable cost; this is addressed by cryptographic agility. We investigate the challenges that arise, discuss the building blocks of a cross-company crypto agility strategy, and share insights and lessons learned from the implementation of such a strategy.
Volkmar Lotz is Senior Manager and Chief Research Strategist at SAP. He has more than 25 years of experience in industrial research on security and software engineering. He is Strategy Lead for Product Security Research, specializing in security risk management, software security, threat analysis, and IoT security. He defines and executes SAP’s security research agenda in alignment with SAP’s business strategy and global research trends. He has published numerous scientific papers in his areas of interest and regularly serves on program committees of internationally renowned conferences. He has supervised various European projects, including large-scale integrated projects. He holds a diploma in Computer Science from the University of Kaiserslautern.
08:40–10:00 | Session WS6: CRE workshop Chair: Nicholas J. Multari, Indiana University (US) Room: Hall 4 |
08:40–08:40 | Welcome by the chairs N. J. Multari and R. McQuaid |
08:40–09:00 | Cybersecurity for sustainability: A path for strategic resilience J. Saveljeva, I. Uvarova, L. Peiseniece, T. Volkova, J. Novicka, G. Polis, S. Kristapsone, and A. Vembris |
09:00–09:20 | Strategic allocation of defence resources against multi-step cyberattacks using evolutionary game theory J. Zhang and W. Wang |
09:20–10:00 | [Invited talk] Unveiling the invisible: Knowledge graph-driven discovery of hidden cascade risks in critical infrastructure supply chains G. Sharkov |
10:00–10:20 | Coffee break |
10:20–12:00 | Session WS8: CRE workshop Chair: Rosalie McQuaid, MITRE Corporation (US) Room: Hall 4 |
10:20–10:40 | Addressing the economics of critical national infrastructure (CNI) security S. A. Shaikh |
10:40–11:20 | [Invited talk] Learning to adapt: The role of AI in shaping maritime cybersecurity G. Raicu |
11:20–12:00 | [Invited talk] Towards crypto agility in complex software systems V. Lotz |
12:40–13:40 | Lunch break Location: Elia restaurant |
13:40–15:00 | Session WS10: CRE workshop Chair: Nicholas J. Multari, Indiana University (US) Room: Hall 4 |
13:40–14:20 | [Invited talk] Mitigating biased, brittle and baroque generative AI M. T. Maybury |
14:20–15:00 | [Panel discussion] AI/ML and its ramifications on cyber security and resilience G. Sharkov, M. T. Maybury, V. Lotz, G. Raicu, and CRE authors Panel moderator: Rosalie McQuaid |
See also the detailed program of the conference.
Addressing the economics of critical national infrastructure (CNI) security
S. A. Shaikh
Cybersecurity for sustainability: A path for strategic resilience
J. Saveljeva, I. Uvarova, L. Peiseniece, T. Volkova, J. Novicka, G. Polis, S. Kristapsone, and A. Vembris
Strategic allocation of defence resources against multi-step cyberattacks using evolutionary game theory
J. Zhang and W. Wang
See also the accepted papers of the conference.