Summary:
Machine learning (ML) models, particularly in the context of Federated Learning (FL), are increasingly used to enable predictive maintenance in smart industry. However, as these models become integral to operations, they also become potential targets for data leakage and membership inference attacks (MIAs). In this paper, we hypothesise that training on multi-dimensional data (i.e., multiple features instead of a single feature) improves resilience against MIAs compared to simpler single-feature models. To test this hypothesis, we design an experimental testbed to empirically evaluate the vulnerability of ML models to black-box MIAs by training models of varying complexity on industrial time-series data. Additionally, we introduce a human expert's perspective to contextualise our findings in the realm of industrial espionage, highlighting the real-world implications of data leakage. Finally, we offer a set of observations and lessons learnt from controlled experiments, discussing the trade-offs between model complexity, security, and computational effort in industrial FL deployments.
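To give a concrete flavour of the kind of black-box evaluation described above, the following is a minimal sketch, assuming a simple loss-threshold MIA (in the style of Yeom et al.) and synthetic data in place of the industrial time-series; the model choice, data generator, and all names are illustrative assumptions, not the paper's actual testbed.

```python
# Hypothetical sketch: loss-threshold membership inference attack used as
# a stand-in for the black-box MIAs evaluated in the paper. Synthetic data
# and the random-forest target model are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_data(n, n_features):
    """Synthetic stand-in for industrial time-series features."""
    X = rng.normal(size=(n, n_features))
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

def mia_auc(n_features, n_train=500, n_out=500):
    # Train a target model on "member" data only.
    X_in, y_in = make_data(n_train, n_features)
    X_out, y_out = make_data(n_out, n_features)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_in, y_in)

    # Black-box attack signal: per-sample cross-entropy loss of the
    # target model; members tend to incur lower loss than non-members.
    def losses(X, y):
        p = model.predict_proba(X)[np.arange(len(y)), y]
        return -np.log(np.clip(p, 1e-12, None))

    scores = np.concatenate([-losses(X_in, y_in), -losses(X_out, y_out)])
    membership = np.concatenate([np.ones(n_train), np.zeros(n_out)])
    # AUC of 0.5 means the attacker cannot distinguish members at all.
    return roc_auc_score(membership, scores)

# Compare attack success against a single-feature and a multi-feature model.
print(f"1 feature : attack AUC = {mia_auc(1):.3f}")
print(f"8 features: attack AUC = {mia_auc(8):.3f}")
```

A higher attack AUC indicates more membership leakage; varying `n_features` mimics the single-feature versus multi-feature comparison at the heart of the paper's hypothesis.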
Author(s):
Rustem Dautov, SINTEF, Norway
Hui Song, SINTEF, Norway
Christian Schaefer, Ericsson AB, Sweden
Seonghyun Kim, Ericsson AB, Sweden
Verena Pietsch, FILL GmbH, Austria