2021 IEEE International Conference on Cyber Security and Resilience

Full Program

Summary:

We propose a defense method against black-box membership inference (MI) attacks that turns the output of the target model into an adversarial example that misleads the attacker. Our method is the first defense that achieves both a utility-loss guarantee and zero accuracy loss for the target model. We perform experiments comparing defense performance and the utility-privacy tradeoff across different datasets and models. Our empirical results show that our defense provides strong protection for the target model against the state-of-the-art MI attack.
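The core idea, perturbing the model's confidence vector so it becomes an adversarial example for the attacker's membership classifier while the predicted label and a distortion budget are preserved, can be illustrated with a minimal sketch. Everything below is assumed for illustration: `attacker_score` is a toy stand-in for the real black-box attacker, and the greedy mass-shifting loop is not the authors' actual optimization procedure.

```python
import numpy as np

def attacker_score(conf):
    """Hypothetical MI attacker: treats an unusually confident
    prediction as evidence the sample was in the training set."""
    return float(1.0 / (1.0 + np.exp(-20.0 * (conf.max() - 0.9))))

def defend(conf, eps=0.2, steps=100):
    """Turn the confidence vector into an adversarial example for the
    attacker: shave probability mass off the top class and spread it
    over the others until the attacker's score is near 0.5 (a coin
    flip), subject to (a) the argmax never changing (zero accuracy
    loss) and (b) total L1 distortion at most eps (utility bound)."""
    out = conf.copy()
    top = int(np.argmax(out))
    step = eps / steps
    for _ in range(steps):
        if abs(attacker_score(out) - 0.5) < 1e-3:
            break  # attacker is already maximally confused
        cand = out.copy()
        cand[top] -= step
        cand[np.arange(len(cand)) != top] += step / (len(cand) - 1)
        # stop if the label would flip or the distortion budget is spent
        if int(np.argmax(cand)) != top or np.abs(cand - conf).sum() > eps:
            break
        out = cand
    return out
```

For example, a very confident output like `[0.97, 0.01, 0.01, 0.01]` is flattened just enough that the toy attacker's membership score drops toward 0.5, while the predicted class and the probability sum stay unchanged.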

Author(s):

Jing Wen    
The University of Hong Kong
Hong Kong

Siu-Ming Yiu    
The University of Hong Kong
Hong Kong

Lucas C.K. Hui    
Hong Kong Applied Science and Technology Research Institute (ASTRI)
Hong Kong



Copyright © 2021 SUMMIT-TEC GROUP LTD