Protecting User Data Privacy with Adversarial Perturbations [Poster]

NESL Technical Report #: 2021-5-1

Authors:

Abstract: The increased availability of on-body sensors gives researchers access to rich time-series data, much of which relates to human health conditions. Sharing such data can enable cross-institutional collaborations that build advanced machine learning models for inferences about human well-being. However, such data are usually considered privacy-sensitive, and sharing them publicly raises significant privacy concerns. In this work, we seek to protect clinical time-series data against membership inference attacks while maximally retaining data utility. We achieve this by adding imperceptible noise to the raw data. This noise, known as an adversarial perturbation, is specially trained to force a deep learning model to make inference mistakes (in our case, to mispredict user identities). Our preliminary results show that our solution protects the data from membership inference attacks better than the baselines, while successfully passing all of the designed data quality checks.
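As a rough illustration of the idea described in the abstract, the sketch below shows one common way to craft such a perturbation: a single FGSM-style gradient step that nudges a raw sensor window within a small L-infinity budget so that an identity classifier becomes less likely to predict the true user. This is a minimal sketch assuming a PyTorch identity classifier; the function name, the `epsilon` budget, and the single-step formulation are illustrative assumptions and not necessarily the method used in the poster.

```python
import torch
import torch.nn.functional as F

def adversarial_perturbation(model, x, identity_label, epsilon=0.05):
    """Illustrative FGSM-style step (not the poster's exact method).

    model:          a torch.nn.Module mapping (batch, channels, time) -> identity logits
    x:              raw sensor window, shape (batch, channels, time)
    identity_label: true user IDs, shape (batch,)
    epsilon:        L-infinity budget intended to keep the noise imperceptible
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    # Maximize the identity-classification loss, pushing the model toward
    # mispredicting which user generated the data.
    loss = F.cross_entropy(logits, identity_label)
    loss.backward()
    perturbed = x + epsilon * x.grad.sign()
    return perturbed.detach()
```

In practice, stronger iterative variants (e.g., PGD) and explicit utility constraints would typically be layered on top of a step like this; the sketch only conveys the core perturbation mechanism.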

Publication Forum: The 2021 ACM/IEEE Information Processing in Sensor Networks (IPSN) Conference

Date: 2021-05-18

Place: Nashville, Tennessee, USA

NESL Document?: Yes

Document category: Poster

Primary Research Area: Privacy, Security, and Integrity
