NESL Technical Report #: 2021-5-999
Abstract: There is an increasing emphasis on securing deep learning (DL) inference pipelines for mobile and IoT applications with privacy-sensitive data. Prior work has shown that privacy-sensitive data can be secured throughout deep learning inference on cloud-offloaded models through trusted execution environments (TEEs) such as Intel SGX. However, prior solutions do not address the fundamental challenges of securing resource-intensive inference tasks on low-power, low-memory devices (e.g., mobile and IoT devices) while achieving high performance. To tackle these challenges, we propose SecDeep, a low-power DL inference framework demonstrating that both security and performance of deep learning inference on edge devices are well within our reach. Leveraging TEEs with limited resources, SecDeep guarantees full confidentiality for input and intermediate data, as well as the integrity of the deep learning model and framework. By enabling and securing neural accelerators, SecDeep is the first of its kind to provide trusted and performant DL model inference on edge devices. We implement and validate SecDeep by interfacing the ARM NN DL framework with ARM TrustZone. Our evaluation shows that we can securely run inference tasks 16× to 172× faster than CPU-based approaches by leveraging edge-available accelerators.
Publication Forum: The ACM/IEEE International Conference on Internet of Things Design and Implementation 2021
Place: IOTDI 2021
NESL Document?: Yes
Document category: Conference Paper
Primary Research Area: Privacy, Security, and Integrity