Secure Deep Learning Engineering is an interdisciplinary research direction (spanning AI, software engineering, and security) toward constructing deep learning applications in a systematic way, from theoretical foundations and software and system engineering to security guarantees.
The literature on secure deep learning engineering falls mainly into three categories: security and privacy, testing and verification, and interpretability and understanding. This repository aims to provide full coverage of the publications in the literature on secure deep learning engineering.
| Category | Papers |
|---|---|
| Security and Privacy | 86 |
| Testing and Verification | 53 |
| Interpretability and Understanding | 65 |
This paper presents the first large-scale empirical study of secure deep learning engineering from the quality assurance perspective, accompanied by a comprehensive, state-of-the-art literature curation. Using this repository, we conduct a thorough trend analysis and survey of secure deep learning engineering, and outline several research challenges and opportunities. These analyses provide evidence that secure deep learning engineering is still in its infancy: advancing it requires the combined expertise of AI, security, software engineering, and even domain-specific communities, while interest in the topic itself continues to grow.
Given that deep learning is likely to be one of the most transformative technologies of the 21st century, it is essential that the AI and SE communities begin now to think about how to design fully fledged secure deep learning systems. Only then will deep learning benefit the many, not the few.