Physical Adversarial Examples for Object Detectors


Summary

Deep neural networks (DNNs) have enabled great progress in a variety of application areas, including image processing, text analysis, and speech recognition. DNNs are also being incorporated as an important component in many cyber-physical systems. For instance, the vision system of a self-driving car can take advantage of DNNs to better recognize pedestrians, vehicles, and road signs. However, recent research has shown that DNNs are vulnerable to adversarial examples: adding carefully crafted perturbations to an input can mislead the target classifier into mislabeling it at run time. Such adversarial examples raise security and safety concerns for DNNs deployed in the real world. For example, adversarially perturbed inputs could mislead the perceptual systems of an autonomous vehicle into misclassifying street signs, with potentially catastrophic consequences.

To better understand these vulnerabilities, there has been extensive research on how adversarial examples affect DNNs deployed in the physical world. Our recent work, "Robust Physical-World Attacks on Deep Learning Models," demonstrated physical attacks on classifiers. As the next logical step, we show attacks on object detectors. These computer vision algorithms identify the relevant objects in a scene and predict bounding boxes indicating each object's position and class. Compared with classifiers, detectors are more challenging to fool: they process the entire image and can use contextual information (e.g., the orientation and position of the target object in the scene) in their predictions.

We demonstrate physical adversarial examples against the YOLO detector, a popular state-of-the-art algorithm with good real-time performance. Our examples take the form of sticker perturbations that we apply to a real STOP sign. The image below shows our example physical adversarial perturbation.
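How is such a sticker found? Roughly speaking, the perturbation is optimized to drive down the detector's confidence in the stop sign, averaged over simulated physical conditions such as distance, angle, and lighting (the full algorithm is in the upcoming paper). The PyTorch sketch below illustrates only this general idea and is not our implementation: it assumes a hypothetical differentiable helper stop_sign_score(img) that runs the detector and returns its highest stop-sign confidence, a photo sign_image of the sign, and a binary sticker_mask marking the pixels the attacker is allowed to modify.

import torch

def random_physical_transform(img):
    # Crude stand-in for varying distance and lighting: downscale/upscale the
    # image (loss of resolution with distance) and randomly adjust brightness.
    scale = 0.5 + torch.rand(1).item()
    small = [max(1, int(s * scale)) for s in img.shape[-2:]]
    out = torch.nn.functional.interpolate(img.unsqueeze(0), size=small,
                                          mode="bilinear", align_corners=False)
    out = torch.nn.functional.interpolate(out, size=list(img.shape[-2:]),
                                          mode="bilinear", align_corners=False).squeeze(0)
    return torch.clamp(out * (0.7 + 0.6 * torch.rand(1).item()), 0.0, 1.0)

def craft_sticker(sign_image, sticker_mask, stop_sign_score,
                  steps=500, lr=0.01, lam=1e-3, samples=8):
    # delta is the sticker perturbation, restricted to the masked region.
    delta = torch.zeros_like(sign_image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed = torch.clamp(sign_image + delta * sticker_mask, 0.0, 1.0)
        # Average the detector's stop-sign confidence over several simulated
        # physical conditions, then minimize it plus a small-perturbation penalty.
        score = torch.stack([stop_sign_score(random_physical_transform(perturbed))
                             for _ in range(samples)]).mean()
        loss = score + lam * (delta * sticker_mask).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (delta.detach() * sticker_mask).clamp(-1.0, 1.0)

In practice the expectation is taken over real photographs of the sign under many conditions as well as synthetic transformations, which is what makes the resulting sticker robust once it is printed and applied to a physical sign.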

We also performed dynamic tests, recording a video to evaluate detection performance. As the video shows, the YOLO network fails to perceive the STOP sign in nearly all frames. If a real autonomous vehicle were driving down the road with such an adversarial STOP sign, it would not see the sign, possibly leading to a crash at an intersection. The perturbation we created is robust to changing distances and angles, the factors that vary most in a self-driving scenario.

More interestingly, the physical adversarial examples generated for the YOLO detector are also able to fool the standard Faster-RCNN detector. The video contains a dynamic test of the physical adversarial example on Faster-RCNN. Because this is a black-box attack on Faster-RCNN, it is not as successful as in the YOLO case, which is expected behavior. We believe that with additional techniques (such as ensemble training), the black-box attack could be made more effective, and an attack optimized specifically for Faster-RCNN would yield better results. We are currently working on a paper that explores these attacks in more detail. The image below is an example of Faster-RCNN not perceiving the STOP sign.
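For readers who want to try a similar transfer check themselves, the sketch below is one way to ask an off-the-shelf, COCO-pretrained Faster R-CNN from torchvision whether it still sees a stop sign in each video frame. This is not the exact model or evaluation code used in our experiments; in particular, the label index 13 for "stop sign" and the 0.5 confidence threshold are assumptions.

import torch
import torchvision

STOP_SIGN_LABEL = 13   # "stop sign" in torchvision's COCO label map (assumed)
CONF_THRESHOLD = 0.5   # arbitrary confidence cutoff for counting a detection

# Newer torchvision releases prefer the weights= argument over pretrained=True.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def sees_stop_sign(frame):
    # frame: float tensor of shape (3, H, W) with values in [0, 1].
    with torch.no_grad():
        pred = model([frame])[0]
    confident = pred["scores"] >= CONF_THRESHOLD
    return bool((pred["labels"][confident] == STOP_SIGN_LABEL).any())

def detection_rate(frames):
    # Fraction of video frames in which the (perturbed) sign is still detected.
    return sum(sees_stop_sign(f) for f in frames) / max(len(frames), 1)

Running detection_rate over frames extracted from the recorded video gives a simple per-frame metric of the attack's effect, and running it on frames of a clean sign provides a natural control.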

In both cases (YOLO and Faster-RCNN), the STOP sign is detected only when the camera is very close to the sign (about 3 to 4 feet away). In real settings, that distance is too short for a vehicle to take effective corrective action. Stay tuned for our upcoming paper, which contains more details about the algorithm and the results of physical perturbations against state-of-the-art object detectors.

Physical Adversarial Sticker Perturbations for YOLO

Physical Adversarial Examples for YOLO (2)

Black-box transfer to Faster-RCNN of physical adversarial examples generated for YOLO


Short Note on arXiv (use this for citations)

PDF


Team (alphabetical order)

Ivan Evtimov, Ph.D. Candidate, University of Washington
Kevin Eykholt, Ph.D. Candidate, University of Michigan
Earlence Fernandes, Postdoctoral Researcher, University of Washington
Tadayoshi Kohno, Professor, University of Washington
Bo Li, Postdoctoral Researcher, University of California Berkeley
Atul Prakash, Professor, University of Michigan
Amir Rahmati, Professor, Stony Brook University
Dawn Song, Professor, University of California Berkeley
Florian Tramer, Ph.D. Candidate, Stanford University