Adversarial Geometry and Lighting using a Differentiable Renderer

Authors: Hsueh-Ti Derek Liu, Michael Tao, Chun-Liang Li, Derek Nowrouzezahrai, Alec Jacobson

Venue: ICLR 2019

publication url

Abstract

Many machine learning classifiers are vulnerable to adversarial attacks: inputs with perturbations designed to intentionally trigger misclassification. Modern adversarial methods either directly alter pixel colors or “paint” colors onto 3D shapes. We propose novel adversarial attacks that directly alter the geometry of 3D objects and/or manipulate the lighting in a virtual scene. We leverage a novel differentiable renderer that is efficient to evaluate and analytically differentiate. Our renderer generates images realistic enough to be correctly classified by common pre-trained models, and we use it to design physical adversarial examples that consistently fool these models. We conduct qualitative and quantitative experiments to validate our adversarial geometry and adversarial lighting attacks.
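To illustrate the overall idea, below is a minimal, hypothetical sketch of a gradient-based attack that perturbs scene parameters (per-vertex displacements and lighting coefficients) rather than pixels. The `render` function here is only a placeholder for a renderer that is differentiable with respect to geometry and lighting; it is not the paper's renderer, and the mesh, lighting parameterization, perturbation bound, and target class are all illustrative assumptions.

```python
# Hypothetical sketch: targeted adversarial attack through a differentiable renderer.
# Assumptions are marked in comments; this is not the authors' implementation.
import torch
import torchvision.models as models

def render(vertices, lighting):
    """Placeholder differentiable renderer (assumed): maps mesh vertices (V, 3)
    and lighting coefficients (9,) to an image tensor of shape (1, 3, 224, 224).
    Replace with any renderer that is differentiable w.r.t. its inputs."""
    raise NotImplementedError

# Pre-trained image classifier to attack.
classifier = models.resnet50(pretrained=True).eval()

vertices = torch.rand(1000, 3)                       # base mesh vertices (placeholder)
delta = torch.zeros_like(vertices, requires_grad=True)  # adversarial geometry offsets
lighting = torch.rand(9, requires_grad=True)            # adversarial lighting coefficients

target_class = torch.tensor([207])                   # arbitrary ImageNet target label
optimizer = torch.optim.Adam([delta, lighting], lr=1e-2)

for step in range(200):
    image = render(vertices + delta, lighting)       # differentiable in delta and lighting
    logits = classifier(image)
    # Targeted attack: push the prediction toward the chosen target class.
    loss = torch.nn.functional.cross_entropy(logits, target_class)
    optimizer.zero_grad()
    loss.backward()                                  # gradients flow through the renderer
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-0.01, 0.01)                    # keep the shape perturbation small
```

The key point of the sketch is that the optimization variables are scene parameters, not image pixels: gradients from the classifier's loss are propagated through the renderer back to the geometry and lighting.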