Synthetic Image Generation for Deep Neural Networks
Senior Researcher, Agency for Defense Development
Researcher, Agency for Defense Development
Principal Researcher, Agency for Defense Development
We present a GPU-accelerated synthetic image generation framework that leverages both computer-graphics (CG) and neural-network (NN) based methods.
CG engines let us create a variety of virtual scenes by varying scene properties such as lighting and background structures. We can also simulate rare cases, such as occlusions, to alleviate imbalance in the generated synthetic data, and label annotations are produced automatically.
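As a rough illustration of this CG-based pipeline, the Python sketch below randomizes lighting and background, occasionally injects an occlusion, and exports the engine's ground-truth annotations alongside each rendered frame. The cg_engine module and its methods (load_scene, set_light, set_background, add_occluder, render, export_annotations) are hypothetical placeholders for illustration, not the actual API of our framework.

import random

# Hypothetical CG-engine bindings used only for illustration.
import cg_engine

def generate_sample(scene_path, object_ids, out_prefix):
    """Randomize scene properties, render one image, and export labels."""
    scene = cg_engine.load_scene(scene_path)

    # Domain randomization: vary lighting and background for every sample.
    scene.set_light(intensity=random.uniform(0.2, 1.5),
                    azimuth=random.uniform(0.0, 360.0))
    scene.set_background(random.choice(scene.background_presets))

    # Simulate a rare case such as partial occlusion of a target object.
    if random.random() < 0.3:
        scene.add_occluder(target=random.choice(object_ids),
                           coverage=random.uniform(0.1, 0.5))

    image = scene.render()
    image.save(out_prefix + ".png")

    # Annotations come from the engine's ground truth; no manual labeling.
    with open(out_prefix + ".json", "w") as f:
        f.write(scene.export_annotations(object_ids))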
NN models enable us to obtain accurate 3D objects or scenes from 2D images, and they support differentiable rendering that can manipulate object textures for 3D physical adversarial attacks. We evaluated our framework on downstream tasks such as object detection, 3D object reconstruction, and 3D physical adversarial attack using a DGX workstation equipped with four NVIDIA Tesla V100 GPUs. The proposed framework shows promising results in our extensive experiments and can be applied effectively in both industrial and academic settings.
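Conceptually, the texture-based attack can be sketched as gradient descent through a differentiable renderer. The PyTorch snippet below assumes hypothetical render (a differentiable renderer) and detect (a pretrained detector's confidence score) functions; it is a minimal sketch of the general idea, not the exact procedure evaluated in our experiments.

import torch

# Hypothetical components for illustration only: `render` is a differentiable
# renderer mapping a texture tensor to an image, and `detect` returns the
# detector's confidence for the target object in that image.
from adversarial_demo import render, detect

# Adversarial texture (H x W x 3), initialized randomly and optimized directly.
texture = torch.rand(512, 512, 3, requires_grad=True)
optimizer = torch.optim.Adam([texture], lr=0.01)

for step in range(500):
    optimizer.zero_grad()
    image = render(texture)      # gradients flow back through the renderer
    loss = detect(image)         # minimize the detection confidence
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        texture.clamp_(0.0, 1.0)  # keep texture values in a valid color range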