The International Arab Journal of Information Technology (IAJIT)

Research on a Method of Defense Adversarial Samples for Target Detection Model of Driverless Cars

Adversarial examples can cause an object detection model to make wrong judgments, which threatens the security of driverless cars. In this paper, a black-box adversarial attack algorithm with strong transferability is proposed for the object detection models of driverless cars; it improves the Momentum Iterative Fast Gradient Sign Method (MI-FGSM) and, based on ensemble learning, combines L∞ perturbation with spatial transformation. Extensive experiments on the nuScenes autonomous-driving dataset show that the proposed attack algorithm has strong transferability and successfully causes mainstream object detection models such as Faster R-CNN, the Single Shot MultiBox Detector (SSD), and YOLOv3 to make wrong judgments. Building on the proposed attack algorithm, adversarial training with parametric noise injection is then performed to obtain a defense model with strong robustness. The proposed defense model significantly improves the robustness of the object detection model: it effectively mitigates various adversarial attacks against the object detection models of driverless cars without degrading accuracy on clean samples. This is of great significance for applying object detection models of driverless cars in the real physical world.
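As a rough illustration of the two building blocks named above, the sketch below shows a baseline MI-FGSM step under an L∞ budget and a parametric-noise-injection convolution layer. This is a minimal sketch in PyTorch, assuming a differentiable detection loss (loss_fn) and image tensors in [0, 1]; the names mi_fgsm, NoisyConv2d, and loss_fn are illustrative assumptions and do not reproduce the authors' implementation, which additionally uses ensemble learning over several detectors and a spatial-transformation component.

```python
# Minimal sketch, assuming PyTorch; all names here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def mi_fgsm(model, x, y, loss_fn, eps=8 / 255, steps=10, mu=1.0):
    """Baseline MI-FGSM: accumulate a normalized gradient with momentum mu and
    take sign steps, keeping the perturbation inside an L-infinity ball of
    radius eps around the clean input x (pixel values assumed in [0, 1])."""
    alpha = eps / steps                      # per-step L-infinity step size
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                  # accumulated momentum
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        g = mu * g + grad / grad.abs().mean()        # momentum update
        x_adv = x_adv.detach() + alpha * g.sign()    # sign step
        # Project back onto the eps-ball and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv


class NoisyConv2d(nn.Conv2d):
    """Parametric noise injection on convolution weights: a learnable scale
    alpha multiplies Gaussian noise whose magnitude follows the weight
    statistics; the layer is then trained with adversarial examples."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.alpha = nn.Parameter(torch.tensor(0.25))   # trainable noise scale

    def forward(self, x):
        noise = torch.randn_like(self.weight) * self.weight.detach().std()
        w = self.weight + self.alpha * noise
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

In the full method described in the paper, the loss would come from an ensemble of detection surrogates (e.g., Faster R-CNN, SSD, YOLOv3) and the perturbation would combine the L∞ term above with a spatial transformation, rather than the single classification-style loss assumed in this sketch.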

[1] Athalye A., Engstrom L., Ilyas A., and Kwok K., “Synthesizing Robust Adversarial Examples,” arXiv Preprint, 2017. https://doi.org/10.48550/arXiv.1707.07397

[2] Bietti A., Mialon G., Chen D., and Mairal J., “A Kernel Perspective for Regularizing Deep Neural Networks,” arXiv Preprint, 2018. https://doi.org/10.48550/arXiv.1810.00363

[3] Buckman J., Roy A., Raffel C., and Goodfellow I., “Thermometer Encoding: One Hot Way to Resist Adversarial Examples,” in Proceedings of the International Conference on Learning Representations, Vancouver, 2018.

[4] Caesar H., Bankiti V., Lang A., Vora S., Liong V., Xu Q., Krishnan A., Pan Y., Baldan G., and Beijbom O., “nuScenes: A Multimodal Dataset for Autonomous Driving,” arXiv Preprint, 2019.

[5] Carlini N. and Wagner D., “Towards Evaluating the Robustness of Neural Networks,” in Proceedings of the IEEE Symposium on Security and Privacy, San Jose, pp. 39-57, 2017. doi: 10.1109/SP.2017.49.

[6] Chen P., Zhang H., Sharma Y., Yi J., and Hsieh C., “Zoo: Zeroth Order Optimization Based Black-Box Attacks to Deep Neural Networks without Training Substitute Models,” in Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Texas, pp. 15-26, 2017. https://doi.org/10.1145/3128572.3140448

[7] Dong Y., Liao F., Pang T., Su H., Zhu J., Hu X., and Li J., “Boosting Adversarial Attacks with Momentum,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, pp. 9185-9193, 2018. https://doi.org/10.48550/arXiv.1710.06081

[8] Dong Y., Pang T., Su H., and Zhu J., “Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CA, pp. 4312-4321, 2019. https://doi.org/10.48550/arXiv.1904.02884

[9] Engstrom L., Tran B., Tsipras D., Schmidt L., and Madry A., “A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations,” arXiv preprint, 2017.

[10] Goodfellow I., Shlens J., and Szegedy C., “Explaining and Harnessing Adversarial Examples,” arXiv preprint arXiv:1412.6572, 2014. https://doi.org/10.48550/arXiv.1412.6572

[11] He Z., Rakin A., and Fan D., “Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CA, pp. 588-597, 2019. https://doi.org/10.48550/arXiv.1811.09310

[12] Krizhevsky A., “Learning Multiple Layers of Features from Tiny Images,” Technical Report, University of Toronto, 2009.

[13] Kurakin A., Goodfellow I., and Bengio S., “Adversarial Examples in the Physical World,” arXiv Preprint, 2016. https://doi.org/10.48550/arXiv.1607.02533

[14] LeCun Y., Bottou L., Bengio Y., and Haffner P., “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998. DOI: 10.1109/5.726791

[15] Lecuyer M., Atlidakis V., Geambasu R., Hsu D., and Jana S., “Certified Robustness to Adversarial Examples with Differential Privacy,” in Proceedings of the IEEE Symposium on Security and Privacy, CA, pp. 656-672, 2019. DOI: 10.1109/SP.2019.00044

[16] Liu W., Anguelov D., Erhan D., Szegedy C., and Reed S., “SSD: Single Shot Multibox Detector,” in Proceedings of the European Conference on Computer Vision, Amsterdam, pp. 21-37, 2016. https://doi.org/10.1007/978-3-319-46448-0_2

[17] Liu X., Cheng M., Zhang H., and Hsieh C., “Towards Robust Neural Networks via Random Self-Ensemble,” in Proceedings of the European Conference on Computer Vision, Munich, pp. 369-385, 2018.

[18] Liu Y., Chen X., Liu C., and Song D., “Delving into Transferable Adversarial Examples and Black-Box Attacks,” arXiv Preprint, 2016.

[19] Lu J., Sibai H., Fabry E., and Forsyth D., “No Need to Worry About Adversarial Examples in Object Detection in Autonomous Vehicles,” arXiv Preprint, 2017. https://doi.org/10.48550/arXiv.1707.03501

[20] Madry A., Makelov A., Schmidt L., Tsipras D., and Vladu A., “Towards Deep Learning Models Resistant to Adversarial Attacks,” arXiv preprint, 2017. https://doi.org/10.48550/arXiv.1706.06083

[21] Ottom M. and Al-Omari A., “An Adaptive Traffic Lights System using Machine Learning,” The International Arab Journal of Information Technology, vol. 20, no. 3, pp. 407-418, 2023. https://doi.org/10.34028/iajit/20/3/13

[22] Moosavi-Dezfooli S., Fawzi A., and Frossard P., “DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, pp. 2574-2582, 2016. https://doi.org/10.48550/arXiv.1511.04599

[23] Narodytska N. and Kasiviswanathan S., “Simple Black-Box Adversarial Perturbations for Deep Networks,” arXiv preprint, 2016. https://doi.org/10.48550/arXiv.1612.06299

[24] Papernot N., McDaniel P., Jha S., Fredrikson M., and Celik Z., “The Limitations of Deep Learning in Adversarial Settings,” in Proceedings of IEEE European Symposium on Security and Privacy (EuroS&P), London, pp. 372-387, 2016. https://doi.org/10.48550/arXiv.1511.07528

[25] Redmon J. and Farhadi A., “Yolov3: An Incremental Improvement,” arXiv preprint, 2018. https://doi.org/10.48550/arXiv.1804.02767

[26] Ren S., He K., Girshick R., and Sun J., “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” in Advances in Neural Information Processing Systems, Montreal, pp. 91-99, 2015. https://doi.org/10.48550/arXiv.1506.01497

[27] Samangouei P., Kabkab M., and Chellappa R., “Defense-GAN: Protecting Classifiers against Adversarial Attacks Using Generative Models,” arXiv preprint, 2018. https://doi.org/10.48550/arXiv.1805.06605

[28] Sharif M., Bhagavatula S., Bauer L., and Reiter M., “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition,” in Proceedings of the ACM SIGSAC Conference on Computer and Communications Security, Vienna, pp. 1528-1540, 2016. https://doi.org/10.1145/2976749.2978392

[29] Simonyan K. and Zisserman A., “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv preprint, 2014. https://doi.org/10.48550/arXiv.1409.1556

[30] Szegedy C., Liu W., Jia Y., Sermanet P., and Reed S., “Going Deeper with Convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, MA, pp. 1-9, 2015. https://doi.org/10.48550/arXiv.1409.4842

[31] Szegedy C., Vanhoucke V., Ioffe S., Shlens J., and Wojna Z., “Rethinking the Inception Architecture for Computer Vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, pp. 2818-2826, 2016. DOI: 10.1109/CVPR.2016.308.

[32] Szegedy C., Zaremba W., Sutskever I., Bruna J., and Erhan D., “Intriguing Properties of Neural Networks,” arXiv preprint, 2013. https://doi.org/10.48550/arXiv.1312.6199

[33] Tramèr F., Kurakin A., Papernot N., Goodfellow I., Boneh D., and McDaniel P., “Ensemble Adversarial Training: Attacks and Defenses,” arXiv preprint, 2017. https://doi.org/10.48550/arXiv.1705.07204

[34] Wang H., Dzulkifli M., and Azman I., “An Efficient Parameters Selection for Object Recognition Based Colour Features in Traffic Image Retrieval,” The International Arab Journal of Information Technology, vol. 11, no. 3, pp. 308-314, 2014.

[35] Xiao C., Zhu J., Li B., He W., Liu M., and Song D., “Spatially Transformed Adversarial Examples,” arXiv preprint, 2018. https://doi.org/10.48550/arXiv.1801.02612

[36] Xie C., Wang J., Zhang Z., Ren Z., and Yuille A., “Mitigating Adversarial Effects Through Randomization,” arXiv preprint, 2017. https://doi.org/10.48550/arXiv.1711.01991

[37] Xie C., Zhang Z., Zhou Y., Bai S., Wang J., Ren Z., and Yuille A., “Improving Transferability of Adversarial Examples with Input Diversity,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CA, pp. 2730-2739, 2019. https://doi.org/10.48550/arXiv.1803.06978