We are thrilled to announce that our paper entitled “N-choice Game: Building a Smart Contract for Accurate Pseudo-random Number Generation” has been accepted for publication in ACM Distributed Ledger Technologies: Research and Practice (DLT 2025). Congratulations to Sako-kun and kudos to the entire team!
Kentaro Sako, Shinichiro Matsuo, and Tatsuya Mori. “N-choice Game: Building a Smart Contract for Accurate Pseudo-random Number Generation.” ACM Distributed Ledger Technologies: Research and Practice, Volume XX, Issue XX, Article No. XX, Pages XX–XX (accepted for publication).
We are excited to announce that our paper entitled “Adversarial Fog: Exploiting the Vulnerabilities of LiDAR Point Cloud Preprocessing Filters” has been accepted for publication at the ACM ASIA Conference on Computer and Communications Security (ACM ASIACCS 2025). Congratulations to Tanaka-san!
Yuna Tanaka, Kazuki Nomoto, Ryunosuke Kobayashi, Go Tsuruoka, and Tatsuya Mori. “Adversarial Fog: Exploiting the Vulnerabilities of LiDAR Point Cloud Preprocessing Filters.” Proceedings of the 20th ACM ASIA Conference on Computer and Communications Security (ACM ASIACCS 2025), August 2025 (to appear).
We are excited to announce that our paper entitled “AVATAR: Adversarial Vehicle Trajectory Attack Targeting Autonomous Driving Planner” has been accepted for publication at the Fourth Workshop on Automotive Cyber Security (ACSW 2025). Congratulations to Jiadong!
Jiadong Liu and Tatsuya Mori. “AVATAR: Adversarial Vehicle Trajectory Attack Targeting Autonomous Driving Planner.” Proceedings of the Fourth Workshop on Automotive Cyber Security (ACSW 2025), June 2025 (to appear).
We are happy to announce that our paper entitled “Invisible but Detected: Physical Adversarial Shadow Attack and Defense on LiDAR Object Detection” has recently been accepted for publication in the Proceedings of the 34th USENIX Security Symposium (USENIX Security 2025). Congratulations, Kobayashi-kun and the team!
Ryunosuke Kobayashi, Kazuki Nomoto, Yuna Tanaka, Go Tsuruoka, and Tatsuya Mori. “Invisible but Detected: Physical Adversarial Shadow Attack and Defense on LiDAR Object Detection.” Proceedings of the 34th USENIX Security Symposium (USENIX Security 2025), August 2025 (to appear).
Overview.
This study introduces “Shadow Hack,” the first adversarial attack leveraging naturally occurring object shadows in LiDAR point clouds to deceive object detection models in autonomous vehicles. Unlike traditional adversarial attacks that modify physical objects directly, Shadow Hack manipulates the way LiDAR perceives shadows, affecting detection results without altering the objects themselves.
The key technique involves creating “Adversarial Shadows” using materials that LiDAR struggles to measure accurately. By optimizing the position and size of these shadows, the attack maximizes misclassification in point-cloud-based object recognition models. Simulation experiments demonstrate that Shadow Hack achieves a 100% attack success rate at distances between 11 m and 21 m across multiple models.
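The paper details the actual optimization; purely as an illustrative sketch of the idea, the toy Python snippet below searches over the position and width of a hypothetical shadow (modeled as a band of dropped points) to minimize a stand-in detector score. The functions `apply_shadow` and `detection_score`, and all numbers, are assumptions for illustration, not the authors’ implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_shadow(points, center_x, width):
    """Drop points whose x-coordinate falls inside the shadow band,
    mimicking a material that returns almost no LiDAR points."""
    keep = np.abs(points[:, 0] - center_x) > width / 2
    return points[keep]

def detection_score(points):
    """Stand-in for a real detector's confidence; here just a toy
    proxy proportional to the number of surviving points."""
    return len(points) / 1000.0

# Toy point cloud: 1000 points around a target object at x ~ 15 m.
cloud = rng.normal(loc=[15.0, 0.0, 0.5], scale=[2.0, 0.8, 0.3], size=(1000, 3))

# Random search over shadow placement and size, keeping the
# configuration that most suppresses the (toy) detector score.
best_params, best_score = None, 1.0
for _ in range(200):
    cx = rng.uniform(11.0, 21.0)  # candidate shadow center (m)
    w = rng.uniform(0.5, 3.0)     # candidate shadow width (m)
    score = detection_score(apply_shadow(cloud, cx, w))
    if score < best_score:
        best_params, best_score = (cx, w), score

print(f"best shadow (center, width): {best_params}, score: {best_score:.3f}")
```

In the real attack, the score would come from an actual point-cloud detector such as PointPillars or SECOND-IoU, and the shadow would be realized physically with materials that LiDAR struggles to measure.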
Physical-world experiments validate these findings, with a near-100% success rate at 10 m against PointPillars and 98% against SECOND-IoU, using mirror sheets that remove almost all LiDAR-detected points at distances from 1 m to 14 m. To counter this attack, the authors propose “BB-Validator,” a defense mechanism that successfully neutralizes the attack while maintaining high object detection accuracy.
This work highlights a novel and critical vulnerability in LiDAR-based perception systems and presents an effective defense, contributing to the ongoing effort to enhance the security of autonomous vehicles.