We are happy to announce that our paper entitled “Invisible but Detected: Physical Adversarial Shadow Attack and Defense on LiDAR Object Detection” has recently been accepted for publication in the Proceedings of the 34th USENIX Security Symposium (USENIX Security 2025). Congratulations, Kobayashi-kun and the team!
Ryunosuke Kobayashi, Kazuki Nomoto, Yuna Tanaka, Go Tsuruoka, Tatsuya Mori, “Invisible but Detected: Physical Adversarial Shadow Attack and Defense on LiDAR Object Detection,” Proceedings of the 34th USENIX Security Symposium (USENIX Security 2025), August 2025 (to appear).
Overview.
This study introduces “Shadow Hack,” the first adversarial attack leveraging naturally occurring object shadows in LiDAR point clouds to deceive object detection models in autonomous vehicles. Unlike traditional adversarial attacks that modify physical objects directly, Shadow Hack manipulates the way LiDAR perceives shadows, affecting detection results without altering the objects themselves.
The key technique involves creating “Adversarial Shadows” using materials that LiDAR struggles to measure accurately. By optimizing the position and size of these shadows, the attack maximizes misclassification in point cloud-based object recognition models. Experimental simulations demonstrate that Shadow Hack achieves a 100% attack success rate at distances between 11m and 21m across multiple models.
Physical-world experiments validate these findings: using mirror sheets, which eliminate almost all LiDAR-measured points at distances from 1m to 14m, the attack achieves a success rate of nearly 100% at 10m against PointPillars and 98% against SECOND-IoU. To counter this attack, the authors propose “BB-Validator,” a defense mechanism that successfully neutralizes the attack while maintaining high object detection accuracy.
This work highlights a novel and critical vulnerability in LiDAR-based perception systems and presents an effective defense, contributing to the ongoing effort to enhance the security of autonomous vehicles.
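For readers curious about what the shadow-placement optimization might look like in practice, below is a minimal sketch, assuming a deliberately simplified setup and not reflecting the paper's actual algorithm: a black-box random search over the position and size of an adversarial shadow, scored against a stand-in detector. `remove_points_in_shadow` and `detection_score` are hypothetical placeholders.

```python
# A minimal sketch, assuming a simplified setup: random search over shadow
# position/size on a synthetic point cloud. All functions below are
# hypothetical stand-ins, not the paper's implementation.
import numpy as np

def remove_points_in_shadow(cloud: np.ndarray, center: np.ndarray, size: float) -> np.ndarray:
    """Drop points whose (x, y) falls inside a square 'shadow' patch,
    mimicking returns lost to a material LiDAR struggles to measure."""
    inside = np.all(np.abs(cloud[:, :2] - center) < size / 2.0, axis=1)
    return cloud[~inside]

def detection_score(cloud: np.ndarray) -> float:
    """Hypothetical stand-in for the confidence a detector such as
    PointPillars assigns to the target object on this point cloud."""
    return len(cloud) / 10_000.0  # placeholder so the sketch runs end to end

def optimize_shadow(cloud: np.ndarray, trials: int = 200, seed: int = 0):
    """Black-box random search: keep the shadow that most suppresses detection."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(trials):
        center = rng.uniform(low=[-2.0, 5.0], high=[2.0, 20.0])  # x, y in metres
        size = float(rng.uniform(0.5, 3.0))                      # patch width in metres
        score = detection_score(remove_points_in_shadow(cloud, center, size))
        if best is None or score < best[0]:  # lower confidence = stronger attack
            best = (score, center, size)
    return best

if __name__ == "__main__":
    cloud = np.random.default_rng(1).uniform(-10.0, 25.0, size=(10_000, 3))
    score, center, size = optimize_shadow(cloud)
    print(f"best shadow: center={center}, size={size:.2f}m, detector score={score:.3f}")
```

A gradient-free search is shown only because the detector is treated here as a black box; the optimization actually used in the paper may differ.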
We are happy to announce that our new study entitled “Evaluating LLMs Towards Automated Assessment of Privacy Policy Understandability” has recently been accepted for publication in the Proceedings of the Symposium on Usable Security and Privacy (USEC 2025). Congratulations, Mori-san and the team!
K. Mori, D. Ito, T. Fukunaga, T. Watanabe, Y. Takata, M. Kamizono, T. Mori, “Evaluating LLMs Towards Automated Assessment of Privacy Policy Understandability,” Proceedings of the Symposium on Usable Security and Privacy (USEC 2025), February 2025 (to appear).
Overview.
Companies publish privacy policies to improve transparency regarding the handling of personal information. However, discrepancies between the descriptions in privacy policies and users’ understanding can lead to a decline in trust. Therefore, assessing users’ comprehension of privacy policies is essential. Traditionally, such evaluations have relied on user studies, which are time-consuming and costly.
This study explores the potential of large language models (LLMs) as an alternative for evaluating privacy policy understandability. The authors prepared obfuscated privacy policies alongside comprehension questions to assess both LLMs and human users. The results revealed that LLMs achieved an average correct answer rate of 85.2%, whereas users scored 63.0%. Notably, the questions that LLMs answered incorrectly were also difficult for users, suggesting that LLMs can effectively identify problematic descriptions that users tend to misunderstand.
Moreover, while LLMs demonstrated a strong grasp of technical terms commonly found in privacy policies, users struggled with them. These findings highlight key gaps in comprehension between LLMs and users, offering valuable insights into the feasibility of automating privacy policy evaluations. The study marks an important step toward leveraging LLMs for improving the clarity and accessibility of privacy policies, reducing the reliance on costly user studies in the future.
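As a rough illustration of the kind of automated comprehension check explored in this line of work, here is a minimal sketch of scoring multiple-choice questions about a policy excerpt; `ask_llm` is a hypothetical placeholder (a naive word-overlap heuristic keeps the sketch runnable, and a real model call would be substituted there), and the example data is illustrative.

```python
# A minimal sketch of scoring comprehension questions about a privacy policy.
# `ask_llm` is a hypothetical stand-in for an actual LLM API call.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    choices: list[str]
    answer: str  # correct choice label, e.g. "B"

def ask_llm(policy: str, question: Question) -> str:
    # Placeholder "model": picks the choice sharing the most words with the
    # policy text, only so the sketch runs end to end. Swap in a real LLM call.
    def overlap(choice: str) -> int:
        return len(set(choice.lower().split()) & set(policy.lower().split()))
    best = max(range(len(question.choices)), key=lambda i: overlap(question.choices[i]))
    return "ABCDEFGH"[best]

def accuracy(policy: str, questions: list[Question]) -> float:
    # Fraction of comprehension questions answered correctly.
    correct = sum(ask_llm(policy, q) == q.answer for q in questions)
    return correct / len(questions)

if __name__ == "__main__":
    policy = "We share device identifiers with third-party analytics providers."
    questions = [
        Question(
            text="Who receives device identifiers?",
            choices=["No one", "Third-party analytics providers", "Only the user"],
            answer="B",
        ),
    ]
    print(f"LLM-style accuracy: {accuracy(policy, questions):.2f}")
    # Compare this figure against a human baseline from a user study; questions
    # the model also misses flag descriptions readers are likely to misunderstand.
```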
We are happy to announce that our study entitled “Automated Exploration of Optimal Neural Network Structures for Deepfake Detection” has recently been accepted for publication in the Proceedings of the 17th International Symposium on Foundations & Practice of Security (FPS 2024). Congratulations, Toshikawa-kun and Iijima-kun!
Yuto Toshikawa, Ryo Iijima, and Tatsuya Mori, “Automated Exploration of Optimal Neural Network Structures for Deepfake Detection,” Proceedings of the 17th International Symposium on Foundations & Practice of Security (FPS 2024), December 2024 (to appear).
Overview.
The proliferation of Deepfake technology has raised concerns about its potential misuse for malicious purposes, such as defaming celebrities or causing political unrest. While existing methods have reported high accuracy in detecting Deepfakes, challenges remain in adapting to the rapidly evolving technology and developing efficient and effective detectors.
In this study, the authors propose a novel approach to address these challenges by utilizing advanced Neural Architecture Search (NAS) methods, specifically DARTS, PC-DARTS, and DU-DARTS. The experimental results demonstrate that the PC-DARTS method achieves the highest test AUC of 0.88 with a learning time of only 2.86 GPU days, highlighting the efficiency and effectiveness of this approach. Moreover, models generated through NAS exhibit competitive performance compared to state-of-the-art architectures such as XceptionNet, EfficientNet, and MobileNet.
These findings suggest that NAS can quickly and easily construct adaptive and high-performance Deepfake detection models, providing a promising direction for combating the ever-evolving Deepfake technology. The PC-DARTS results especially emphasize the importance of efficient training time while achieving high test AUC, offering a fresh perspective on the automatic search for optimal network structures in Deepfake detection.
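For readers unfamiliar with DARTS-style search, the following is a minimal PyTorch sketch of the core “mixed operation” idea that these NAS methods build on: each candidate operation on an edge is weighted by a softmax over learnable architecture parameters, so the structure can be optimized by gradient descent alongside the network weights. The operation set and tensor sizes are illustrative, not the configuration used in the paper.

```python
# A minimal sketch of a DARTS-style mixed operation (illustrative choices only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Candidate operations on one edge of the search cell.
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2, bias=False),
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Identity(),  # skip connection
        ])
        # One learnable architecture parameter per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Continuous relaxation: the edge output is a softmax-weighted sum of
        # all candidates; after search, only the argmax operation is retained.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

if __name__ == "__main__":
    edge = MixedOp(channels=16)
    x = torch.randn(1, 16, 32, 32)
    print(edge(x).shape)  # torch.Size([1, 16, 32, 32])
```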
We are happy to announce that a very interesting study entitled “An Investigation of Privacy and Security in VR APPs through URL String Analysis” has recently been accepted for publication in the Journal of Information Processing. Congratulations, Shu-pei and the team!
Shu-pei Huang, Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori, “An Investigation of Privacy and Security in VR APPs through URL String Analysis,” Journal of Information Processing, vol. xx, no. xx, pp. xxxx-xxxxx (in press).
Overview.
In this research, we investigate the privacy concerns inherent in the URLs used by virtual reality (VR) applications, focusing on static, hard-coded URLs that point to destinations such as advertising and analytics services, which can significantly affect user privacy. Using the Oculus Go VR device, we applied a categorization methodology to identify the most common advertising and analytics services embedded in these VR applications. This approach revealed potential privacy threats and showed how they could impact user rights. The findings underscore the need to scrutinize the external libraries and resources that VR app developers commonly rely on: the hard-coded URLs leading to privacy-sensitive services indicate how much work remains to make VR safer for everyone.
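As a rough illustration, and not the authors' actual pipeline, the sketch below pulls hard-coded URLs out of unpacked app files with a regular expression and buckets them by domain using small, hypothetical advertising/analytics keyword lists; the directory name and keyword sets are assumptions for the example.

```python
# A minimal sketch: extract hard-coded URLs from unpacked app files and
# categorize their domains. Keyword lists and paths are illustrative.
import re
from collections import Counter
from pathlib import Path
from urllib.parse import urlparse

URL_RE = re.compile(rb"https?://[^\s\"'<>]+")

# Hypothetical keyword lists; a real study would rely on curated blocklists.
AD_KEYWORDS = ("ads", "adservice", "doubleclick")
ANALYTICS_KEYWORDS = ("analytics", "crashlytics", "tracking")

def categorize(domain: str) -> str:
    """Assign a domain to a coarse category based on keyword matching."""
    if any(k in domain for k in AD_KEYWORDS):
        return "advertising"
    if any(k in domain for k in ANALYTICS_KEYWORDS):
        return "analytics"
    return "other"

def scan_app(app_dir: str) -> Counter:
    """Scan every file under an unpacked app directory and count URL categories."""
    counts: Counter = Counter()
    root = Path(app_dir)
    if not root.is_dir():
        return counts
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        for raw_url in URL_RE.findall(path.read_bytes()):
            domain = urlparse(raw_url.decode("utf-8", "ignore")).netloc.lower()
            if domain:
                counts[categorize(domain)] += 1
    return counts

if __name__ == "__main__":
    # Point this at a directory containing the unpacked contents of a VR app.
    print(scan_app("./unpacked_vr_app"))
```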