A paper got accepted!

We are happy to announce that our paper entitled “Invisible but Detected: Physical Adversarial Shadow Attack and Defense on LiDAR Object Detection” has recently been accepted for publication in the Proceedings of the 34th USENIX Security Symposium (USENIX Security 2025). Congratulations, Kobayashi-kun and the team!

Ryunosuke Kobayashi, Kazuki Nomoto, Yuna Tanaka, Go Tsuruoka, Tatsuya Mori, “Invisible but Detected: Physical Adversarial Shadow Attack and Defense on LiDAR Object Detection,” Proceedings of the 34th USENIX Security Symposium (USENIX Security 2025), August 2025 (to appear).

Overview.

This study introduces “Shadow Hack,” the first adversarial attack leveraging naturally occurring object shadows in LiDAR point clouds to deceive object detection models in autonomous vehicles. Unlike traditional adversarial attacks that modify physical objects directly, Shadow Hack manipulates the way LiDAR perceives shadows, affecting detection results without altering the objects themselves.

The key technique involves creating “Adversarial Shadows” using materials that LiDAR struggles to measure accurately. By optimizing the position and size of these shadows, the attack maximizes misclassification in point cloud-based object recognition models. Experimental simulations demonstrate that Shadow Hack achieves a 100% attack success rate at distances between 11m and 21m across multiple models.
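To make the idea concrete, here is a minimal sketch (not the authors' implementation) of how an adversarial shadow could be simulated and optimized in software: points falling inside a candidate shadow footprint are deleted from the cloud, and a simple grid search picks the placement that most degrades a black-box detector's confidence. The `apply_shadow` and `optimize_shadow` helpers, the axis-aligned footprint, and the grid-search strategy are all illustrative assumptions.

```python
import numpy as np

def apply_shadow(points, center, size):
    """Simulate an adversarial shadow: drop all LiDAR returns whose x/y
    coordinates fall inside an axis-aligned footprint. (The paper realizes
    this physically with materials, such as mirror sheets, that LiDAR
    struggles to measure accurately.)"""
    inside = (np.abs(points[:, 0] - center[0]) < size[0] / 2) & \
             (np.abs(points[:, 1] - center[1]) < size[1] / 2)
    return points[~inside]

def optimize_shadow(points, detector, centers, sizes):
    """Grid-search the shadow placement that minimizes the detector's
    confidence in the true object, i.e., maximizes misclassification.
    `detector` is assumed to map a point cloud to a confidence score."""
    best_cfg, best_conf = None, float("inf")
    for center in centers:
        for size in sizes:
            conf = detector(apply_shadow(points, center, size))
            if conf < best_conf:
                best_cfg, best_conf = (center, size), conf
    return best_cfg, best_conf
```

In a physical attack, the best (center, size) found this way would then be fabricated as a real shadow-inducing patch placed in the scene.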

Physical-world experiments validate these findings, showing success rates of nearly 100% at 10m against PointPillars and 98% against SECOND-IoU, using mirror sheets that remove almost all LiDAR-detected points at distances from 1m to 14m. To counter this attack, the authors propose “BB-Validator,” a defense mechanism that successfully neutralizes the attack while maintaining high object detection accuracy.

This work highlights a novel and critical vulnerability in LiDAR-based perception systems and presents an effective defense, contributing to the ongoing effort to enhance the security of autonomous vehicles.

A paper got accepted!

We are happy to announce that our new study entitled “Evaluating LLMs Towards Automated Assessment of Privacy Policy Understandability” has recently been accepted for publication in the Proceedings of the Symposium on Usable Security and Privacy (USEC 2025). Congratulations, Mori-san and the team!

K. Mori, D. Ito, T. Fukunaga, T. Watanabe, Y. Takata, M. Kamizono, T. Mori, “Evaluating LLMs Towards Automated Assessment of Privacy Policy Understandability,” Proceedings of the Symposium on Usable Security and Privacy (USEC 2025), February 2025 (to appear).

Overview.

Companies publish privacy policies to improve transparency regarding the handling of personal information. However, discrepancies between the descriptions in privacy policies and users’ understanding can lead to a decline in trust. Therefore, assessing users’ comprehension of privacy policies is essential. Traditionally, such evaluations have relied on user studies, which are time-consuming and costly.

This study explores the potential of large language models (LLMs) as an alternative for evaluating privacy policy understandability. The authors prepared obfuscated privacy policies alongside comprehension questions to assess both LLMs and human users. The results revealed that LLMs achieved an average correct answer rate of 85.2%, whereas users scored 63.0%. Notably, the questions that LLMs answered incorrectly were also difficult for users, suggesting that LLMs can effectively identify problematic descriptions that users tend to misunderstand.
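The evaluation loop behind such an experiment can be reduced to a few lines. The sketch below is a minimal illustration, not the authors' code: the `ask_llm` callable, the multiple-choice question format, and the toy data are assumptions made for the example.

```python
def correct_answer_rate(ask_llm, policy_text, qa_items):
    """Score an LLM on multiple-choice comprehension questions.

    ask_llm(policy_text, question, choices) -> chosen answer letter
    qa_items: list of (question, choices, correct_letter) tuples
    """
    correct = sum(
        ask_llm(policy_text, q, choices).strip().upper() == answer
        for q, choices, answer in qa_items
    )
    return correct / len(qa_items)

# Toy usage with a dummy model that always answers "A":
qa = [("What data is collected?", ["A) Location", "B) None"], "A")]
print(correct_answer_rate(lambda p, q, c: "A", "policy text ...", qa))
```

Running the same questions past human participants and comparing the two rates (85.2% vs. 63.0% in the study) is what surfaces the descriptions that both models and users find hard.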

Moreover, while LLMs demonstrated a strong grasp of technical terms commonly found in privacy policies, users struggled with them. These findings highlight key gaps in comprehension between LLMs and users, offering valuable insights into the feasibility of automating privacy policy evaluations. The study marks an important step toward leveraging LLMs for improving the clarity and accessibility of privacy policies, reducing the reliance on costly user studies in the future.

A paper got accepted!

We are happy to announce that our study entitled “Automated Exploration of Optimal Neural Network Structures for Deepfake Detection” has recently been accepted for publication in the Proceedings of the 17th International Symposium on Foundations and Practice of Security (FPS 2024). Congratulations, Toshikawa-kun and Iijima-kun!

Yuto Toshikawa, Ryo Iijima, and Tatsuya Mori, “Automated Exploration of Optimal Neural Network Structures for Deepfake Detection,” Proceedings of the 17th International Symposium on Foundations and Practice of Security (FPS 2024), December 2024 (to appear).

Overview.

The proliferation of Deepfake technology has raised concerns about its potential misuse for malicious purposes, such as defaming celebrities or causing political unrest. While existing methods have reported high accuracy in detecting Deepfakes, challenges remain in adapting to the rapidly evolving technology and developing efficient and effective detectors. In this study, the authors propose a novel approach to address these challenges by utilizing advanced Neural Architecture Search (NAS) methods, specifically DARTS, PC-DARTS, and DU-DARTS.

The experimental results demonstrate that the PC-DARTS method achieves the highest test AUC of 0.88 with a learning time of only 2.86 GPU days, highlighting the efficiency and effectiveness of this approach. Moreover, models generated through NAS exhibit competitive performance compared to state-of-the-art architectures such as XceptionNet, EfficientNet, and MobileNet.

These findings suggest that NAS can quickly and easily construct adaptive, high-performance Deepfake detection models, providing a promising direction for combating the ever-evolving Deepfake technology. In particular, the PC-DARTS results emphasize the importance of efficient training time while achieving a high test AUC, offering a fresh perspective on the automatic search for optimal network structures in Deepfake detection.
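For readers unfamiliar with DARTS-style search, the core idea is a continuous relaxation of the architecture choice: each candidate operation on an edge is blended by softmax-weighted architecture parameters, which are trained by gradient descent alongside the network weights. The PyTorch sketch below shows this relaxation in its simplest form; the candidate operation set is an illustrative assumption, and PC-DARTS additionally samples a subset of channels per edge to cut memory and search cost.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One DARTS edge: a softmax-weighted sum over candidate operations.
    The architecture parameters `alpha` are learned jointly with the
    network weights, turning discrete architecture search into
    differentiable optimization."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 convolution
            nn.MaxPool2d(3, stride=1, padding=1),         # 3x3 max pooling
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

After the search converges, the operation with the largest alpha on each edge is retained, yielding the discrete architecture that is then trained from scratch.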

11 Research Presentations at the Computer Security Symposium (CSS 2024)

Our laboratory presented 11 studies at the Computer Security Symposium (CSS 2024), held from October 22 to 25. Seven of these presentations received awards. We will take the feedback we received at the symposium, along with the encouragement of these awards, as motivation for our next research projects.

  • 飯島 涼, 長谷川 幸己, 河岡 諒, 森 達哉, “Proposal and Countermeasures for Person Identification Attacks Based on rPPG Signals,” Proceedings of the Computer Security Symposium 2024 (CSS 2024), pp. 46-53, 2024 (Outstanding Paper Award)
  • 河岡 諒, 海老根 佑雅, 森 達哉, “Evaluating the Impact of Illusory Images on Stereo-Camera-Based Collision Avoidance Mechanisms of Drones,” Proceedings of the Computer Security Symposium 2024 (CSS 2024), pp. 54-60, 2024
  • 山岸 伶, 藤井 翔太, 森 達哉, “Uncovering the Reality of Malware Distribution via YouTube Videos Masquerading as Pirated-Software Installation Guides,” Proceedings of the Computer Security Symposium 2024 (CSS 2024), pp. 98-105, 2024 (Concept Research)
  • 野本 一輝, 福永 拓海, 鶴岡 豪, 小林 竜之輔, 田中 優奈, 神薗 雅紀, 森 達哉, “E2E Evaluation of Adversarial Attacks with Overpass, a Security Evaluation Platform for Autonomous Driving Systems,” Proceedings of the Computer Security Symposium 2024 (CSS 2024), pp. 393-400, 2024 (Outstanding Paper Award)
  • 鶴岡 豪, 佐藤 貴海, Qi Alfred Chen, 野本 一輝, 小林 竜之輔, 田中 優奈, 森 達哉, “Proposal and Evaluation of Adversarial Patch Attacks Exploiting Reflected Headlight Beams,” Proceedings of the Computer Security Symposium 2024 (CSS 2024), pp. 401-408, 2024 (Student Paper Award)
  • 小林 竜之輔, 野本 一輝, 田中 優奈, 鶴岡 豪, 森 達哉, “Misdetection-Inducing Attacks and Defenses Based on Physical Removal of LiDAR Point Clouds,” Proceedings of the Computer Security Symposium 2024 (CSS 2024), pp. 409-416, 2024 (Student Paper Award)
  • 田中 優奈, 野本 一輝, 小林 竜之輔, 鶴岡 豪, 森 達哉, “Adversarial Attacks Using Artificial Fog Against LiDAR Point-Cloud Preprocessing Filters in Autonomous Driving Systems,” Proceedings of the Computer Security Symposium 2024 (CSS 2024), pp. 417-424, 2024 (Student Paper Award)
  • 森 啓華, 伊藤 大貴, 福永 拓海, 渡邉 卓弥, 高田 雄太, 神薗 雅紀, 森 達哉, “Evaluating Large Language Models for Measuring User Comprehension of Privacy Policies,” Proceedings of the Computer Security Symposium 2024 (CSS 2024), pp. 571-578, 2024 (Outstanding Paper Award)
  • 髙瀬 由梨, 秋山 満昭, 戸田 宇亮, 若井 琢朗, 荒井 ひろみ, 大木 哲史, 森 達哉, “Developers' Perceptions of and Countermeasures for Security, Privacy, Ethics, and Legal Compliance in AI Development,” Proceedings of the Computer Security Symposium 2024 (CSS 2024), pp. 595-602, 2024
  • 若井 琢朗, 戸田 宇亮, 久保 佑介, 森 達哉, “Risks Hidden in Publicly Released AI Models and Novel Attack Techniques,” Proceedings of the Computer Security Symposium 2024 (CSS 2024), pp. 1250-1257, 2024
  • 佐古 健太郎, 森 博志, 高田 雄太, 熊谷 裕志, 神薗 雅紀, 森 達哉, “A Systematic Evaluation of Smart Contract Vulnerability Detection Tools,” Proceedings of the Computer Security Symposium 2024 (CSS 2024), pp. 1799-1806, 2024

A paper got accepted!

We are happy to announce that a very interesting study entitled “An Investigation of Privacy and Security in VR APPs through URL String Analysis” has recently been accepted for publication in the Journal of Information Processing. Congratulations, Shu-pei and the team!

Shu-pei Huang, Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori, “An Investigation of Privacy and Security in VR APPs through URL String Analysis,” Journal of Information Processing, vol. xx, no. xx, pp. xxxx-xxxxx (in press).

Overview.

In this research, we set out to investigate the privacy concerns inherent in the URLs used by virtual reality (VR) applications. In particular, we looked at static, hard-coded URLs that lead to destinations such as advertising and analytics services, which can significantly affect user privacy. Using the Oculus Go VR device, the team applied a categorization methodology to identify the most common sources of advertising and analytics embedded in these VR applications. This approach revealed several potential privacy threats and showed how they could impact user rights. The findings underscore the importance of closely examining the external libraries and resources that VR app developers often rely on, and the privacy-sensitive URLs we uncovered show how much work remains to make VR safer for everyone.
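While the paper's exact extraction and categorization pipeline is not reproduced here, the following sketch shows the general shape of such an analysis: pull hard-coded URL strings out of app binaries with a regex, then bucket each hostname against ad/analytics domain lists. The domain sets and the regex below are illustrative assumptions, not the study's actual taxonomy.

```python
import re

# Illustrative domain lists; the study's actual categorization is broader.
AD_DOMAINS = {"doubleclick.net", "unity3d.com"}                  # assumed examples
ANALYTICS_DOMAINS = {"google-analytics.com", "crashlytics.com"}  # assumed examples

URL_RE = re.compile(rb"https?://[\w.-]+(?:/[\w./?&=%-]*)?")

def extract_urls(binary: bytes):
    """Pull hard-coded URL strings out of an app binary or asset file."""
    return [m.decode(errors="ignore") for m in URL_RE.findall(binary)]

def categorize(url: str) -> str:
    """Bucket a URL by its hostname's suffix match against known lists."""
    host = url.split("/")[2]
    if any(host.endswith(d) for d in AD_DOMAINS):
        return "advertising"
    if any(host.endswith(d) for d in ANALYTICS_DOMAINS):
        return "analytics"
    return "other"

print(categorize("https://stats.google-analytics.com/collect"))  # -> analytics
```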