JST CREST Research Project (Autonomous Driving Security)

Featured

Our research project, "Security Evaluation and Countermeasure Platform for AI-Driven Cyber-Physical Systems," has been selected for the JST CREST research area "Creation of Foundational Software for Society 5.0 through the Fusion of Fundamental Theory and System Platform Technologies."

This project will clarify the threat of adversarial inputs against the sensors, cameras, AI, and control mechanisms that make up autonomous driving systems, develop techniques that harden the overall system against such attacks, and evaluate their effectiveness. As principal co-investigators, we are joined by a highly capable team with diverse backgrounds: Prof. Jun Sakuma of Tokyo Institute of Technology (machine learning), Prof. Takeshi Sugawara (physical security) and Prof. Kenji Sawada (control systems) of the University of Electro-Communications, Prof. Kentaro Yoshioka of Keio University (sensors and system circuits), and Takami Sato of UC Irvine (autonomous driving security). Covering not only the individual component technologies of autonomous driving systems but also the end-to-end system that integrates them, we will pursue a broad research agenda combining theory, application, and implementation. We plan to release the findings and artifacts (data, source code, etc.) produced by this project as open source.

プロジェクトウェブサイト: https://crest.seclab.jp/

A paper got accepted!

We are happy to announce that a very interesting study entitled “Automated Exploration of Optimal Neural Network Structures for Deepfake Detection” has recently been accepted for publication in the Proceedings of the 17th International Symposium on Foundations and Practice of Security (FPS 2024). Congratulations, Toshikawa-kun and Iijima-kun!

Yuto Toshikawa, Ryo Iijima, and Tatsuya Mori, “Automated Exploration of Optimal Neural Network Structures for Deepfake Detection,” Proceedings of the 17th International Symposium on Foundations and Practice of Security (FPS 2024), December 2024 (to appear).

Overview.

The proliferation of Deepfake technology has raised concerns about its potential misuse for malicious purposes, such as defaming celebrities or causing political unrest. While existing methods have reported high accuracy in detecting Deepfakes, challenges remain in adapting to the rapidly evolving technology and developing efficient and effective detectors. In this study, the authors propose a novel approach to address these challenges by utilizing advanced Neural Architecture Search (NAS) methods, specifically DARTS, PC-DARTS, and DU-DARTS. The experimental results demonstrate that the PC-DARTS method achieves the highest test AUC of 0.88 with a learning time of only 2.86 GPU days, highlighting the efficiency and effectiveness of this approach. Moreover, models generated through NAS exhibit competitive performance compared to state-of-the-art architectures such as XceptionNet, EfficientNet, and MobileNet. These findings suggest that NAS can quickly and easily construct adaptive and high-performance Deepfake detection models, providing a promising direction for combating the ever-evolving Deepfake technology. The PC-DARTS results especially emphasize the importance of efficient training time while achieving high test AUC, offering a fresh perspective on the automatic search for optimal network structures in Deepfake detection.
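The DARTS family of methods searches architectures by continuous relaxation: each edge of a network cell outputs a softmax-weighted mixture of candidate operations, and the mixture weights are learned by gradient descent alongside the ordinary network weights; after search, the highest-weighted operation is kept. The following is only a rough sketch of that core idea, with toy scalar functions standing in for real convolution and pooling operations (all names here are illustrative, not from the paper):

```python
import math

# Candidate operations on one edge of a search cell. Real DARTS uses
# convolutions, pooling, skip connections, etc.; scalar functions are
# stand-ins here.
CANDIDATE_OPS = {
    "identity": lambda x: x,
    "double":   lambda x: 2.0 * x,
    "square":   lambda x: x * x,
}

def softmax(weights):
    """Numerically stable softmax over architecture parameters."""
    m = max(weights)
    exps = [math.exp(w - m) for w in weights]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_op(x, alpha):
    """Continuous relaxation: instead of choosing one operation,
    output a softmax-weighted mixture of all candidates. The
    parameters alpha are learned jointly with the network weights."""
    probs = softmax(alpha)
    return sum(p * op(x) for p, op in zip(probs, CANDIDATE_OPS.values()))

def derive_discrete_op(alpha):
    """After search, keep the operation with the largest weight."""
    names = list(CANDIDATE_OPS)
    best = max(range(len(alpha)), key=lambda i: alpha[i])
    return names[best]

if __name__ == "__main__":
    alpha = [0.1, 2.0, -1.0]          # toy learned architecture parameters
    print(mixed_op(3.0, alpha))       # soft mixture used during search
    print(derive_discrete_op(alpha))  # discrete choice after search
```

PC-DARTS reduces the memory and time cost of this search by sampling only a fraction of channels when computing the mixture, which is what makes the 2.86 GPU-day search cost reported above possible.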

11 Research Presentations at the Computer Security Symposium (CSS 2024)

Our laboratory presented 11 studies at the Computer Security Symposium (CSS 2024), held October 22-25. Seven of these presentations received awards. We will build on the feedback received at the symposium, and on the encouragement of these awards, in our future research.

  • 飯島 涼, 長谷川 幸己, 河岡 諒, 森 達哉, “A Personal Identification Attack Based on rPPG Signals and Its Countermeasures,” Proceedings of the Computer Security Symposium 2024, pp. 46-53, 2024 (Best Paper Award)
  • 河岡 諒, 海老根 佑雅, 森 達哉, “Evaluating the Impact of Optical-Illusion Images on Drone Collision-Avoidance Mechanisms Based on Stereo-Camera Depth Estimation,” Proceedings of the Computer Security Symposium 2024, pp. 54-60, 2024
  • 山岸 伶, 藤井 翔太, 森 達哉, “An Empirical Study of Malware Distribution via YouTube Videos Disguised as Illegal-Software Installation Guides,” Proceedings of the Computer Security Symposium 2024, pp. 98-105, 2024 (concept research)
  • 野本 一輝, 福永 拓海, 鶴岡 豪, 小林 竜之輔, 田中 優奈, 神薗 雅紀, 森 達哉, “E2E Evaluation of Adversarial Attacks with Overpass, a Security Evaluation Platform for Autonomous Driving Systems,” Proceedings of the Computer Security Symposium 2024, pp. 393-400, 2024 (Best Paper Award)
  • 鶴岡 豪, 佐藤 貴海, Qi Alfred Chen, 野本 一輝, 小林 竜之輔, 田中 優奈, 森 達哉, “Proposal and Evaluation of Adversarial Patch Attacks Exploiting Headlight Reflections,” Proceedings of the Computer Security Symposium 2024, pp. 401-408, 2024 (Student Paper Award)
  • 小林 竜之輔, 野本 一輝, 田中 優奈, 鶴岡 豪, 森 達哉, “Attacks That Induce Misdetection through Physical Erasure of LiDAR Point Clouds, and Their Defenses,” Proceedings of the Computer Security Symposium 2024, pp. 409-416, 2024 (Student Paper Award)
  • 田中 優奈, 野本 一輝, 小林 竜之輔, 鶴岡 豪, 森 達哉, “Adversarial Attacks Using Artificial Fog against LiDAR Point-Cloud Preprocessing Filters in Autonomous Driving Systems,” Proceedings of the Computer Security Symposium 2024, pp. 417-424, 2024 (Student Paper Award)
  • 森 啓華, 伊藤 大貴, 福永 拓海, 渡邉 卓弥, 高田 雄太, 神薗 雅紀, 森 達哉, “Evaluating Large Language Models for Measuring Users' Comprehension of Privacy Policies,” Proceedings of the Computer Security Symposium 2024, pp. 571-578, 2024 (Best Paper Award)
  • 髙瀬 由梨, 秋山 満昭, 戸田 宇亮, 若井 琢朗, 荒井 ひろみ, 大木 哲史, 森 達哉, “Developers' Awareness of and Countermeasures for Security, Privacy, Ethics, and Legal Issues in AI Development,” Proceedings of the Computer Security Symposium 2024, pp. 595-602, 2024
  • 若井 琢朗, 戸田 宇亮, 久保 佑介, 森 達哉, “Risks Hidden in Publicly Released AI Models and Novel Attack Techniques,” Proceedings of the Computer Security Symposium 2024, pp. 1250-1257, 2024
  • 佐古 健太郎, 森 博志, 高田 雄太, 熊谷 裕志, 神薗 雅紀, 森 達哉, “A Systematic Evaluation of Smart Contract Vulnerability Detection Tools,” Proceedings of the Computer Security Symposium 2024, pp. 1799-1806, 2024

A paper got accepted!

We are happy to announce that a very interesting study entitled “An Investigation of Privacy and Security in VR APPs through URL String Analysis” has recently been accepted for publication in the Journal of Information Processing. Congratulations, Shu-pei and the team!

Shu-pei Huang, Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori, “An Investigation of Privacy and Security in VR APPs through URL String Analysis,” Journal of Information Processing, vol. xx, no. xx, pp. xxxx-xxxxx (in press).

Overview.

In this research, we investigated the privacy concerns inherent in the URLs used by virtual reality (VR) applications. In particular, we examined static, hard-coded URLs that point to destinations such as advertising and analytics services, which can significantly affect user privacy. Using the Oculus Go VR device, the team applied a categorization methodology to identify the most common advertising and analytics sources embedded in these VR applications. This approach revealed potential privacy threats and showed how they could impact user rights. The results underscore the importance of scrutinizing the external libraries and resources that VR app developers commonly rely on: the privacy-sensitive URLs we uncovered show how much work remains to make VR safer for everyone.
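As a loose illustration of this kind of analysis (the paper's actual methodology and domain categorization are its own; the domains below are invented placeholders), hard-coded URLs can be extracted from an app's string dump and bucketed by host:

```python
import re
from urllib.parse import urlparse

# Hypothetical mapping from third-party hosts to service categories.
# A real study would use a curated advertising/analytics domain list.
DOMAIN_CATEGORIES = {
    "ads.example-network.com": "advertising",
    "analytics.example-tracker.io": "analytics",
}

# Matches http(s) URLs embedded in binary string dumps.
URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def extract_urls(blob):
    """Pull candidate hard-coded URLs out of a string dump of an app."""
    return URL_PATTERN.findall(blob)

def categorize(url):
    """Map a URL to a privacy-relevant category by its host name."""
    host = urlparse(url).netloc.lower()
    return DOMAIN_CATEGORIES.get(host, "other")

if __name__ == "__main__":
    strings_dump = """
    https://ads.example-network.com/v1/serve?id=42
    https://analytics.example-tracker.io/collect
    https://cdn.example.com/assets/logo.png
    """
    for url in extract_urls(strings_dump):
        print(categorize(url), url)
```

Aggregating these per-app category counts across an app corpus is what surfaces the most common embedded advertising and analytics endpoints.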

A paper got accepted!

We are thrilled to announce that our paper has been accepted for presentation at the Twentieth Symposium on Usable Privacy and Security (SOUPS 2024). Congratulations to Lachlan-kun and Hasegawa-san!

Lachlan Moore, Tatsuya Mori, Ayako Hasegawa, “Negative Effects of Social Triggers on User Security and Privacy Behaviors,” Proceedings of the Twentieth Symposium on Usable Privacy and Security (SOUPS 2024), Aug 2024 (accepted) (acceptance rate: 33/156 = 21.1%)

Overview.

People often make decisions influenced by those around them. Previous studies have shown that users frequently adopt security practices based on advice from others and have proposed collaborative and community-based approaches to enhance user security behaviors.

In this paper, we focused on the negative effects of social triggers and investigated whether users’ risky behaviors are socially triggered. We conducted an online survey to understand the triggers for risky behaviors and the sharing practices associated with these behaviors. Our findings revealed that a significant percentage of participants experienced social triggers before engaging in risky behaviors. Moreover, we found that these socially triggered risky behaviors are more likely to be shared with others, creating negative chains of risky behaviors.

Our results suggest the need for more efforts to reduce the negative social effects on user security and privacy behaviors. We propose specific approaches to mitigate these effects and enhance overall user security.

A paper got accepted!

We are thrilled to announce that our paper has been accepted for presentation at the 9th IEEE European Symposium on Security and Privacy (EuroS&P 2024). Congratulations to Oyama-kun and the team!

H. Oyama, R. Iijima, T. Mori, “DeGhost: Unmasking Phantom Intrusions in Autonomous Recognition Systems,” Proceedings of the 9th IEEE European Symposium on Security and Privacy (EuroS&P 2024) (accepted for publication), pp. xxxx-xxxx, July 2024

This study addresses the vulnerability of autonomous systems to phantom attacks, in which adversaries project deceptive illusions that recognition systems mistake for real objects. Initial experiments assessed attack success rates across various distances and angles, using two setups: a black-box setup with a DJI Mavic Air and a white-box setup with a Tello drone running YOLOv3. To counter these threats, the authors developed DeGhost, a deep learning framework that distinguishes real objects from projected illusions, and evaluated it across multiple projection surfaces and against top object detection models. DeGhost achieved excellent performance, with an AUC of 0.998 and low false negative and false positive rates, and was further enhanced by an advanced Fourier technique. The study substantiates the risk of phantom attacks and presents DeGhost as an effective security measure for autonomous systems.
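The paper's Fourier technique is not detailed here, but a common spectral cue for separating projected imagery from physical objects is how signal energy distributes across frequency bands. The sketch below computes one such toy feature, a high-frequency energy ratio over a 1D signal, as an assumed illustration only (it is not DeGhost's actual method, and real detectors operate on 2D image patches):

```python
import cmath
import math

def dft_magnitudes(signal):
    """Magnitudes of the discrete Fourier transform, computed directly
    from the definition (O(N^2); fine for a toy example)."""
    n = len(signal)
    return [
        abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(signal)))
        for k in range(n)
    ]

def high_freq_ratio(signal):
    """Fraction of spectral energy in the upper frequency band
    (bins N/4 .. 3N/4, i.e., frequencies near Nyquist)."""
    mags = dft_magnitudes(signal)
    n = len(mags)
    total = sum(m * m for m in mags)
    high = sum(m * m for m in mags[n // 4: 3 * n // 4])
    return high / total if total else 0.0

if __name__ == "__main__":
    n = 64
    smooth = [math.sin(2 * math.pi * i / n) for i in range(n)]  # low-frequency content
    textured = [(-1) ** i * 1.0 for i in range(n)]              # energy at Nyquist
    print(high_freq_ratio(smooth))    # near 0
    print(high_freq_ratio(textured))  # near 1
```

A classifier like DeGhost would learn far richer features than this single ratio, but the example shows why frequency-domain statistics are a natural input for telling flat projections apart from real, textured surfaces.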