A paper got accepted!

We are happy to announce that our new study entitled “Evaluating LLMs Towards Automated Assessment of Privacy Policy Understandability” has recently been accepted for publication in the Proceedings of the Symposium on Usable Security and Privacy (USEC 2025). Congratulations, Mori-san and the team!

K. Mori, D. Ito, T. Fukunaga, T. Watanabe, Y. Takata, M. Kamizono, T. Mori, “Evaluating LLMs Towards Automated Assessment of Privacy Policy Understandability,” Proceedings of the Symposium on Usable Security and Privacy (USEC 2025), February 2025 (to appear).

Overview.

Companies publish privacy policies to improve transparency regarding the handling of personal information. However, discrepancies between the descriptions in privacy policies and users’ understanding can lead to a decline in trust. Therefore, assessing users’ comprehension of privacy policies is essential. Traditionally, such evaluations have relied on user studies, which are time-consuming and costly.

This study explores the potential of large language models (LLMs) as an alternative for evaluating privacy policy understandability. The authors prepared obfuscated privacy policies alongside comprehension questions to assess both LLMs and human users. The results revealed that LLMs achieved an average correct answer rate of 85.2%, whereas users scored 63.0%. Notably, the questions that LLMs answered incorrectly were also difficult for users, suggesting that LLMs can effectively identify problematic descriptions that users tend to misunderstand.

Moreover, while LLMs demonstrated a strong grasp of technical terms commonly found in privacy policies, users struggled with them. These findings highlight key gaps in comprehension between LLMs and users, offering valuable insights into the feasibility of automating privacy policy evaluations. The study marks an important step toward leveraging LLMs for improving the clarity and accessibility of privacy policies, reducing the reliance on costly user studies in the future.
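For readers curious how such an LLM-based assessment can be wired up, here is a minimal sketch of the general idea (not the authors' code): pose the same multiple-choice comprehension questions to an LLM and to human participants, compare per-question correct-answer rates, and flag the questions that both get wrong. The ask_llm stub and the data fields are hypothetical placeholders; the actual prompts, models, and scoring procedure are described in the paper.

```python
# Minimal sketch of the evaluation idea; all names here are illustrative.
from dataclasses import dataclass

@dataclass
class Question:
    policy_excerpt: str   # the (possibly obfuscated) privacy policy text
    text: str             # the comprehension question
    choices: list[str]    # answer options
    answer: int           # index of the correct option

def ask_llm(question: Question) -> int:
    """Hypothetical stand-in for a chat-completion call that returns
    the index of the option the LLM picks."""
    raise NotImplementedError("plug in your LLM client here")

def correct_rate(predictions: list[int], questions: list[Question]) -> float:
    """Fraction of questions answered correctly (e.g., 0.852 vs. 0.630)."""
    hits = sum(p == q.answer for p, q in zip(predictions, questions))
    return hits / len(questions)

def hard_for_both(llm_preds: list[int], user_rates: list[float],
                  questions: list[Question], threshold: float = 0.5):
    """Questions the LLM misses that users also answer poorly:
    candidates for problematic descriptions in the policy."""
    return [q for q, p, r in zip(questions, llm_preds, user_rates)
            if p != q.answer and r < threshold]
```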

A paper got accepted!

We are happy to announce that our study entitled “Automated Exploration of Optimal Neural Network Structures for Deepfake Detection” has recently been accepted for publication in the Proceedings of the 17th International Symposium on Foundations & Practice of Security (FPS 2024). Congratulations, Toshikawa-kun and Iijima-kun!

Yuto Toshikawa, Ryo Iijima, and Tatsuya Mori, “Automated Exploration of Optimal Neural Network Structures for Deepfake Detection,” Proceedings of the 17th International Symposium on Foundations & Practice of Security (FPS 2024), December 2024 (to appear).

Overview.

The proliferation of Deepfake technology has raised concerns about its potential misuse for malicious purposes, such as defaming celebrities or causing political unrest. While existing methods have reported high accuracy in detecting Deepfakes, challenges remain in keeping pace with the rapidly evolving technology and in building efficient, effective detectors. In this study, the authors address these challenges with advanced Neural Architecture Search (NAS) methods, specifically DARTS, PC-DARTS, and DU-DARTS.

The experimental results show that PC-DARTS achieves the highest test AUC of 0.88 with a training time of only 2.86 GPU days, highlighting the efficiency and effectiveness of the approach. Moreover, the models generated through NAS perform competitively with state-of-the-art architectures such as XceptionNet, EfficientNet, and MobileNet. These findings suggest that NAS can quickly and easily construct adaptive, high-performance Deepfake detection models, providing a promising direction for combating ever-evolving Deepfake technology. In particular, the PC-DARTS results underscore that high test AUC can be achieved with modest training time, offering a fresh perspective on automatically searching for optimal network structures in Deepfake detection.
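For context on how DARTS-style search works at a mechanical level, the sketch below shows the continuous relaxation it relies on: each edge of a searchable cell computes a softmax-weighted mixture of candidate operations, and the mixture weights (the architecture parameters) are learned alongside the network weights. This is a generic PyTorch illustration, not the cell design or code used in the paper, and the candidate operations are an arbitrary toy subset.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy set of candidate operations for a single edge of a searchable cell.
OPS = {
    "conv3x3": lambda c: nn.Conv2d(c, c, 3, padding=1, bias=False),
    "conv5x5": lambda c: nn.Conv2d(c, c, 5, padding=2, bias=False),
    "avgpool": lambda c: nn.AvgPool2d(3, stride=1, padding=1),
    "skip":    lambda c: nn.Identity(),
}

class MixedOp(nn.Module):
    """One edge of a DARTS-style cell: a softmax-weighted sum of candidate
    operations, where the weights (alphas) are learnable architecture
    parameters."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([make(channels) for make in OPS.values()])
        self.alphas = nn.Parameter(torch.zeros(len(OPS)))

    def forward(self, x):
        weights = F.softmax(self.alphas, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Toy usage: one mixed edge applied to a batch of 16-channel feature maps.
edge = MixedOp(16)
x = torch.randn(2, 16, 32, 32)
print(edge(x).shape)                   # torch.Size([2, 16, 32, 32])
print(F.softmax(edge.alphas, dim=0))   # current architecture weights
```

In full DARTS, the architecture parameters are updated on validation data in alternation with the ordinary network weights, and after search only the highest-weighted operation on each edge is kept; PC-DARTS additionally samples a subset of channels per edge to reduce memory and search cost, which fits the low GPU-day figure reported above.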

A paper got accepted!

We are happy to announce that a very interesting study entitled “An Investigation of Privacy and Security in VR APPs through URL String Analysis” has recently been accepted for publication in the Journal of Information Processing. Congratulations, Shu-pei and the team!

Shu-pei Huang, Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori, “An Investigation of Privacy and Security in VR APPs through URL String Analysis,” Journal of Information Processing, vol. xx, no. xx, pp. xxxx-xxxxx (in press).

Overview.

In this research, we investigated the privacy concerns inherent in the URLs used by virtual reality (VR) applications. In particular, we examined static, hard-coded URLs that lead to destinations such as advertising and analytics services, which can significantly affect user privacy. Using the Oculus Go VR device, the team applied a categorization methodology that identified the most common advertising and analytics sources embedded in these VR applications. The approach revealed potential privacy threats and showed how they could affect user rights. The results underscore the importance of scrutinizing the external libraries and resources that VR app developers commonly rely on, and the URLs we found pointing to privacy-sensitive services show how much work remains to make VR safer for everyone.
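To make the idea concrete, here is a loose sketch of how static URL extraction and categorization might look in principle. It is not the paper's pipeline, and the domain lists are made-up placeholders rather than the taxonomy used in the study; a real analysis would run over the unpacked or decompiled app files.

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Pattern for hard-coded URLs embedded in app resources or binaries.
URL_RE = re.compile(rb"https?://[A-Za-z0-9._~:/?#@!$&'()*+,;=%-]+")

# Illustrative placeholder categories; not the study's taxonomy.
CATEGORIES = {
    "advertising": {"ads.example.com", "adservice.example.net"},
    "analytics":   {"analytics.example.com", "stats.example.org"},
}

def extract_urls(path: str) -> list[str]:
    """Scan one unpacked app file for static, hard-coded URL strings."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.decode("utf-8", "ignore") for m in URL_RE.findall(data)]

def categorize(urls: list[str]) -> Counter:
    """Count URLs whose host falls into a known category."""
    counts: Counter = Counter()
    for url in urls:
        host = urlparse(url).hostname or ""
        for category, domains in CATEGORIES.items():
            if host in domains:
                counts[category] += 1
    return counts
```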

A paper got accepted!

We are thrilled to announce that our paper has been accepted for presentation at the Twentieth Symposium on Usable Privacy and Security (SOUPS 2024). Congratulations to Lachlan-kun and Hasegawa-san!

Lachlan Moore, Tatsuya Mori, Ayako Hasegawa, “Negative Effects of Social Triggers on User Security and Privacy Behaviors,” Proceedings of the Twentieth Symposium on Usable Privacy and Security (SOUPS 2024), August 2024 (accepted) (acceptance rate: 33/156 = 21.1%).

Overview.
People often make decisions influenced by those around them. Previous studies have shown that users frequently adopt security practices based on advice from others and have proposed collaborative and community-based approaches to enhance user security behaviors.

In this paper, we focused on the negative effects of social triggers and investigated whether users’ risky behaviors are socially triggered. We conducted an online survey to understand the triggers for risky behaviors and the sharing practices associated with these behaviors. Our findings revealed that a significant percentage of participants experienced social triggers before engaging in risky behaviors. Moreover, we found that these socially triggered risky behaviors are more likely to be shared with others, creating negative chains of risky behaviors.

Our results suggest the need for more efforts to reduce the negative social effects on user security and privacy behaviors. We propose specific approaches to mitigate these effects and enhance overall user security.

A paper got accepted!

We are thrilled to announce that our paper has been accepted for presentation at the 9th IEEE European Symposium on Security and Privacy (Euro S&P 2024). Congratulations to Oyama-kun and the team!

H. Oyama, R. Iijima, T. Mori, “DeGhost: Unmasking Phantom Intrusions in Autonomous Recognition Systems,” Proceedings of Euro S&P 2024 (accepted for publication), pp. xxxx-xxxx, July 2024.

Overview.

This study addresses the vulnerability of autonomous systems to phantom attacks, in which adversaries project deceptive illusions that are mistaken for real objects. Initial experiments assessed attack success rates from various distances and angles, using two setups: a black-box setup with a DJI Mavic Air, and a white-box setup with a Tello drone equipped with YOLOv3. To counter these threats, the authors developed DeGhost, a deep learning framework that distinguishes real objects from projected illusions, and tested it across multiple surfaces and against leading object detection models. DeGhost demonstrated excellent performance, achieving an AUC of 0.998 with low false negative and false positive rates, and was further enhanced by an advanced Fourier technique. The study substantiates the risk of phantom attacks and presents DeGhost as an effective security measure for autonomous systems.
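As a purely illustrative sketch of what a Fourier-domain cue can look like (this is not DeGhost's actual technique), the snippet below measures how much of a detected-object crop's spectral energy lies at high spatial frequencies. Projected illusions and physical objects can differ in frequency content, so such a score could feed a downstream classifier; the function name and the cutoff parameter are assumptions made for this example.

```python
import numpy as np

def high_freq_energy_ratio(gray_crop: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency disc.

    gray_crop: 2-D float array (a grayscale crop of the detected object).
    cutoff: radius of the low-frequency disc as a fraction of the
            smaller image dimension.
    """
    # Shift the 2-D FFT so low frequencies sit at the center of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(gray_crop))
    power = np.abs(spectrum) ** 2

    # Boolean mask selecting the central low-frequency disc.
    h, w = gray_crop.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff * min(h, w)
    low_mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

    total = power.sum()
    return float(power[~low_mask].sum() / total) if total > 0 else 0.0

# Toy usage with random data standing in for a real crop.
crop = np.random.rand(64, 64)
print(high_freq_energy_ratio(crop))
```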