We are happy to announce that a very interesting study entitled “Automated Exploration of Optimal Neural Network Structures for Deepfake Detection” has recently been accepted for publication in the Proceedings of the 17th International Symposium on Foundations & Practice of Security (FPS 2024). Congratulations, Toshikawa-kun and Iijima-kun!
Yuto Toshikawa, Ryo Iijima, and Tatsuya Mori, “Automated Exploration of Optimal Neural Network Structures for Deepfake Detection,” Proceedings of the 17th International Symposium on Foundations & Practice of Security (FPS 2024), December 2024 (to appear).
Overview.
The proliferation of Deepfake technology has raised concerns about its potential misuse for malicious purposes, such as defaming celebrities or causing political unrest. While existing methods have reported high accuracy in detecting Deepfakes, challenges remain in adapting to the rapidly evolving technology and in building detectors that are both efficient and effective. In this study, the authors propose a novel approach that addresses these challenges with advanced Neural Architecture Search (NAS) methods, specifically DARTS, PC-DARTS, and DU-DARTS. The experimental results demonstrate that PC-DARTS achieves the highest test AUC of 0.88 with a learning time of only 2.86 GPU days, highlighting the efficiency and effectiveness of this approach. Moreover, the models generated through NAS are competitive with state-of-the-art architectures such as XceptionNet, EfficientNet, and MobileNet. These findings suggest that NAS can quickly and easily construct adaptive, high-performance Deepfake detection models, providing a promising direction for combating ever-evolving Deepfake technology. The PC-DARTS result in particular shows that short training time and high test AUC need not be at odds, offering a fresh perspective on the automatic search for optimal network structures for Deepfake detection.
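For readers curious about what makes these NAS methods “differentiable,” the sketch below illustrates the core idea shared by DARTS, PC-DARTS, and DU-DARTS: every candidate operation on an edge of the network is weighted by a softmax over learnable architecture parameters, so the architecture itself can be optimized by gradient descent alongside the network weights. This is a minimal illustration with a toy operation set and assumed shapes, not the authors' implementation.

```python
# Minimal sketch of a DARTS-style "mixed operation" (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Toy candidate set; real DARTS search spaces use ~8 operations
        # (separable/dilated convs, pooling, skip connections, zero, ...).
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        # One learnable architecture parameter per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Continuous relaxation: the edge output is a softmax-weighted sum
        # of all candidates; after search, only the argmax operation is kept.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

x = torch.randn(1, 16, 32, 32)
print(MixedOp(16)(x).shape)  # torch.Size([1, 16, 32, 32])
```

PC-DARTS refines this scheme by routing only a sampled subset of channels through the mixed operation, which reduces memory use and search cost, consistent with the short 2.86 GPU-day search reported above.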
We are happy to announce that a very interesting study entitled “An Investigation of Privacy and Security in VR APPs through URL String Analysis” has recently been accepted for publication in the Journal of Information Processing. Congratulations, Shu-pei and the team!
Shu-pei Huang, Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori, “An Investigation of Privacy and Security in VR APPs through URL String Analysis,” Journal of Information Processing, vol. xx, no. xx, pp. xxxx-xxxxx (in press).
Overview.
In this research, we investigated the privacy concerns inherent in the URLs used by virtual reality (VR) applications. In particular, we examined static, hard-coded URLs that lead to destinations such as advertising and analytics services, which can have a significant impact on user privacy. Using the Oculus Go VR device, the team applied a categorization methodology to identify the most common advertising and analytics services embedded in these VR applications. This approach revealed potential privacy threats and clarified how they could affect user rights. The results underscore the importance of scrutinizing the external libraries and resources that VR app developers commonly rely on: the hard-coded URLs we found pointing to privacy-sensitive services show how much work remains to make VR safer for everyone.
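As a rough illustration of the kind of static triage involved, the sketch below extracts hard-coded URLs from an unpacked app's files with a regular expression and buckets each URL by hostname against a small list of advertising and analytics domains. Note that this is not the paper's pipeline: the domain map, directory name, and categories are hypothetical placeholders (a real study would use a curated tracker list).

```python
# Illustrative static URL triage (not the paper's actual methodology).
import re
from pathlib import Path
from urllib.parse import urlparse

URL_RE = re.compile(rb'https?://[^\s"\'<>\\]+')

# Hypothetical category map for illustration only.
CATEGORIES = {
    "doubleclick.net": "advertising",
    "googlesyndication.com": "advertising",
    "google-analytics.com": "analytics",
    "unity3d.com": "analytics",
}

def extract_urls(app_dir: str) -> set:
    """Collect hard-coded URLs from every file under an unpacked app."""
    urls = set()
    for path in Path(app_dir).rglob("*"):
        if path.is_file():
            for match in URL_RE.findall(path.read_bytes()):
                urls.add(match.decode("utf-8", errors="ignore"))
    return urls

def categorize(url: str) -> str:
    """Bucket a URL by matching its hostname against known domains."""
    host = urlparse(url).hostname or ""
    for domain, category in CATEGORIES.items():
        if host == domain or host.endswith("." + domain):
            return category
    return "other"

if __name__ == "__main__":
    for url in sorted(extract_urls("./extracted_app")):  # assumed path
        print(f"{categorize(url):12s} {url}")
```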
We are thrilled to announce that our paper has been accepted for presentation at the Twentieth Symposium on Usable Privacy and Security (SOUPS 2024). Congratulations to Lachlan-kun and Hasegawa-san!
Lachlan Moore, Tatsuya Mori, Ayako Hasegawa, “Negative Effects of Social Triggers on User Security and Privacy Behaviors,” Proceedings of the Twentieth Symposium on Usable Privacy and Security (SOUPS 2024), August 2024 (accepted) (acceptance rate: 33/156 = 21.1%).
Overview.
People often make decisions influenced by those around them. Previous studies have shown that users frequently adopt security practices based on advice from others and have proposed collaborative and community-based approaches to enhance user security behaviors.
In this paper, we focused on the negative effects of social triggers and investigated whether users’ risky behaviors are socially triggered. We conducted an online survey to understand the triggers for risky behaviors and the sharing practices associated with these behaviors. Our findings revealed that a significant percentage of participants experienced social triggers before engaging in risky behaviors. Moreover, we found that these socially triggered risky behaviors are more likely to be shared with others, creating negative chains of risky behaviors.
Our results suggest that further efforts are needed to reduce the negative social effects on user security and privacy behaviors. We propose specific approaches to mitigate these effects and enhance overall user security.
We are thrilled to announce that our paper has been accepted for presentation at the 9th IEEE European Symposium on Security and Privacy (Euro S&P 2024). Congratulations to Oyama-kun and the team!
H. Oyama, R. Iijima, T. Mori, “DeGhost: Unmasking Phantom Intrusions in Autonomous Recognition Systems,” Proceedings of Euro S&P 2024 (accepted for publication), pp. xxxx-xxxx, July 2024.
Overview.
This study addresses the vulnerability of autonomous systems to phantom attacks, in which adversaries project deceptive illusions that are mistaken for real objects. The initial experiments assessed attack success rates from various distances and angles using two setups: a black-box setup with a DJI Mavic Air and a white-box setup with a Tello drone equipped with YOLOv3. To counter these threats, the authors developed DeGhost, a deep learning framework that distinguishes real objects from projected illusions, and evaluated it across multiple projection surfaces and against top object detection models. DeGhost demonstrated excellent performance, achieving an AUC of 0.998 with low false positive and false negative rates, and was further enhanced by an advanced Fourier technique. This study substantiates the risk of phantom attacks and presents DeGhost as an effective security measure for autonomous systems.
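The exact Fourier technique is not detailed in this summary, but the sketch below conveys one plausible intuition behind frequency-domain cues for phantom detection: projected imagery often carries periodic artifacts (e.g., the projector's pixel grid) that concentrate unusual energy at high spatial frequencies. Everything here, including the cutoff and the feature itself, is a hypothetical illustration rather than DeGhost's actual method.

```python
# Hypothetical frequency-domain cue for phantom detection (illustrative only).
import numpy as np

def log_magnitude_spectrum(patch: np.ndarray) -> np.ndarray:
    """Return the centered log-magnitude FFT spectrum of a grayscale patch."""
    spectrum = np.fft.fftshift(np.fft.fft2(patch))
    return np.log1p(np.abs(spectrum))

def high_freq_energy_ratio(patch: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency disk.

    Projected phantoms may concentrate unusual energy at high frequencies,
    so a scalar like this could feed a downstream classifier.
    """
    spec = log_magnitude_spectrum(patch)
    h, w = spec.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    mask = radius > cutoff * min(h, w)
    return float(spec[mask].sum() / spec.sum())

patch = np.random.rand(64, 64)  # stand-in for a detected-object crop
print(round(high_freq_energy_ratio(patch), 3))
```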