A paper got accepted!

We are happy to announce that our new study entitled “Evaluating LLMs Towards Automated Assessment of Privacy Policy Understandability” has recently been accepted for publication in the Proceedings of the Symposium on Usable Security and Privacy (USEC 2025). Congratulations, Mori-san and the team!

K. Mori, D. Ito, T. Fukunaga, T. Watanabe, Y. Takata, M. Kamizono, T. Mori, “Evaluating LLMs Towards Automated Assessment of Privacy Policy Understandability,” Proceedings of the Symposium on Usable Security and Privacy (USEC 2025), February 2025 (to appear).

Overview.

Companies publish privacy policies to improve transparency regarding the handling of personal information. However, discrepancies between the descriptions in privacy policies and users’ understanding can lead to a decline in trust. Therefore, assessing users’ comprehension of privacy policies is essential. Traditionally, such evaluations have relied on user studies, which are time-consuming and costly.

This study explores the potential of large language models (LLMs) as an alternative means of evaluating privacy policy understandability. The authors prepared obfuscated privacy policies alongside comprehension questions and posed them to both LLMs and human users. The results revealed that LLMs achieved an average correct-answer rate of 85.2%, whereas users scored 63.0%. Notably, the questions that LLMs answered incorrectly were also difficult for users, suggesting that LLMs can effectively identify problematic descriptions that users tend to misunderstand.

Moreover, while LLMs demonstrated a strong grasp of the technical terms commonly found in privacy policies, users struggled with them. These findings highlight key gaps in comprehension between LLMs and users and offer valuable insights into the feasibility of automating privacy policy evaluation. The study marks an important step toward leveraging LLMs to improve the clarity and accessibility of privacy policies, reducing the reliance on costly user studies in the future.