The Meta AI App Failed at User Privacy

Updated: June 13, 2025

Reading Time: 2 minutes

An AI-generated image of a chat with Meta AI

Meta AI app users are unknowingly sharing private conversations, audio clips, and images with the public. 

The app’s design makes it easy to share content, sometimes by accident. As a result, users are exposing sensitive, personal, and even incriminating information.

Oversharing

Meta AI includes a “share” button on its chat interface. When clicked, it brings up a preview screen. From there, users can publish their AI conversations. 

However, the app does not clearly explain who will see the shared content. If someone logs in with a public Instagram account, their post becomes publicly visible. 

Many users are unaware of this account connection. As a result, private questions are being shared widely.

Some users have posted personal health concerns. Others have asked legal questions, including how to evade taxes. 

A few even included full names and home addresses. This has created a serious risk to user safety and privacy.

Private conversations with Meta AI
Image credit: Meta AI

Embarrassing Posts

Public posts on the app reveal a wide range of user behavior. One person asked why some farts smell worse than others. 

Another asked Meta AI for help writing a legal reference letter for an employee appearing in court. One user asked Meta to post his phone number in Facebook groups to find a partner.

There are also strange, AI-generated images. Some show fictional characters like Mario or Goku in odd scenarios. 

Security experts, such as Rachel Tobac, have found cases of exposed court records and residential addresses. These are serious breaches of personal privacy.

A Foreseen Risk 

Meta is no stranger to privacy controversies. Despite this history, the company released an app with a risky feature. 

Turning AI queries into public posts echoes AOL's 2006 release of user search logs, which ended in a major privacy scandal.

Most people treat AI tools as private; they expect confidentiality. In that respect, Meta AI differs from Google, which keeps users' search queries private.

The Numbers

The app launched on April 29, 2025, and according to Appfigures, it has been downloaded 6.5 million times. 

That might sound impressive. However, for a company of Meta’s size, this number is relatively low. 

Meta has invested billions into AI. Yet, this launch, plagued by privacy issues, reflects poor planning and oversight. 

Real Consequences

Real people are being affected. Imagine asking the AI about a legal issue or a medical problem, believing the conversation is private, only to later find your name, photo, or voice posted publicly.

This is not only embarrassing, but in some cases, it may be dangerous. Some posts include information that could be used for harassment or identity theft.

Lolade

Contributor & AI Expert