
B(IA)S

Meta Is Putting Us in the Dataset. But Can We Say No? (Maybe)

by Alessandro Mancini

New technologies are changing our world forever. The question is: for the better or for the worse?
What are the risks, the shadows, the dangers?

Starting May 27, every photo of your dog, every comment under a meme, every nostalgic post published on Facebook or Instagram could end up in the data pool used to train Meta’s artificial intelligence. Not to enhance your feed or suggest friends, but to power the company’s generative models. Like ChatGPT, but with a Menlo Park label.

This development was announced via an update to the privacy policy. From that date, Meta will begin using European users’ public content—photos, posts, captions, comments—to “develop and improve” its artificial intelligence technologies. All without any explicit consent.

Meta claims this practice is based on “legitimate interest,” one of the legal grounds provided by the GDPR. However, using personal data to train AI models raises significant concerns: once data has been used to train a model, it is difficult to guarantee that it can be completely removed, or that it won’t resurface in unintended ways.

To exercise the right to object, users must fill out an online form and provide a reason for their refusal. The link will be sent via email or in-app notification, although the form is already available on Meta’s official support page. The right to object is based on Article 21 of the GDPR, but take note: it must be exercised before the system is activated; otherwise, it will only cover content posted after May 27.

Although the tech giant claims that any reason will be accepted, the requirement to provide an explanation may discourage many users from completing the process. Moreover, the form is not easily accessible, making the process less intuitive and transparent.

Meta has also stated that it will not use private messages or content from accounts belonging to users under 18. However, anything that is publicly visible or shared with a broad audience is potentially usable.

Max Schrems, founder of the nonprofit NOYB (None of Your Business), harshly criticized Meta’s approach, stating: “Meta claims that anything ‘public’ can be used to train any kind of AI. This also includes posts about illnesses, sexual orientation, or political views. People don’t expect their posts to be used in this way.”

The issue raises broader questions about the transparency of large tech companies’ practices and the protection of personal data in the age of artificial intelligence. It is essential that users are properly informed and maintain control over their data.

May 27 is just around the corner. Meta seems to have already made the choice for us, but it’s important to know that there is a way to make our voices heard and say “No.”


Alessandro Mancini

A graduate in Publishing and Writing from La Sapienza University in Rome, he is a freelance journalist, content creator and social media manager. Between 2018 and 2020, he was editorial director of Artwave.it, the online magazine specialising in contemporary art and culture that he founded in 2016. He writes and speaks mainly about contemporary art, labour, inequality and social rights.
