Crossing the line? Meta’s AI training sparks worldwide privacy concerns

Brazil’s data protection office suspended Meta’s privacy policy after allegations of unauthorized data use for AI training, sparking legal and ethical debates.

[Image: A laptop keyboard and the Meta logo displayed on a phone screen. Jakub Porzycki/NurPhoto via Getty Images]

Last month, global digital giant Meta faced an international debate over its AI training procedures.

The company was accused of using data in Brazil without approval, raising legal and ethical concerns. Here, we explore Meta’s use of Brazilian data for AI training and the potential exploitation of personal data in the rapidly growing field of artificial intelligence. Meta’s reach extends across Facebook, Instagram, and WhatsApp, shaping how billions of people interact and communicate.

The controversy

Brazil’s national data protection authority, the ANPD, has suspended the part of Meta’s privacy policy that allowed the company to train generative AI models on user posts. The decision, backed by a daily fine of R$ 50,000 for non-compliance, cites an insufficient legal basis for data processing, a lack of transparency, and potential violations of user rights, particularly those of children and adolescents.

The company’s spokesperson confirmed the decision: “We decided to suspend genAI features that were previously live in Brazil while we engage with the ANPD to address their questions around genAI.” This suspension affects AI-powered tools already operational in the country, marking a significant step back for Meta’s regional AI ambitions.

A Meta spokesperson told the BBC that the company was “disappointed,” insisting that the policy update “complied with privacy laws and regulations” and that the ban was “a step backward for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil.”

The move puts a dent in Meta’s attempt to build out its AI products in Brazil, a market of more than 200 million people, where the company has about 100 million Facebook users and more than 113 million Instagram users.

The ANPD said it had acted over the “imminent risk of serious and irreparable damage, or difficulty repairing fundamental rights of the affected [account] holders.”

European regulators and activists challenge Meta’s AI data policy

Meta’s policy change was put on hold in Europe after Meta said it had received a request from the Irish Data Protection Commission (DPC) on behalf of other European stakeholders to delay its training of large language models (LLMs).

The UK’s Information Commissioner’s Office (ICO) also requested that Meta pause its plans until it could satisfy concerns it had raised. The policy change would include posts, images, image captions, comments, and stories that users over 18 had shared with a public audience on Facebook and Instagram, but not private messages.

Meta had last month begun notifying users of an upcoming change to its privacy policy, one it said would give it the right to use public content on Facebook and Instagram to train its AI, including content from comments, interactions with companies, status updates, photos, and their associated captions. The company argued that it needed this data to reflect “the diverse languages, geography and cultural references of the people in Europe”.

Noyb, a European campaign group that advocates for digital rights, filed 11 complaints across EU member states, arguing that Meta’s approach contravenes several provisions of the GDPR. One complaint concerns opt-in versus opt-out consent for personal data processing: users should be asked for permission first, rather than having to take action to refuse.

Meta has been criticized for its approach to informing users about changes in their data usage. Facebook and Instagram users in the UK and Europe received notifications or emails outlining how their information would be used for AI starting June 26.

The firm’s legal basis for processing personal data is legitimate interests, and users must exercise their “right to object” to opt out.

To do so, users can click the hyperlinked “right to object” text and explain how the processing would affect them. But unlike other important public messaging shown at the top of users’ feeds, such as prompts to go out and vote, these notifications appeared alongside users’ standard alerts: friends’ birthdays, photo tags, group announcements and more. Even someone who checks their notifications regularly could easily miss it.

Potential implications for AI development and data privacy globally

This case spotlights the importance of transparent data-gathering practices and obtaining user consent. Meta’s plan to use Instagram and Facebook posts without explicit consent runs afoul of Brazil’s LGPD, which requires companies to ask users’ permission before processing their data. The controversy once again puts the focus on data privacy and the need for robust regulatory mechanisms.

It also underlines that consent and transparency around data use have become significant concerns for technology businesses, as more people learn how their data is being used.

The case also raises questions of data sovereignty, as countries increasingly claim the right to decide how their citizens’ data is used and handled. Brazil’s action against Meta shows how seriously it takes the protection of its citizens’ data, and it is not an isolated case but part of a global trend of tightening control over domestic data.

As more countries adopt strict data protection rules, tech businesses must navigate a complex web of regulations, which is likely to drive further localization of data practices.

The dispute also raises ethical questions about how AI is trained, since using personal data without proper authorization poses serious ethical problems. AI development has to stay within the bounds of protecting individual privacy and autonomy. Ethical AI practice will remain one of the most debated topics in AI research, and large corporations will face increasingly strict standards.

Limits imposed by individual countries will inevitably shape the debate about AI progress, bringing out the challenge of balancing innovation with privacy and ethical concerns.

The tech sector must find ways to advance AI technologies while upholding ethical and legal norms. The global reaction to Meta’s practices may shape future AI policy, as the outcome of the legal battle in Brazil could set a precedent for how other countries handle AI regulation and data privacy.

The debate surrounding Meta’s AI training with Brazilian data has significant implications for AI development and privacy worldwide. It underscores the need for transparency, user permission, data sovereignty, and ethical AI methods.

As the technology sector evolves, balancing innovation and individual rights becomes crucial, marking a turning point in AI and data privacy discussions.


ABOUT THE EDITOR

Eric Ezenwa is a writer with a penchant for blending technical content with humor. His hobbies include reading and the occasional game of chess.