Meta ordered to stop training its AI on Brazilian personal data

Brazil’s data protection authority (ANPD) has banned Meta from training its artificial intelligence models on Brazilian personal data, citing the “risks of serious damage and difficulty to users.” The decision follows an update to Meta’s privacy policy in May in which the social media giant granted itself permission to use public Facebook, Messenger, and Instagram data from Brazil — including posts, images, and captions — for AI training.

The ban also follows a report published by Human Rights Watch last month, which found that LAION-5B, one of the largest image-caption datasets used to train AI models, contains personal, identifiable photos of Brazilian children, placing them at risk of deepfakes and other forms of exploitation.

As reported by The Associated Press, the ANPD told the country’s official gazette that the policy carries “imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights” of Brazilian users. Brazil is one of Meta’s largest markets, with 102 million Brazilian accounts on Facebook alone, according to the ANPD. The notification published by the agency on Tuesday gives Meta five working days to comply with the order or risk daily fines of 50,000 reais (around $8,808).

Meta said in a statement to the AP that its updated policy “complies with privacy laws and regulations in Brazil,” and that the ruling is “a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil.” While Meta says users can opt out of having their data used to train AI, ANPD says there are “excessive and unjustified obstacles” in place that make it difficult to do so.

Meta received similar pushback from regulators in the EU, which prompted the company to pause plans to train its AI models on European Facebook and Instagram posts. Meta’s updated data collection policies are already in effect in the US, however, which lacks comparable user privacy protections.
