A new study has revealed that Instagram, owned by Meta, is actively promoting the spread of self-harm content among teenagers. The study described Meta’s moderation of the platform as “extremely inadequate”: explicit images were not removed, and users engaging with such content were even encouraged to connect with one another.
Researchers from Denmark conducted an experiment in which they created a private self-harm network on Instagram, including fake profiles of individuals as young as 13. Over the course of a month, they shared 85 pieces of self-harm-related content of gradually increasing severity, including disturbing images of blood and razor blades as well as messages encouraging self-harm.
The purpose of the study was to test Meta’s claim that it had significantly improved its processes for removing harmful content using artificial intelligence (AI). The findings were alarming: not a single image was removed during the experiment. By contrast, when Digitalt Ansvar, an organization promoting responsible digital development, built its own simple AI tool to analyze the same content, it automatically identified 38% of the self-harm images and 88% of the most severe ones. This suggests that Instagram has access to technology capable of addressing the issue but has not deployed it effectively.
The inadequate moderation on Instagram also raises concerns about compliance with EU law. According to Digitalt Ansvar, the platform’s failure to address systemic risks related to self-harm content suggests a lack of compliance with the Digital Services Act, which requires large platforms to identify and mitigate such risks to their users.
A survey conducted by the youth mental health charity stem4 found that nearly half of the children and young people questioned had experienced negative effects on their well-being from online bullying and trolling about their physical appearance, ranging from withdrawal and excessive exercise to complete social isolation and self-harming behavior.
In response to the study, a Meta spokesperson stated that content encouraging self-injury goes against the company’s policies and is removed when detected. They said that more than 12 million pieces of suicide- and self-injury-related content had been removed from Instagram in the first half of 2024, 99% of which was taken down proactively. They also highlighted the launch of Instagram Teen Accounts, which place teenagers under stricter controls on sensitive content.
However, the Danish study found that instead of attempting to shut down the self-harm network, Instagram’s algorithm actively helped it expand: after connecting with a single member of the group, the 13-year-old profiles were prompted to become friends with all of its other members. This disturbing finding suggests that Instagram’s algorithm contributes to the formation and spread of self-harm networks.
Ask Hesby Holm, CEO of Digitalt Ansvar, said he was shocked by the results. The researchers had expected that as they shared increasingly severe images, AI or other tools would recognize and flag them; this did not happen. Hesby Holm warned of the potentially severe consequences of failing to moderate self-harm images effectively, emphasizing that the issue is closely associated with suicide and that, without timely intervention from parents, authorities, or support systems, these harmful groups can go unnoticed.
Hesby Holm speculated that Meta may prioritize maintaining high traffic and engagement by neglecting to moderate smaller private groups like the one created for the study. Whether Meta moderates larger groups remains uncertain, but he pointed out that self-harm groups tend to be small.
The findings of this study shed light on Instagram’s failure to address and mitigate the spread of self-harm content among teenagers. The implications are far-reaching: the study not only raises concerns about regulatory compliance but also underlines the urgent need for effective moderation practices to protect vulnerable users. The impact on mental health and well-being cannot be overstated, making it imperative that social media platforms like Instagram prioritize the safety of their users.