Internal Meta research reveals that teens with body dissatisfaction are exposed to three times more eating disorder-adjacent content on Instagram than their peers, and to far more sensitive material overall, highlighting algorithmic risks and inadequate content screening.
Bengaluru: Teenagers struggling with body dissatisfaction are being exposed to dramatically higher volumes of potentially harmful content on Instagram, according to confidential internal research from Meta that has been exclusively reviewed by Reuters.

The company’s own researchers discovered that teens who frequently reported feeling bad about their bodies after using Instagram encountered three times more eating disorder-adjacent content than their peers who didn’t experience such negative feelings.

Meta conducted a comprehensive study tracking 1,149 teenagers throughout the 2023-2024 academic year, surveying them about their emotional responses to Instagram content while manually analyzing what appeared in their feeds over three months. The results revealed a stark disparity: among the 223 teens who regularly experienced body dissatisfaction, eating disorder-adjacent content comprised 10.5% of their Instagram feed. For other participants, such material accounted for merely 3.3% of what they viewed.
What Was Displayed?
This problematic content featured prominent displays of bodies, specifically chests, buttocks, and thighs, along with explicit judgments about body types and material related to disordered eating and negative body image. While such content is not explicitly banned under Instagram’s rules, parents, teenagers, pediatricians, and external experts have repeatedly warned Meta that it poses serious risks to young users.

The research uncovered an even more concerning trend. Teens reporting the most severe negative feelings weren’t just seeing more body-focused content. They were being exposed to higher volumes of disturbing material across multiple categories that Meta classifies as “mature themes,” “risky behavior,” “harm and cruelty,” and “suffering.” Collectively, this sensitive content represented 27% of what these vulnerable teens encountered on the platform, compared with just 13.6% for peers who hadn’t reported negative emotional experiences.
Sample posts flagged by researchers included images of extremely thin women in lingerie and bikinis, violent fight videos, illustrations of distressed figures captioned with phrases like “how could I ever compare” and “make it all end,” and graphic images including a close-up of a woman’s lacerated neck. Though none violated Instagram’s content policies, Meta’s own researchers found some material disturbing enough to attach sensitive content warnings for their colleagues.
Proverbial Chicken-and-Egg Question
Meta’s researchers acknowledged a critical limitation: their findings couldn’t determine whether Instagram’s algorithm was causing teens to feel worse about themselves, or whether teens already struggling with body image issues were actively seeking out such content. “It is not possible to establish the causal direction of these findings,” the researchers wrote in the confidential document, which was marked “Do not distribute internally or externally without permission.”
However, Jenny Radesky, an Associate Professor of Pediatrics at the University of Michigan who reviewed the study at Reuters’ request, found this explanation insufficient. “This supports the idea that teens with psychological vulnerabilities are being profiled by Instagram and fed more harmful content,” she said, praising the study’s robust methodology while expressing alarm at its findings. “We know that a lot of what people consume on social media comes from the feed, not from search.”
The internal research revealed that Meta has been repeatedly urged to address this issue by multiple stakeholders, including its own Eating Disorder & Body Image Advisory Council. Teens, parents, pediatricians, and external advisors have all warned that excessive exposure to fashion, beauty, and fitness content “may be detrimental to teen well-being, specifically by precipitating or exacerbating feelings of body dissatisfaction.” The document noted that past internal Meta research has consistently demonstrated an association between consuming Instagram fitness and beauty content and reporting worse feelings about one’s body.
Existing Safety Tools Inadequate
Perhaps most revealing, Meta’s study found that its current content screening systems, which are designed to catch violations of platform rules, failed to detect 98.5% of the sensitive content that the company itself believes may be inappropriate for teenagers. The researchers described this finding as “not necessarily surprising,” explaining that Meta had only recently begun developing algorithms specifically designed to identify potentially harmful material that doesn’t technically violate community guidelines.
In a statement, Meta spokesperson Andy Stone defended the research as evidence of the company’s commitment to improvement. “This research is further proof we remain committed to understanding young people’s experiences and using those insights to build safer, more supportive platforms for teens,” Stone said. He pointed to Meta’s recent announcement that it would attempt to align content shown to minors with PG-13 movie standards, and claimed that since July, the company has reduced age-restricted content shown to teenage Instagram users by half.
Meta, however, continues to face mounting legal pressure: the company is defending itself against state and federal investigations into Instagram’s effects on children, as well as civil lawsuits filed by school districts alleging harmful product design and deceptive marketing of its platforms as safe for teens. These legal actions have prominently cited previously leaked internal research from Meta in which the company’s own researchers expressed concern that Instagram’s content recommendation algorithms might harm young users already struggling with body image issues. The newly revealed study adds to a growing body of evidence that Meta has long been aware of the potential harms its platform poses to vulnerable teenage users, even as it publicly maintains its commitment to youth safety.