Instagram users were alarmed after their Reels feeds unexpectedly displayed highly sensitive content. Users took to social media to complain about their feeds being filled with violent and explicit content. Many questioned whether the platform had been hacked as disturbing videos appeared even for those with strict content filters. As frustration grew online, Meta addressed the issue.
In this article, we delve into the details of the issue and the company's statement.
Why was Instagram showing sensitive content?
Instagram users were left surprised and concerned when they noticed an unexpected increase in violent and explicit content appearing in their Reels feeds. Some took to social media to ask whether Instagram had been hacked, as disturbing videos, ranging from graphic violence to adult material, continued to appear even after they had enabled content filters.
Meta has apologized for an error that caused Instagram to recommend violent and explicit content in users’ Reels feeds. On Thursday, the company acknowledged the issue and confirmed it had been resolved. A Meta spokesperson, in a statement shared with CNBC, said, “We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake.”
The issue gained attention after numerous Instagram users reported being shown disturbing videos, even with the highest “Sensitive Content Control” settings enabled. Some described seeing violent attacks, explicit material, and other graphic content unrelated to their browsing habits. Others noted that marking such posts as “Not Interested” did not prevent similar content from appearing.
Instagram typically removes extreme graphic content, such as dismemberment or explicit material, and applies warning labels to sensitive posts. Its moderation system relies on internal technology and a team of reviewers to filter harmful content. Despite these measures, the recent glitch allowed disturbing videos to slip through.
The incident follows recent changes in Meta's content moderation strategy. In January, the company announced it would focus its enforcement on "high-severity violations" such as terrorism and child exploitation, while easing restrictions on less serious content. It also began phasing out some automated content demotions.
Originally reported by Disheeta Maheshwari on ComingSoon.