Instagram has updated its policies to demote “possibly dangerous” content in users’ feeds. The company says the algorithm that determines the order in which posts appear in feeds and in Stories will now give less weight to content that “may involve bullying, hate speech, or may promote violence.”
Most such posts are already forbidden under Instagram’s rules, but the change may affect borderline content or posts that haven’t yet been flagged by the app’s moderation systems. To judge whether a post is likely to break the rules, the company explains that it will “look at things like if a caption is similar to a caption that previously violated our rules.”
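Instagram hasn’t published how this similarity check works, but the general idea can be sketched: compare a new caption against captions that were previously removed, and down-weight the post when the match is close. In the hypothetical Python sketch below, the embedding vectors, threshold, and penalty are all illustrative assumptions, not Instagram’s actual system.

```python
# Illustrative sketch only: Instagram has not described its implementation.
# Assumes caption text has already been converted to embedding vectors by
# some hypothetical text-embedding model.
import math
from typing import List


def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def violation_likelihood(caption_embedding: List[float],
                         violating_embeddings: List[List[float]]) -> float:
    """Score a caption by its closest match to previously removed captions."""
    if not violating_embeddings:
        return 0.0
    return max(cosine_similarity(caption_embedding, v) for v in violating_embeddings)


def ranking_weight(caption_embedding: List[float],
                   violating_embeddings: List[List[float]],
                   threshold: float = 0.8,
                   penalty: float = 0.5) -> float:
    """Down-weight (rather than remove) a post whose caption resembles past violations."""
    score = violation_likelihood(caption_embedding, violating_embeddings)
    return penalty if score >= threshold else 1.0
```

The key design choice the policy describes is that a close match only lowers the post’s ranking weight; removal still requires an actual rule violation.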
Instagram has already tried to keep potentially offensive material out of public-facing parts of the app, such as Explore, but until now it has not changed how that material appears to people who follow the accounts posting it. With the latest update, even a poster’s followers will see significantly less content that is “similar” to posts that have already been removed. A Meta spokesperson said that “possibly dangerous” posts may still be removed later if they are found to violate the platform’s rules.
The move follows a similar change Instagram made in 2020, which down-ranked the accounts of users who posted false information that independent fact-checkers had debunked. Unlike that change, Instagram says the new policy applies only to individual posts and “not accounts overall.”
Instagram also says it will now factor a user’s own reporting history into how content is ordered: if its systems predict that a user is likely to report a post, based on what that user has reported in the past, the post will appear lower in that user’s feed.
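To make that mechanism concrete, here is a hypothetical Python sketch of personalized down-ranking. The per-category report-probability estimate, the `Post` fields, and the scoring formula are all assumptions chosen for illustration; Instagram has not said how its prediction actually works.

```python
# Illustrative sketch only; the report-probability model and feed scorer
# are hypothetical stand-ins, not Instagram's ranking system.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Post:
    post_id: str
    base_score: float   # relevance score from the main ranking model
    category: str       # e.g. "fitness", "news", "memes"


def predicted_report_probability(report_history: Dict[str, int], post: Post) -> float:
    """Crude per-category estimate of how likely this user is to report the post,
    based on how often they reported posts in the same category before."""
    total_reports = sum(report_history.values()) or 1
    return report_history.get(post.category, 0) / total_reports


def rank_feed(posts: List[Post], report_history: Dict[str, int]) -> List[Post]:
    """Order posts for one user, pushing likely-to-be-reported posts lower."""
    def personalized_score(post: Post) -> float:
        p_report = predicted_report_probability(report_history, post)
        return post.base_score * (1.0 - p_report)  # discount, don't remove

    return sorted(posts, key=personalized_score, reverse=True)
```

Because the adjustment is driven by each user’s own reporting behavior, the same post can rank differently for different people, which matches the personalized framing in Instagram’s announcement.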