When was the last time you were shocked by a nasty comment on your favorite social-media site? It may help to know that you’re not alone.
Instagram FB, -0.06% this month launched two features on its platform meant to prevent bullying. One asks users who try to post offensive comments to reconsider before the comment goes live. The other allows individuals to restrict certain users. Restricted users can continue to see and comment on posts from the users who restricted them, but their comments will not be visible to the public.
Instagram, founded in 2010, said in a statement that it developed the artificial-intelligence (AI) features after hearing that young people are “reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life.” Online bullying has resulted in some tragic consequences — in some cases, suicide.
This isn’t the first time Instagram has attempted to address bullying. In June 2017, Instagram launched a feature that would use AI to hide offensive comments from everyone except for the individual who wrote them. Before that, Instagram began allowing users to filter out comments containing specific words or phrases.
The photo-sharing app still relies heavily on human content moderators to remove bullying from Instagram, Stephanie Otway, an Instagram spokeswoman, told MarketWatch. But it hopes that will change. “As our AI improves, we will be able to proactively find and remove even more bullying,” she said.
Instagram wants to make sure young people don’t leave the site
Instagram, which was bought by Facebook in 2012, knows that the public is fickle and, despite the platform’s 1 billion monthly users, wants to make sure it’s not overtaken by newer social-media platforms.
Preventing bullying keeps users happy and on the platform, and keeping users on the platform keeps ad revenue high. But some anti-bullying policies may cost tech giants revenue they don’t want to lose, said Sheri Bauman, a professor at the University of Arizona who once worked as a school teacher and now researches bullying. “I think some of the most effective strategies to prevent bullying may be ones that cause Instagram to lose users and revenue,” she said.
Bauman, who has multiple Instagram accounts herself, says she would like to see some kind of restriction on the number of accounts an individual can have. “Multiple accounts aren’t always harmful, but you definitely shouldn’t be able to make one in someone else’s name,” she said.
Social-media sites find it harder to find and delete fake accounts
If Instagram began limiting the number of accounts individuals can have — or asking individuals to verify their identity with an ID card — it would likely have fewer users on its platform. Advertisers come to Instagram because of the large audience it provides. If that audience shrank, so would ad revenue.
That said, fake accounts and bots are a growing problem for social-media platforms. In the summer of 2018, Twitter TWTR, +2.67% removed millions of fake or suspicious accounts. Instagram regularly removes fake accounts as well. And so does Facebook, which deleted 2.2 billion accounts in the first quarter of 2019.
Bauman is also hopeful Instagram will expand a feature it’s currently testing that hides the number of likes a photo receives from everyone except the user who posted it. Tests began in Canada a few months ago, and Otway told MarketWatch that Instagram is now testing the feature in other countries. It’s unclear how such a change, if implemented widely, would affect Instagram’s popularity and user numbers.
Bullying on social media is a big problem among teens
More than half of teens — 59% — say they have been bullied or harassed online, according to a study released by Pew last September. This bullying comes in a variety of forms; rumors and name-calling were the most common, the Pew report said.
Instagram is one of the most popular social-media platforms among young people. Nearly three-quarters of teens — 72% — use Instagram, according to a report from Pew released in May 2018. Some 85% use YouTube, and 69% use Snapchat SNAP, -0.86% Facebook and Twitter were slightly less popular, with 51% saying they use the former and just under a third saying they use the latter.
One-fifth of 12- to 20-year-olds have experienced bullying on Instagram specifically, according to anti-bullying group Ditch The Label. And in October 2018, The Atlantic released a report detailing the nature of the bullying teens face on Instagram. The report said many users create separate Instagram pages that they use to bully others anonymously. These “hate pages” can make the bullying worse because the individuals affected don’t even know who is behind the harassment.
Not all online platforms are making bullying a priority
Some platforms leave content monitoring to their users. Slack, which recently went public, has faced criticism for its lax policy on harassment. Some have said the inability to block other users allows workplace bullying to go unchecked. Slack is also still building its user base: it currently has about 10 million daily users, up from 4 million in 2016.
A representative from Slack WORK, +2.43% told MarketWatch that the app is “an enterprise software platform for business, not a social media platform. Similar to other technologies used in the workplace, employers are responsible for setting the standards for behavior, and they are in the best position to enforce those standards.”
As Instagram has perhaps found, being proactive in addressing bullying may be the more successful approach. Practically every social network has policies against bullying, and many are tightening them. In April, YouTube announced a review of its harassment policies. The Google-owned GOOG, +0.22% GOOGL, +0.19% platform also began removing comments from most videos that feature minors.
TikTok, founded in 2016, has its own set of anti-bullying initiatives, including the ability to delete comments, make videos private, control who may comment on your posts, and block individuals. TikTok also allows users to report bullying or harassment on another individual’s behalf.
In September 2018, Twitter announced an expansion of its harassment policy that bans “content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target.”
“In 2018, we made more than 70 changes to our product, policies and processes to build a healthier Twitter,” Raki Wane, a Twitter spokeswoman, told MarketWatch. Twitter, which launched in 2006 and now has 330 million monthly users, has received criticism for allowing hate speech and bullying on its site. But experts say such problems are hard to contain once they have become so widespread.
An automated system will prompt Instagram bullies to think twice
In an ideal world, a bully would be asked if he or she was sure about making a nasty comment and would have a light-bulb moment and see the error of their ways. Real life, however, is rarely that simple. Instagram recognizes that it must do something about bullying, Bauman said, “but it’s so complicated.”
Still, anti-bullying advocates are hopeful. “I was very pleased to see what Instagram is doing. Others have tried to stop bullying but haven’t been successful,” Ross Ellis, the CEO of national anti-bullying organization Stomp Out Bullying, told MarketWatch. Ellis noted that a lot of platforms ask teens to report bullying. “The kids do, and then nothing happens,” she said.
What makes her hopeful about Instagram’s new features is that they don’t require much action from the individuals being bullied. “The comment feature is encouraging because it gives kids food for thought” before the comment goes up, Ellis said.
Bauman said she is pleased Instagram is looking at bullying but isn’t sure its new features will be effective at preventing it. “With the comment feature, if it’s true cyberbullying, kids may think ‘hell yes’ when it asks them if they are sure they want to post,” Bauman said. “I think it could prevent a few comments, but I don’t think it’s likely to have a huge impact.”
For its part, Instagram says it has already seen evidence that its new anti-bullying features are effective at preventing harassment. “From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect,” Instagram said in its statement on the feature that warns users against potentially offensive comments.