Facebook under fire over violence, fake news handling

Facebook came under scrutiny this week over its monitoring of extreme material following a live-streamed murder, even as the company mounted a crackdown on fake news.

On Sunday morning, a Cleveland man live-streamed a video titled “Easter day slaughter”, in which he murdered a 74-year-old man. The video, and various copies, were quickly viewed and shared by users before being removed several hours later.

Facebook’s vice-president of global operations, Justin Osofsky, said in a blog post that the original video and subsequent uploads were removed within 23 minutes of being reported, two hours and 13 minutes after the first upload.

He also addressed concerns about the ongoing monitoring of content and outlined plans for an improved review process.

“In addition to improving our reporting flows, we are constantly exploring ways that new technologies can help us make sure Facebook is a safe environment. Artificial intelligence, for example, plays an important part in this work, helping us prevent the videos from being re-shared in their entirety,” Mr Osofsky said.
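Facebook has not detailed how this matching works, but re-upload detection is commonly built on content fingerprinting: each new upload is hashed and checked against a blocklist of material already judged to violate policy. The Python sketch below is purely illustrative; the function names and exact-match logic are assumptions, and production systems rely on perceptual fingerprints that survive re-encoding and trimming.

```python
import hashlib

# Hashes of videos already judged to violate policy (illustrative only).
blocked_hashes = set()

def fingerprint(video_bytes: bytes) -> str:
    """Exact-match fingerprint. Real systems use perceptual hashes
    that still match after re-encoding, cropping or trimming."""
    return hashlib.sha256(video_bytes).hexdigest()

def block_video(video_bytes: bytes) -> None:
    """Record a removed video so identical re-uploads can be stopped."""
    blocked_hashes.add(fingerprint(video_bytes))

def is_reupload(video_bytes: bytes) -> bool:
    """Check a new upload against the blocklist before it goes live."""
    return fingerprint(video_bytes) in blocked_hashes

# Once the original is removed, byte-identical copies are caught on upload.
original = b"...raw video bytes..."
block_video(original)
assert is_reupload(original)
```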

Justin Osofsky spoke out against the live-streamed murder in a blog post. Picture: LinkedIn

While Facebook’s content policy prohibits depictions of violence and content that promotes violence, the low barrier to entry for live video and content sharing makes monitoring a difficult task.

The social media giant has a dedicated content-monitoring team, which is trained to identify and remove content that violates these guidelines. The team assesses the context of the material before making a decision. Facebook is also developing artificial intelligence to combat the issue.

Last year, the reporting process came under fire for censorship and ineffectiveness following the removal of an iconic Vietnam War photo from the site.

Mark Zuckerberg, Facebook founder and CEO, acknowledged that the system was not completely effective, stating: “We have a lot more to do here”.

Facebook also launched an offensive on fake news this week, removing tens of thousands of fake accounts.

New software implemented in the past six months was used to identify and remove an active spam ring on the site. The software found the accounts were participating in “illegitimate activity”, including inauthentic likes and befriending people to distribute spam.

While Facebook gave no specific figure, tens of thousands of accounts are believed to have been removed.

The fake accounts, which mostly originated in Bangladesh, Indonesia and Saudi Arabia, were created individually and disguised by proxies to make them harder to detect.
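Facebook has not described how the software works, but the behaviour it reportedly flagged, such as bursts of inauthentic likes and friend requests from coordinated accounts, lends itself to simple rate-based rules. The sketch below is a hypothetical illustration: the thresholds and field names are invented, and a real system would combine many weaker signals, including the proxy patterns mentioned above.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    account_id: str
    likes_per_day: int
    friend_requests_per_day: int

# Invented thresholds for illustration; a production system would learn
# these signals rather than hard-code them.
MAX_LIKES_PER_DAY = 500
MAX_FRIEND_REQUESTS_PER_DAY = 100

def looks_inauthentic(a: AccountActivity) -> bool:
    """Flag accounts whose activity rates exceed plausible human limits."""
    return (a.likes_per_day > MAX_LIKES_PER_DAY
            or a.friend_requests_per_day > MAX_FRIEND_REQUESTS_PER_DAY)

accounts = [
    AccountActivity("ring_member_1", likes_per_day=2400, friend_requests_per_day=300),
    AccountActivity("ordinary_user", likes_per_day=12, friend_requests_per_day=2),
]

suspects = [a.account_id for a in accounts if looks_inauthentic(a)]
print(suspects)  # ['ring_member_1']
```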

In a blog post, Shabnam Shaik, a technical program manager on Facebook’s Protect and Care Team, explained that page owners and advertisers would see little to no change in their metrics.

“As we remove the rest of the inauthentic likes, we expect that 99 per cent of impacted pages with more than 10,000 likes will see a drop of less than 3 per cent. None of these likes were the result of paid ads from the affected pages,” Mr Shaik said.

Earlier in the week, Facebook had suspended 30,000 accounts in France to combat the spread of fake news and misinformation during the election period.

However, many users believed more needed to be done to improve the reporting process at the user level, rather than relying on mass removals alone.
