
How fake news is destroying democracy

Concerns about the growing phenomenon of “fake news”, its use to sway people’s opinions and the risks it poses to democratic processes, have been mounting since the UK general election earlier this month.

In the run-up to the election, Facebook published a series of newspaper advertisements to help people identify fake news. It has also introduced several other measures, such as making it harder for pages that post false news to buy advertising, using machine learning techniques to detect fraudulent behaviour and taking steps to reduce the prominence of stories with clickbait headlines.

Meanwhile, Twitter has put practices in place to detect spam on its platform and is suspending accounts that show suspicious activity.

Dhruv Ghulati, CEO of artificial intelligence (AI) fact-checking firm Factmata, says: “False claims affect democracy. [They] can detract from the real issues that politicians should work on.”

He points to the Hillary Clinton ‘Pizzagate’ fiasco during last year’s US presidential election as an example, in which false claims alleged that members of the Democratic Party were involved in a human trafficking operation.

“Clinton’s campaign advisers had to spend time and money to say that was wrong,” he says. “That was two to three days at a time where they could have been addressing issues about Syria. [They had] much more important things to deal with. It’s a kind of distraction mechanism.”

Stopping fake news from spreading has proved difficult. In the age of social media, people can share stories instantly without knowing whether they are true, and human fact-checkers cannot keep pace. By the time it becomes clear that a story is false, it has already been read by thousands of people and the damage is done.

“This problem is going to get a lot worse,” says Ghulati. “AI can help us, at least in the first instance, to detect something that might be misleading. Just like you have an Experian credit check or you have a check that this building is this percentage energy-efficient, we think that same thing should exist for content.

“There is research that shows it takes a human being on average 12 hours to debunk a claim. The problem with a human doing this is that they have to go and manually check [stories] out. Humans are also at risk of bias, and being able to catch something in real time at the source is very important.”
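To make the idea of a “credit check for content” more concrete, the sketch below shows, in very broad strokes, how a machine-learned credibility score could be produced from a set of labelled example headlines. It is a toy illustration only, using made-up training data and a hypothetical credibility_score function; it is not Factmata’s system, and production fact-checking models are far more sophisticated.

```python
# Toy sketch of a claim-credibility scorer (illustrative only, not Factmata's system).
# A bag-of-words classifier is trained on a handful of hand-labelled headlines and
# assigns new claims a score between 0 (likely misleading) and 1 (likely credible).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set: 1 = credible, 0 = misleading.
headlines = [
    "Official figures show unemployment fell by 0.2% last quarter",
    "Bank of England holds interest rates at 0.25%",
    "Turnout at the general election was 68.7%, returning officers confirm",
    "You won't BELIEVE what this politician was caught doing",
    "SHOCKING secret cure that doctors don't want you to know",
    "Candidate linked to trafficking ring, anonymous post claims",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features of words and word pairs feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

def credibility_score(claim: str) -> float:
    """Return the model's estimated probability that a claim is credible."""
    return float(model.predict_proba([claim])[0][1])

if __name__ == "__main__":
    for claim in [
        "Report: inflation rose to 2.9% in May, statistics office says",
        "Leaked memo PROVES the election was rigged, insiders say",
    ]:
        print(f"{credibility_score(claim):.2f}  {claim}")
```

Such a score would only be a first-pass signal of the kind Ghulati describes; flagged items would still need review, and real systems combine many more signals than headline wording alone.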
