
REVEAL | December 4, 2023



Pants on fire: content verification tools and other ways to deal with the fake news problem

Aleksandra Kuczerawy

The implications of the filter bubble are manifold, and people in the field have been discussing the phenomenon for a while. Yet this is the first time that a broad audience has become aware that something unusual is happening. Recent events, such as Brexit and the US elections, indicate that filter bubbles can be particularly worrisome, as they amplify misinformation, or what has become the latest ‘hot topic’: fake news. It is now up to those concerned to decide how to respond. Finding the right solution, however, is not an easy task.

Content verification tools might be one way to address the problem of fake news online. The REVEAL project aims to provide (and integrate) tools for social media content mining and verification, such as Truthnest, the Image Verification Assistant, the Tweet Verification Assistant, the Disturbing Image Detector, and Popularity Prediction.

Soon after the US elections, Eli Pariser, who coined the term ‘filter bubble’, started an open Google document where technologists, academics and media experts have been gathering ideas to address the problem of fake news. It quickly became clear that the problem is extremely complex. For example, how do you create a solution that does not harm legitimate media? How do you distinguish (especially algorithmically) between a fake story and satire? And, most importantly, who should do it?

What’s a platform to do?

Some argue that the problem should be addressed by platform providers such as Facebook, Twitter or Google. The question, however, is whether they have any incentive to do so. After all, if this is what people want, to stay in their bubble and click on stories that confirm their beliefs, then why should platform providers stop them? In a (twisted) way, it is also a form of exercising one’s freedom of expression. The problem starts when people make political decisions based on fake news, or take a shotgun to a local pizza joint to “self-investigate” a story.

There is, of course, the question of a platform’s reputation. That is why Facebook has offered the possibility to report a story as false news since 2015. More recently, Facebook introduced a new feature that asks its users to rate articles’ use of ‘misleading language’. Facebook has not clarified how the feature works, or how the resulting data is used and retained. It is an interesting, albeit not very transparent, approach, since it effectively enlists users to police other users who post fake news on the platform.

The main problem with demanding that platform providers act is that, as in the case of hate speech, they would be given even more power to decide what content is permitted and what is not. And as the recent controversy over the “Napalm Girl” photograph from the Vietnam War showed, this does not always work smoothly. Once again, social media providers become editors, and decision-making about fundamental rights is delegated to private entities, which can easily result in censorship.

From media literacy to algorithmic accountability

It is possible, therefore, that we should instead look for long-term solutions. Any short-term answer risks hindering freedom of expression. As the OSCE Representative on Freedom of the Media, Dunja Mijatović, has pointed out, people lie, and they always have. Taking stringent action, according to Mijatović, “may just cause greater harm to free expression than any lie, no matter how damaging”. Instead, she recommends addressing the problem exclusively through self-regulation, education and literacy, not through new restrictions. Digital literacy in school curricula would be extremely beneficial, and such programmes should start early in the education path. As the AdLit project has shown, a majority of children have trouble critically processing commercial messages and distinguishing advertising from regular stories. Learning about the specifics of online environments could help people tackle numerous threats, such as infringements of privacy, phishing emails or the spread of unverified information.

Another solution that is slowly gaining popularity is algorithmic accountability. It starts from the observation that platforms such as Facebook and Google do not disclose how their algorithms work; they are, in effect, “black boxes”. These algorithms are not entirely neutral and objective, however, but can reflect the biases of their creators. Since they have the power to influence so many aspects of people’s lives, they should be transparent and accountable.

The role of content verification tools

Finally, the problem will not be addressed properly without strong verification tools. A number of tools have been created by developers who wanted to show the big players, such as Facebook, that the problem is not impossible to tackle. Tools such as B.S. Detector follow logic similar to that used in REVEAL: assessing the reputation of the source, as well as different factors of the story itself (for example, sentiment analysis or truthfulness). Interestingly, B.S. Detector was initially attributed to Facebook, a claim that itself turned out to be a fake story, only for the tool to then be blocked by Facebook for alleged security purposes.

It is furthermore important to note that content verification tools should not work as censorship tools, but rather as opt-in mechanisms that remind people that not everything they read online is true. As the creator of B.S. Detector put it, such tools are meant “to encourage people to be suspicious by default”.

The tip of the iceberg

REVEAL provides a variety of content verification tools that examine different modalities of a story. For example, the image forensics tools (e.g. the Image Verification Assistant) help spot manipulated content. Other modules (e.g. Truthnest) allow for assessing the reputation of the author or the truthfulness of a particular post. They could therefore be used to facilitate the discovery of “original” fake content, i.e. content that has been created “from scratch”.
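As a rough illustration of how such modality-specific signals might feed an opt-in warning rather than a blocking decision, consider the toy sketch below. All names, weights and thresholds here are hypothetical and invented for this example; they are not REVEAL’s actual API or scoring model.

```python
from dataclasses import dataclass


@dataclass
class VerificationSignals:
    """Toy container for per-story verification signals (all hypothetical)."""
    source_reputation: float   # 0.0 (unknown/dubious) .. 1.0 (well established)
    author_reputation: float   # 0.0 .. 1.0
    image_manipulation: float  # 0.0 (no forensic flags) .. 1.0 (likely manipulated)


def credibility_score(s: VerificationSignals) -> float:
    """Combine signals into a single 0..1 credibility score.

    The weights are purely illustrative; a real system would learn them
    from labelled data rather than hard-code them.
    """
    positive = 0.5 * s.source_reputation + 0.3 * s.author_reputation
    penalty = 0.2 * s.image_manipulation
    # Clamp to the [0, 1] range.
    return max(0.0, min(1.0, positive + 0.2 - penalty))


def flag_for_review(s: VerificationSignals, threshold: float = 0.5) -> bool:
    """Opt-in warning: flag (never block) stories below the threshold."""
    return credibility_score(s) < threshold
```

The design point is the last function: the output is a suggestion to the reader ("be suspicious"), not a removal decision, which keeps the tool on the media-literacy side of the line drawn above rather than the censorship side.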

REVEAL offers many tools and metrics that could certainly help address the situation. The problem of fake news, however, is so complex that more effort needs to be devoted to it. In REVEAL, we strongly recommend that the European Commission direct more resources to research projects dealing with fake news, filter bubbles and verification tools, as well as to the promotion of media literacy.

Author’s note: this post has also appeared on the CiTiP blog.