
REVEAL | March 27, 2023


The Facts of Flight MH17 – dealing with UGC in news

Jochen Spangenberg

Every week, we provide links from around the web that discuss issues regarding the verification of content from Social Media. This week we are looking at the downing of flight MH17 on 17 July 2014 and the challenges of handling user-generated content in the journalistic reporting process.

On 17 July 2014 the world was shocked by a grim event: the shooting down of Malaysia Airlines flight MH17 over Eastern Ukraine. The Boeing 777, carrying 298 passengers and crew, was en route from Amsterdam to Kuala Lumpur.

On the Media's Breaking News Consumer's Handbook

‘On the Media’s’ Consumer’s Guide to Breaking News. Source here

Shortly after flight MH17 disappeared from radar screens, Social Networks started buzzing. Many tweets and posts speculated about the cause of the disaster, who was responsible and what political consequences it would have. Not surprisingly, a lot of false or deliberately misleading information, propaganda and hoaxes also made the rounds. News organisations all over the world were confronted with the situation and had to make sense of what had really happened (see for example this article by the New York Times, or Radio Free Europe / Radio Liberty's compilation of what Russian media wrote about MH17), separating facts from rumours, plain misinformation and outright exploitation of the situation (see, as an example of the latter, this BBC article on how the MH17 disaster was exploited by spammers).

Not everyone always got things right, and speculations ran high, especially in the initial hours after the fatal incident.

We take this rather sad occasion to look at the event in the context of newsgathering, information provision and verification – all aspects that are of significance for what we do in REVEAL as part of the journalism / media scenario.

Being alerted

Any breaking (unforeseen or unscheduled) news event starts with journalists or news organisations being alerted that something newsworthy is happening. In addition to established channels and sources such as news agencies and their own reporters ‘on the ground’, Social Networks are a valuable source for detecting events. It can thus be very beneficial to have access to and knowledge of tools and services that detect trends or trending topics, such as Newswhip, Hootsuite, Tweetdeck, Google Trends, Twitter’s trending topics and/or Trendsmap, to name but a few.
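At their core, trend-detection services rank terms that suddenly spike in a stream of posts. The following is a minimal sketch of that idea in pure Python, assuming a simple list of post texts as input; real tools compare counts against a historical baseline rather than ranking raw frequency, and the post stream below is entirely hypothetical.

```python
from collections import Counter

def trending_hashtags(posts, top_n=3):
    """Rank hashtags by how often they occur in a batch of post texts.

    A real trend detector compares current counts against a historical
    baseline; this sketch only ranks raw frequency in the given window.
    """
    counts = Counter(
        word.lower()
        for text in posts
        for word in text.split()
        if word.startswith("#") and len(word) > 1
    )
    return counts.most_common(top_n)

# Hypothetical post stream for illustration only
posts = [
    "Reports of a crash near Hrabove #MH17",
    "#MH17 disappeared from radar over eastern Ukraine",
    "Awaiting confirmation #MH17 #breaking",
    "Unrelated chatter #football",
]
print(trending_hashtags(posts))  # '#mh17' dominates the window
```

A newsroom dashboard would run such a count over a sliding time window and alert when a term's frequency far exceeds its recent average.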

Once a journalist or media organisation has detected a news event (in this case, that flight MH17 had disappeared from radar, either as the result of a crash or a shoot-down), the next step is assessing the credibility of posted information and deciding how to deal with it. Depending on the type of media, various tools can help.

Verifying images, video and location


Photoshopped image of supposed flight MH17 as shared via Twitter. More & source here.

For assessing the credibility of images posted on the web, the following tools (again, only a selection) have proven helpful: Google (Reverse) Image Search, TinEye, FotoForensics and Jeffrey’s Exif Viewer. Checking images before they are used in journalistic reporting is vital: otherwise they are likely to spread fast and far, damaging the credibility and reputation of the provider and distorting the opinion-forming process.
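Reverse image search services find near-duplicates of a picture by comparing compact perceptual fingerprints rather than raw pixels. Below is a minimal sketch of one such fingerprint, the "average hash", in pure Python. It assumes the image has already been reduced to a tiny grayscale matrix (real tools resize to something like 8x8 first); the thumbnails are made-up values for illustration.

```python
def average_hash(pixels):
    """Compute a simple perceptual 'average hash' of a grayscale image.

    `pixels` is a 2-D list of brightness values (0-255), assumed to be a
    small resized thumbnail. Each pixel becomes one bit: 1 if it is
    brighter than the mean brightness, else 0.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Hypothetical 4x4 thumbnails: the second is a slightly brightened copy
# of the first, the third is an unrelated pattern.
original = [[10, 200, 30, 220],
            [15, 210, 25, 230],
            [12, 205, 35, 225],
            [11, 215, 28, 235]]
brightened = [[p + 10 for p in row] for row in original]
unrelated = [[200, 10, 220, 30],
             [210, 15, 230, 25],
             [205, 12, 225, 35],
             [215, 11, 235, 28]]

h0, h1, h2 = (average_hash(m) for m in (original, brightened, unrelated))
print(hamming_distance(h0, h1))  # near-duplicate: distance 0
print(hamming_distance(h0, h2))  # different image: large distance
```

Because the hash is relative to the image's own mean brightness, simple edits such as brightening or recompression leave it largely unchanged, which is what lets such services recognise a recycled photo even after minor manipulation.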

Determining the accuracy of a video can be slightly more complex (here is a good example of a video that does not portray what it claims at the stated time, place and context). It is advisable to combine techniques and tools: services that help determine the location of an event (such as Google Earth, Google Maps / Street View and Geofeedia), analysis of individual key frames, and looking out for clues such as road signs, number plates and the like. If a video includes speech, checking the accents or dialects spoken is also advisable. The WolframAlpha search engine is another useful (and free) aid: for example, it can show what the weather was like in a particular region at a particular time.
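One concrete check behind such geolocation work is measuring whether a claimed filming location is plausibly close to a landmark identified in the footage. A minimal pure-Python sketch using the standard haversine great-circle formula follows; the coordinate pairs are illustrative placeholders, not actual verified locations.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative coordinates only: a claimed upload location vs. a landmark
# spotted in the footage (e.g. matched via Google Earth imagery).
claimed = (48.14, 38.64)
landmark = (48.05, 38.77)
dist = haversine_km(*claimed, *landmark)
print(f"{dist:.1f} km apart")
```

If the distance is tens or hundreds of kilometres, the claimed location deserves extra scrutiny; if it is small, that is one supporting (never conclusive) signal.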

Another very useful (and new) resource for verifying user-generated videos is the Citizen Evidence Lab, launched in July 2014 by Amnesty International. It offers step-by-step guidance on verifying user-generated videos and other material, incorporating the YouTube Data Viewer. The service was primarily developed to support human rights activists; while it aims at sharing best practices, techniques and tools for authenticating user-generated content in human rights defence, it is applicable in other areas too.

A different way of verifying Social Media content is to outsource the whole process to an external entity. This is what Storyful offers its clients, among them Al Jazeera, Sky News, France 24 and the New York Times. Storyful, in its own words “the world’s first social news agency”, monitors and unearths information circulating on the web, verifies it (also using some of the tools and techniques mentioned above) and ultimately acquires or clears the rights for subsequent publication and use by its customers.

Role of the journalist

Above all, it is the role of the journalist to filter, assess and ultimately distribute content that has been thoroughly checked and verified. While tools such as those mentioned above can greatly assist in the verification process, it is (or should be) ultimately up to the journalist and their professional judgment to decide what gets published. This includes sticking to traditional skills and processes such as contacting sources directly, getting confirmation from at least a second source, clearing copyrights, and properly crediting the source (without putting them in danger).

That is also why journalists (especially news journalists) need to acquire new skills and get accustomed to new working practices. These include:

  • being familiar with various Social Networks and platforms
  • setting up lists of (trusted, reputable, known) networks of contacts for particular topics or geographic regions, and having these in place before news breaks
  • using crowdsourcing techniques to obtain or verify information
  • matching eyewitness reports with reports found on Social Networks, or unearthing contradicting reports
  • performing credibility checks of contributors (investigating their history, networks, peculiarities etc)
  • finding ways of quickly getting in touch with sources, e.g. by linking posts to telephone numbers or email addresses; so-called ‘Who Is’ tools and people directories (e.g. Pipl) can be of great help here
  • abiding by ethical rules and standards (these need to be established and implemented by media organisations that use user-generated content in their news reporting, and followed by staff)
  • making sure that sources and contributors are not put in danger as a result of their information being used or their identity or location being revealed.
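Parts of the contributor-credibility check in the list above can be supported by simple triage heuristics run over account metadata. The sketch below is a hypothetical scoring scheme: the field names, thresholds and weights are assumptions for illustration, not any platform's actual API or an established credibility metric, and such a score can only prioritise manual checks, never replace them.

```python
def credibility_signals(profile):
    """Return (signal, passed) checks for a contributor's account metadata.

    `profile` is a plain dict; all field names and thresholds here are
    illustrative assumptions, not any real platform's API.
    """
    return [
        ("account older than one year", profile.get("account_age_days", 0) > 365),
        ("has a profile description", bool(profile.get("description"))),
        ("location stated", bool(profile.get("location"))),
        ("history of posts before the event", profile.get("prior_posts", 0) > 50),
        ("followed by trusted contacts", profile.get("trusted_followers", 0) > 0),
    ]

def credibility_score(profile):
    """Fraction of signals passed -- a triage aid, never a verdict."""
    checks = credibility_signals(profile)
    return sum(passed for _, passed in checks) / len(checks)

# Hypothetical eyewitness account
witness = {"account_age_days": 900, "description": "Local resident",
           "location": "Donetsk", "prior_posts": 340, "trusted_followers": 2}
print(credibility_score(witness))  # → 1.0
```

A brand-new, empty account scores 0.0; a long-standing account with a coherent history scores higher. The score only flags which contributors to investigate first, in line with the manual checks listed above.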

Once a story partly or mostly based on user-generated content is published, it should be clearly stated what level of verification has been applied. If new facts emerge, or material is deleted while a story is updated (e.g. because it proved to be wrong), corrections should be made on all platforms used for dissemination.

What to make of all this?

User-generated content has made its way into traditional news reporting. It offers many rewards, such as speed and access to information that would otherwise be unavailable. However, there are also dangers involved.

Being fast at the expense of accuracy can be very damaging, as the trust and credibility of the provider are at stake. That is why journalists need to be trained to get the best out of user-generated content, knowing which tools exist and how to use them. They also need to be aware of the shortcomings and limitations of the available tools, and of the interests and skills of those who post and spread deliberately misleading information.

The REVEAL research and development work tackles some of the challenges that remain in the verification process. This includes both improving individual features and modalities (such as better algorithms for image similarity retrieval, to name but one field of research) and developing entirely new concepts and aids for verification. The focus of our project work is on the ‘three Cs’: Content, Context and Contributor. We will report on research results as they become available; you can follow us via this website or on Twitter under @RevealEU.

Author’s note

Here we have provided only a small selection of the tools, services and practices that can aid the analysis and verification of content on Social Networks. Readers who want to find out more are referred to the resources listed below, as well as other posts on this blog, in particular our weekly ‘Verify this Week’ column, where we review more tools, services and working practices of interest.

Nevertheless, we would be delighted to receive further tips, clues and comments regarding the topic. 

Selection of useful resources / additional reading
