REVEAL | March 24, 2017

Automated Multimedia Verification @ MediaEval 2016
Stuart Middleton

The MediaEval 2016 workshop took place 20-21 October 2016 at the Netherlands Institute for Sound and Vision in Hilversum, Netherlands, right after ACM Multimedia 2016 in Amsterdam. The REVEAL team were present and won the ‘automated verification of multimedia’ challenge!

The REVEAL team were present in force at the latest MediaEval 2016 event. We helped organise the Verifying Multimedia Use task, which saw international teams of computer science researchers compete to automatically classify images and videos as fake or real. Teams could use image, video and/or tweet data for each event, and were compared on their classification performance. Researchers had a great time comparing notes and testing their algorithms against each other. Our organising team won the challenge – go team @RevealEU!

Verifying multimedia use task definition

Images and videos can be faked in many ways. The challenge dataset focussed on three main areas of faking: wrong location, wrong time and photoshopping.

Wrong Location

A wrong location occurs when an image or video from somewhere else is misrepresented as being part of a different news event. The example in figure 1 shows an image from an Eagles Of Death Metal concert in Dublin. This image went viral during the Paris November 2015 terror attacks, where the Eagles Of Death Metal were playing in the Paris Bataclan theatre when it was attacked by gunmen. Right subject, wrong location.

Figure 1. Eagles Of Death Metal concert image in Dublin which was widely misrepresented as the Bataclan theatre in Paris. Source @EODMofficial

Wrong Time

A wrong time occurs when an image or video of a real subject is reused from a previous historical event. The example in figure 2 was used claiming to show a girl starving in the besieged Syrian town of Madaya. In reality, the girl in the photo is named Marina Mazeh, lives in southern Lebanon and is in no way connected to the humanitarian catastrophe unfolding in Madaya. Wrong location, wrong time.

Figure 2. Debunked image of a starving ‘Syrian’ girl selling gum that went viral. Source France24

Photoshopping

The last case is image tampering, which can be performed in many ways and sometimes leaves digital traces that can be detected. The original image in figure 3 shows a Canadian Sikh man explaining how to fix a laptop. It was photoshopped to add a ‘suicide belt’ and the cover of a Koran, then reposted on social media labelling the man as one of the wanted terrorists from the Paris November 2015 terror attacks. A simple case of image tampering.

Figure 3. Debunked photoshopped image of a ‘terrorist’ that went viral. Source @GrasswireFacts

Challenge dataset

The challenge dataset consisted of a training and test set of images and videos. The training set (where labels were provided to challenge teams) had 17 news events, with over 400 images/videos and over 15,000 tweets about them. The test set (where labels were hidden from challenge teams and used to score the results) had 36 news events, with over 120 images/videos and over 2,000 tweets about them.

Results

The task results can be seen in figure 4 below. The top-scoring @RevealEU team achieved an F1 score of 0.93 and a precision of 0.88, meaning that 88% of the images and videos the system flagged as fake were indeed fake. Full details can be found in the task overview presentation or in the associated paper [1]. These results are likely to be worse when applied ‘in the field’, as opposed to on a controlled and well-prepared challenge dataset, but they are nonetheless very promising and show the value social media analytics can add when verifying viral user-generated content.
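To make the reported scores concrete, here is a small worked example of how precision, recall and F1 are computed for a fake/real classifier. The counts below are illustrative assumptions chosen to reproduce scores close to those reported, not the actual challenge figures:

```python
# Illustrative counts (NOT the actual challenge data):
tp = 88  # fake items correctly flagged as fake (true positives)
fp = 12  # real items wrongly flagged as fake (false positives)
fn = 2   # fake items the classifier missed (false negatives)

# Precision: of everything flagged as fake, how much really was fake?
precision = tp / (tp + fp)

# Recall: of all the fakes, how many were caught?
recall = tp / (tp + fn)

# F1: harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

With these counts, precision comes out at 0.88 and F1 at roughly 0.93, which illustrates how a team can score higher on F1 than on precision when its recall is very high.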

Figure 4. MediaEval 2016 Verifying Multimedia Use challenge results for all teams

Acknowledgement
The work presented in this article is part of the research and development in the REVEAL project (grant agreement 610928), supported by the Seventh Framework Programme (FP7) of the European Commission.

References
[1] Boididou, C., Papadopoulos, S., Middleton, S.E., Nguyen, D.T.D., Riegler, M., Petlund, A., Kompatsiaris, Y., “The VMU Participation @ Verifying Multimedia Use 2016”, MediaEval 2016 Workshop, Hilversum, Netherlands, October 2016.