Automated Multimedia Verification @ MediaEval 2016
The MediaEval 2016 workshop took place 20-21 October 2016 at the Netherlands Institute for Sound and Vision in Hilversum, Netherlands, right after ACM Multimedia 2016 in Amsterdam. The REVEAL team were present and won the ‘automated verification of multimedia’ challenge!
The REVEAL team were present in force. We helped organise the Verifying Multimedia Use task, which saw international teams of computer science researchers compete to automatically classify images and videos as fake or real. Teams could use image, video and/or tweet data for each event, and were compared on their classification performance. Researchers had a great time comparing notes and testing their algorithms against each other. Our organising team won the challenge – go team @RevealEU!
Verifying multimedia use task definition
Images and videos can be faked in many ways. The challenge dataset focussed on three main kinds of faking: wrong location, wrong time, and image tampering (photoshopping).
A wrong location occurs when an image or video from somewhere else is misrepresented as being part of a different news event. The example in figure 1 shows an image from an Eagles Of Death Metal concert in Dublin. This image went viral during the Paris terror attacks of November 2015, when Eagles Of Death Metal were playing at the Bataclan theatre in Paris, which was attacked by gunmen. Right subject, wrong location.
A wrong time occurs when an image or video of a real subject is reused from a previous historical event. The example in figure 2 was shared with the claim that it showed a girl starving in the besieged Syrian town of Madaya. In reality, the girl in the photo is named Marina Mazeh, lives in southern Lebanon and is in no way connected to the humanitarian catastrophe unfolding in Madaya. Wrong location, wrong time.
The last case is image tampering, which can be performed in many ways and sometimes leaves detectable digital traces. The original image of the example in figure 3 shows a Canadian Sikh man explaining to people how to fix a laptop. It was photoshopped to add a ‘suicide belt’ and the cover of a Koran, then reposted on social media labelling the man as one of the wanted terrorists from the Paris November 2015 terror attacks. A simple case of image tampering.
The challenge dataset consisted of a training and test set of images and videos. The training set (where labels were provided to challenge teams) had 17 news events, with over 400 images/videos and over 15,000 tweets about them. The test set (where labels were hidden from challenge teams and used to score the results) had 36 news events, with over 120 images/videos and over 2,000 tweets about them.
The task results can be seen in figure 4 below. The top-ranked @RevealEU team scored an F1 of 0.93 and a precision of 0.88, meaning that 88% of the images and videos classified as fake were in fact fake. Full details can be found in the task overview presentation or in the associated paper. These results are likely to be worse when applied ‘in the field’, as opposed to a controlled and well-prepared challenge dataset, but they are nonetheless very promising and show the value social media analytics can add when verifying viral user generated content.
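For readers unfamiliar with these metrics, the scoring can be sketched as follows. This is a minimal illustration, not the official evaluation script; the function name and the 'fake'/'real' label strings are assumptions made for the example.

```python
def precision_recall_f1(y_true, y_pred, positive="fake"):
    """Score a binary fake/real classification.

    Precision: of the items predicted 'fake', how many really were fake.
    Recall: of the truly fake items, how many were caught.
    F1: the harmonic mean of precision and recall.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Toy example: four items, one fake missed by the classifier.
p, r, f1 = precision_recall_f1(
    ["fake", "fake", "real", "fake"],
    ["fake", "real", "real", "fake"],
)
# p = 1.0 (every 'fake' prediction was correct), r ≈ 0.67, f1 = 0.8
```

Note that a high F1 alongside a lower precision (as in the results above) implies the recall was higher still: few fakes slipped through, at the cost of some real items being flagged.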
The work presented in this article is part of the research and development in the REVEAL project (grant agreement 610928), supported by the 7th Framework Program of the European Commission.
Boididou, C., Papadopoulos, S., Middleton, S.E., Nguyen, D.T.D., Riegler, M., Petlund, A., Kompatsiaris, Y., “The VMU Participation @ Verifying Multimedia Use 2016”, MediaEval 2016 Workshop, Hilversum, Netherlands, Oct 2016.