Interview with Denis Teyssou, Agence France Presse

- Jochen Spangenberg
- On August 27, 2014
- http://blogs.dw.com/innovation/
How should newsrooms deal with content from Social Networks? What are the current challenges and requirements, especially with regard to verification? As part of our interview series, we talked to Denis Teyssou of the French news agency AFP to find out!
Denis Teyssou works in the Research & Development Medialab of French news agency AFP (Agence France Presse). As a journalist who has worked many years in the field of international news, he is very interested in and knowledgeable about Social Media and its use for news media. One focus of his work is digital forensics, as well as hoaxes and how they spread and develop. Also, Denis has participated in a number of EC co-funded research projects. All this combined makes him an excellent and very knowledgeable source when it comes to dealing with Social Media content and verification.
We are grateful for Denis’ time to talk to us. Here are some extracts of an interview we conducted with Denis earlier in 2014.
Jochen Spangenberg: Which Social Networks do you use professionally on a more or less regular basis?
Denis Teyssou: I use Twitter, YouTube and LinkedIn almost daily, Blogs and Google+ several times a week.
J.S.: Do you differentiate between private and professional use?
D.T.: I mainly have different accounts for private and professional use.
Use of Social Media
J.S.: Can you tell us more about your professional usage of Social Networks?
D.T.: I work primarily in R&D, where I have been dealing a lot with Social Media content, trying to find out how it can be used by news agencies, and how to verify content from Social Networks, including problems that come along with it.
In Social Media, I look primarily for information from the media sector. I am particularly interested in hoaxes, and in the “reverse engineering” of hoax cases – by that I mean how they came about and subsequently spread. I am also interested in finding out why certain stories and topics appeared, and why some of them turned out to be fakes. So my interest is very topic-related.
Talking about hoaxes and related problems: when false or manipulated content is picked up and published by a reputable media organization, and others pass it on, the false information often spreads quickly and causes a lot of damage. One key challenge for media is to avoid this kind of trap.
J.S.: For what do you use content from Social Networks?
D.T.: I try to read what is out there in Social Networks on topics I am interested in, interpreting it and using it for my own work. I mainly share links and material I find interesting via Social Networks, but I do not really have elaborate conversations there, as this is too time-consuming. But obviously I follow lots of people and organizations that are of relevance to me and my work, such as other journalists, media organisations, experts on specific topics and the like.
Verification of Social Media Content
J.S.: What about verification? Do you verify content you find in Social Networks?
D.T.: I always try to check the accuracy of content I read and share, and in my work I test new tools, especially image tools, to help verify content. Some are technical tools; others are social tools like Tweetdeck.
J.S.: How do you verify content?
D.T.: I use specific tools, for example Google Image Search and TinEye for image verification. I check links if they are provided in tweets or posts, and whether they lead to credible information and sources. I also check dates (e.g. of images) and look at the EXIF data and metadata of shared content. Then I check other information about the topic or issue in question elsewhere, for example via selected searches in Tweetdeck.
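The date check Teyssou mentions can be sketched in a few lines. A minimal example, assuming the EXIF `DateTimeOriginal` string has already been extracted with an EXIF reader (the function names and the dates are illustrative, not from the interview):

```python
from datetime import datetime

def exif_datetime(raw: str) -> datetime:
    """Parse the EXIF DateTimeOriginal format ('YYYY:MM:DD HH:MM:SS').
    Extracting the raw string from the image file is assumed to be done
    with an EXIF reader; only the consistency check is sketched here."""
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S")

def plausible_for_event(raw: str, event_day: str) -> bool:
    """Flag an image whose capture date does not match the day of the
    event it supposedly shows -- a classic hoax indicator."""
    return exif_datetime(raw).date().isoformat() == event_day

print(plausible_for_event("2014:08:27 10:12:33", "2014-08-27"))  # True
print(plausible_for_event("2011:03:02 18:05:10", "2014-08-27"))  # False
```

Note that EXIF data can itself be edited or stripped, so a matching date is a supporting signal, never proof.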
J.S.: What useful tools are there for image verification?
D.T.: TinEye, Google Image Search, EXIF data analysers, Tungstene (a tool for verifying images, created by the French startup company Exo Makina), or any tool that provides metadata. One of the big issues is that technical tools only tell you if a picture has been manipulated or published previously somewhere else. They do not tell you if a picture has NOT been manipulated, and is 100% accurate.
J.S.: What useful tools are there for the verification of videos?
D.T.: There, it’s more about looking for frame inconsistencies in an editing tool, or checking whether similar videos or sequences exist elsewhere on YouTube or other video platforms; such footage can be found via keyword or similarity searches. Metadata investigations are also important. These checks are mostly done manually, by humans – for example, checking whether a particular dialect is actually spoken in the region the video supposedly comes from. Location checks are also important.
J.S.: What about the verification of sources or contributors?
D.T.: That’s not really a matter of using software tools; it’s more about journalistic know-how, plus details about Social Media accounts. For example, it is worth checking when an account was set up, who owns and operates it, whether it looks real, and so on. AFP once operated a platform called Citizenside (now under different ownership, ed.). They did things such as consistency checks with their community.
J.S.: And what about the verification of text?
D.T.: I use primarily Google searches with quotation marks as a starting point.
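As a minimal illustration of that starting point, a quoted query can be built programmatically. Wrapping the phrase in quotation marks asks Google for exact-match results; the helper name and the example phrase are illustrative:

```python
from urllib.parse import urlencode

def exact_phrase_search_url(phrase: str) -> str:
    """Build a Google search URL that wraps the phrase in quotation
    marks, forcing an exact-match search on the whole phrase."""
    query = f'"{phrase}"'
    return "https://www.google.com/search?" + urlencode({"q": query})

url = exact_phrase_search_url("example claim to verify")
print(url)
# → https://www.google.com/search?q=%22example+claim+to+verify%22
```

If the exact phrase already appears in older articles, the "new" text may simply be recycled.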
J.S.: What do you find most useful for the verification of Social Media content?
D.T.: I’d probably say Google, as it is widely used and powerful. A problem with TinEye, for example, is that it has a rather limited database of about 2 billion pictures. That is massive compared with professional image databases, but not that much considering the number of photos available on the web. Google, in turn, is much more powerful, as its picture index is much larger, especially on news.
Wishes, Challenges and Requirements
J.S.: If you had a wish concerning Social Media verification and dealing with related content, what would that be?
D.T.: I’d like a tool that finds images that were extracted from videos and shows me if a picture originated in a video. This was the case with the Hugo Chavez fake photo, when an image supposedly showing him undergoing surgery in Cuba was actually taken from a video of somebody else in Mexico (see also El País’ explanation of how the error happened, ed.). Many fakes come from the re-use of photos or videos (including movies) that were published before in a different context and are then presented as something new. These are cross-media fakes, which are difficult to detect because the digital file itself is not necessarily altered. If all digital content had a unique signature and richer metadata, that would be great, but that’s not the case.
The problem is that many hoaxes come from a re-use of previously created videos or images. That is why it would be helpful to have a tool that searches and finds images by similarity that have been taken from somewhere else. These kinds of tools already exist in some big networks but are mainly used so far for rights management, to avoid uploading protected content.
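The similarity search described above can be illustrated with a difference hash (dHash), one of the simplest perceptual-hashing techniques used for near-duplicate detection. A sketch over already-decoded grayscale pixel grids – real systems decode the actual image files and use far more robust descriptors, and all data here is synthetic:

```python
def dhash(pixels):
    """Difference hash: for each row, record whether each pixel is
    brighter than its right neighbour. `pixels` is a 9x8 grayscale
    grid (rows of 9 values yield 8 bits per row, 64 bits total)."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance means near-duplicate."""
    return bin(a ^ b).count("1")

# Two near-identical 9x8 "images" and one distinct one (synthetic data).
img_a = [[(x * 7 + y * 3) % 256 for x in range(9)] for y in range(8)]
img_b = [row[:] for row in img_a]
img_b[0][0] = 9  # tiny local edit, e.g. recompression noise
img_c = [[(255 - x * 11 + y) % 256 for x in range(9)] for y in range(8)]

print(hamming(dhash(img_a), dhash(img_b)))  # small: likely a re-use
print(hamming(dhash(img_a), dhash(img_c)))  # large: a different image
```

Because the hash survives recompression and small edits, matching against an index of previously published images can surface exactly the kind of re-used footage Teyssou describes – provided the index is large enough.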
For this to work one needs a huge database with a large index. We did a benchmark test in the GLOCAL European project, comparing the results of Google’s image analysis with those of TinEye. While Google got it right 49 times out of 50, TinEye only succeeded 10 times out of 50. This suggests that, for pictures, only Google has an index big enough to provide sufficiently accurate results.
I think a tool that does fact-checks over time – analysing what someone has said on a particular issue and discovering inconsistencies – would be interesting, too. That’s what I’d like to use.
J.S.: What do you regard as problems or challenges that require attention in this context?
D.T.: A problem with open communities such as Twitter is that they have become just too big. There is too much information in there. That is why filtering according to specific needs and requirements is so important. The danger is to miss what is important to you, as it is difficult or almost impossible to find everything that is of value and importance, given the current API limitations and the price barrier to accessing Twitter’s firehose. That is why tools are required to do this monitoring and filtering for the user.
J.S.: Anything else when it comes to verifying Social Media content or dealing with it from a news provider’s perspective?
D.T.: Some studies have shown that, so far, most of the content on Social Networks such as Twitter that is newsworthy comes from established, reputable sources, for example media organisations. User-generated content is of particular importance in breaking news. In such cases time is often critical. So verifying content from Social Networks is of particular importance when there’s urgency to confirm or deny an alert. In other cases, journalists can apply traditional journalistic techniques. Of course, having tools that aid in the process of verification would be useful, but they are not as vital as in breaking news.
J.S.: What about other organizations involved in verification? Or tools and services we have not yet talked about?
D.T.: Storyful is doing interesting things in curating social news content. The Washington Post’s Truthteller is also interesting: it checks what people, especially politicians, have said over time and whether their opinions or viewpoints have changed. Trooclick is another attempt to check facts and provide a confidence score.
J.S.: Thank you very much for this interview.
More interviews we conducted as part of our REVEAL work are just a click away: