
REVEAL | September 1, 2016


Multimedia Forensic Investigations

Jochen Spangenberg

In this contribution, Markos Zampoglou and Symeon Papadopoulos of the Centre for Research and Technology Hellas (CERTH-ITI) explore the present and future of multimedia forensic investigation, and take the challenge of detecting tampered images out into the real world.

 

Digital image analysis

“A picture is worth a thousand words”, the saying goes – but it does not tell us how many of those words are actually true. Pictures have accompanied news stories for as long as photography has existed, and the first confirmed cases of manipulated photographs are almost as old. But with the spread of consumer capture devices and user-friendly photo editing tools, modifying photographic content has ceased to be the domain of a select few artists and has become a game any of us can play. And with the advent of social media and grassroots journalism, spreading false stories – and having the evidence to prove them wrong – is now easier than ever.

Within REVEAL, we have set out to explore and create tools to verify multimedia content in social media and the Web, among other research challenges. This includes verifying the truthfulness of images. Some of our current work for REVEAL at ITI/CERTH focuses on analyzing digital images for traces of tampering, not by using any contextual or high-level information, but exclusively by examining their pixel content. This sub-field of image forensics narrows down the investigation task by leaving out cases in which the fraud was committed on the semantic level (such as when reusing older images in different, irrelevant contexts, or setting up photo shoots and presenting them as on-site reports), and solely focuses on images that had their content altered using processing software.

The state-of-the-art

From the viewpoint of an image forensics expert, we can classify such forgeries into two broad groups: copy-moving and splicing. A copy-move attack occurs when we take a part of an image and replicate it within the same image, either to add content (like making a crowd seem larger) or to remove it (such as cloning the background over a person to “erase” them). Splicing occurs when we take parts of one image and place them in another, creating the impression that additional elements were present in the scene. While other image processing operations, such as brightness/contrast adjustments, can also alter the image content, we can generally assume that these are not meant to alter the meaning of the image (the O.J. Simpson case notwithstanding).

Detecting copy-move attacks is essentially a within-image similarity search task, in which we compare different parts of the image with each other and look for matches. Such approaches have the advantage of being reasonably robust to subsequent image alterations (such as rescaling, blurring or resaving it at a poorer quality). On the other hand, research is still ongoing on how to make the search faster without sacrificing performance – for a large-scale task such as social media verification, the cumulative demands of the search could easily get out of control.
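As a toy illustration of this within-image search, the sketch below (plain Python/NumPy, written for this article and not REVEAL's actual tooling) hashes every fixed-size block of a grayscale image and reports non-overlapping exact duplicates. Real detectors match robust features (such as the SURF keypoints mentioned in Figure 1, or quantized DCT coefficients) so that matches survive rescaling and recompression:

```python
import numpy as np

def detect_copy_move(img, block=8):
    """Naive copy-move detector: hash every block x block patch and
    report pairs of identical, non-overlapping blocks. A sketch only --
    exact matching breaks under any recompression or rescaling."""
    h, w = img.shape
    seen = {}
    matches = []
    for y in range(0, h - block + 1):
        for x in range(0, w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                py, px = seen[key]
                # ignore trivially overlapping self-matches
                if abs(py - y) >= block or abs(px - x) >= block:
                    matches.append(((py, px), (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Tiny synthetic example: clone one region onto another
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
img[20:28, 20:28] = img[2:10, 2:10]      # the "copy-move" forgery
print(detect_copy_move(img))             # reports the cloned block pair
```

Production systems replace the exact-hash lookup with approximate nearest-neighbour search over feature descriptors, which is precisely where the speed-versus-performance trade-off discussed above comes in.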

Figure 1: The infamous “Iran missile test” copy-move forgery. A SURF-based search on any copy of the image easily catches the fraud after a few seconds’ processing.


Splicing detection is based on a different premise: we assume that when a part of one image is placed in another, it carries with it some distinct pixel patterns that differ from those of the recipient image. For example, each camera device leaves its own signature trace of fine noise (called Photo Response Non-Uniformity – PRNU noise) on the image pixel values, while each JPEG compression also imposes patterns on an image which are telltale of specific compression parameter combinations. By seeking inconsistencies in these patterns (invisible to the human eye) within an image, we can identify regions that seem to differ from the rest – it then falls to the investigator to conclude whether this may be due to the image having been tampered with.
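A minimal sketch of the noise-inconsistency idea follows (a toy stand-in for illustration, not the actual algorithm of [1]): estimate the local noise level in each block from a simple high-pass residual, and look for blocks whose estimate deviates strongly from the rest of the image.

```python
import numpy as np

def noise_map(img, block=16):
    """Per-block noise estimate from a crude high-pass residual
    (each pixel minus the mean of its 4 neighbours). A spliced-in
    region from a different source often shows a different level."""
    img = img.astype(np.float64)
    pad = np.pad(img, 1, mode="edge")
    smooth = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
              pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    residual = img - smooth
    h, w = img.shape
    bh, bw = h // block, w // block
    # median absolute residual per block as a robust noise estimate
    blocks = residual[:bh * block, :bw * block].reshape(bh, block, bw, block)
    return np.median(np.abs(blocks), axis=(1, 3))

# Synthetic "splice": one quadrant carries much stronger sensor noise
rng = np.random.default_rng(1)
img = 128 + rng.normal(0, 2, (64, 64))
img[:32, :32] += rng.normal(0, 12, (32, 32))   # spliced, noisier region
nm = noise_map(img)
print(nm[0, 0] > 2 * nm[3, 3])   # the spliced block stands out
```

The algorithms cited in Figure 2 follow the same logic with far more sophisticated models of camera noise and JPEG compression traces.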

Figure 2: Using one noise-detecting algorithm [1] and two JPEG-based ones [2], [3] to analyze an image from a well-established experimental dataset. The forgery is easily detected and localized.


Out of the lab and into the Wild

Recent research has provided us with algorithms that yield very promising results on the currently available experimental datasets. But what happens when we take these algorithms for applications out in the real world? Most splicing detection algorithms are very sensitive to subsequent image alterations – it is part of their theoretical foundations that, if the image is resaved as JPEG, or rescaled, or otherwise filtered, cropped or brightness-adjusted, these fine traces will most likely disappear forever. The problem is that we know that images circulating on the Web are more likely than not to have been altered in such a way at least once.

In our work for REVEAL, we began by investigating what Twitter and Facebook do to images uploaded on their respective platforms. The results were disheartening: while it was already known that both social media platforms strip practically all metadata information from uploaded images – including metadata that could be invaluable in an investigation – we found out that they also resave all JPEG images at a quality of their own choice, and even proceed to rescale them if they consider them to be too large.

We know from theory that splicing detection algorithms are expected to break down in the face of these operations, and our experimental evaluations [4] confirmed that knowledge. However, while a few of the most robust algorithms may retain some of their discriminatory power after a JPEG resave at medium quality, a rescaling wrecks any trace irreparably.

Essentially, the investigative scenario we are dealing with is the one depicted in Figure 3: we know that, for every forgery, an original (first-posted) version once existed and may still exist out there; what we have to work with, however, is a multitude of images mediated by various Web and social media platforms, which may or may not have destroyed all useful traces.

Figure 3: A depiction of the typical interceding stages between the committing of a new forgery and the forensic analysis of an image for tampering, in a real-world scenario (clip art from openclipart.org).


In order to direct research toward this scenario, and away from the cozy environment of artificial datasets, we decided to build the first benchmark collection of real-world forgeries. We combed through recent history and identified 82 separate cases of confirmed forgeries in the form of fake photos or “hoaxes”, all the result of image tampering (predominantly splicing).

Figure 4: Four indicative forgeries from the Wild Web tampered image dataset.


Taking the role of the investigator, we were faced with dozens – or even hundreds – of different, unique image versions of each case, with no clue as to which ones were closer to the original forgery. Consequently, we decided to collect them all using the Google and Tineye reverse image search services. We thus built a collection of 13,577 unique images for these 82 cases, which we labeled the Wild Web tampered image dataset (Figure 4). We then proceeded to put aside any derivations of the original forgeries that would be useless to our investigation (Figure 5) and create binary ground-truth masks for each case and each possible forgery operation imaginable (Figure 6). Finally we evaluated a series of powerful splicing detection algorithms on the entire dataset, and published our results in the 2015 International Workshop on Web Multimedia Verification [4].
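For illustration, the snippet below scores a tampering-localization heatmap against a binary ground-truth mask of the kind described above, using pixel-wise F1. The function name and the choice of metric are our simplification here; the exact evaluation protocol in [4] may differ.

```python
import numpy as np

def localization_score(heatmap, mask, thresh=0.5):
    """Pixel-wise F1 of a thresholded tampering-localization map
    against a binary ground-truth mask of the forged region."""
    pred = heatmap >= thresh
    tp = np.logical_and(pred, mask).sum()
    fp = np.logical_and(pred, ~mask).sum()
    fn = np.logical_and(~pred, mask).sum()
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                            # ground-truth forged region
heat = np.zeros((8, 8)); heat[2:6, 2:5] = 0.9    # partial detection
print(round(localization_score(heat, mask), 3))  # 0.857
```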

Figure 5: A splice forgery, a second splice committed on top of the original one (“post-splice”) and a significant cropping. Only the first version is useful to our investigation.


Figure 6: A splice forgery, the original source, and the ground-truth binary masks we hand-crafted for evaluating the algorithm results.


Some results

Whether the results of our evaluation were hopeful or disappointing is a matter of interpretation: on the one hand, 47 out of the 82 cases evaded all detection – and allowing for the possibility of overestimation, the true detection capabilities of today’s state-of-the-art could be even below that level. What is worse, if we exclude the three simplest cases, only 333 of the 8,580 remaining files were successfully detected. That means that by picking a random forged image from the dataset, we have roughly a 4% chance of a successful analysis.

Not all is bleak, however: our investigation showed that in about half of the cases, some as much as ten years old, a few images still exist out there that are close enough to the original forgery to be analyzed successfully with today’s state-of-the-art. Figure 7 gives a few such examples of successful analyses.

Next steps

With these insights in mind, in the next steps of our research we will be looking for new, more robust methods and more resilient traces for detecting forgeries. We ought not to forget that every time a new “too-good-to-be-true” photograph appears on the Web, it is very likely that a “younger” version of it, full of incriminating evidence, may still be out there for an investigator to locate.

REVEAL is nearing the middle of its 3-year time span. We have gained insight into the current limitations of today’s strongest methods; next, we will explore ways to adapt and extend the state-of-the-art towards real-world Web applications. We hope that our Wild Web dataset will help foster new ideas in the research community for the tasks ahead, and provide modern tools for social media and Web multimedia verification.

Figure 7: Examples of successful detections within the Wild Web tampered image dataset.


References

[1] B. Mahdian and S. Saic, “Using noise inconsistencies for blind image forensics,” Image and Vision Computing, vol. 27, no. 10, pp. 1497-1503, 2009.

[2] Z. Lin, J. He, X. Tang and C.-K. Tang, “Fast, automatic and fine-grained tampered JPEG image detection via DCT coefficient analysis,” Pattern Recognition, vol. 42, no. 11, pp. 2492-2501, 2009.

[3] T. Bianchi and A. Piva, “Image Forgery Localization via Block-Grained Analysis of JPEG Artifacts,” IEEE Transactions on Information Forensics and Security, vol. 7, no. 3, pp. 1003-1017, 2012.

[4] M. Zampoglou, S. Papadopoulos and I. Kompatsiaris, “Detecting Image Splicing in the Wild (Web)”, International Workshop on Web Multimedia Verification (WeMuV 2015), held in conjunction with the 2015 IEEE International Conference on Multimedia & Expo (ICME 2015), forthcoming.

 

Authors: Markos Zampoglou & Symeon Papadopoulos (CERTH-ITI)
Editor: Jochen Spangenberg (DW)

 
