From Twitter-based Crisis Mapping to Large-scale Real-Time Situation Assessment with Trust and Credibility Analysis

- Jochen Spangenberg
- On July 24, 2014
- http://blogs.dw.com/innovation/
First R&D results are becoming available in REVEAL. In this contribution, Stuart Middleton of the University of Southampton’s IT Innovation Centre shares his insights into work done by IT Innovation to date, regarding crisis mapping and real-time situation assessment with trust and credibility analysis. The work partly builds on what has been done in previous research projects.
In recent years there has been a growing trend for the use of publicly available social media content (e.g. content shared via Twitter, YouTube, Facebook or Instagram) for analytics in areas such as journalism, crisis management, political analysis and business intelligence. With social media content freely available and updated in real-time, it is no wonder analysts are turning to it to discover trending topics, buzzing locations, emerging visionaries and so much more.
Current Tools and What They are Good for
In politics, social media sites such as Facebook and Twitter are used to track sentiment (i.e. positive or negative) in real-time [5] [8] during election campaigns, analysing major speeches, political debates and policy announcements as they happen [12]. For crisis management, Twitter has been used to plot accurate real-time crisis maps [7] of incident reports during natural disasters such as hurricanes and tornadoes. Businesses are also making good use of social media [6] [11] to track public response during product launches, plan new store locations and target influential people [3] within market sectors.
Current analytic toolsets for social media can be broadly categorized as dashboards, crisis mapping toolkits and in-depth analysis frameworks. Dashboard applications (e.g. Tweetdeck, Sulia, Storyful, Flumes, WebLyzard) allow people such as journalists to track news stories, alerting them to new and relevant content, trending topics and influential people. These dashboard applications allow click-through from content to author, providing contact details for a subsequent manual verification process (e.g. verification of content via a phone call). Tools for crisis mapping (e.g. Ushahidi) also follow a manual verification process, using teams of volunteers to verify web and social media incident reports before adding them to a crisis map for display. For in-depth analysis there are tools supporting sentiment analysis (e.g. Bing Elections, SocialMention), social network graph visualization (e.g. MentionMapp, Bottlenose) and topic tracking (e.g. Trackur).
Challenges in Social Media Analysis
Three key challenges in social media analytics are:
- scaling up approaches (i.e. big data techniques) to handle the content volumes we see today,
- augmenting and improving the existing manual process for social media content verification, ultimately improving both trust and credibility in the final analysis, and
- increasing support for multi-lingual analytics.

Figure 1. Examples [4] of fake news stories during Hurricane Sandy 2012, based on doctored images, which were widely propagated across social media. Source – BBC News – © 2012 BBC
Crisis Mapping of Natural Disasters
One example showing how social media analytics are evolving can be seen in the crisis mapping platform [7] developed by the TRIDEC project [10]. In TRIDEC, social media analytics allowed tsunami early warning centres to monitor Twitter for reports from 'at risk' coastal areas near known geological fault lines with the potential to generate a tsunami. Real-time monitoring is important because early wave-impact assessments can be used to warn people on coastlines further away, allowing them to get to safety. Such crisis maps can help inform the general public during a crisis and can be used by emergency response teams to complement official information streams such as hardware-based sensor reports, including satellite imaging and aerial photography.
Streams of Twitter messages were geo-parsed in real-time and mentions of known locations detected. These sets of real-time location mentions were then aggregated, statistically analysed and plotted on a real-time crisis map. This approach produces high precision crisis maps as can be seen in figures 2 and 3. In the REVEAL project [9] these techniques are evolving, scaling up to run on a cluster of computers in a data centre using the Storm [2] distributed computation framework. This means that it should be possible to simultaneously monitor large numbers of locations such as the entire coastline of a country.
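As an illustration of the geo-parsing and aggregation pipeline described above, here is a minimal Python sketch. The gazetteer entries, matching logic and tweets are simplified assumptions for illustration only, not the TRIDEC/REVEAL implementation:

```python
import re
from collections import Counter

# Hypothetical mini-gazetteer of monitored locations (name -> lat/lon).
GAZETTEER = {
    "staten island": (40.58, -74.15),
    "rockaway": (40.58, -73.82),
    "lower manhattan": (40.71, -74.01),
}

def geoparse(tweet_text):
    """Return gazetteer locations mentioned in a tweet (naive phrase matching)."""
    text = tweet_text.lower()
    return [name for name in GAZETTEER
            if re.search(r"\b" + re.escape(name) + r"\b", text)]

def aggregate(tweets):
    """Count location mentions across a stream, yielding map-ready points."""
    counts = Counter()
    for t in tweets:
        counts.update(geoparse(t))
    return [(name, GAZETTEER[name], n) for name, n in counts.most_common()]

tweets = [
    "Flooding on Staten Island, water rising fast",
    "Rockaway boardwalk destroyed #Sandy",
    "Power out in Lower Manhattan, Staten Island hit hard",
]
print(aggregate(tweets))
```

A production system would use a proper geo-parser with disambiguation, and would partition this per-location counting across a Storm topology to scale to an entire coastline.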

Figure 2. Crisis mapping of New York’s 2012 flooding from Hurricane Sandy. The left hand image is the official post-event Storm surge assessment map from the US National Geospatial Agency. The right hand image is the incident map TRIDEC was able to create from tweets crawled during the event. Mapping courtesy of ArcGIS ESRI portal and Google Maps.

Figure 3. Crisis mapping of Oklahoma’s 2013 tornado. The left hand image is the official post-event damage assessment map from the US National Geospatial Agency. The right hand image is the incident map TRIDEC was able to create from tweets crawled during the event. Mapping courtesy of ArcGIS ESRI portal and Google Maps.
Large-scale Situation Assessment with Trust and Credibility Analysis
The REVEAL project is adding support for journalists and business analysts. In addition to real-time mapping visualizations, situation assessments will show trending topics (e.g. topics, hashtags, URIs) and emerging social networks (e.g. influential people, well-connected people). These situation assessments will be personalized to the news story and/or business need of each analyst, providing a real-time interactive view across a range of social media sources (e.g. Twitter, Facebook, Instagram, YouTube).
Analysts need more than just clever visualization from a situation assessment, of course. These techniques are evolving to support an evidential approach for assessing the trust and credibility of large volumes of social media content. This knowledge-based approach allows automated classification of existing evidence, useful for providing a variety of filtered viewpoints. It also allows prior knowledge from the analyst to be utilized, such as lists of 'news hounds' that a journalist has learnt to trust from past experience. New information can also be inferred through logical deduction from the evidence at hand. Figure 4 shows examples of the types of inference possible.

Figure 4. Example of evidential inference from the Boston bombing event. This simple conclusion is not very useful on its own, but becomes powerful when scaled up by combining it with inferences from thousands of other facts extracted from social media reports. This type of technique could offer insights into trust and credibility that would not be practical to achieve with manual approaches.
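The kind of evidential inference described above can be sketched as a toy rule engine over facts extracted from social media. The facts, rules and corroboration threshold below are hypothetical illustrations, not REVEAL's knowledge model:

```python
# Facts are (subject, predicate, object) triples extracted from social media
# reports, plus prior knowledge supplied by the analyst.
facts = {
    ("@reporterA", "member_of", "trusted_news_hounds"),  # analyst's prior knowledge
    ("@reporterA", "reports", "bridge_closed"),
    ("@witnessB", "reports", "bridge_closed"),
    ("@witnessC", "reports", "bridge_closed"),
}

def infer(facts, min_corroboration=3):
    """Apply two illustrative rules and return the newly derived facts."""
    derived = set(facts)
    # Rule 1: a report from a trusted author is provisionally credible.
    for (s, p, o) in facts:
        if p == "reports" and (s, "member_of", "trusted_news_hounds") in facts:
            derived.add((o, "status", "provisionally_credible"))
    # Rule 2: a claim reported by enough independent authors is corroborated.
    claims = {}
    for (s, p, o) in facts:
        if p == "reports":
            claims.setdefault(o, set()).add(s)
    for claim, authors in claims.items():
        if len(authors) >= min_corroboration:
            derived.add((claim, "status", "corroborated"))
    return derived - facts

print(infer(facts))
```

Each deduction is trivial in isolation, but chaining such rules over thousands of extracted facts is what makes the evidential approach scale beyond manual verification.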
Future Trends
The technology driving social media analytics is evolving. Applications are moving beyond dashboard type analytics toolkits and scaling things up. The volume of social media content that can be processed is increasing and there is an increasing focus on adding trust and credibility analysis to the situation assessment pictures compiled. This will not replace the need for manual verification – a phone call to the content author is always going to be an effective way to ascertain if someone is telling the truth – but it will help analysts focus their attention on what matters in the sea of social media content out there.
References:
[2] Apache Storm, Distributed and fault-tolerant real-time computation, APACHE incubator project
[3] Chishick, T. “the “Fry” effect”, Blog, Agenda21, Mar 2011,
[4] Eveleth, R. “Hurricane Sandy: Five ways to spot a fake photograph”, BBC Future, Oct 2012
[5] Feldman, R. "Techniques and Applications for Sentiment Analysis", Communications of the ACM, vol. 56, issue 4, pp. 82-89, April 2013
[6] Kent, P. “Viewpoint: Big data and big analytics means better business”, BBC News, Oct 2012
[8] Moore, M.T. “Twitter index tracks sentiment on Obama, Romney”, USA Today, Jan 2012
[9] REVEAL project, EC FP7 Grant Agreement 610928
[10] TRIDEC project, EC FP7 Grant Agreement 258723
[11] Wall, M. “Location tech and mobile map out way to better business”, BBC News, June 2014
[12] WeGov project, EC FP7 Grant Agreement 248512
Acknowledgement:
The work presented in this article is part of the research and development carried out in the REVEAL project (grant agreement 610928), supported by the 7th Framework Program of the European Commission.
About the author:
Stuart E. Middleton is a senior research engineer at the University of Southampton IT Innovation Centre. His main research interests are social media, sensor systems, data fusion and ontologies. Stuart has a PhD in Computer Science from the University of Southampton.