The extensive use of social media platforms, especially during disasters, creates unique opportunities for humanitarian organizations to gain situational awareness as a disaster unfolds. In addition to textual content, people post overwhelming amounts of imagery content on social networks within minutes of a disaster striking. Studies point to the importance of this online imagery content for emergency response. Despite recent advances in computer vision research, making sense of imagery content in real time during disasters remains a challenging task. One important challenge is that a large proportion of the images shared on social media are redundant or irrelevant, which calls for robust filtering mechanisms. Another is that images acquired after major disasters do not share the characteristics of the large-scale image collections traditionally used in computer vision research, which carry clean annotations of well-defined object categories such as house, car, airplane, cat, and dog. To tackle these challenges, we present a social media image processing pipeline that combines human and machine intelligence to perform two important tasks: (i) capturing and filtering social media imagery content (i.e., real-time image streaming, de-duplication, and relevancy filtering), and (ii) extracting actionable information (i.e., damage severity assessment) as a core situational awareness task during an ongoing crisis event. Results from extensive experiments on real-world crisis datasets demonstrate the significance of the proposed pipeline for optimal utilization of both human and machine computing resources.
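The de-duplication step in such a pipeline is often built on perceptual hashing. As a minimal sketch of the general idea (an assumption about the technique, not the paper's exact method), the snippet below models images as small grayscale grids, hashes each by thresholding pixels against the image mean, and drops images whose hash is within a small Hamming distance of one already kept:

```python
# Hypothetical sketch of near-duplicate image filtering via average hashing.
# Images are modeled as 2D grayscale grids; a real pipeline would first
# resize and gray-scale incoming photos.

def average_hash(pixels):
    """Bit tuple: 1 where a pixel exceeds the image's mean intensity."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def deduplicate(images, max_dist=2):
    """Keep an image only if its hash is far from all previously kept hashes."""
    kept, hashes = [], []
    for img in images:
        h = average_hash(img)
        if all(hamming(h, seen) > max_dist for seen in hashes):
            kept.append(img)
            hashes.append(h)
    return kept

a = [[10, 200], [200, 10]]          # original image
b = [[12, 198], [201, 11]]          # near-duplicate of a (same hash pattern)
c = [[200, 10], [10, 200]]          # visually different image
print(len(deduplicate([a, b, c])))  # prints 2: near-duplicate b is dropped
```

The `max_dist` threshold trades recall against the risk of discarding distinct images that happen to hash similarly.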
Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, its spatial resolution is an order of magnitude higher than that of the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris), and these human-annotated features can then be used to train a supervised machine learning system to recognize the same features in new, unseen images. We describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) that extends an existing platform, Artificial Intelligence for Disaster Response (AIDR), which has already been deployed with its Text Clicker module to classify microblog messages during disasters, including in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015. The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response, such as wildlife protection, human rights, and archeological exploration.
As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap). The results suggest that the platform we have developed to combine crowdsourcing and machine learning to make sense of large volumes of aerial images can be used for disaster response.
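A key step in any crowdsourcing-to-training handoff is aggregating noisy volunteer annotations into usable labels. The sketch below shows one common approach, majority voting with a consensus threshold; the tile names, labels, and threshold are illustrative assumptions, not details from the article:

```python
# Hypothetical sketch: aggregate multiple volunteer annotations per aerial
# image tile into a single training label by majority vote, discarding
# tiles where annotators do not reach sufficient agreement.
from collections import Counter

def majority_label(annotations, min_agreement=0.6):
    """Return the majority label if enough annotators agree, else None."""
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    return label if votes / len(annotations) >= min_agreement else None

crowd = {
    "tile_01": ["damaged_shelter", "damaged_shelter", "intact"],
    "tile_02": ["blocked_road", "intact", "damaged_shelter"],  # no consensus
}
training_set = {
    tile: label
    for tile, raw in crowd.items()
    if (label := majority_label(raw)) is not None
}
print(training_set)  # only tiles with annotator consensus are kept
```

Discarding low-consensus tiles sacrifices some labeled data but keeps the supervised classifier from learning from ambiguous examples.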
19th Italian Symposium on Advanced Database Systems, 2011
Abstract. In this paper, we summarize our experience and first results achieved in the context of advanced research evaluation. Striving for research metrics that effectively allow us to predict real opinions about researchers in a variety of scenarios, we conducted two experiments to understand the respective suitability of common indicators, such as the h-index. We concluded that realistic research evaluation is more complex than assumed by those indicators and, hence, may require the specification of even complex evaluation ...
International Journal of Next-Generation Computing, Jan 7, 2012
The recent emergence of mashup tools has refueled research on end-user development, i.e., on enabling end-users without programming skills to compose their own applications. Yet, similar to what happened with analogous promises in web service composition and business process management, research has mostly focused on technology and, as a consequence, has failed its objective. Plain technology (e.g., SOAP/WSDL web services) or simple modeling languages (e.g., Yahoo! Pipes) don't ...
This document presents the overall architecture of the Liquid Publication integrated platform, describes all of its components in detail, and describes the LiquidPub applications. This final version introduces not only advances in the development of the platform but also a more mature understanding of the requirements and possibilities, explored in the LiquidPub applications covering the aspects of knowledge creation, dissemination, and evaluation. Keyword list: design, architecture, applications, platform, ...
In this paper we present the concepts and architecture of an open, resource-oriented tool for evaluating the research impact of individual researchers and groups. The advantages of the tool are that it relies on several sources for computing metrics, all freely available on the Internet, and that it provides an easy way to define and compute custom metrics for individuals and research groups.
International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 2020
During a disaster event, images shared on social media help crisis managers gain situational awareness and assess incurred damages, among other response tasks. Recent advances in computer vision and deep neural networks have enabled the development of models for real-time image classification for a number of tasks, including detecting crisis incidents, filtering irrelevant images, classifying images into specific humanitarian categories, and assessing the severity of damage. Despite several efforts, past works mainly suffer from the limited resources (i.e., labeled images) available to train more robust deep learning models. In this study, we propose new datasets for disaster type detection, informativeness classification, and damage severity assessment. Moreover, we relabel existing publicly available datasets for new tasks. We identify exact- and near-duplicates to form non-overlapping data splits, and finally consolidate them to create larger datasets. In our extensive experiments, we benchmark several state-of-the-art deep learning models and achieve promising results. We release our datasets and models publicly, aiming to provide proper baselines as well as to spur further research in the crisis informatics community.
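The non-overlapping splits described above hinge on keeping duplicates on the same side of the train/test boundary. As a minimal sketch of the general technique (an assumption, not the paper's exact procedure), once duplicates have been clustered into groups, whole groups rather than individual images are assigned to each split:

```python
# Hypothetical sketch of duplicate-aware data splitting: images sharing a
# duplicate-group id are assigned to train or test as a unit, so no
# near-duplicate pair straddles the split.
import random

def group_split(groups, test_frac=0.25, seed=0):
    """groups: dict group_id -> list of image ids. Returns (train, test)."""
    ids = sorted(groups)
    random.Random(seed).shuffle(ids)
    n_test = max(1, int(len(ids) * test_frac))
    test_groups = set(ids[:n_test])
    train, test = [], []
    for gid, images in groups.items():
        (test if gid in test_groups else train).extend(images)
    return train, test

groups = {0: ["img_a", "img_a_copy"], 1: ["img_b"], 2: ["img_c"], 3: ["img_d"]}
train, test = group_split(groups)
# img_a and img_a_copy always land on the same side of the split
assert not (set(train) & set(test))
```

Without this grouping, a near-duplicate leaking into the test set would inflate benchmark scores, since the model has effectively seen the test image during training.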
People increasingly use microblogging platforms such as Twitter during natural disasters and emergencies. Research studies have revealed the usefulness of the data available on Twitter for several disaster response tasks. However, making sense of social media data is challenging for several reasons, including the limitations of available tools for analyzing high-volume, high-velocity data streams and the difficulty of coping with information overload. To address these limitations, in this work we first show that textual and imagery content on social media provide complementary information useful for improving situational awareness. We then explore ways in which various Artificial Intelligence techniques from the Natural Language Processing and Computer Vision fields can exploit such complementary information generated during disaster events. Finally, we propose a methodological approach that effectively combines several computational techniques in a unified framework to help humanitarian organizations in their relief efforts. We conduct extensive experiments using textual and imagery content from millions of tweets posted during three major disaster events in the 2017 Atlantic Hurricane season. Our study reveals that the distributions of various types of useful information can inform crisis managers and responders, and facilitate the development of future automated systems for disaster management.
ACM Journal on Computing and Cultural Heritage (JOCCH), 2020
This article describes a method for early detection of disaster-related damage to cultural heritage. It is based on data from social media, a timely and large-scale data source that is nevertheless quite noisy. First, we collect images posted on social media that may refer to a cultural heritage site. Then, we automatically categorize these images along two dimensions: whether they are indeed photos in which a cultural heritage resource is the main subject, and whether they depict damage. Both categorizations are challenging image classification tasks, given the ambiguity of these visual categories; we tackle both using a convolutional neural network. We test our methodology on a large collection of thousands of images from the web and social media, which exhibit the diversity and noise typical of these sources and contain buildings and other architectural elements, heritage and non-heritage, damaged by disasters as well as intact. Our results show that while the automatic classification is not perfect, it can greatly reduce the manual effort required to find photos of damaged cultural heritage by accurately detecting relevant candidates to be examined by a cultural heritage professional.
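The manual-effort reduction claimed above can be made concrete with a back-of-the-envelope calculation: if an imperfect classifier flags candidate images for expert review, the expert inspects only the flagged items rather than the full stream. The numbers below are illustrative assumptions, not results from the article:

```python
# Illustrative workload arithmetic for a classifier-assisted triage setup
# (hypothetical figures, not the article's evaluation).

def review_workload(relevant, recall, precision):
    """Expected images an expert inspects, and relevant items recovered."""
    found = relevant * recall    # true positives the classifier surfaces
    flagged = found / precision  # total items flagged for expert review
    return flagged, found

# Suppose 200 of 10,000 images show damaged heritage, and the classifier
# achieves 90% recall at 50% precision.
flagged, found = review_workload(relevant=200, recall=0.9, precision=0.5)
print(f"inspect {flagged:.0f} of 10000 images, recovering {found:.0f} of 200")
# prints: inspect 360 of 10000 images, recovering 180 of 200
```

Even at modest precision, the expert reviews a few hundred candidates instead of the entire collection, which is the practical sense in which an imperfect classifier "greatly reduces" manual effort.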
Papers by Muhammad Imran