Coverage of science in the media

The contents of English-language online news over five years were analyzed to explore the impact of the Fukushima disaster on media coverage of nuclear power. This big-data study, based on millions of news articles, involves the extraction of narrative networks, association networks, and sentiment time series. Over 5 million science articles published between 1 May 2008 and 31 December 2013 were gathered. In order to examine how different science-based issues and events are framed by ...
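The study's sentiment time series can be illustrated with a minimal sketch: dated articles carry a sentiment score, and scores are averaged per month. The article records, scores, and the monthly bucketing are illustrative assumptions, not the study's actual pipeline; in practice the scores would come from a sentiment classifier run over each article.

```python
from collections import defaultdict
from datetime import date

# Hypothetical article records: (publication date, sentiment score in [-1, 1]).
# Scores are illustrative; a real pipeline would obtain them from a classifier.
articles = [
    (date(2011, 3, 10), 0.2),
    (date(2011, 3, 12), -0.6),  # coverage turning negative after an event
    (date(2011, 3, 12), -0.4),
    (date(2011, 4, 1), -0.3),
]

def monthly_sentiment(records):
    """Average sentiment per (year, month) bucket."""
    buckets = defaultdict(list)
    for day, score in records:
        buckets[(day.year, day.month)].append(score)
    return {month: sum(s) / len(s) for month, s in sorted(buckets.items())}

series = monthly_sentiment(articles)
```

Plotting such a series around a major event (here, March 2011) is what lets a study of this kind detect a shift in the tone of coverage.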

Approaches to document summarization using semantic graphs

Most of the work involves extracting triplets from documents using various tools and then performing co-reference resolution, anaphora resolution, and semantic normalization. The refined triplets are then assembled into a semantic graph. An SVM classifier is subsequently trained to retain only the triplets most relevant for summarization. To do this, each triplet is assigned a set of attributes, for example linguistic attributes: the triplet type (subject, verb, or object) ...
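The graph-construction step above can be sketched in a few lines: subject-verb-object triplets become labeled edges, with subjects and objects as nodes. The example triplets and the adjacency-map representation are assumptions for illustration; the papers' actual extractors and graph formats may differ.

```python
from collections import defaultdict

# Illustrative subject-verb-object triplets, as an open-IE tool might emit
# after co-reference resolution and normalization (hypothetical content).
triplets = [
    ("reactor", "releases", "radiation"),
    ("government", "evacuates", "residents"),
    ("radiation", "contaminates", "water"),
]

def build_semantic_graph(triples):
    """Adjacency map: subject -> list of (verb, object) labeled edges."""
    graph = defaultdict(list)
    for subj, verb, obj in triples:
        graph[subj].append((verb, obj))
    return dict(graph)

graph = build_semantic_graph(triplets)
# "reactor" and "water" are now linked through the shared node "radiation".
```

A summarizer would then score each edge (triplet) with a classifier such as an SVM and keep only the highest-ranked ones.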

Constructing semantic graphs from text documents

The rapid development of the World Wide Web and online information services has made information accessible everywhere. To use this information efficiently, it must be presented in a more structured and synthesized form. Automatic text generation through information extraction is a key area of linguistic research today, encompassing automatic question answering, document summarization, and visualization techniques. The semantic graph is the major source ...