Data sets and related information used for estimating regional ground-water evapotranspiration in eastern Nevada

2000 ◽  
Author(s):  
J. LaRue Smith ◽  
Brian D. Reece ◽  
Rose L. Medina


Author(s):  
Petr Praus

In this chapter, the principles and applications of principal component analysis (PCA) applied to hydrological data are presented. Four case studies demonstrate the ability of PCA to extract information about a wastewater treatment process and drinking water quality in a city network, and to find similarities in data sets of groundwater quality results and water-related images. In the first case study, the composition of raw and treated wastewater was characterised and its temporal changes were displayed. In the second case study, drinking water samples were divided into clusters consistent with their sampling localities. In case study III, similar groundwater samples were identified by computing cosine similarity and the Euclidean and Manhattan distances. In case study IV, 32 water-related images were transformed into a large image matrix whose dimensionality was reduced by PCA; the images were then clustered using PCA scatter plots.
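The workflow of case study III (PCA projection followed by similarity measures) can be sketched in Python; the data matrix here is synthetic and the variable names are illustrative, not taken from the chapter:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples onto the leading principal components
    (PCA via SVD of the column-standardized data matrix)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[:n_components].T

# Hypothetical water-quality matrix: rows = samples, columns = measured
# parameters (e.g. pH, conductivity, nitrate); values are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))
scores = pca_scores(X)

# The three similarity/distance measures used in case study III,
# here applied to the PCA score vectors.
def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def manhattan(a, b):
    return float(np.abs(a - b).sum())
```

Samples whose score vectors have high cosine similarity (or small Euclidean/Manhattan distance) fall close together in the PCA scatter plot.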


2020 ◽  
Vol 30 (14) ◽  
pp. 2173-2191
Author(s):  
Robert D. Hall

In this manuscript, I utilize an ethnodramatic methodology in reanalyzing two data sets about college friends disclosing and receiving mental health-related information. After describing ethnodrama and how this methodology applies to mental health–related inquiry, I detail my process of creating an ethnodrama from two extant data sets. The result is an ethnodrama called Amicus cum Laude: Becoming a Friend with Honor for Mental Illness, a one-act play about how friends discuss mental health issues with one another. After providing the ethnodrama, I offer recommendations for taking the ethnodrama from page to stage while reflecting on and critiquing the final product.


2020 ◽  
Vol 12 (7) ◽  
pp. 2795
Author(s):  
Zhuoqian Liang ◽  
Ding Pan ◽  
Yuan Deng

With increasingly strict supervision, the complexity of enterprises’ annual reports has increased significantly, and the size of the text corpus has grown at an enormous rate. Information fusion for financial reporting has become a research hotspot. The difficulty of this problem lies in filtering the massive amount of heterogeneous data and integrating related information distributed across different locations according to decision topics. This paper proposes a Graph NetWork (GNW) model that establishes an overall connection between decentralized information, together with a graph-network generation algorithm that filters the large and complex data sets in financial reports and mines key information to suit different decision situations. Finally, the paper uses the Planar Maximally Filtered Graph (PMFG) as a benchmark to show the effect of the generation algorithm.
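The PMFG benchmark can be built greedily: visit edges in order of decreasing weight and keep each edge only if the graph remains planar. A minimal sketch using networkx, with a toy weighted graph in place of the paper's financial-report data:

```python
import networkx as nx

def pmfg(weighted_edges):
    """Planar Maximally Filtered Graph: greedily keep the strongest
    edges that preserve planarity."""
    G = nx.Graph()
    for u, v, w in sorted(weighted_edges, key=lambda e: e[2], reverse=True):
        G.add_edge(u, v, weight=w)
        is_planar, _ = nx.check_planarity(G)
        if not is_planar:
            G.remove_edge(u, v)  # this edge would break planarity
    return G

# Toy complete graph on five nodes with illustrative weights.
edges = [(i, j, 1.0 / (1 + abs(i - j)))
         for i in range(5) for j in range(i + 1, 5)]
g = pmfg(edges)
```

For a complete input graph on n ≥ 3 nodes, the result is a maximal planar subgraph with 3(n − 2) edges, so here 9 of the 10 edges of K5 survive.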


2014 ◽  
Vol 955-959 ◽  
pp. 3437-3441
Author(s):  
Xin Wen Wang

At present, storm waterlogging occurs frequently in our cities, and one of the main reasons is that rainwater pipe network design has many shortcomings. This paper argues that pipe network design must reasonably determine the surface runoff coefficient, the surface-water collection time, the reduction coefficient, the catchment area, and the position and number of gullies according to measured data; that the recurrence interval should be increased appropriately according to actual engineering conditions; that hydraulic calculations should select larger pipe diameters where possible; and that the time of flow should be calculated accurately, so as to fundamentally reduce urban storm waterlogging.
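The design quantities listed above (runoff coefficient, rainfall intensity for the chosen recurrence interval, catchment area) combine in the rational method commonly used to size storm sewers; a minimal sketch with illustrative values, not figures from the paper:

```python
def design_flow(runoff_coeff, intensity_lps_per_ha, area_ha):
    """Rational-method design flow Q = psi * q * F, in L/s, where
    q is the design rainfall intensity in L/(s*ha) for the chosen
    recurrence interval and F is the catchment area in hectares."""
    return runoff_coeff * intensity_lps_per_ha * area_ha

# Illustrative values: psi = 0.6 (mixed urban surface),
# q = 200 L/(s*ha), F = 5 ha.
q_design = design_flow(0.6, 200.0, 5.0)  # 600 L/s
```

Raising the recurrence interval raises q, and hence Q, which is why the paper's recommendation to increase it leads directly to larger pipe diameters.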


2003 ◽  
Vol 18 (03) ◽  
pp. 475-485
Author(s):  
Ulrike Thoma

Partial wave analyses are often done to extract resonance parameters from data. Using analyses of data from the Crystal Barrel Experiment (LEAR) as an example, it will be shown that this can be done with higher reliability and precision if different data sets containing related information are combined in one analysis.
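The gain from a combined analysis can be illustrated with a toy simultaneous fit: two synthetic spectra share one resonance (mass and width) but have independent normalizations, and a single least-squares fit constrains the shared parameters with both data sets at once. This is a generic sketch, not the Crystal Barrel partial wave analysis itself:

```python
import numpy as np
from scipy.optimize import least_squares

def breit_wigner(m, m0, gamma, norm):
    """Nonrelativistic Breit-Wigner line shape, peak height = norm."""
    return norm * (gamma ** 2 / 4) / ((m - m0) ** 2 + gamma ** 2 / 4)

# Two synthetic "experiments" with the same resonance but different
# normalizations (stand-ins for two measured channels).
rng = np.random.default_rng(1)
m = np.linspace(1.0, 2.0, 50)
y1 = breit_wigner(m, 1.5, 0.1, 1.0) + rng.normal(0.0, 0.05, m.size)
y2 = breit_wigner(m, 1.5, 0.1, 2.0) + rng.normal(0.0, 0.05, m.size)

def residuals(p):
    m0, gamma, n1, n2 = p
    return np.concatenate([y1 - breit_wigner(m, m0, gamma, n1),
                           y2 - breit_wigner(m, m0, gamma, n2)])

# One fit, both data sets: m0 and gamma are shared parameters.
fit = least_squares(residuals, x0=[1.4, 0.2, 1.0, 1.0])
m0_fit, gamma_fit = fit.x[0], abs(fit.x[1])
```

Because both residual vectors enter one objective, the shared mass and width are pinned down by twice the data, which is the mechanism behind the improved reliability and precision described above.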


Author(s):  
Emma S. Spiro

Social media have become critical components of all phases of crisis management, including preparedness, response, and recovery. Numerous recent events have demonstrated that during extreme occurrences (such as natural hazards, civil unrest, and domestic terrorist attacks), social media platforms are appropriated for response activities, providing new infrastructure for official responders to disseminate event-related information, interact with members of the public, and monitor public opinion. Emergency responders recognize the potential of social media platforms and actively use these technologies to share information and connect with constituents; however, many questions remain about the effectiveness of social media platforms in reaching members of the public during times of crisis. Moreover, there is a strong tendency for research to focus on the behavior of the public rather than on that of official emergency responders. This chapter reviews prior and ongoing work that contributes to our understanding of usage practices and the effectiveness of networked online communication during times of crisis. In particular, it focuses on empirically driven research that utilizes large-scale data sets of behavioral traces captured from social media platforms. Together this body of work demonstrates how computational techniques combined with rich, curated data sets can be used to explore information and communication behaviors in online networks.


2016 ◽  
Vol 12 (3) ◽  
pp. 359-378 ◽  
Author(s):  
Takahiro Komamizu ◽  
Toshiyuki Amagasa ◽  
Hiroyuki Kitagawa

Purpose – Linked data (LD) promotes publishing information and linking the published information. An increasing number of LD datasets contain numerical data such as statistics, and analyzing numerical facts on LD has therefore attracted attention from diverse domains. This paper aims to support analytical processing of LD data.
Design/methodology/approach – This paper proposes a framework called H-SPOOL, which provides a series of SPARQL (SPARQL Protocol and RDF Query Language) queries extracting objects and attributes from LD data sets, converts them into star/snowflake schemas, and materializes the relevant triples as fact and dimension tables for online analytical processing (OLAP).
Findings – The applicability of H-SPOOL is evaluated using existing LD data sets on the Web, and H-SPOOL successfully processes them for ETL (Extract, Transform, and Load) into OLAP. Experiments also show that H-SPOOL reduces the number of downloaded triples compared with an existing approach.
Originality/value – H-SPOOL is the first work to extract OLAP-related information from SPARQL endpoints, and it drastically reduces the amount of downloaded triples.
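The reshaping step described above (observations stored as triples converted into fact and dimension tables) can be sketched in plain Python; the predicate names and the choice of measure are made up for illustration and are not H-SPOOL's actual queries:

```python
# Toy RDF-like triples: (subject, predicate, object).
triples = [
    ("obs1", "refArea", "Tokyo"), ("obs1", "year", 2014),
    ("obs1", "population", 13_000_000),
    ("obs2", "refArea", "Osaka"), ("obs2", "year", 2014),
    ("obs2", "population", 8_800_000),
]

# Pivot each observation's predicates into one row.
rows = {}
for s, p, o in triples:
    rows.setdefault(s, {})[p] = o

# Star schema: dimension tables for the attributes, plus a fact table
# holding the dimension values and the numerical measure.
dim_area = sorted({r["refArea"] for r in rows.values()})
dim_year = sorted({r["year"] for r in rows.values()})
fact_table = [(r["refArea"], r["year"], r["population"])
              for r in rows.values()]
```

Once the triples are in this shape, standard OLAP operations (slicing by year, aggregating over areas) run against the fact table without touching the SPARQL endpoint again.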


2021 ◽  
pp. 174569162098613
Author(s):  
Cédric Batailler ◽  
Skylar M. Brannon ◽  
Paul E. Teas ◽  
Bertram Gawronski

Researchers across many disciplines seek to understand how misinformation spreads with a view toward limiting its impact. One important question in this research is how people determine whether a given piece of news is real or fake. In the current article, we discuss the value of signal detection theory (SDT) in disentangling two distinct aspects in the identification of fake news: (a) ability to accurately distinguish between real news and fake news and (b) response biases to judge news as real or fake regardless of news veracity. The value of SDT for understanding the determinants of fake-news beliefs is illustrated with reanalyses of existing data sets, providing more nuanced insights into how partisan bias, cognitive reflection, and prior exposure influence the identification of fake news. Implications of SDT for the use of source-related information in the identification of fake news, interventions to improve people’s skills in detecting fake news, and the debunking of misinformation are discussed.
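The two SDT quantities distinguished above have standard estimators; a minimal sketch using the Python standard library (the hit and false-alarm rates are illustrative, not values from the reanalyzed data sets):

```python
from statistics import NormalDist

def sdt_indices(hit_rate, false_alarm_rate):
    """d' (ability to distinguish real from fake news) and
    criterion c (bias to judge news as real regardless of veracity).
    hit_rate = P(judged real | real news);
    false_alarm_rate = P(judged real | fake news)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Illustrative rates: good discrimination, slight bias toward "real".
d, c = sdt_indices(0.80, 0.30)  # d' ~ 1.37, c ~ -0.16
```

A manipulation such as cognitive reflection may move d' without moving c, or vice versa, which is exactly the disentangling the article attributes to SDT.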

