Comparing performance metrics of a parallel ECC architecture vs. input data patterns and granularity

Author(s):  
Esmaeil Amini ◽  
Zahra Jeddi ◽  
Salim Farah ◽  
Magdy Bayoumi
2021 ◽  
Vol 10 (2) ◽  
pp. 69
Author(s):  
Katarzyna Słomska-Przech ◽  
Izabela Małgorzata Gołębiowska

It is acknowledged that various types of thematic maps emphasize different aspects of mapped phenomena and thus support different map users' tasks. To provide empirical evidence, a user study with 366 participants was carried out comparing three map types showing the same input data. The aim of the study was to compare the effect of using choropleth, graduated symbol, and isoline maps to solve basic map user tasks. Three metrics were examined: two performance metrics (answer accuracy and time) and one subjective metric (difficulty). The results showed that the performance metrics differed between the analyzed map types, with better performance recorded for the choropleth map. The study also showed that map users find the most commonly applied map type, the choropleth map, to be the easiest, so the subjective metric matched the performance metrics. We conclude that the choropleth map can be a sufficient solution for a variety of tasks. However, it should be remembered that although making this type of map correctly may seem easy, it is not. Moreover, we believe that the richness of thematic cartography should not be abandoned, and work should not be limited to a single favored map type.


2020 ◽  
Author(s):  
Fakhereh Alidoost ◽  
Jerom Aerts ◽  
Bouwe Andela ◽  
Jaro Camphuijsen ◽  
Nick van De Giesen ◽  
...  

eWaterCycle is a framework in which hydrological modelers can work together in a collaborative environment. In this environment, they can, for example, compare and analyze the results of models that use different sources of (meteorological) forcing data. The final goal of eWaterCycle is to advance the state of FAIR (Findable, Accessible, Interoperable, and Reusable) and open science in hydrological modeling.

Comparing hydrological models has always been a challenging task. Hydrological models exhibit great complexity and diversity in the exact methodologies applied, competing hypotheses of hydrologic behavior, technology stacks, and programming languages used. Pre-processing of forcing data is one of the roadblocks that was identified during the FAIR Hydrological Modelling workshop organized by the Lorentz Center in April 2019. Forcing data can be retrieved from a wide variety of sources with discrepant variable names, frequencies, and spatial and temporal resolutions. Moreover, some hydrological models make specific assumptions about the definition of the forcing variables. The pre-processing is often performed by various sets of scripts that may or may not be included with model source code, making it hard to reproduce results. Generally, there are common steps in data preparation among different models. Therefore, it would be a valuable asset to the hydrological community if the pre-processing of FAIR input data could also be done in a FAIR manner.

Within the context of the eWaterCycle II project, a common pre-processing system has been created for hydrological modeling based on ESMValTool (Earth System Model Evaluation Tool). ESMValTool is a community diagnostic and performance metrics tool developed for the evaluation of Earth system models. The ESMValTool pre-processing functions cover a broad range of operations on data before diagnostics or metrics are applied: for example, vertical interpolation, land-sea masking, re-gridding, multi-model statistics, temporal and spatial manipulations, variable derivation, and unit conversion. The pre-processor performs these operations in a centralized, documented, and efficient way. The current eWaterCycle pre-processing pipeline using ESMValTool consists of hydrological model-specific recipes and supports ERA5 and ERA-Interim data provided by the ECMWF (European Centre for Medium-Range Weather Forecasts). The pipeline starts with downloading and CMORization (Climate Model Output Rewriter) of input data. A recipe is then prepared to find the data and run the preprocessors. When ESMValTool runs a recipe, it also runs the diagnostic script that contains model-specific analysis to derive the required forcing variables, and it stores provenance information to ensure transparency and reproducibility. In the near future, the pipeline will be extended to include Earth observation data, as these data are paramount to the data assimilation in eWaterCycle.

In this presentation we will show how using the ESMValTool pre-processor for hydrological modeling connects hydrology and climate sciences and increases the impact and sustainability of ESMValTool.
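To give a concrete sense of the kind of forcing pre-processing being centralized here, the following is a minimal sketch of typical steps (temporal aggregation, re-gridding, unit conversion) written with xarray. It is not the ESMValTool recipe format or API; the file names and the variable name `t2m` are hypothetical ERA5-style placeholders.

```python
# Minimal sketch of common forcing pre-processing steps, assuming an
# ERA5-like NetCDF file with hourly 2 m temperature in kelvin.
# This illustrates the kind of operation ESMValTool centralizes;
# it is NOT the ESMValTool recipe or API itself.
import numpy as np
import xarray as xr

ds = xr.open_dataset("era5_t2m_hourly.nc")      # hypothetical input file

# Temporal manipulation: aggregate hourly fields to daily means.
daily = ds["t2m"].resample(time="1D").mean()

# Re-gridding: nearest-neighbour interpolation onto a coarser 1-degree grid.
target_lat = np.arange(-89.5, 90, 1.0)
target_lon = np.arange(0.5, 360, 1.0)
regridded = daily.interp(latitude=target_lat, longitude=target_lon,
                         method="nearest")

# Unit conversion: kelvin to degrees Celsius, with updated metadata.
tas_degc = regridded - 273.15
tas_degc.attrs["units"] = "degC"

tas_degc.to_netcdf("t2m_daily_1deg_degc.nc")
```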


2018 ◽  
Vol 10 (2) ◽  
pp. 951-968 ◽  
Author(s):  
Maliko Tanguy ◽  
Christel Prudhomme ◽  
Katie Smith ◽  
Jamie Hannaford

Abstract. Potential evapotranspiration (PET) is a necessary input for most hydrological models and is often needed at a daily time step. An accurate estimation of PET requires many input climate variables which are, in most cases, not available prior to the 1960s for the UK, nor indeed for most parts of the world. Therefore, when applying hydrological models to earlier periods, modellers have to rely on PET estimations derived from simplified methods. Given that only monthly observed temperature data are readily available for the late 19th and early 20th century at a national scale for the UK, the objective of this work was to derive the best possible UK-wide gridded PET dataset from the limited data available. To that end, combinations of (i) seven temperature-based PET equations, (ii) four different calibration approaches and (iii) seven input temperature datasets were first evaluated. For this evaluation, a gridded daily PET product based on the physically based Penman–Monteith equation (the CHESS PET dataset) was used, the rationale being that this provides a reliable “ground truth” PET dataset for evaluation purposes, given that no directly observed, distributed PET datasets exist. The performance of the models was also compared to a “naïve method”, defined as the simplest possible estimation of PET in the absence of any available climate data. The “naïve method” used in this study is the CHESS PET daily long-term average (the period from 1961 to 1990 was chosen), or CHESS-PET daily climatology. The analysis revealed that the type of calibration and the input temperature dataset had only a minor effect on the accuracy of the PET estimations at catchment scale. Of the seven equations tested, only the calibrated version of the McGuinness–Bordne equation was able to outperform the “naïve method” and was therefore used to derive the gridded, reconstructed dataset. The equation was calibrated using 43 catchments across Great Britain. The dataset produced is a 5 km gridded PET dataset for the period 1891 to 2015, using the Met Office 5 km monthly gridded temperature data available for that period as input to the PET equation. The dataset includes daily and monthly PET grids and is complemented with a suite of mapped performance metrics to help users assess the quality of the data spatially. This dataset is expected to be particularly valuable as input to hydrological models for any catchment in the UK. The data can be accessed at https://doi.org/10.5285/17b9c4f7-1c30-4b6f-b2fe-f7780159939c.
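As a point of reference for the equation used, the sketch below implements the general calibrated form of the McGuinness–Bordne equation, PET = (Re/λ)·(T + a)/b. The coefficients default to the commonly cited original constants (a = 5, b = 68); the catchment-calibrated values derived in the paper are not reproduced here, and extraterrestrial radiation Re is assumed to be available as an input rather than derived from latitude and day of year.

```python
# Minimal sketch of the McGuinness-Bordne PET equation in its
# calibrated form PET = (Re / lambda) * (T + a) / b  [mm/day].
# a and b default to the commonly cited original constants; the paper
# uses values calibrated on 43 GB catchments, not reproduced here.
def mcguinness_bordne_pet(tas_degc: float,
                          ext_radiation_mj_m2_day: float,
                          a: float = 5.0,
                          b: float = 68.0) -> float:
    """Daily PET (mm/day) from mean air temperature (degC) and
    extraterrestrial radiation Re (MJ m-2 day-1)."""
    latent_heat = 2.45  # MJ/kg, latent heat of vaporisation of water
    # Re / latent_heat converts radiation to an evaporation-equivalent
    # water depth in mm/day (1 kg/m2 of water = 1 mm depth).
    radiative_equiv_mm = ext_radiation_mj_m2_day / latent_heat
    return radiative_equiv_mm * (tas_degc + a) / b

# Example: a mild spring day in the UK (illustrative values).
print(mcguinness_bordne_pet(tas_degc=10.0, ext_radiation_mj_m2_day=25.0))
```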


2021 ◽  
Vol 10 (8) ◽  
pp. 562
Author(s):  
Katarzyna Słomska-Przech ◽  
Tomasz Panecki ◽  
Wojciech Pokojski

Recently, due to Web 2.0 and neocartography, heat maps have become a popular map type for quick reading. Heat maps are graphical representations of geographic data density in the form of raster maps, produced by applying kernel density estimation with a given radius to point or line input data. The aim of this study was to compare the usability of heat maps with different levels of generalization (defined by radii of 10, 20, 30, and 40 pixels) for basic map user tasks. A user study with 412 participants (16–20 years old, high school students) was carried out in order to compare heat maps that showed the same input data. The study was conducted in schools during geography or IT lessons. Objective (answer correctness, response times) and subjective (response time self-assessment, task difficulty, preferences) metrics were measured. The results show that smaller radii resulted in higher answer correctness, while larger radii did not result in faster response times. The participants perceived the more generalized maps as easier to use, although this perception did not match the performance metrics. Overall, we believe that heat maps, in given circumstances and with appropriate design settings, can be considered an efficient method for spatial data presentation.
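As an illustration of the heat-map construction the study describes, the following is a minimal sketch that rasterizes point input data and smooths it with a kernel whose width stands in for the tested 10, 20, 30, and 40 px radii. The choice of a Gaussian kernel and the sigma-to-radius mapping are assumptions, not the authors' exact kernel density estimation settings.

```python
# Minimal sketch of a heat-map raster from point input data:
# bin points onto a pixel grid, then smooth with a kernel whose
# width plays the role of the study's 10/20/30/40 px radii.
# Gaussian smoothing (and sigma = radius / 3) is an assumption,
# not the exact KDE kernel used by the authors.
import numpy as np
from scipy.ndimage import gaussian_filter

def heat_map(points_xy: np.ndarray, width: int, height: int,
             radius_px: float) -> np.ndarray:
    """points_xy: (n, 2) array of pixel coordinates (x, y)."""
    grid = np.zeros((height, width))
    # Rasterize: one count per point, clipped to the image bounds.
    xs = np.clip(points_xy[:, 0].astype(int), 0, width - 1)
    ys = np.clip(points_xy[:, 1].astype(int), 0, height - 1)
    np.add.at(grid, (ys, xs), 1.0)
    # Larger radius -> stronger smoothing -> more generalized map.
    return gaussian_filter(grid, sigma=radius_px / 3.0)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 500, size=(1000, 2))      # synthetic demo points
for radius in (10, 20, 30, 40):                # the four tested radii
    density = heat_map(pts, 500, 500, radius)
```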


Author(s):  
R.A. Ploc ◽  
G.H. Keech

An unambiguous analysis of transmission electron diffraction effects requires two samplings of the reciprocal lattice (RL). However, extracting definitive information from the patterns is difficult even for a general orthorhombic case. The usual procedure has been to deduce the approximate variables controlling the formation of the patterns from qualitative observations. Our present purpose is to illustrate two applications of a computer programme written for the analysis of transmission selected-area diffraction (SAD) patterns: the study of RL spot shapes and of epitaxy. When a specimen contains fine structure, the RL spots become complex shapes with extensions in one or more directions. If the number and directions of these extensions can be estimated from an SAD pattern, the exact spot shape can be determined by a series of refinements of the computer input data.
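As a small illustration of the geometry such a programme works with, the sketch below generates reciprocal lattice points for a general orthorhombic cell and selects the zero-order Laue zone spots for a chosen zone axis via the Weiss zone law. The cell dimensions are illustrative values, not taken from the paper.

```python
# Minimal sketch: reciprocal-lattice (RL) points of an orthorhombic
# cell and the spots appearing in a selected-area diffraction (SAD)
# pattern for a given zone axis [uvw]. Cell edges are illustrative.
import numpy as np

a, b, c = 3.2, 5.1, 5.2          # hypothetical orthorhombic cell (angstrom)
A = np.diag([a, b, c])           # direct lattice vectors as rows

# Reciprocal lattice vectors (crystallographic convention, no 2*pi):
# rows of B satisfy a_i . b_j = delta_ij.
B = np.linalg.inv(A).T

zone = np.array([0, 0, 1])       # zone axis [001]
hkl = np.array([(h, k, l)
                for h in range(-3, 4)
                for k in range(-3, 4)
                for l in range(-3, 4)])

# Zero-order Laue zone: h*u + k*v + l*w = 0 (Weiss zone law).
zolz = hkl[hkl @ zone == 0]
g_vectors = zolz @ B             # RL point coordinates (1/angstrom)
for (h, k, l), g in zip(zolz, g_vectors):
    if h == 0 and k == 0 and l == 0:
        continue                 # skip the transmitted beam
    # |g| = 1/d; spot distance from the pattern centre scales with |g|.
    print(f"({h} {k} {l}): d = {1.0 / np.linalg.norm(g):.3f} angstrom")
```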


2020 ◽  
Vol 39 (6) ◽  
pp. 8463-8475
Author(s):  
Palanivel Srinivasan ◽  
Manivannan Doraipandian

Rare-event detection is performed using spatial-domain and frequency-domain procedures. The volume of footage from omnipresent surveillance cameras is increasing exponentially over time, and monitoring all events manually is impractical and time-consuming. An automated rare-event detection mechanism is therefore required to make this process manageable. In this work, a Context-Free Grammar (CFG) is developed for detecting rare events in a video stream, and an Artificial Neural Network (ANN) is trained on the CFG output. A set of dedicated algorithms performs frame splitting, edge detection, and background subtraction, and converts the processed data into the CFG. The developed CFG is converted into nodes and edges to form a graph. The graph is given to the input layer of the ANN to classify normal and rare event classes. The graph derived from the CFG for the input video stream is used to train the ANN. The performance of the developed Artificial Neural Network Based Context-Free Grammar Rare Event Detection (ACFG-RED) scheme is then compared with existing techniques using performance metrics such as accuracy, precision, sensitivity, recall, average processing time, and average processing power. Better metric values were observed for the ACFG-RED model than for the other techniques. The developed model provides a better solution for detecting rare events in video streams.
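For concreteness, the following is a minimal sketch of the pre-processing stages the abstract names (frame splitting, background subtraction, edge detection) using OpenCV. The file name is a placeholder, and the CFG construction and ANN classifier downstream of these steps are specific to the paper and are not reproduced here.

```python
# Minimal sketch of the pre-processing stages named in the abstract:
# frame split, background subtraction, edge detection. The CFG and
# ANN stages that follow in the paper are not reproduced.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input stream
backsub = cv2.createBackgroundSubtractorMOG2(history=500,
                                             varThreshold=16)
features = []
while True:
    ok, frame = cap.read()                   # frame split
    if not ok:
        break
    fg_mask = backsub.apply(frame)           # background subtraction
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)        # edge detection
    # Keep only edges belonging to moving objects; in the paper the
    # analogous result is converted into CFG symbols, then a graph.
    moving_edges = cv2.bitwise_and(edges, edges, mask=fg_mask)
    features.append(moving_edges)
cap.release()
```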


2015 ◽  
Vol 9 (3) ◽  
pp. 273-300 ◽  
Author(s):  
David Savat ◽  
Greg Thompson

One of the more dominant themes around the use of Deleuze and Guattari's work, including in this special issue, is a focus on the radical transformation that educational institutions are undergoing, and which applies to administrator, student and educator alike. This is a transformation that finds its expression through teaching analytics, transformative teaching, massive open online courses (MOOCs) and updateable performance metrics alike. These techniques and practices, as an expression of control society, constitute the new sorts of machines that frame and inhabit our educational institutions. As Deleuze and Guattari's work posits, on some level these are precisely the machines that many people in their day-to-day work as educators, students and administrators assemble and maintain, that is, desire. The meta-model of schizoanalysis is ideally placed to analyse this profound shift that is occurring in society, felt closely in the so-called knowledge sector where a brave new world of continuous education and motivation is instituting itself.

