Capturing data quality requirements for web applications by means of DQ_WebRE

Author(s):  
César Guerra-García ◽  
Ismael Caballero ◽  
Mario Piattini
2012 ◽  
Vol 15 (3) ◽  
pp. 433-445

2021 ◽  
Vol 25 (4) ◽  
pp. 763-787
Author(s):  
Alladoumbaye Ngueilbaye ◽  
Hongzhi Wang ◽  
Daouda Ahmat Mahamat ◽  
Ibrahim A. Elgendy ◽  
Sahalu B. Junaidu

Knowledge extraction, data mining, e-learning, and web application platforms use heterogeneous and distributed data. The proliferation of these multifaceted platforms faces many challenges, such as high scalability, the coexistence of complex similarity metrics, and the need for data quality evaluation. In this study, an extended, complete formal taxonomy and algorithms for detecting and correcting contextual data quality anomalies were developed and applied to structured data. Our methods detected and corrected more data anomalies than existing taxonomy techniques, and also highlighted the shortcomings of Support Vector Machines (SVM). These proposed techniques are therefore relevant to the detection and correction of errors in large contextual data sets (Big Data).
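The abstract does not spell out what a contextual anomaly looks like in practice. A minimal, hypothetical sketch (not the authors' taxonomy or algorithms): one common class of contextual anomaly is a value that violates an approximate functional dependency such as zip → city, and a simple correction replaces the minority value with the majority value observed for that context.

```python
from collections import Counter, defaultdict

# Hypothetical example data: the third row violates the zip -> city dependency.
rows = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "Boston"},   # contextual anomaly
]

# Detection: tally city values per zip context.
by_zip = defaultdict(Counter)
for r in rows:
    by_zip[r["zip"]][r["city"]] += 1

# Correction: replace minority values with the majority value for the context.
for r in rows:
    majority, _ = by_zip[r["zip"]].most_common(1)[0]
    if r["city"] != majority:
        r["city"] = majority

print([r["city"] for r in rows])  # ['New York', 'New York', 'New York']
```

Real systems scale this idea with approximate dependency discovery and confidence thresholds; this sketch only illustrates the detect-then-correct pattern on a single dependency.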


2021 ◽  
Vol 127 ◽  
pp. 103414
Author(s):  
N. Omri ◽  
Z. Al Masry ◽  
N. Mairot ◽  
S. Giampiccolo ◽  
N. Zerhouni

2020 ◽  
Vol 26 (1) ◽  
pp. 107-126
Author(s):  
Anastasija Nikiforova ◽  
Janis Bicevskis ◽  
Zane Bicevska ◽  
Ivo Oditis

The paper proposes a new data object-driven approach to data quality evaluation. It consists of three main components: (1) a data object, (2) data quality requirements, and (3) a data quality evaluation process. As data quality is relative in nature, the data object and quality requirements are (a) use-case dependent and (b) defined by the user in accordance with their needs. All three components of the presented data quality model are described using graphical Domain Specific Languages (DSLs). In accordance with Model-Driven Architecture (MDA), the data quality model is built in two steps: (1) creating a platform-independent model (PIM), and (2) converting the created PIM into a platform-specific model (PSM). The PIM comprises informal specifications of data quality. The PSM describes the implementation of the data quality model, thus making it executable, enabling data object scanning and the detection of data quality defects and anomalies. The proposed approach was applied to open data sets to analyse their quality. Three advantages were highlighted: (1) a graphical data quality model allows data quality to be defined by non-IT and non-data-quality experts, as the presented diagrams are easy to read, create, and modify; (2) the data quality model allows analysis of "third-party" data without deeper knowledge of how the data were accrued and processed; (3) the quality of the data can be described at two levels of abstraction: informally, using natural language, or formally, by including executable artefacts such as SQL statements.
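The platform-specific level described above makes quality requirements executable, e.g. as SQL statements scanned against a data object. A minimal sketch of that idea, using a hypothetical `person` table and made-up requirements (not the paper's DSL or its actual rules):

```python
import sqlite3

# Hypothetical data object: a small table with two seeded defects.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, email TEXT, age INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?, ?)",
                 [(1, "a@x.org", 30), (2, None, 25), (3, "b@x.org", -4)])

# Executable quality requirements (PSM level): each is a name plus a SQL
# statement that returns the ids of rows violating the requirement.
requirements = [
    ("email_completeness", "SELECT id FROM person WHERE email IS NULL"),
    ("age_validity",       "SELECT id FROM person WHERE age NOT BETWEEN 0 AND 130"),
]

# Scan the data object and collect defects per requirement.
defects = {name: [row[0] for row in conn.execute(sql)]
           for name, sql in requirements}
print(defects)  # {'email_completeness': [2], 'age_validity': [3]}
```

The informal PIM counterpart of `email_completeness` would simply read "every person must have an email address"; the SQL above is its executable refinement.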


2011 ◽  
Vol 58 (4) ◽  
pp. 327-336 ◽  
Author(s):  
Lucy R. Wyatt ◽  
J. Jim Green ◽  
A. Middleditch

2019 ◽  
Vol 214 ◽  
pp. 01030
Author(s):  
Juraj Smiesko

The integrated system for data quality and conditions assessment for the ATLAS Tile Calorimeter is known within the ATLAS Tile Calorimeter community as Tile-in-One. It is a platform that combines all of the ATLAS Tile Calorimeter offline data quality tools in one unified web interface. It achieves this by using a simple main web server as a central hub and a group of small web applications, called plugins, which provide the data quality assessment tools. Each plugin runs in its own virtual machine in order to prevent interference between plugins and to increase the stability of the platform.


Author(s):  
Roberto Sassano ◽  
Luis Olsina ◽  
Luisa Mich

The consistent modeling of quality requirements for Web sites and applications at different stages of the life cycle is still a challenge for most Web engineering researchers and practitioners. In this chapter, we propose an integrated approach to specifying quality requirements for Web sites and applications. By extending the ISO 9126-1 quality characteristics, we discuss how to model the internal quality, external quality, and quality-in-use views, taking into account not only software features but also the characteristics specific to Web applications. In particular, we thoroughly analyze the modeling of the content characteristic for evaluating the quality of information, which is critical for the Web application as a whole. The resulting model represents a first step towards a multi-dimensional integrated approach to evaluating Web sites at different life-cycle stages.


Author(s):  
Tihomir Orehovački

Quality is an essential determinant of the success of every type of software, and social Web applications are no exception. It is therefore of great importance that the examination of the degree to which social Web applications meet predefined requirements related to particular facets of quality is performed effectively and frequently. With the objective of facilitating the evaluation procedure and enabling comparison of social Web applications at all levels of the quality model, we initiated research into the development of a methodology that aggregates quality requirements into a single score. The work presented in this paper draws on the logic scoring of preference (LSP) method and outlines only some parts of the aforementioned methodology. After identifying the quality attributes that constitute the requirement tree, elementary criteria for both objective and subjective performance variables were introduced. As a follow-up, field experts were included in the study in order to determine the weights of the performance variables within each performance subsystem. Finally, appropriate logic aggregation operators were selected based on the relevance of the performance variables.
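At its core, LSP aggregation combines normalized elementary scores with a weighted power mean, where the exponent tunes the operator between conjunctive (and-like) and disjunctive (or-like) behavior. A minimal sketch of that aggregation step, with illustrative scores and weights that are not from the paper:

```python
import math

def lsp_aggregate(scores, weights, r):
    """Weighted power mean over elementary preference scores in [0, 1].

    r = 1 gives the neutral arithmetic mean; r < 1 behaves conjunctively
    (a single low score drags the result down); r > 1 behaves disjunctively.
    Weights must sum to 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    if abs(r) < 1e-12:
        # Limit case r -> 0 is the weighted geometric mean.
        return math.exp(sum(w * math.log(s) for w, s in zip(weights, scores)))
    return sum(w * s ** r for w, s in zip(weights, scores)) ** (1.0 / r)

# Two hypothetical elementary scores, equally weighted.
scores, weights = [0.8, 0.6], [0.5, 0.5]
neutral     = lsp_aggregate(scores, weights, 1.0)   # arithmetic mean: 0.7
conjunctive = lsp_aggregate(scores, weights, -1.0)  # harmonic mean, below 0.7
print(neutral, conjunctive)
```

In a full LSP requirement tree this aggregation is applied recursively, subsystem by subsystem, with the expert-elicited weights and operator choices described in the paper.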

