FairCs—Blockchain-Based Fair Crowdsensing Scheme using Trusted Execution Environment

Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3172
Author(s):  
Yihuai Liang ◽  
Yan Li ◽  
Byeong-Seok Shin

Crowdsensing applications provide platforms for sharing sensing data collected by mobile devices. A blockchain system has the potential to replace a traditional centralized trusted third party in crowdsensing services, performing operations such as evaluating the quality of sensing data, processing payments, and storing sensing data. Task requirements, codified as smart contracts, are executed on the blockchain to evaluate the quality of sensing data. However, one key challenge is that malicious requesters can deliberately publish abnormal requirements that cause the quality evaluation to fail even when the quality of the sensing data is actually sufficient. Moreover, if requesters control a miner node or full node, they can access the data without making payment because of the transparency of data stored in the blockchain. These issues promote unfair dealing and severely lower the motivation of workers to participate in crowdsensing tasks. We (i) propose a novel crowdsensing scheme that addresses these issues using Trusted Execution Environments; (ii) offer a solution for the confidentiality and integrity of sensing data, making it accessible only to the worker and the corresponding requester; and (iii) report on the implementation of a prototype and evaluate its performance. Our results demonstrate that the proposed solution can guarantee fairness without a significant increase in overhead.
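The fairness argument rests on the worker committing to the sensing data before evaluation, so a requester reading the chain sees only a digest until payment completes. A minimal hash-commitment sketch of that idea (the function names and flow are illustrative assumptions, not the paper's implementation; the TEE release step is elided):

```python
import hashlib
import os

def commit(data: bytes, nonce: bytes) -> str:
    """Worker publishes only this digest on-chain; the raw data stays off-chain."""
    return hashlib.sha256(nonce + data).hexdigest()

def reveal_ok(data: bytes, nonce: bytes, commitment: str) -> bool:
    """After payment, (data, nonce) are released; anyone can check the commitment."""
    return commit(data, nonce) == commitment

# Worker side: commit to a sensing sample
sample = b"temp=21.4C;lat=37.33;lon=126.58"
nonce = os.urandom(16)
onchain_digest = commit(sample, nonce)

# Requester side: verify the revealed data matches what was committed
assert reveal_ok(sample, nonce, onchain_digest)
assert not reveal_ok(b"tampered", nonce, onchain_digest)
```

In the paper, the release is mediated by a Trusted Execution Environment so that neither side can abort after seeing the other's contribution; here that mediator is omitted for brevity.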

2020 ◽  
Vol 26 (1) ◽  
pp. 107-126
Author(s):  
Anastasija Nikiforova ◽  
Janis Bicevskis ◽  
Zane Bicevska ◽  
Ivo Oditis

The paper proposes a new data object-driven approach to data quality evaluation. It consists of three main components: (1) a data object, (2) data quality requirements, and (3) a data quality evaluation process. As data quality is relative in nature, the data object and quality requirements are (a) use-case dependent and (b) defined by the user in accordance with their needs. All three components of the presented data quality model are described using graphical Domain Specific Languages (DSLs). In accordance with Model-Driven Architecture (MDA), the data quality model is built in two steps: (1) creating a platform-independent model (PIM), and (2) converting the created PIM into a platform-specific model (PSM). The PIM comprises informal specifications of data quality. The PSM describes the implementation of the data quality model, thus making it executable, enabling data object scanning and detection of data quality defects and anomalies. The proposed approach was applied to open data sets, analysing their quality. At least three advantages were highlighted: (1) a graphical data quality model allows the definition of data quality by non-IT and non-data-quality experts, as the presented diagrams are easy to read, create and modify; (2) the data quality model allows an analysis of "third-party" data without deeper knowledge of how the data were accrued and processed; (3) the quality of the data can be described at least at two levels of abstraction: informally using natural language, or formally by including executable artefacts such as SQL statements.
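An executable artefact at the PSM level can be as simple as an SQL statement that scans a data object for defects. A toy sketch of one such quality rule, run against an in-memory SQLite table (the table and rule are hypothetical examples, not taken from the paper's case study):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE company (reg_no TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO company VALUES (?, ?)",
    [("40003000001", "Alfa"), (None, "Beta"), ("40003000003", "Gamma")],
)

# PSM-level quality rule: every company record must have a registration number
defects = conn.execute(
    "SELECT COUNT(*) FROM company WHERE reg_no IS NULL OR reg_no = ''"
).fetchone()[0]
print(defects)  # → 1
```

The same rule could first be stated informally at the PIM level ("registration number is mandatory") and only then translated into the executable form above.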


Author(s):  
Ilze Kazaine

An increasing number of educational institutions use e-learning systems in the study process; consequently, more and more students are offered learning materials in electronic format. E-materials are one of the most important elements of distance learning and e-learning, so considerable attention and time should be devoted to their development. There are a number of studies on e-learning quality in which quality criteria are discussed in the context of the chosen e-learning environment and the implementation process. This article examines only the quality of the e-material itself. The aim was to find a way to reduce the effort and time required to evaluate the quality of electronic learning materials. The study used content analysis to summarize the most important factors influencing the quality of teaching materials. Based on the quality criteria mentioned in the literature and on personal experience, the criteria affecting the quality of e-learning material were summarized and grouped into four groups: didactic, media, usability, and formal quality. Quality evaluation is performed using a method common in software engineering: the checklist. Based on the identified quality criteria, a checklist was established, and a web-based tool is offered to facilitate the evaluation process. The tool includes the defined checklist with an assessment rating scale and three levels of impact; the quality of a material is reported as a percentage. After testing, the tool could be used by course developers, program managers, and other persons involved in the evaluation of e-learning resources.
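A checklist with impact levels reduces to a weighted score reported as a percentage. A minimal sketch of that computation (the items and weights are invented for illustration, not taken from the article's checklist):

```python
# (criterion, met?, impact level 1-3) — three levels of impact, as in the tool
checklist = [
    ("learning goals stated",  True,  3),
    ("media load acceptable",  True,  2),
    ("navigation consistent",  False, 2),
    ("references formatted",   True,  1),
]

earned = sum(w for _, met, w in checklist if met)   # weight of satisfied criteria
total = sum(w for _, _, w in checklist)             # maximum attainable weight
quality_pct = 100 * earned / total
print(round(quality_pct, 1))  # → 75.0
```

Grouping the criteria (didactic, media, usability, formal) would simply mean computing one such percentage per group.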


Author(s):  
Gintaute Zibeniene

The author analyzes the international nature of study programme evaluation with regard to the assurance of study quality. The organisation of the evaluation process for non-university study programmes developed and submitted for realisation in Lithuania and other countries is also presented and compared. The paper further analyses whether the quality of these programmes can be identified on the basis of quantitative and qualitative indicators.


Crowdsourcing ◽  
2019 ◽  
pp. 1173-1201
Author(s):  
Hongyu Zhang ◽  
Jacek Malczewski

A large amount of crowd-sourced geospatial data has been created in recent years due to the interactivity of Web 2.0 and the availability of the Global Positioning System (GPS). This geo-information is typically referred to as volunteered geographic information (VGI). OpenStreetMap (OSM) is a popular VGI platform that allows users to create or edit maps using GPS-enabled devices or aerial imagery. The quality of geo-information generated by OSM has become a trending research topic because of the large size of the dataset and the inapplicability of Linus' Law in a geospatial context. This chapter systematically reviews the quality evaluation process of OSM, and demonstrates a case study of London, Canada for the assessment of completeness, positional accuracy and attribute accuracy. The findings of the quality evaluation can potentially serve as a guide for cartographic product selection and provide a better understanding of the development of OSM quality over geographic space and time.
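Of the three quality measures, positional accuracy is typically computed as the ground distance between an OSM feature and its counterpart in a reference dataset. A haversine-distance sketch (the coordinates are made-up points near London, Ontario, not data from the chapter):

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Positional error between an OSM point and the reference dataset's point
err = haversine_m(42.9849, -81.2453, 42.9851, -81.2450)
```

Aggregating such per-feature errors (e.g. mean or RMSE) over a study area yields the positional-accuracy figure reported in evaluations of this kind.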



2013 ◽  
Vol 712-715 ◽  
pp. 2611-2614
Author(s):  
Xi Liang Wang ◽  
Xuan Qin ◽  
Dao Xin Liu ◽  
Zi Jian Wang ◽  
Jun Wang ◽  
...  

Demand for electric power data is growing ever wider, which places higher requirements on the quality of statistical data. Drawing on the features of electric power data, this paper evaluates data quality along seven dimensions, including accuracy, completeness, uniqueness, consistency, efficiency, and timeliness, and puts forward a specific evaluation method for each index. On this basis, a complete data quality evaluation process is built that quantitatively analyses the data in the database to establish its quality condition.
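Each such evaluation index reduces to a ratio over the records under test. Completeness and uniqueness, for instance, can be sketched as follows (the sample meter readings are invented for illustration):

```python
records = [
    {"meter": "M001", "kwh": 120.5},
    {"meter": "M002", "kwh": None},   # incomplete reading
    {"meter": "M001", "kwh": 120.5},  # duplicate row
]

# Completeness: share of records with no missing measurement
completeness = sum(r["kwh"] is not None for r in records) / len(records)

# Uniqueness: share of distinct records among all records
uniqueness = len({tuple(sorted(r.items())) for r in records}) / len(records)
```

The remaining indexes (consistency, timeliness, and so on) follow the same pattern: count the records satisfying the index's rule and divide by the total.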


2021 ◽  
Author(s):  
Natnatee Dokmai ◽  
Can Kockan ◽  
Kaiyuan Zhu ◽  
XiaoFeng Wang ◽  
S. Cenk Sahinalp ◽  
...  

Genotype imputation is an essential tool in genetics research, whereby missing genotypes are inferred based on a panel of reference genomes to enhance the power of downstream analyses. Recently, public imputation servers have been developed to allow researchers to leverage increasingly large-scale and diverse genetic data repositories for imputation. However, privacy concerns associated with uploading one’s genetic data to a third-party server greatly limit the utility of these services. In this paper, we introduce a practical, secure hardware-based solution for a privacy-preserving imputation service, which keeps the input genomes private from the service provider by processing the data only within a Trusted Execution Environment (TEE) offered by the Intel SGX technology. Our solution features SMac, an efficient, side-channel-resilient imputation algorithm designed for Intel SGX, which employs the hidden Markov model (HMM)-based imputation strategy also utilized by a state-of-the-art imputation software Minimac. SMac achieves imputation accuracies virtually identical to those of Minimac and provides protection against known attacks on SGX while maintaining scalability to large datasets. We additionally show the necessity of our strategies for mitigating side-channel risks by identifying vulnerabilities in existing imputation software and controlling their information exposure. Overall, our work provides a guideline for practical and secure implementation of genetic analysis tools in SGX, representing a step toward privacy-preserving analysis services that can facilitate data sharing and accelerate genetics research.

Availability: Our software is available at https://github.com/ndokmai/sgx-genotype-imputation.
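At the core of Minimac-style imputation is the HMM forward recursion over reference haplotypes. A toy forward pass conveys the shape of the computation (the states, probabilities, and variable names here are arbitrary illustrations; the real SMac additionally makes every operation data-oblivious to resist SGX side channels):

```python
def forward(obs, states, start, trans, emit):
    """Forward algorithm: alpha[t][s] = P(obs[..t], state at t == s)."""
    alpha = [{s: start[s] * emit[s][obs[0]] for s in states}]
    for o in obs[1:]:
        prev = alpha[-1]
        alpha.append(
            {s: emit[s][o] * sum(prev[p] * trans[p][s] for p in states)
             for s in states}
        )
    return alpha

states = ("h1", "h2")  # two reference haplotypes
start = {"h1": 0.5, "h2": 0.5}
trans = {"h1": {"h1": 0.9, "h2": 0.1}, "h2": {"h1": 0.1, "h2": 0.9}}
emit = {"h1": {0: 0.95, 1: 0.05}, "h2": {0: 0.05, 1: 0.95}}

# Observed alleles at three sites; the posterior at the last site would
# drive the imputed genotype in a full forward-backward implementation.
alpha = forward([0, 0, 1], states, start, trans, emit)
z = sum(alpha[-1].values())
posterior_h2 = alpha[-1]["h2"] / z
```

Note that this toy version branches on data-dependent values; an SGX-hardened variant such as SMac must avoid exactly this kind of data-dependent memory access.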

