Examining Compliance with Personal Data Protection Regulations in Interorganizational Data Analysis

2021 ◽  
Vol 13 (20) ◽  
pp. 11459
Author(s):  
Szu-Chuang Li ◽  
Yi-Wen Chen ◽  
Yennun Huang

The development of big data analysis technologies has changed how organizations work. Tech giants, such as Google and Facebook, are well positioned because they possess not only big data sets but also the in-house capability to analyze them. For small and medium-sized enterprises (SMEs), which have limited resources, capacity, and a relatively small collection of data, the ability to conduct data analysis collaboratively is key. Personal data protection regulations have become stricter due to incidents of private data being leaked, making it more difficult for SMEs to perform interorganizational data analysis. This problem can be resolved by anonymizing the data such that reidentifying an individual is no longer a concern or by deploying technical procedures that enable interorganizational data analysis without the exchange of actual data, such as data deidentification, data synthesis, and federated learning. Herein, we compared the technical options and their compliance with personal data protection regulations from several countries and regions. Using the EU’s GDPR (General Data Protection Regulation) as the main point of reference, technical studies, legislative studies, related regulations, and government-sponsored reports from various countries and regions were also reviewed. Alignment of the technical description with the government regulations and guidelines revealed that the solutions are compliant with the personal data protection regulations. Current regulations require “reasonable” privacy preservation efforts from data controllers; potential attackers are not assumed to be experts with knowledge of the target data set. This means that relevant requirements can be fulfilled without considerably sacrificing data utility. However, the potential existence of an extremely knowledgeable adversary when the stakes of data leakage are high still needs to be considered carefully.
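The deidentification option mentioned above can be illustrated with a minimal k-anonymity check, one common criterion for judging whether reidentification is still a practical concern. This is a sketch under stated assumptions: the data set, field names, and threshold are illustrative and do not come from the article.

```python
# Minimal sketch of a k-anonymity check over quasi-identifiers.
# The records, field names, and k value are illustrative assumptions.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    occurs at least k times in the data set."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"age_range": "30-39", "zip3": "104", "diagnosis": "A"},
    {"age_range": "30-39", "zip3": "104", "diagnosis": "B"},
    {"age_range": "40-49", "zip3": "104", "diagnosis": "A"},
]

# The (40-49, 104) group has only one record, so 2-anonymity fails.
print(is_k_anonymous(records, ["age_range", "zip3"], 2))  # False
```

In practice a data controller would generalize or suppress values (e.g., widen the age ranges) until the check passes, trading utility for anonymity, which is exactly the tension the abstract describes.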

2020 ◽  
Vol 12 (1) ◽  
pp. 225-245
Author(s):  
Célia Zolynski

Objective – The article contrasts the problem of Big Data with the possibilities and limits of personal data protection. It is an original contribution to the academic discussion about the regulation of the Internet and the management of algorithms, focusing on Big Data. Methodology/approach/design – The article provides bibliographic research on the opposition between Big Data and personal data protection, focusing on European Union law and French law. From the research it is possible to identify regulatory alternatives to Big Data, whether of a legal-administrative or a technological nature. Findings – The article shows that, in addition to the traditional regulatory options based on the law, there are technological options for regulating Big Data and algorithms. The article analyzes administrative enforcement, such as that of France's CNIL (Commission nationale de l'informatique et des libertés), to show that it has limits. Thus, the article concludes that there is a need to build a new type of regulation, one that is open to the inputs of regulated parties and civil society, in the form of new co-regulatory arrangements. Practical implications – The article has an obvious application, since the production of legal solutions for Internet regulation requires combining them with technological solutions. Brazil and several Latin American countries are experiencing this agenda, as they are building institutions and solutions to solve the dilemma of personal data protection. Originality/value – The article clarifies several parts of the General Data Protection Regulation (EU Regulation 2016/679) and its applicability to Big Data. These new types of data processing impose several legal and regulatory challenges, whose solutions cannot be trivial and will rely on new theories and practices.


2021 ◽  
Vol 6 (5) ◽  
pp. 203-212
Author(s):  
Atiqah Azman ◽  
Nur Shaura Azrin Binti Azman ◽  
Nurul Sahira Binti Kamal Azwan ◽  
Sherie Aneesa Binti Johary Al Bakry ◽  
Wan Nur Afiqah Binti Wan Daud ◽  
...  

Big Data has revolutionized online activities such as marketing and advertising based on individual preferences in the eCommerce industry. In Malaysia, the integration of Big Data into the commercial and business environment is keenly felt through the establishment of the National Big Data Analytics Framework, catalyzing further economic growth in all sectors. However, the distinct features of Big Data spawn issues relating to privacy, such as data profiling, lack of transparency regarding privacy policies, accidental disclosures of data, and false data or false analytics results. Hence, this research provides an insight into the intersection between Big Data and an individual's fundamental rights. The trade-off between breaching and preserving privacy is becoming more intense due to the rapid advancement of Big Data. Using comparative analysis as the data analysis approach, the adequacy of the Malaysian Personal Data Protection Act 2010 (PDPA 2010) in governing the risks of Big Data is evaluated against the European Union General Data Protection Regulation (GDPR). This research hopes to initiate improvements to the legislative framework, provide fundamentals for the formulation of national policy, and support the creation of a specific law on Big Data in Malaysia, which will subsequently benefit industry players and stakeholders.


Author(s):  
Artur Potiguara Carvalho ◽  
Fernanda Potiguara Carvalho ◽  
Edna Dias Canedo ◽  
Pedro Henrique Potiguara Carvalho

2021 ◽  
Vol 13 (3) ◽  
pp. 66
Author(s):  
Dimitra Georgiou ◽  
Costas Lambrinoudakis

The General Data Protection Regulation (GDPR) harmonizes personal data protection laws across the European Union, affecting all sectors, including the healthcare industry. For processing operations that pose a high risk to data subjects, a Data Protection Impact Assessment (DPIA) has been mandatory since May 2018. Taking into account the criticality of the process and the importance of its results for the protection of patients' health data, as well as the complexity involved and the lack of past experience in applying such methodologies in healthcare environments, this paper presents the main steps of a DPIA study and provides guidelines on how to carry them out effectively. To this end, the Privacy Impact Assessment methodology of the Commission Nationale de l'Informatique et des Libertés (PIA-CNIL) has been employed, which is also compliant with the privacy impact assessment tasks described in ISO/IEC 29134:2017. The work presented in this paper focuses on the first two steps of the DPIA methodology, and more specifically on the identification of the purposes of processing and of the data categories involved in each of them, as well as on the evaluation of the organization's GDPR compliance level and of the gaps (gap analysis) that must be filled in. The main contribution of this work is the identification of the main organizational and legal requirements that must be fulfilled by the healthcare organization. This research sets the legal grounds for data processing according to the GDPR and is highly relevant to any processing of personal data, as it helps to structure the process and to raise awareness of data protection issues and the relevant legislation.
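The two DPIA steps described in the abstract can be sketched as a small data structure plus a gap analysis. This is a hypothetical illustration only: the purposes, data categories, and control names are invented for the example, not taken from the paper or from the PIA-CNIL methodology.

```python
# Hypothetical sketch of the first two DPIA steps: (1) map each purpose
# of processing to the personal data categories it involves, and
# (2) a simple gap analysis of required vs. implemented controls.
# All names here are illustrative assumptions.
purposes = {
    "patient_treatment": {"health_records", "contact_details"},
    "appointment_scheduling": {"contact_details"},
}

required_controls = {"consent_recorded", "access_control", "retention_policy"}
implemented_controls = {"access_control"}

# Data categories in scope across all purposes of processing.
categories_in_scope = set().union(*purposes.values())

# Controls still missing; these are the "gaps" the DPIA must address.
gaps = required_controls - implemented_controls
print(sorted(gaps))  # ['consent_recorded', 'retention_policy']
```

The point of the sketch is the shape of the exercise, not the specifics: an organization enumerates purposes and data categories first, then compares its current state against GDPR-derived requirements.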


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Iwona Karasek-Wojciechowicz

Abstract: This article is an attempt to reconcile the requirements of the EU General Data Protection Regulation (GDPR) and the anti-money laundering and counter-terrorist financing (AML/CFT) instruments used in permissionless ecosystems based on distributed ledger technology (DLT). Usually, analysis is focused on only one of these regulations. Covering the interplay between both regulations reveals their incoherencies in relation to permissionless DLT. The GDPR requirements force permissionless blockchain communities to use anonymization or, at the very least, strong pseudonymization technologies to ensure that data processing complies with the GDPR. At the same time, instruments of global AML/CFT policy that are presently being implemented in many countries, following the recommendations of the Financial Action Task Force, counteract the anonymity-enhancing technologies built into blockchain protocols. The solutions suggested in this article aim to shape permissionless DLT-based networks in ways that both secure the protection of personal data according to the GDPR rules and address the money laundering and terrorist financing risks created by transactions in anonymous blockchain spaces or those using strong pseudonyms. Searching for new policy instruments is necessary to ensure that governments do not combat the development of all privacy-blockchains, so as to enable a high level of privacy protection and GDPR-compliant data processing. This article indicates two AML/CFT tools which may be helpful for shaping privacy-blockchains. The first tool is exceptional government access to transactional data written on non-transparent ledgers, obfuscated by advanced anonymization cryptography. This tool should be optional for networks as long as other effective AML/CFT measures are available to the intermediaries or to the government in relation to a given network. If these other measures are not available and the network does not grant exceptional access, the regulations should allow governments to combat the development of those networks. Effective tools in that scope should target the value of the privacy-cryptocurrency, not its users. Such tools could include, as a measure of last resort, state attacks which would undermine the community's trust in a specific network.
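The GDPR distinction between anonymization and strong pseudonymization that the abstract relies on can be illustrated with keyed hashing. This is a generic sketch, not the article's proposal: the key and identifier are invented, and the point is only that a keyed hash is pseudonymization rather than anonymization, because whoever holds the key can re-link the pseudonym to the person.

```python
# Sketch: keyed hashing (HMAC-SHA256) as strong pseudonymization.
# The key and identifier below are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"held-only-by-the-data-controller"  # hypothetical key

def pseudonymize(identifier: str) -> str:
    """Deterministically map an identifier to a pseudonym.
    Anyone holding SECRET_KEY can re-link pseudonyms to inputs,
    so under the GDPR this is pseudonymization, not anonymization."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

p1 = pseudonymize("alice@example.com")
p2 = pseudonymize("alice@example.com")
assert p1 == p2  # deterministic: the same input always yields the same pseudonym
```

Determinism is what makes pseudonyms linkable across transactions, which is precisely the property AML/CFT supervision relies on and the property that keeps the data "personal" in GDPR terms.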


2021 ◽  
Author(s):  
Mirna El Ghosh ◽  
Habib Abdulrab

The primary goal of the General Data Protection Regulation (GDPR) is to regulate the rights and duties of citizens and organizations regarding personal data protection. Implementing the GDPR has recently gained much importance for legal reasoning and compliance checking purposes. In this work, we aim to capture the basics of the GDPR in a well-founded legal domain modular ontology named OPPD (Ontology for the Protection of Personal Data). Ontology-Driven Conceptual Modeling (ODCM), ontology layering, modularization, and reuse processes are applied. These processes aim to support the ontology engineer in overcoming the complexity of the legal knowledge and in developing an ontology model faithful to reality. ODCM is used for grounding OPPD in the Unified Foundational Ontology (UFO). Ontology modularization and layering aim to simplify the ontology building process. Ontology reuse focuses on selecting and reusing Conceptual Ontology Patterns (COPs) from UFO and the legal core ontology UFO-L. OPPD intends to overcome the lack of representation of legal procedures that most ontologies exhibit. The potential use of OPPD to formalize the GDPR rules by combining ontological reasoning and Logic Programming is proposed.
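The Logic Programming style of rule formalization mentioned above can be sketched with a toy fact base and a single rule. The predicates and the rule are hypothetical illustrations of the approach, not OPPD's actual axioms or the paper's formalization.

```python
# Illustrative sketch of rule-based compliance checking in the spirit of
# Logic Programming: facts are ground atoms, and a rule derives a
# conclusion from them. Predicates and facts are invented for the example.
facts = {
    ("processes_personal_data", "org_a"),
    ("has_lawful_basis", "org_a"),
    ("processes_personal_data", "org_b"),
}

def compliant(org, facts):
    """Rule: an organization that processes personal data is compliant
    only if some lawful basis (cf. Art. 6 GDPR) is recorded for it."""
    if ("processes_personal_data", org) in facts:
        return ("has_lawful_basis", org) in facts
    return True  # the rule does not apply if no personal data is processed

print(compliant("org_a", facts))  # True: lawful basis recorded
print(compliant("org_b", facts))  # False: processing without a lawful basis
```

A real formalization would express such rules declaratively (e.g., in a Prolog/Datalog dialect) and let the reasoner chain many of them over the ontology's concepts; the sketch only shows the fact-plus-rule pattern.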


Hypertension ◽  
2021 ◽  
Vol 77 (4) ◽  
pp. 1029-1035
Author(s):  
Antonia Vlahou ◽  
Dara Hallinan ◽  
Rolf Apweiler ◽  
Angel Argiles ◽  
Joachim Beige ◽  
...  

The General Data Protection Regulation (GDPR) became binding law in the European Union Member States in 2018, as a step toward harmonizing personal data protection legislation in the European Union. The Regulation governs almost all types of personal data processing, hence also those pertaining to biomedical research. The purpose of this article is to highlight the main practical issues related to data and biological sample sharing that biomedical researchers face regularly, and to specify how these are addressed in the context of the GDPR, after consulting with ethics/legal experts. We identify areas in which clarifications of the GDPR are needed, particularly those related to consent requirements for study participants. Amendments should target the following: (1) restricting exceptions based on national laws and increasing harmonization, (2) confirming the concept of broad consent, and (3) defining a roadmap for secondary use of data. These changes will be achieved by acknowledged learned societies in the field taking the lead in preparing a document giving guidance for the optimal interpretation of the GDPR, which will be finalized following a period of commenting by a broad multistakeholder audience. In parallel, promoting engagement and education of the public in the relevant issues (such as different consent types or the residual risk of re-identification), at both the local/national and international levels, is considered critical for advancement. We hope that this article will open this broad discussion involving all major stakeholders, toward optimizing the GDPR and allowing a harmonized transnational research approach.


Author(s):  
Agnese Reine-Vītiņa

Nowadays, the right to privacy is indispensable in every democratic society, and the inclusion of such rights in the constitution legally guarantees a natural person's freedom of action and, simultaneously, the implementation of the other human rights established in the fundamental law of the state [5]. The institute of personal data protection was established by expanding the understanding of the content of the right to privacy in the 1970s, when the governments of several European countries initiated information processing projects, such as population censuses. With the development of information technology, more and more information on persons was kept and processed in electronic form. One of the legal problems was the gathering of information on natural persons while respecting the right to privacy. In order to ensure the protection of privacy, individual European countries, on their own initiative, adopted laws on data protection. The first laws on the protection of personal data in Europe were adopted in the Federal Republic of Germany, then in Sweden (1973), Norway (1978) and elsewhere [8, 10]. Not all countries adopted laws on data protection at the same time, so the Council of Europe decided to elaborate a convention to unify data protection rules and principles.


2020 ◽  
Vol 28 (4) ◽  
pp. 531-553 ◽  
Author(s):  
Aggeliki Tsohou ◽  
Emmanouil Magkos ◽  
Haralambos Mouratidis ◽  
George Chrysoloras ◽  
Luca Piras ◽  
...  

Purpose The General Data Protection Regulation (GDPR) entered into force in May 2018 to enhance personal data protection. Even though the GDPR brings many advantages for data subjects, it has turned out to be a significant challenge: organizations need to implement long and complex changes to become GDPR compliant, and data subjects are empowered with new rights which, however, they need to become aware of. GDPR compliance is a challenging matter for the relevant stakeholders and calls for a software platform that can support their needs. The aim of the Data Governance for Supporting GDPR (DEFeND) EU project is to deliver such a platform. The purpose of this paper is to describe the process, within the DEFeND EU project, for eliciting and analyzing requirements for such a complex platform. Design/methodology/approach The platform needs to satisfy legal and privacy requirements and provide the functionalities that data controllers request for supporting GDPR compliance. Further, it needs to satisfy acceptance requirements, to ensure that its users will embrace and use the platform. In this paper, the authors describe the methodology for eliciting and analyzing requirements for such a complex platform by analyzing data obtained from stakeholders from different sectors. Findings The findings present the process for eliciting the DEFeND platform requirements and an indicative sample of those requirements. The authors also describe the implementation of a secondary process for consolidating the elicited requirements into a consistent set of platform requirements. Practical implications The proposed software engineering methodology and data collection tools (i.e. questionnaires) are expected to have a significant impact on software engineers in academia and industry. Social implications It is reported repeatedly that data controllers face difficulties in complying with the GDPR. The study aims to offer mechanisms and tools that can assist organizations in complying with the GDPR, thus offering a significant boost toward the European personal data protection objectives. Originality/value To the best of the authors' knowledge, this is the first paper to provide software requirements for a GDPR compliance platform, including multiple perspectives.


2016 ◽  
Vol 26 (1) ◽  
pp. 85-93
Author(s):  
Ryuichi Yamamoto
