A Data Protection Framework for Learning Analytics

2016 ◽  
Vol 3 (1) ◽  
Author(s):  
Andrew Nicholas Cormack

Most studies on the use of digital student data adopt an ethical framework derived from human-studies research, based on the informed consent of the experimental subject. However, consent gives universities little guidance on the use of learning analytics as a routine part of educational provision: which purposes are legitimate and which analyses involve an unacceptable risk of harm. Obtaining consent when students join a course will not give them meaningful control over their personal data three or more years later. Relying on consent may exclude those most likely to benefit from early interventions. This paper proposes an alternative framework based on European Data Protection law. Separating the processes of analysis (pattern-finding) and intervention (pattern-matching) gives students and staff continuing protection from inadvertent harm during data analysis; students have a fully informed choice whether or not to accept individual interventions; organisations obtain clear guidance on how to conduct analysis, which analyses should not proceed, and when and how interventions should be offered. The framework provides formal support for practices that are already being adopted and helps with several open questions in learning analytics, including its application to small groups and alumni, automated processing and privacy-sensitive data.

2019 ◽  
pp. 248-262
Author(s):  
Lee A Bygrave

This chapter focuses on Articles 22 and 25 of the EU’s General Data Protection Regulation (Regulation 2016/679). It examines how these provisions will impact automated decisional systems. Article 22 gives a person a qualified right ‘not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’. Article 25 imposes a duty on controllers of personal data to implement technical and organizational measures so that the processing of the data will meet the Regulation’s requirements and otherwise ensure protection of the data subject’s rights. Both sets of rules are aimed squarely at subjecting automated decisional systems to greater accountability. The chapter argues that the rules suffer from significant weaknesses that are likely to hamper their ability to meet this aim.


2020 ◽  
pp. 155-186
Author(s):  
María Dolores Mas Badia

Despite the differences between credit risk and insurance risk, in many countries large insurance companies include credit history amongst the information taken into account when assigning consumers to risk pools, deciding whether or not to offer them an auto or homeowner insurance policy, or determining the premium they should pay. In this study, I try to establish some conclusions concerning the requirements and limits to which insurers' use of credit history data in the European Union should be subject. To do this, I focus primarily on Regulation (EU) 2016/679. This regulation, which has applied since 25 May 2018, not only forms the backbone of personal data protection in the EU, but is also set to become a model for regulation beyond the borders of the Union. The article concentrates on two main aspects: the lawful basis for the processing of credit history data by insurers, and the rules that should apply to decisions based solely on automated processing, including profiling.
Received: 30 December 2019 ◽ Accepted: 07 February 2020 ◽ Published online: 02 April 2020


2020 ◽  
Vol 6(161) ◽  
pp. 47-67
Author(s):  
Karol Grzybowski

By adapting the provisions of the Labour Code to EU regulations on personal data protection, the legislator has explicitly allowed employers to process the personal data of employees and applicants for employment on the basis of their consent. However, the new provisions exclude the processing of data on convictions on this basis and limit the possibility of giving effective consent to the processing of sensitive data. The article analyses the adopted solutions in the context of the constitutional guarantee of informational self-determination. The author defends the thesis that the provisions of Article 22(1a) § 1 and Article 22(1b) § 1 of the Labour Code disproportionately interfere with an individual's right to dispose of data concerning him or her, as they do not meet the criterion of necessity of the interference. The protective goal of the regulation, as established by the legislator, may be achieved by means of the legal instruments indicated in the article, which do not undermine the freedom aspect of informational self-determination.


2021 ◽  
Author(s):  
Ventsislav Karadjov

The concept of data protection by design and by default is a fundamental step towards understanding contemporary personal data protection. The principle of "data protection by design" was introduced to protect the rights of individuals in the automated processing of personal data, and it should be reflected in every contemporary embodiment of digitalisation, including artificial intelligence. Data protection by default is its continuation.


2020 ◽  
pp. 1-9
Author(s):  
Tataru Stefan Razvan ◽  
Irene Nica

Sports activities attract an impressive number of participants and manifest themselves in a multitude of forms, in leisure or performance sports, on and off the sports ground. Given that the sports industry processes a variety of athletes' personal data, including sensitive data such as information concerning health, we aim to analyse the impact of the General Data Protection Regulation on sports activities. In the first part of the study we analysed the presence of sport in daily life and the forms of organisation of sports structures. Subsequently, we focused in particular on the way in which athletes' personal data are processed, the rights athletes enjoy under the new European regulations, and the measures that data controllers should take to protect these data.


2021 ◽  
Vol 54 (1) ◽  
pp. 1-35
Author(s):  
Nikolaus Marsch ◽  
Timo Rademacher

German data protection laws all contain provisions that allow public authorities to process personal data whenever this is ‘necessary’ for the respective authority to fulfil its tasks or, in the case of sensitive data within the meaning of art. 9 GDPR, if this is ‘absolutely necessary’. In theory, data protection law thus provides for a high degree of administrative flexibility, e.g. to cope with unforeseen situations like the Coronavirus pandemic. However, these provisions, referred to in German doctrine as ‘Generalklauseln’ (general clauses or ‘catch-all’ provisions), are hardly used, as legal orthodoxy assumes that they are too vague to form a sufficiently clear legal basis for public-purpose processing under the strict terms of the German fundamental right to informational self-determination (arts. 2(1), 1(1) German Basic Law). As this orthodoxy appears to be supported by the case law of the German Constitutional Court, legislators have dutifully reacted by creating a plethora of sector-specific laws and provisions to enable data processing by public authorities. As a consequence, German administrative data protection law has become highly detailed and confusing, even for legal experts, thereby betraying the very purpose of legal clarity and foreseeability that scholars intended to foster by requiring ever more detailed legal bases. In our paper, we examine the reasons that underlie the German ‘ban’ on using the ‘Generalklauseln’. We conclude that these reasons do not justify the ban in general, but only in specific areas and/or processing situations such as security and criminal law. Finally, we list several arguments that speak in favour of a more ‘daring’ approach to using the ‘Generalklauseln’ for public-purpose data processing.


2018 ◽  
Vol 42 (3) ◽  
pp. 290-303 ◽  
Author(s):  
Montserrat Batet ◽  
David Sánchez

Purpose: To overcome the limitations of purely statistical approaches to data protection, this paper proposes Semantic Disclosure Control (SeDC): an inherently semantic privacy protection paradigm that, by relying on state-of-the-art semantic technologies, rethinks privacy and data protection in terms of the meaning of the data.
Design/methodology/approach: The need for data protection mechanisms able to manage data from a semantic perspective is discussed and the limitations of statistical approaches are highlighted. Then, SeDC is presented by detailing how it can be enforced to detect and protect sensitive data.
Findings: So far, data privacy has been tackled from a statistical perspective; that is, available solutions focus only on the distribution of the data values. This contrasts with the semantic way in which humans understand and manage (sensitive) data. As a result, current solutions present limitations both in preventing disclosure risks and in preserving the semantics (utility) of the protected data.
Practical implications: SeDC captures more general, realistic and intuitive notions of privacy and information disclosure than purely statistical methods. As a result, it is better suited to protecting heterogeneous and unstructured data, which are the most common in current data release scenarios. Moreover, SeDC preserves the semantics of the protected data better than statistical approaches, which is crucial when using protected data for research.
Social implications: Individuals are increasingly aware of the privacy threats that the uncontrolled collection and exploitation of their personal data may produce. In this respect, SeDC offers an intuitive notion of privacy protection that users can easily understand. It also naturally captures the (non-quantitative) privacy notions stated in current legislation on personal data protection.
Originality/value: In contrast to statistical approaches to data protection, SeDC assesses disclosure risks and enforces data protection from a semantic perspective. As a result, it offers more general, intuitive, robust and utility-preserving protection of data, regardless of their type and structure.
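As an illustrative sketch only (the taxonomy, function names and data below are invented for this listing, not taken from the SeDC implementation), a semantics-driven mechanism might generalize sensitive values to broader concepts in a knowledge hierarchy rather than perturbing their statistical distribution:

```python
# Toy semantics-driven generalization: each sensitive value is replaced
# by an ancestor concept in a small hand-built taxonomy, keeping the
# data meaningful (if coarser) while reducing disclosure risk.
TAXONOMY = {
    "HIV": "viral infection",
    "hepatitis B": "viral infection",
    "viral infection": "infectious disease",
    "tuberculosis": "bacterial infection",
    "bacterial infection": "infectious disease",
    "infectious disease": "disease",
}

def generalize(value: str, levels: int = 1) -> str:
    """Climb `levels` steps up the taxonomy from `value`."""
    for _ in range(levels):
        if value not in TAXONOMY:
            break  # already at the root concept
        value = TAXONOMY[value]
    return value

records = [{"id": 1, "diagnosis": "HIV"},
           {"id": 2, "diagnosis": "tuberculosis"}]
protected = [{**r, "diagnosis": generalize(r["diagnosis"], levels=2)}
             for r in records]
```

Where a purely statistical method might swap or perturb the diagnosis values, the semantic route keeps a truthful, coarser value, which is what preserves utility for later research.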


Author(s):  
Yue WANG

LANGUAGE NOTE | Document text in Chinese; abstract in English only.
At present, the development of AI depends on three core elements: high-quality data, accurate algorithms and sufficient computing power. New technologies represented by big data, cloud computing and AI are exerting a significant impact on traditional data protection. Individuals' control over their personal data is weakening, data protection is becoming more difficult, and traditional measures of privacy protection are at risk of failure. These are the most representative problems in the conflict between the development of new technology and privacy protection. A new legal and ethical framework that values humans' physical safety, health and dignity should be established and deeply integrated into the entire life cycle of the design, production and application of medical AI. On this premise, effort should be made to promote the development of medical AI for the benefit of mankind.


Author(s):  
Ammar Younas

The increasing ‘datafication of society’ and ubiquitous computing have resulted in high privacy risks such as commercial exploitation of personal data, discrimination, identity theft and profiling (automated processing of personal data). Minor data subjects in particular are more likely to be victims of unfair commercial practices due to their behavioral characteristics (emotional volatility and impulsiveness) and unawareness of the consequences of their virtual activities. Accordingly, it has been claimed that thousands of mobile apps used by children collected their data and used it to track their location, processed it to develop child profiles so as to tailor behavioral advertising targeted at them, and shared it with third parties without the children's or parents' knowledge. Following these concerns, the recently adopted EU General Data Protection Regulation (Regulation (EU) 2016/679) departed from the Data Protection Directive (DPD) in terms of children's data protection by explicitly recognizing that minors need more protection than adults and by providing specific provisions aimed at protecting children's right to data protection. Unlike the GDPR, the DPD was designed to provide “equal” protection for all data subjects irrespective of their age. This paper argues that the consent principle, along with the requirement of parental consent, cannot effectively be implemented for the protection of children's data due to the lack of actual choice, verification issues and the complexity of data processing; the effect of privacy notices in a child-appropriate form is also limited. However, other mechanisms and restrictions embodied in the GDPR provide opportunities for the protection of children's data by placing the burden on data controllers rather than data subjects.


Author(s):  
Marco Alessi ◽  
Alessio Camillò ◽  
Enza Giangreco ◽  
Marco Matera ◽  
Stefano Pino ◽  
...  

Sharing personal data with service providers is a fundamental resource for the times we live in. But data sharing raises unavoidable issues: improper data treatment, users' lack of awareness of whom they are sharing with, and wrong or excessive sharing by end users who do not realise they are exposing personal information. The problem becomes even more complicated if we consider the devices around us: how to share the devices we own so that we can receive pervasive services based on our contexts and device functionalities. The European Union has adopted the General Data Protection Regulation (GDPR) to implement the protection of sensitive data in each EU Member State, including through certification mechanisms (Art. 42 GDPR). Certification assures compliance with the regulation, a mandatory requirement for any service that may come into contact with sensitive data; still, certification remains an open process and is not constrained by strict rules. In this paper we describe our decentralized approach to sharing personal data in the era of smart devices, treating such data as sensitive as well. With the centrality of users in the ownership of their data in mind, we propose a decentralized Personal Data Store prototype, which stands as a unique data-sharing endpoint for third-party services. Even if blockchain technologies may seem fit to solve the issue of data protection, because of the absence of a central authority, they raise additional concerns, especially when relating such technologies to the specifications described in the regulation. The current work contributes to the advancement of personal data sharing management systems in a distributed environment by presenting a real prototype and an architectural blueprint that advances the state of the art in order to meet the GDPR.
Addressing these issues from a technological perspective is an important challenge in empowering end users to truly own their personal data.
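To make the idea of a single data-sharing endpoint concrete, here is a minimal, hypothetical sketch (class and field names are invented for illustration and do not reflect the prototype's actual API) of how a Personal Data Store might record per-service consent and answer sharing requests:

```python
# Hypothetical sketch: a Personal Data Store as the one place that
# records user consent per third-party service and decides, request
# by request, whether a data category may be shared.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    service: str        # third-party service identifier
    categories: tuple   # data categories covered, e.g. ("location",)
    purpose: str        # declared processing purpose
    granted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    revoked: bool = False

    def revoke(self):
        """Withdrawing consent should be as easy as giving it (Art. 7(3) GDPR)."""
        self.revoked = True

class PersonalDataStore:
    """Unique data-sharing endpoint: all sharing decisions pass through here."""
    def __init__(self):
        self.consents = []

    def grant(self, record: ConsentRecord):
        self.consents.append(record)

    def may_share(self, service: str, category: str) -> bool:
        # Share only under an active, matching consent record.
        return any(c.service == service
                   and category in c.categories
                   and not c.revoked
                   for c in self.consents)
```

The design choice illustrated here is that the burden of proof sits with the store, not the service: absent an explicit, unrevoked consent record, `may_share` answers no.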

