Administrative Law and the Governance of Automated Decision-Making: A Critical Look at Canada’s Directive on Automated Decision-Making

2020 ◽  
Author(s):  
Teresa Scassa


2021 ◽  
Vol 44 (3) ◽  
Author(s):  
Anna Huggins

Automation is transforming how government agencies make decisions. This article analyses three distinctive features of automated decision-making that are difficult to reconcile with key doctrines of administrative law developed for a human-centric decision-making context. First, the complex, multi-faceted decision-making requirements arising from statutory interpretation and administrative law principles raise questions about the feasibility of designing automated systems to cohere with these expectations. Secondly, whilst the courts have emphasised a human mental process as a criterion of a valid decision, many automated decisions are made with limited or no human input. Thirdly, the new types of bias associated with opaque automated decision-making are not easily accommodated by the bias rule, or other relevant grounds of judicial review. This article, therefore, argues that doctrinal and regulatory evolution are both needed to address these disconnections and maintain the accountability and contestability of administrative decisions in the digital age.


Legal Studies ◽  
2019 ◽  
Vol 39 (4) ◽  
pp. 636-655 ◽  
Author(s):  
Jennifer Cobbe

Abstract
The future is likely to see an increase in the public-sector use of automated decision-making systems which employ machine learning techniques. However, there is no clear understanding of how English administrative law will apply to this kind of decision-making. This paper seeks to address the problem by bringing together administrative law, data protection law, and a technical understanding of automated decision-making systems in order to identify some of the questions to ask and factors to consider when reviewing the use of these systems. Due to the relative novelty of automated decision-making in the public sector, this kind of study has not yet been undertaken elsewhere. As a result, this paper provides a starting point for judges, lawyers, and legal academics who wish to understand how to legally assess or review automated decision-making systems and identifies areas where further research is required.


Author(s):  
Michèle Finck

This chapter examines the uses of automated decision-making (ADM) systems in administrative settings. First, it introduces the current enthusiasm surrounding computational intelligence before providing a cursory overview of machine learning and deep learning. The chapter thereafter examines the potential of these forms of data analysis in administrative processes. In addition, this chapter underlines that, depending on how they are used, these tools risk adversely affecting established concepts of administrative law. This is illustrated through the example of the principle of transparency. To conclude, a number of guiding principles designed to ensure the sustainable use of these tools are outlined, and topics for further research are suggested.


2020 ◽  
Vol 11 (1) ◽  
pp. 18-50 ◽  
Author(s):  
Maja Brkan ◽  
Grégory BONNET

Understanding the causes of, and correlations behind, algorithmic decisions is currently one of the major challenges of computer science, addressed under the umbrella term “explainable AI” (XAI). Being able to explain an AI-based system may help to make algorithmic decisions more satisfying and acceptable, to better control and update AI-based systems in case of failure, to build more accurate models, and to discover new knowledge directly or indirectly. On the legal side, the question of whether the General Data Protection Regulation (GDPR) provides data subjects with a right to explanation in the case of automated decision-making has equally been the subject of heated doctrinal debate. While arguing that the right to explanation in the GDPR should result from an interpretative analysis of several GDPR provisions read jointly, the authors move this debate forward by discussing the technical and legal feasibility of explaining algorithmic decisions. Legal limits, in particular the secrecy of algorithms, as well as technical obstacles, could potentially obstruct the practical implementation of this right. By adopting an interdisciplinary approach, the authors explore not only whether it is possible to translate the EU legal requirements for an explanation into actual machine learning decision-making, but also whether those limitations can shape the way the legal right is used in practice.

