Toward Learning Trustworthily from Data Combining Privacy, Fairness, and Explainability: An Application to Face Recognition

Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 1047
Author(s):  
Danilo Franco ◽  
Luca Oneto ◽  
Nicolò Navarin ◽  
Davide Anguita

In many decision-making scenarios, ranging from recreational activities to healthcare and policing, the use of artificial intelligence coupled with the ability to learn from historical data is becoming ubiquitous. This widespread adoption of automated systems is accompanied by increasing concerns regarding their ethical implications. Fundamental rights, such as those requiring the preservation of privacy, prohibiting discrimination based on sensitive attributes (e.g., gender, ethnicity, political/sexual orientation), or requiring an explanation for a decision, are daily undermined by the use of increasingly complex and less understandable, yet more accurate, learning algorithms. To this end, in this work, we move toward the development of systems able to ensure trustworthiness by delivering privacy, fairness, and explainability by design. In particular, we show that it is possible to simultaneously learn from data while preserving the privacy of individuals thanks to the use of Homomorphic Encryption, ensuring fairness by learning a fair representation from the data, and ensuring explainable decisions with local and global explanations, all without compromising the accuracy of the final models. We test our approach on a widespread but still controversial application, namely face recognition, using the recent FairFace dataset to prove the validity of our approach.
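The Homomorphic Encryption ingredient mentioned in this abstract can be illustrated with a minimal textbook Paillier sketch in Python. This is not the authors' implementation: the primes are toy-sized for readability, and all function names are illustrative. The key property shown is additive homomorphism — an untrusted party can combine two ciphertexts so that they decrypt to the sum of the plaintexts, without ever seeing the plaintexts.

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def paillier_keygen(p=293, q=433):
    # Toy primes for illustration only; real deployments use >= 2048-bit moduli.
    n = p * q
    g = n + 1                      # standard simple choice of generator
    lam = lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)  # modular inverse (Python 3.8+)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)     # random blinding factor, coprime to n
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pub, priv = paillier_keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c_sum = (c1 * c2) % (pub[0] ** 2)
assert decrypt(pub, priv, c_sum) == 42
```

In a privacy-preserving learning pipeline, this property is what lets aggregate statistics or model updates be computed over encrypted individual records.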

Author(s):  
Chris Reed

Using artificial intelligence (AI) technology to replace human decision-making will inevitably create new risks whose consequences are unforeseeable. This naturally leads to calls for regulation, but I argue that it is too early to attempt a general system of AI regulation. Instead, we should work incrementally within the existing legal and regulatory schemes which allocate responsibility, and therefore liability, to persons. Where AI clearly creates risks which current law and regulation cannot deal with adequately, then new regulation will be needed. But in most cases, the current system can work effectively if the producers of AI technology can provide sufficient transparency in explaining how AI decisions are made. Transparency ex post can often be achieved through retrospective analysis of the technology's operations, and will be sufficient if the main goal is to compensate victims of incorrect decisions. Ex ante transparency is more challenging, and can limit the use of some AI technologies such as neural networks. It should only be demanded by regulation where the AI presents risks to fundamental rights, or where society needs reassuring that the technology can safely be used. Masterly inactivity in regulation is likely to achieve a better long-term solution than a rush to regulate in ignorance. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations'.


2021 ◽  
Vol 9 (1) ◽  
pp. 51-66
Author(s):  
Kristi Joamets ◽  
Archil Chochia
Digitalisation and emerging technologies affect our lives and are increasingly present in a growing number of fields. The ethical implications of the digitalisation process have therefore long been discussed by scholars. The rapid development of artificial intelligence (AI) has taken the legal and ethical discussion to another level. There is no doubt that AI can have a positive impact on society; the focus here, however, is on its more negative impact. This article specifically considers how law and ethics, in their interaction, can be applied in a situation where a disabled person needs some kind of assistive technology to participate in society as an equal member. It investigates whether the EU Guidelines for Trustworthy AI, as a milestone of ethics concerning technology, have the power to change the current practice of how social and economic rights are applied. The main focus of the article is the ethical requirement of ‘Human agency and oversight’ and, more specifically, fundamental rights.


2021 ◽  
Author(s):  
Joel Grunhut ◽  
Oge Marques ◽  
Adam TM Wyatt

Artificial intelligence (AI) is on course to become a mainstay in the patient's room, the physician's office, and the surgical suite. Current advancements in healthcare technology leave future physicians insufficiently equipped, and possibly even inferior to machines. Physicians will regularly be tasked with making clinical decisions with the assistance of AI-driven predictions. Present-day physicians are not trained to incorporate the suggestions of statistical predictions on a regular basis, nor are they knowledgeable in an ethical approach to incorporating AI into their delivery of care. Medical schools do not currently incorporate AI in the curriculum, owing to a lack of faculty expertise or knowledge on the matter, a lack of evidence of students' desire to learn about AI, complacency with an already rigorous curriculum, or a lack of guidance on AI in medical education from its governing bodies. Medical schools should incorporate AI into the curriculum as a longitudinal thread within current subjects. Students should gain an understanding of the breadth of AI tools, a framework for engineering and designing AI solutions to clinical problems, and knowledge of the data appropriate to AI innovations. Case studies in the curriculum should include an AI recommendation that may present critical decision-making challenges. Finally, the ethical implications of AI in medicine must be at the forefront of any comprehensive medical education.


2020 ◽  
Author(s):  
Christopher Welker ◽  
David France ◽  
Alice Henty ◽  
Thalia Wheatley

Advances in artificial intelligence (AI) enable the creation of videos in which a person appears to say or do things they did not. The impact of these so-called “deepfakes” hinges on their perceived realness. Here we tested different versions of deepfake faces for Welcome to Chechnya, a documentary that used face swaps to protect the privacy of Chechen torture survivors who were persecuted because of their sexual orientation. AI face swaps that replace an entire face with another were perceived as more human-like and less unsettling compared to partial face swaps that left the survivors’ original eyes unaltered. The full-face swap was deemed the least unsettling even in comparison to the original (unaltered) face. When rendered in full, AI face swaps can appear human and avoid aversive responses in the viewer associated with the uncanny valley.


2020 ◽  
Author(s):  
Avishek Choudhury

Objective: The potential benefits of artificial intelligence-based decision support systems (AI-DSS) are well documented from a theoretical perspective and perceived by researchers, but there is a lack of evidence showing their influence on routine clinical practice and how they are perceived by care providers, since the effectiveness of AI systems depends on data quality, implementation, and interpretation. The purpose of this literature review is to analyze the effectiveness of AI-DSS in clinical settings and understand their influence on clinicians' decision-making outcomes. Materials and Methods: This review protocol follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines. Literature will be identified using a multi-database search strategy developed in consultation with a librarian. The proposed screening process consists of a title and abstract scan, followed by a full-text review by two reviewers to determine the eligibility of articles. Studies outlining the application of AI-based decision support systems in a clinical setting, and their impact on clinicians' decision making, will be included. A tabular synthesis of the general study details will be provided, as well as a narrative synthesis of the extracted data, organised into themes. Studies solely reporting AI accuracy, but not implemented in a clinical setting to measure influence on clinical decision making, were excluded from further review. Results: We identified 8 eligible studies that implemented AI-DSS in a clinical setting to facilitate decisions concerning prostate cancer, post-traumatic stress disorder, cardiac ailments, back pain, and other conditions. Five (62.50%) of the 8 studies reported a positive outcome of AI-DSS. Conclusion: The systematic review indicated that AI-enabled decision support systems, when implemented in a clinical setting and used by clinicians, might not ensure enhanced decision making. Moreover, there are very few studies confirming the claim that AI-based decision support systems can improve clinicians' decision-making abilities.

