Integrative Biological Simulation, Neuropsychology, and AI Safety

2018 ◽  
Author(s):  
Gopal P. Sarma ◽  
Adam Safron ◽  
Nick J. Hay

We describe a biologically-inspired research agenda with parallel tracks aimed at AI and AI safety. The bottom-up component consists of building a sequence of biophysically realistic simulations of simple organisms such as the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster, and the zebrafish Danio rerio to serve as platforms for research into AI algorithms and system architectures. The top-down component consists of an approach to value alignment that grounds AI goal structures in neuropsychology, broadly considered. Our belief is that parallel pursuit of these tracks will inform the development of value-aligned AI systems that have been inspired by embodied organisms with sensorimotor integration. An important set of side benefits is that the research trajectories we describe here are grounded in long-standing intellectual traditions within existing research communities and funding structures. In addition, these research programs overlap with significant contemporary themes in the biological and psychological sciences such as data/model integration and reproducibility.



Philosophies ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. 6
Author(s):  
Nadisha-Marie Aliman ◽  
Leon Kester ◽  
Roman Yampolskiy

In recent years, artificial intelligence (AI) safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice using concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms. For simplicity, we refer to these two different paradigms with the terms artificial stupidity (AS) and eternal creativity (EC), respectively. While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.


Author(s):  
Wendell Wallach ◽  
Shannon Vallor

Implementing sensitivity to norms, laws, and human values in computational systems has transitioned from philosophical reflection to an actual engineering challenge. The “value alignment” approach to dealing with superintelligent AIs tends to employ computationally friendly concepts such as utility functions, system goals, agent preferences, and value optimizers, which, this chapter argues, do not have intrinsic ethical significance. This chapter considers what may be lost in the excision of intrinsically ethical concepts from the project of engineering moral machines. It argues that human-level AI and superintelligent systems can be assured to be safe and beneficial only if they embody something like virtue or moral character and that virtue embodiment is a more appropriate long-term goal for AI safety research than value alignment.


2020 ◽  
Vol 8 (4) ◽  
pp. 121-131
Author(s):  
Lukasz Burzawa ◽  
Linlin Li ◽  
Xu Wang ◽  
Adrian Buganza-Tepole ◽  
David M. Umulis

2020 ◽  
Vol 07 (01) ◽  
pp. 109-118
Author(s):  
Roman V. Yampolskiy

The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.


2003 ◽  
Vol 13 (2_suppl) ◽  
pp. 17-27 ◽  
Author(s):  
E. Ingham ◽  
J. Fisher ◽  
M.H. Stone

Polyethylene wear-debris-induced osteolysis is a major cause of failure in artificial hip joints. Sub-micrometre particles are taken up by macrophages, which are stimulated to release osteolytic cytokines such as TNFα. This leads to bone resorption, loosening, and failure. In vitro cell culture studies have shown particles in the size range 0.1 to 1 micrometre to be at least six times more reactive than larger particles. Studies of historically used polyethylene gamma-irradiated in air show increased wear rates with damaged femoral heads and with aged, oxidised polyethylene. The aged and oxidised polyethylene also produced a greater percentage of smaller particles, leading to increased osteolytic potential. Combined tribological and biological simulation models have been developed for pre-clinical assessment of the osteolytic potential of artificial hip joints.

