willful action — Recently Published Documents

Total documents: 3 (last five years: 0)
H-index: 1 (last five years: 0)

Author(s): Victor Shestak, Aleksander Volevodz, Vera Alizade

The authors examine the possibility of holding artificial intelligence (AI) criminally liable under current U.S. criminal legislation and review the opinions of Western lawyers who believe that such liability for a machine controlled by AI may become reality in the near future. They analyze the requirements for criminal liability as determined by American legislators: a willful unlawful act or omission (actus reus) and criminal intent (mens rea), i.e. the person knowingly commits a criminal act or acts negligently. They also analyze three basic models of AI's criminal liability.

In the first model, a crime is committed through the actions of another person: cases in which the subject of the crime lacks sufficient cognitive abilities to understand criminal intent, let alone be guided by it. This category includes minors, persons with limited legal capacity, and modern cybernetic systems, none of whom can be viewed as capable of cognition equal to human cognition. The latter are considered innocent of a criminal act because their actions are controlled by an algorithm or by a person exercising indirect program control.

In the second model, a crime is committed by an entity that is objectively guilty of it. A segment of the program code in an intelligent system allows some illegal act by default; for example, it includes a command to unconditionally destroy all objects the system recognizes as dangerous to the purpose the AI is working to fulfill. Under this model, the person who gives the unlawful command should be held liable. If such a "collaborator" is not hidden, criminal liability should be imposed on the person who gives the unlawful command to the system rather than on the performer, because the algorithmic system that determines the performer's actions is itself unlawful. Criminal liability in this case thus falls on the persons who write or use the program, provided they were aware of the unlawfulness of the orders guiding the performer's actions. Such crimes include acts that are criminal but cannot be prevented by the performer, the AI system.

In the third model, AI is directly liable for acts that combine a willful action with the machine's own unlawful intent. Such liability is possible if AI is recognized as a subject of criminal law, and also if it independently works out an algorithm for committing an act that leads to publicly dangerous consequences, or if such consequences result from the system's failure to act according to its initial algorithm, i.e. if its actions are willful and guilty.


Author(s): Alaina Lemon

Capitalist and socialist countries alike accused each other of brainwashing their citizens, creating cogs and robots instead of artists or free thinkers. These worries, again, have historical roots in transnational, imperial-era scientific, spiritual, and artistic conversations about the ways energy and matter create or hinder thought and willful action. We can trace them, for instance, through the ways Russian directors appropriated Western psychophysics and Eastern martial arts and yoga into theatrical training. Means of dividing and aligning energy and matter, as signs of contact and its failures, have proliferated across media for performance. On various stages, energy and matter are shaped to test for free movements of thought or feeling, the impulses that belie automation. At the same time, it is by attending to the social division of sensory fields, and to differences among the ways those divisions are themselves made visible or not, that we can see where efforts to signal contact lead to additional, unexpected effects.

