Primed Prediction: A Critical Examination of the Consequences of Exclusion of the Ontological Now in AI Protocol
Revisiting Norbert Wiener’s cybernetic prediction as the theoretical foundation of Artificial Intelligence, this chapter argues that we need to open the black box of what lies behind prediction and simulation. It examines the shortcomings of cybernetic prediction through the lens of Jean Baudrillard’s simulacra and simulation. Specifically, what prediction excludes – namely, an accounting for the ontological now – is precisely what Baudrillard warned against in his analysis of the role technological innovations play in untethering reality from the material plane, producing a crisis in which experience becomes simulacrum. From this perspective, any deep-learning system rooted in Wiener’s view of cybernetic feedback loops risks creating behaviour rather than predicting it. As this chapter argues, such prediction is a narrow, self-referential system of feedback that ultimately becomes a self-fulfilling prophecy, girded by the psycho-social effects of the very chaos it seeks to rationalise.