The scope of inductive risk

2022
Author(s):
P. D. Magnus


2021
Vol 54 (3)
pp. 1-18
Author(s):
Petr Spelda
Vit Stritecky

As our epistemic ambitions grow, common and scientific endeavours alike are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm: the available data are split into a training and a testing set, and the latter is used to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet the latter part of the contract depends on human inductive predictions, or generalisations, which infer a uniformity between the trained ML model and the targets. The article asks how we can justify this contract between human and machine learning. It is argued that the justification becomes a pressing issue when we use ML to reach “elsewhere” in space and time or deploy ML models in non-benign environments. The article argues that the only viable version of the contract is one based on optimality (rather than on reliability, which cannot be justified without circularity) and aligns this position with Schurz's optimality justification. It is shown that when dealing with inaccessible or unstable ground truths (“elsewhere” and non-benign targets), the optimality justification undergoes a slight change, which should prompt critical reflection on our epistemic ambitions. The study of ML robustness should therefore involve not only heuristics that lead to acceptable accuracies on testing sets; the justification of human inductive predictions or generalisations about the uniformity between ML models and their targets should be included as well. Without it, the assumptions about inductive risk minimisation in ML are not addressed in full.


Synthese
2014
Vol 192 (1)
pp. 79-96
Author(s):
Stephen John


Author(s):
Miguel Ohnesorge

This article develops a constructive criticism of methodological conventionalism. Methodological conventionalism asserts that standards of inductive risk ought to be justified by their ability to facilitate coordination within a research community. On that view, industry bias occurs when conventional methodological standards are violated to serve industry preferences. The underlying account of scientific conventionality, however, is insufficient for both theoretical and practical reasons. Conventions may be justified by their coordinative functions, but they often become open to empirical criticism as research advances. Accordingly, industry bias not only threatens existing conventions; it may also impede their empirically warranted improvement when they align with industry preferences. My empiricist account of standards of inductive risk avoids this problem by asserting that conventional justification can be pragmatically warranted but has, in principle, only provisional status. Methodological conventions, therefore, should not only be defended against preference-based infringements of their coordinative function but ought also to be subjected to empirical criticism.


2015
Vol 45 (3)
pp. 326-356
Author(s):
Ingo Brigandt

The ‘death of evidence’ issue in Canada raises the spectre of politicized science, and thus the question of what role social values may have in science and how this meshes with objectivity and evidence. I first criticize philosophical accounts that attempt to separate different steps of research so as to restrict the influence of social and other non-epistemic values. A prominent account on which social values may play a role even in the context of theory acceptance is the argument from inductive risk. It maintains that the more severe the social consequences of erroneously accepting a theory would be, the more evidence is needed before the theory may be accepted. An implication of this position, however, is that increasing evidence makes the impact of social values converge to zero; I argue instead for a stronger role for social values. On this position, social values (together with epistemic values and other empirical considerations) may determine a theory's conditions of adequacy, which can include considerations about what makes a scientific account unbiased and complete. I illustrate this with recent theories of human evolution and of the social behaviour of non-human primates, where some of the social values implicated are feminist values. While many philosophical accounts (both arguments from inductive risk and arguments from underdetermination) conceptualize the relevance of social values in terms of making inferences from evidence, I argue for the need for a broader philosophical framework, one also motivated by issues pertaining to scientific explanation.


Author(s):  
Justin B. Biddle
Rebecca Kukla

At each stage of inquiry, actions, choices, and judgments carry with them a chance that they will lead to mistakes and false conclusions. One of the most vigorously discussed kinds of epistemic risk is inductive risk, that is, the risk of inferring a false positive or a false negative from statistical evidence. This chapter develops a more fine-grained typology of epistemic risks and argues that many of the epistemic risks that have been classified as inductive risks are better seen as examples of a more expansive category, which the chapter dubs “phronetic risk.” This more fine-grained typology helps to show that values in science often operate not only at the level of individual psychologies but also at the level of knowledge-generating social institutions.


Author(s):  
Robin Andreasen
Heather Doty

The focus of this chapter is on the argument from inductive risk in the context of social science research on disparate impact in employment outcomes. It identifies three types of situations in the testing of scientific theories, not sufficiently emphasized in the inductive risk literature, that raise considerations of inductive risk: choice of significance test, choice of how to measure disparate impact, and the operationalization of scientific variables. It argues that non-epistemic values have a legitimate role in two of these situations but not in the third. It uses this observation to build on the discussion of when and under what conditions considerations of inductive risk help to justify a role for non-epistemic values in science.

