I have a couple of criticisms that feel simultaneously naive and unaddressed by these infinitarian arguments:
> since it is always possible to get evidence that infinite payoffs are available (God could always appear before you with various multi-colored buttons), non-zero-credences seem mandatory.
1) This kind of argument seems far too handwavey. What does this scenario mean concretely? A white beardy guy who can walk on water and tell a great story about that time he wrestled Jacob to a standstill teleports in front of you with some ominously labelled buttons? I cannot see any comprehensible version of this leading me to believe that any particular action of mine (or his) could generate infinite value. I.e. if extraordinary claims require extraordinary evidence, then infinitely extraordinary claims should require infinitely strong evidence.
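One rough way to make that intuition precise (my own sketch, not anything from the post): in odds form, Bayes' theorem says the posterior odds on an infinite-payoff hypothesis are the prior odds multiplied by the likelihood ratio of the observation. If the prior odds are vanishingly small, only a correspondingly enormous likelihood ratio can make the posterior non-negligible, and it's hard to see which concrete buttons-and-beard observation supplies that.

```latex
% Odds form of Bayes' theorem, for an infinite-payoff hypothesis H_inf
% and an observation E (e.g. the multi-coloured-buttons scenario):
\[
\underbrace{\frac{P(H_\infty \mid E)}{P(\lnot H_\infty \mid E)}}_{\text{posterior odds}}
  \;=\;
\underbrace{\frac{P(E \mid H_\infty)}{P(E \mid \lnot H_\infty)}}_{\text{likelihood ratio}}
  \times
\underbrace{\frac{P(H_\infty)}{P(\lnot H_\infty)}}_{\text{prior odds}}
\]
% If the prior odds are tiny, the posterior odds are non-negligible only if
% the likelihood ratio is correspondingly huge -- "infinitely extraordinary
% claims require (arbitrarily) extraordinary evidence".
```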
> Joe, I don’t like funky science or funky decision theory. And fair enough. But like a good Bayesian, you’ve got non-zero credence on them both (otherwise, you rule out ever getting evidence for them), and especially on the funky science one. And as I’ll discuss below, non-zero credence is enough.
2) ‘Decision theory’ doesn’t feel like a concept that parses as a parameter in Bayes’ theorem. That is, Bayes’ theorem seems like a statement about physical properties and how likely they are to obtain, whereas a decision theory is an algorithm that takes (the output of) Bayesian reasoning as a parameter. Obviously this leaves us with the question of which decision theory we follow and why, but to me this is best conceived not as a choice, and certainly not as something you can update on given data about physical properties, but as a process of clarifying what decision algorithm you’re already running and bugfixing its execution. Conceived this way, it doesn’t make sense to describe a decision theory as something you can have credences in.

We could perhaps develop some vaguely credence-like concept, since there are obviously still difficulties in determining what that algorithm is, but I don’t think we should assume that a concept which merely feels analogous will behave exactly like an input to a theorem from another conceptual domain.
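To illustrate the separation I have in mind, here is a toy sketch (names like `expected_utility_maximiser` are mine and purely illustrative): Bayesian updating outputs credences over world-states; a decision theory is then a function that consumes those credences, together with a utility assignment, and returns an action. On this picture the decision rule is part of the machinery doing the evaluating, not another hypothesis inside the credence distribution.

```python
from typing import Callable, Dict

# Credences over world-states: the output of Bayesian updating.
Credences = Dict[str, float]
# Utilities of each (action, world-state) pair.
Utilities = Dict[str, Dict[str, float]]
# A "decision theory", on this toy picture, is just a function from
# (credences, utilities) to an action -- not a proposition about the world
# that the credences themselves range over.
DecisionTheory = Callable[[Credences, Utilities], str]


def expected_utility_maximiser(credences: Credences, utilities: Utilities) -> str:
    """One possible decision algorithm: pick the action with the highest
    credence-weighted utility."""
    def ev(action: str) -> float:
        return sum(credences[w] * u for w, u in utilities[action].items())
    return max(utilities, key=ev)


# Toy usage: the credences are what you update on evidence; the algorithm
# that consumes them is what you clarify and bugfix.
credences = {"rain": 0.3, "no_rain": 0.7}
utilities = {
    "take_umbrella": {"rain": 5.0, "no_rain": -1.0},
    "leave_umbrella": {"rain": -10.0, "no_rain": 1.0},
}
print(expected_utility_maximiser(credences, utilities))  # -> "take_umbrella"
```

Swapping `expected_utility_maximiser` for some other rule changes which algorithm is being run, not the probability of any physical state of affairs.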
3) (very speculative)
It doesn’t feel obviously inconsistent to think there’s a chance we live in a universe containing infinite utilons and still concern ourselves only with finite value. We might coherently talk about total value in some contexts, but if I take a utilitarian algorithm to be something like ‘maximise the expected value caused by my action’, it doesn’t seem to matter if, beyond my light cone, infinite utility is being had.
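A rough way to formalise why the value outside my light cone can drop out (my own notation, nothing from the post): split total value into a part my action can causally affect and a background part it can’t; if the ranking of actions is defined over the affectable part alone, the background never enters the comparison, even if it is infinite.

```latex
% Split total value into a causally affectable part V_C(a) and a
% background part V_B that my action a cannot influence:
\[
V_{\text{total}}(a) \;=\; V_C(a) + V_B
\]
% Rank actions by the expected difference in the affectable part alone:
\[
D(a_1, a_2) \;=\; \mathbb{E}\left[ V_C(a_1) - V_C(a_2) \right]
\]
% V_B never appears in D, so it can be infinite without making the
% comparison of a_1 against a_2 ill-posed, provided E[V_C(a)] is finite.
```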
This gets messier if I assume a nonzero probability of us (e.g.) reversing entropy, and so of my action having arbitrarily many future consequences, but I can imagine this being solvable with a model of epistemic uncertainty in which my estimates of the value difference between actions asymptotically approach 0 as we look further into the future (i.e. with a more formal modelling of cluelessness).
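And a toy version of the asymptotic condition I’m gesturing at (again my own formalisation, under a strong decay assumption): if the expected value difference my choice makes at time t is bounded by a geometrically shrinking envelope, the total difference is a convergent series even over an unbounded future.

```latex
% Let d(t) be my best estimate of the difference in value at time t between
% taking action a_1 rather than a_2. Suppose cluelessness bounds it by a
% geometrically decaying envelope:
\[
|d(t)| \;\le\; C\, r^{t} \qquad (C > 0,\; 0 < r < 1)
\]
% Then the total expected difference between the actions converges:
\[
\Bigl|\sum_{t=0}^{\infty} d(t)\Bigr| \;\le\; \sum_{t=0}^{\infty} C\, r^{t} \;=\; \frac{C}{1-r} \;<\; \infty
\]
% So the two actions stay comparable even with arbitrarily long-lived
% consequences.
```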
I think this approach makes more sense if, per 2), you don’t think of a moral/decision theory as being something true or false, but as an understanding of an algorithm whose execution we bugfix.