Hi Elliott, just a few side comments from someone sympathetic to Vaden’s critique:
I largely agree with your take on time preference. One thing I’d like to emphasize is that the thought experiments used to justify a zero discount rate are typically conditional on knowing that future people will exist, and on knowing what the consequences of our actions will be. This is useful for sorting out our values, but less so when it comes to action, because we never have such guarantees. There’s a move often made where people say “in theory we should have a zero discount rate, so let’s focus on the future!”. But that conclusion ignores the fact that in practice we never have such unconditional knowledge of the future.
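For concreteness (my notation, not anything from your post or the paper): with expected utility $u_t$ realized at time $t$ and a rate of pure time preference $\rho$, the standard discounted sum is

$$V = \sum_{t=0}^{T} \frac{\mathbb{E}[u_t]}{(1+\rho)^{t}}, \qquad \rho = 0 \;\Longrightarrow\; V = \sum_{t=0}^{T} \mathbb{E}[u_t],$$

so a zero rate weights every generation equally. The thought experiments pump intuitions about the weights $1/(1+\rho)^t$ while quietly assuming the $\mathbb{E}[u_t]$ terms are well-defined, and that second assumption is exactly what’s at issue.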
Re: the dice example:
> First, your point about future expectations being undefined seems to prove too much. There are infinitely many ways of rolling a fair die (someone shouts ‘1!’ while the die is in the air, someone shouts ‘2!’, etc.). But there is clearly some sense in which I ought to assign a probability of 1/6 to the hypothesis that the die lands on 1.
True, there are infinitely many things that can happen while the die is in the air, but that isn’t the outcome space we’re concerned with. We’re concerned with the result of the roll, which is a finite space with six outcomes. So of course probabilities are defined in that case (and in the 6- vs 20-sided die case). Moreover, they’re defined by us, because we’ve judged that a particular mathematical technique applies well to the situation at hand. When reasoning about all possible futures, however, we’re trying to shoehorn in mathematics that is not appropriate to the problem (math is a tool: sometimes it’s useful, sometimes it’s not). We can’t even write out the outcome space in this scenario, let alone define a probability measure over it.
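To make the contrast concrete (my formalization, not anything from the post): for the die, the full probability triple takes one line to write down:

$$\Omega = \{1,2,3,4,5,6\}, \qquad \mathcal{F} = 2^{\Omega}, \qquad P(A) = \frac{|A|}{6} \;\text{ for } A \in \mathcal{F},$$

which gives $P(\{\text{die lands on }1\}) = 1/6$, exactly as it should. The challenge to the longtermist is to produce the analogous triple when $\Omega$ is meant to be the set of all possible futures; it’s the first component, never mind the measure, that nobody can exhibit.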
> So, to summarise the above, we have to assign probabilities to empirical hypotheses, on pain of getting Dutch-booked and accuracy-dominated. And all reasonable-seeming probability assignments imply that we should pursue longtermist interventions.
Once you buy into the idea that you must quantify all your beliefs with numbers, then yes: you have to start assigning probabilities to all eventualities, and they must obey certain equations. But you can drop that framework completely. Numbers are not primary; again, they are just a tool. I know this community is deeply steeped in Bayesian epistemology, so this is going to be an uphill battle, but assigning credences to beliefs is not the way to generate knowledge. (I recently wrote about this briefly here.) Anyway, the Bayesianism debate is a much longer one (and one I think the community needs to have), so I won’t go on about it any longer, but I do want to emphasize that Bayesianism is only one way to reason about the world (and it leads to many paradoxes and inconsistencies, as you all know).
Appreciate your engagement :)
Hi Owen! Really appreciate you engaging with this post. (In the interest of full disclosure, I should say that I’m the Ben acknowledged in the piece, and I’m in no way unbiased. Also, unrelatedly, your story of switching from pure maths to EA-related areas has had a big influence over my current trajectory, so thank you for that :) )
I’m confused about the claim
This seems in direct opposition to what the authors say (and what Vaden quoted above), namely that:
I understand that they may not feel this way, but it is what they argued for and is, consequently, the idea that deserves to be criticized. Next, you write that if
I don’t think so. The “immeasurability” of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). That set is immeasurable in the mathematical sense of lacking the structure needed to support a well-defined probability measure. Let me turn the question around on you: supposing we knew the time horizon of the universe to be finite, could you write out the sample space, $\sigma$-algebra, and measure that would let us compute over possible futures?
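To spell out what that would require (this is just the textbook definition, stated in my words): a probability space is a triple $(\Omega, \mathcal{F}, \mu)$, where $\Omega$ is the set of outcomes, $\mathcal{F} \subseteq 2^{\Omega}$ is a $\sigma$-algebra over it, and $\mu : \mathcal{F} \to [0,1]$ satisfies

$$\mu(\Omega) = 1 \qquad \text{and} \qquad \mu\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} \mu(A_i) \;\text{ for disjoint } A_i \in \mathcal{F}.$$

A finite time horizon doesn’t get you any of this. As I read Vaden’s diagonalization argument, any attempted enumeration of possible futures generates a future not in the enumeration, so even writing down $\Omega$ is off the table, let alone a non-arbitrary $\mu$. The length of the timeline was never the bottleneck.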
Finally, I’m not sure what to make of
Reading their paper, I honestly did not take it as a toy example, and I don’t believe the authors present it as one. When discussing Shivani’s options, they write:
and when discussing AI risk in particular:
Considering that the Open Philanthropy Project has poured millions into AI safety, that it’s listed as a top cause by 80K, and that EA’s far-future fund makes payouts to AI safety work, if Shivani’s reasoning isn’t to be taken seriously then now is probably a good time to make that abundantly clear. Apologies for the harsh tone here, but for an august institute like GPI to make normative suggestions in its research and then expect no one to act on them is irresponsible.
Anyway, I’m a huge fan of 95% of EA’s work, but really think it has gone down the wrong path with longtermism. Sorry for the sass—much love to all :)