or (1b), this is a big problem and I’m not sure what to do about it. The obvious alternative is to take claims about the far future essentially literally: if calculations suggest that things you do now affect 10^55 QALYs in the far future, then you accept that at face value. (Obviously these aren’t the only options; you can move in the direction of “take calculations more seriously” or “take calculations less seriously”, and there are different ways to do each.)
I think you would benefit a lot from separating out ‘can we make this change in the world, e.g. preventing an asteroid from hitting the Earth, answering this scientific question, convincing one person to be vegan’ from the size of the future. A big future (as big as the past, whose fossil record shows billions of years of life) doesn’t reach backwards in time to warp all of these ordinary empirical questions about life today.
It doesn’t even have much of an efficient market effect, because essentially no actors are allocating resources in a way that depends on whether the future is 1,000,000x or 10^50x as important as the past 100 years. Indeed, almost no one is allocating resources as though the next 100 million years are 10x as important as the past 100 years.
The things that come closest are generic Doomsday Arguments, and the Charity Doomsday Prior can be seen as falling into this class. The best cashed-out version relies on the simulation argument (a rough symbolic sketch follows the list below):
Let the size of the universe be X
Our apparent position seems to let us affect a big future, with value that grows with X
The larger X is, the greater the expected number of simulations of people in seemingly pivotal positions like ours
So X appears on both sides of the equation and cancels out, and the value of the future relative to the past comes down to the ratio of the size and density of value in simulations vs basement reality
You then get figures like 10^12 or 10^20, not 10^50 or infinity, for the value of local helping (multiplied by the number of simulations) vs the future
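To make the cancellation explicit, here is a rough symbolic sketch of the steps above; the symbols c, k, and v_sim are introduced purely for illustration and are not from this exchange (c is the future value reachable from a pivotal basement position per unit of universe, k the number of simulations of positions like ours per unit of universe, and v_sim the value of local helping within one simulation):

\[
V_{\text{future}} \approx c\,X, \qquad
\mathbb{E}[N_{\text{sim}}] \approx k\,X, \qquad
V_{\text{local}} \approx \mathbb{E}[N_{\text{sim}}]\cdot v_{\text{sim}} \approx k\,X\,v_{\text{sim}}
\]
\[
\frac{V_{\text{future}}}{V_{\text{local}}} \approx \frac{c\,X}{k\,X\,v_{\text{sim}}} = \frac{c}{k\,v_{\text{sim}}}
\]

X drops out, so the ratio depends only on the relative size and density of value in simulations versus basement reality, which is how one lands on figures like 10^12 or 10^20 rather than 10^50 or infinity.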
Your Charity Doomsday Prior might force you into a view something like that, and I think it bounds the differences in expected value between actions. Now this isn’t satisfying for you because you (unlike Holden in the thread where you got the idea for this prior) don’t want to assess charities by their relative effects, but by their absolute effects in QALY-like units. But it does mean that you don’t have to worry about 10^55x differences.
In any case, in your model the distortions get worse as you move further out from the prior, and much of that is being driven by the estimates of the size of the future (making the other problems worse). You could try reformulating with an empirically informed prior over success at political campaigns, scientific discovery, and whatnot, and then separately assess the value of the different goals according to your total utilitarianish perspective. Then have a separate valuation of the future.
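As a minimal sketch of that separation (not your model; the intervention names, probabilities, and values below are placeholders I’ve made up rather than figures from this exchange), something like the following keeps the empirical success prior, the value of the proximate goal, and the valuation of the future as separate factors:

# Sketch of separating "can we make this change?" from "how big is the future?".
# All names and numbers are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    p_success: float        # empirically informed prior probability of achieving the proximate change
    proximate_value: float  # value of the change itself, in QALY-like units
    future_share: float     # fraction of the separately estimated far-future value captured on success

FUTURE_VALUE = 1e12  # valuation of the future, kept as a single separate dial

def expected_value(iv: Intervention, future_value: float = FUTURE_VALUE) -> float:
    # Success odds, proximate value, and far-future value enter as separate factors,
    # so changing the future valuation does not warp the empirical success estimates.
    return iv.p_success * (iv.proximate_value + iv.future_share * future_value)

interventions = [
    Intervention("deflect an asteroid", p_success=1e-6, proximate_value=1e10, future_share=1e-3),
    Intervention("convince one person to go vegan", p_success=0.05, proximate_value=10.0, future_share=0.0),
]

for iv in interventions:
    print(f"{iv.name}: {expected_value(iv):.3g}")

The point of the structure is that p_success can be disciplined by base rates for campaigns, discoveries, and so on, while arguments about whether the future is worth 10^6x or 10^50x the past only move FUTURE_VALUE and never reach back into the empirical estimates.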
The way I calculate posteriors has problems, but I believe the direct-effect posteriors better reflect reality than the raw estimates (I have no idea about the far-future posteriors). For (2), I don’t think they’re as wrong as you think they are, but nonetheless I don’t really rely on these and I wouldn’t suggest that people do. I could get more into this but it doesn’t seem that important to me.
I’m not sure I fully understand what you are relying on. I think your model goes awry on estimating the relative long-run fruit of the things you consider, and in particular on the bottom-line conclusion/ranking. If I wanted to climb the disagreement hierarchy with you, and engage with and accept or reject your key point, how would you suggest I do it?
Hi Carl,

“You then get figures like 10^12 or 10^20, not 10^50 or infinity, for the value of local helping (multiplied by the number of simulations) vs the future”

Are the adjusted lower numbers based on calculations such as these?