How do you understand the claim about expected value? What is the expectation being taken over?
Over my probability distribution for the future. In my expected/average future, almost all lives/experiences/utility/etc. are in the long-term future. Moreover, the variance of any such quantity across possible futures is almost entirely due to differences in the long-term future.
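A toy formalisation of that claim (the split into components and the specific numbers are illustrative assumptions of mine, not figures from the discussion): write total value as a near-term component that is roughly fixed across futures plus an all-or-nothing long-term component.

```latex
% Toy decomposition of total value V into a near-term and a long-term component.
% The numbers are illustrative placeholders, not figures from the discussion.
\begin{align*}
V &= V_{\text{near}} + V_{\text{far}}, \\
V_{\text{near}} &\approx 10^{10} \quad \text{(roughly the same in every future)}, \\
V_{\text{far}} &=
  \begin{cases}
    0 & \text{with probability } p \ \text{(the long-term future is lost)}, \\
    N \approx 10^{30} & \text{with probability } 1 - p,
  \end{cases} \\[4pt]
\mathbb{E}[V] &= 10^{10} + (1 - p)N \;\approx\; (1 - p)N, \\
\operatorname{Var}(V) &\approx \operatorname{Var}(V_{\text{far}}) = p(1 - p)N^{2}.
\end{align*}
```

On any assignment like this, both the mean and essentially all of the variance of V across futures are driven by the long-term component.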
What are some examples of such proxies?
General instrumentally convergent goods like power, money, influence, skills, and knowledge
Success in projects that we choose for longtermist reasons but then pursue without constantly thinking about the effect on the long-term future. For me these include doing well in college and organizing an EA group; for those with directly valuable careers it would mostly be achieving their day-to-day career goals.
Why would we care about a hypothetical scenario where we’re omniscient? Shouldn’t we focus on the actual decision problem being faced?
Sure, for the sake of making decisions. For the sake of abstract propositions about “what matters most,” it’s not necessarily constrained by what we know.
> In my expected/average future, almost all lives/experiences/utility/etc are in the long-term future.
Okay, so you’re thinking about what an outside observer would expect to happen. (Another approach is to focus on a single action A, and think about how A affects the long-run future in expectation; the contrast is sketched below.)
But regardless, the first quote is just about value, not about what we ought to do.
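To state that parenthetical contrast explicitly (notation mine, as a sketch): the first framing evaluates total value under a forecast, the second evaluates how much a specific action shifts value relative to some baseline.

```latex
% Two quantities one might mean by "the expected value of the future":
\begin{align*}
\text{outside-observer framing:}\quad & \mathbb{E}_{P}[V]
  && \text{total value under a forecast } P \text{ over futures}, \\
\text{action framing:}\quad & \Delta(A) = \mathbb{E}[V \mid A] - \mathbb{E}[V \mid A_{0}]
  && \text{value shifted by action } A \text{ relative to a baseline } A_{0}.
\end{align*}
```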
Coming back to this, in my experience the quote is used to express what we should do; it’s saying we should focus on affecting the far future, because that’s where the value is. It’s not merely pointing out where the value is, with no reference to being actionable.
To give a contrived example: suppose there’s a civilization in a galaxy far away that’s immeasurably larger than our total potential future, and a single photon from us would give them ~infinite utility. But they’re receding from us faster than the speed of light, so no photon of ours can ever reach them and there’s nothing we can do for them. Here, essentially all of the expected value lies with this civilization, yet that has no bearing on how the EA community should allocate its budget.
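The example can be put formally (again, my notation): split value into a part our actions can reach and a part they cannot. However large the unreachable part’s expectation is, it is identical under every action available to us, so it cancels out of every comparison between actions.

```latex
% Split V = V_reachable + V_unreachable.
% If E[V_unreachable | A] takes the same value for every action A available to us
% (as with the receding civilization), then for any two available actions A, A':
\[
\mathbb{E}[V \mid A] - \mathbb{E}[V \mid A']
  \;=\; \mathbb{E}[V_{\text{reachable}} \mid A] - \mathbb{E}[V_{\text{reachable}} \mid A'].
\]
```

So where the expected value sits and where our decisions can make a difference can come apart, which is the point of the example.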
> For the sake of abstract propositions about “what matters most,” it’s not necessarily constrained by what we know.
I just don’t think MacAskill/Greaves/others intended this to be interpreted as a perfect-information scenario with no practical relevance.