I happen to disagree that possible interventions that greatly improve the expectation of the long-term future will soon all be taken. But regardless, the first quote is just about value, not about what we ought to do.
I think the second principle is basically true. Since the long-term future is 10^(big number) times bigger than the short-term future, our effects on the short-term future mostly matter insofar as they affect the long-term future, unless we have reason to believe that long-term effects somehow cancel out exactly. You’re right that humans are not psychologically capable of always following it directly, but we can pursue proxies and instrumental goals that we think improve the long-term future. (But also, this principle is about describing our actions, not telling us what to do, so what’s relevant isn’t our capability to estimate long-term effects, but rather what we would think about our actions if we were omniscient.)
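To spell out the arithmetic: write total value as $V = V_{\text{short}} + V_{\text{long}}$ (the split and the symbols are just my shorthand). For any two actions $A$ and $B$,

$$\mathbb{E}[V \mid A] - \mathbb{E}[V \mid B] \;=\; \underbrace{\big(\mathbb{E}[V_{\text{short}} \mid A] - \mathbb{E}[V_{\text{short}} \mid B]\big)}_{\text{bounded by the size of the short-term future}} \;+\; \big(\mathbb{E}[V_{\text{long}} \mid A] - \mathbb{E}[V_{\text{long}} \mid B]\big).$$

If the long-term future really is 10^(big number) times bigger, the first term is capped at something astronomically smaller than the values the second term can take, so comparisons between actions are driven by long-term effects unless those effects cancel out almost exactly.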
But regardless, the first quote is just about value, not about what we ought to do.
How do you understand the claim about expected value? What is the expectation being taken over?
You’re right that humans are not psychologically capable of always following it directly, but we can pursue proxies and instrumental goals that we think improve the long-term future.
What are some examples of such proxies?
this principle is about describing our actions, not telling us what to do, so what’s relevant isn’t our capability to estimate long-term effects, but rather what we would think about our actions if we were omniscient.
Why would we care about a hypothetical scenario where we’re omniscient? Shouldn’t we focus on the actual decision problem being faced?
How do you understand the claim about expected value? What is the expectation being taken over?
Over my probability distribution for the future. In my expected/average future, almost all lives/experiences/utility/etc. are in the long-term future. Moreover, the variance of any such quantity across possible futures is almost entirely due to differences in the long-term future.
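A toy numerical sketch of both claims (every number below is invented purely for illustration, not an estimate of anything):

```python
# Toy model: each sampled future gets a small, noisy short-term value plus a
# long-term value that is zero in half of the futures and enormous otherwise.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of sampled possible futures

v_short = rng.normal(loc=1.0, scale=0.3, size=n)                  # short-term value
flourishes = rng.random(n) < 0.5                                  # e.g. no existential catastrophe
v_long = np.where(flourishes, rng.normal(1e6, 1e5, size=n), 0.0)  # long-term value

v_total = v_short + v_long
print("long-term share of E[V]:  ", v_long.mean() / v_total.mean())
print("long-term share of Var(V):", v_long.var() / v_total.var())
# Both ratios come out essentially equal to 1: the long-term component
# dominates both the mean and the variance of total value.
```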
What are some examples of such proxies?
- General instrumentally convergent goods like power, money, influence, skills, and knowledge.
- Success in projects that we choose for longtermist reasons but then pursue without constantly thinking about the effect on the long-term future. For me, these include doing well in college and organizing an EA group; for those with directly valuable careers, it would mostly be achieving their day-to-day career goals.
Why would we care about a hypothetical scenario where we’re omniscient? Shouldn’t we focus on the actual decision problem being faced?
Sure, for the sake of making decisions. For the sake of abstract propositions about “what matters most,” the answer isn’t necessarily constrained by what we know.
In my expected/average future, almost all lives/experiences/utility/etc. are in the long-term future.
Okay, so you’re thinking about what an outside observer would expect to happen. (Another approach is to focus on a single action A, and think about how A affects the long-run future in expectation.)
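In symbols, with $V$ for total value across all time (the notation is mine, not from the quotes): the first framing evaluates $\mathbb{E}[V]$, the expectation of value itself under a credence distribution over futures, whereas the second evaluates the difference a particular action makes,

$$\mathbb{E}[V \mid \text{we take } A] \;-\; \mathbb{E}[V \mid \text{we take the default action instead}].$$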
But regardless, the first quote is just about value, not about what we ought to do.
Coming back to this, in my experience the quote is used to express what we should do; it’s saying we should focus on affecting the far future, because that’s where the value is. It’s not merely pointing out where the value is, with no reference to being actionable.
To give a contrived example: suppose there’s a civilization in a galaxy far away that’s immeasurably larger than our total potential future, and we could give them ~infinite utility just by sending them a single photon; but they’re receding from us faster than the speed of light, so there’s nothing we can actually do. Here, all of the expected value is in this civilization, yet that has no bearing on how the EA community should allocate its budget.
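In the same notation as above (again, the labels are mine): write $V = V_{\text{here}} + V_{\text{far}}$, where $V_{\text{far}}$ is the value located in the receding civilization. Every pair of actions $A$, $B$ actually available to us satisfies $\mathbb{E}[V_{\text{far}} \mid A] = \mathbb{E}[V_{\text{far}} \mid B]$, so

$$\mathbb{E}[V \mid A] - \mathbb{E}[V \mid B] \;=\; \mathbb{E}[V_{\text{here}} \mid A] - \mathbb{E}[V_{\text{here}} \mid B],$$

even though $V_{\text{far}}$ accounts for essentially all of $\mathbb{E}[V]$.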
For the sake of abstract propositions about “what matters most,” the answer isn’t necessarily constrained by what we know.
I just don’t think MacAskill/Greaves/others intended this to be interpreted as a perfect-information scenario with no practical relevance.
I happen to disagree that possible interventions that greatly improve the expectation of the long-term future will soon all be taken.
What do you think about MacAskill’s claim that “there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear”?
I mostly agree that obviously great stuff gets funding, but I think the “marginal stuff” is still orders of magnitude better in expectation than almost any neartermist intervention.
Do you disagree with FTX funding lead elimination instead of marginal x-risk interventions?

Not actively. I buy that doing a few projects with sharper focus and tighter feedback loops can be good for community health & epistemics. I would disagree if it took a significant fraction of funding away from interventions with a clearer path to doing an astronomical amount of good. (I almost added that it doesn’t really feel like lead elimination is competing with more longtermist interventions for FTX funding, but there probably is a tradeoff in reality.)
I was just about to make all three of these points (with the first bullet containing two), so thank you for saving me the time!