I get the weak impression that worldview diversification started (at least in part) as an approximation to expected-value maximization, and ended up being more of a peace pact between different cause areas. This peace pact disincentivizes comparisons between giving in different cause areas, which in turn lets their marginal values drift out of sync.
Do you think there’s an optimal ‘exchange rate’ between causes (e.g., present vs. future lives, animal vs. human lives), and that we should just do our best to approximate it?
Yes. To elaborate on this, I think that agents should converge on such an exchange rate as they become wiser and understand the world better.
Separately, I think that the exchange rates currently in use are inconsistent with each other, and I would already consider it a win to have a setup where they aren’t.
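As a toy illustration of that kind of inconsistency (the numbers and cause labels below are made up for the example, not taken from any actual funder), pairwise exchange rates are only mutually consistent if they multiply through transitively:

```python
# Hypothetical pairwise "exchange rates" between cause-area units.
# Consistency requires rate(A, C) == rate(A, B) * rate(B, C).
rates = {
    ("human_life", "animal_welfare_unit"): 100.0,   # 1 human life ~ 100 animal units
    ("animal_welfare_unit", "future_life"): 0.005,  # 1 animal unit ~ 0.005 future lives
    ("human_life", "future_life"): 2.0,             # stated directly, for comparison
}

implied = (rates[("human_life", "animal_welfare_unit")]
           * rates[("animal_welfare_unit", "future_life")])
stated = rates[("human_life", "future_life")]

print(f"Implied human -> future rate: {implied}")  # 0.5
print(f"Stated  human -> future rate: {stated}")   # 2.0
# The mismatch (0.5 vs. 2.0) is the sort of inconsistency meant above: by the
# funder's own rates, resources could be reshuffled across the three areas in a
# way those same rates say is an improvement.
```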
I wonder if we can back out what assumptions the ‘peace pact’ approach is making about these exchange rates. Its proponents are making allocations across cause areas, so they are implicitly using an exchange rate.
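One way to back that out, sketched below: if the funder behaves as though it is optimizing under diminishing returns within each cause, the observed allocation pins down an implied exchange rate at the margin. The returns curves, dollar figures, and scale parameters here are assumptions for illustration only, not a description of how any actual funder models this.

```python
# A minimal sketch, assuming logarithmic returns within each cause:
# value(spend) = scale * ln(spend), so marginal value per dollar = scale / spend.
# If the split (x_a, x_b) is optimal for some exchange rate r converting cause-B
# units into cause-A units, then at the margin u_a'(x_a) = r * u_b'(x_b),
# so the allocation itself reveals r = u_a'(x_a) / u_b'(x_b).

def marginal_value(spend, scale):
    """Marginal value per extra dollar under value(spend) = scale * ln(spend)."""
    return scale / spend

# Hypothetical allocation (dollars) and scale parameters for two cause areas.
x_global_health = 300e6   # value measured in (say) human-life units
x_animal_welfare = 100e6  # value measured in animal-welfare units
scale_health = 50_000
scale_animal = 2_000_000

implied_rate = (marginal_value(x_global_health, scale_health)
                / marginal_value(x_animal_welfare, scale_animal))
print(f"Implied rate: 1 animal-welfare unit ~ {implied_rate:.4f} human-life units")
```

The same exercise run over several of a funder's cause areas is one way to surface the inconsistencies mentioned above: if the implied rates from different pairs of allocations don't agree, the allocations can't all be optimal under any single set of exchange rates.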