Good point on the axes. I think we would, in practice, get fewer than 16 funds, for a couple of reasons.
It’s hard to see how some of the funds would differ in practice. For instance, is AI safety a moonshot or a safe bet if we’re thinking about the far future?
The life-saving vs life-improving point only seems relevant if you’ve already signed up to a person-affecting view. Talking about ‘saving lives’ of people in the far future is a bit strange (although you could distinguish between a far future fund that tried to reduce X-risk vs one that invested in ways to make future people happier, such as genetic engineering).