Part of what it means is that I try to support thinking on this issue, e.g. by seed-funding NYU MEP, having this discussion, and doing my own thinking on it.
At this stage the thing I’m most excited about supporting is market-based mechanisms for democratic AI alignment like this. I’m also excited about trying to get more resources to work on AI welfare and utilitarianism, and to groups like Forethought: A new AI macrostrategy group.
In practice I spend more resources on extinction risk reduction. Part of this is just because I’d really prefer not to die in my 30s. When an EA cares for their family, taking away time from extinction risk, they’re valuing their family as much as 10^N people. I see myself as doing something similar here.
When an EA cares for their family, taking away time from extinction risk, they’re valuing their family as much as 10^N people.
No. I’ve said this before elsewhere, and it’s not directly relevant to most of this discussion, but I think it’s very worth reinforcing: EA is not utilitarianism, and a commitment to EA does not imply any obligatory trade-off between your own or your family’s welfare and that commitment. If, as is the generally accepted standard, a “normal” EA commitment is 10% of your income and/or resources, it seems bad to suggest that such an EA should not ideally spend the other 90% of their time/effort on personal things like their family.
(Note that in addition to being a digression, this is a deontological rather than decision-theoretic point.)
When an EA cares for their family, taking away time from extinction risk, they’re valuing their family as much as 10^N people.
Not sure exactly what you mean here—do you mean attending to family matters (looking after family) taking away time from working on extinction risk reduction?
In practice I spend more resources on extinction risk reduction. Part of this is just because I’d really prefer not to die in my 30s.
Thanks for saying this. I feel likewise (but s/30s/40s :))
Not sure exactly what you mean here—do you mean attending to family matters (looking after family) taking away time from working on extinction risk reduction?
Yes. Which, at least on optimistic assumptions, means sacrificing lots of lives.
Fair point. But this applies to a lot of things in EA. We give what we can.