I’m shocked and somewhat concerned that your empirical finding is that so few people have encountered or thought about this crucial consideration.
My experience is different, with maybe 70% of AI x-risk researchers I’ve discussed with being somewhat au fait with the notion that we might not know the sign of future value conditional on survival. But I agree that it seems people (myself included) have a tendency to slide off this consideration or hope to defer its resolution to future generations, and my sample size is quite small (a few dozen maybe) and quite correlated.
For what it’s worth, I recall this question being explicitly posed in at least a few of the EA in-depth fellowship curricula I’ve consumed or commented on, though I don’t recall specifics and when I checked EA Cambridge’s most recent curriculum I couldn’t find it.
My anecdata is also that most people have thought about it somewhat, and “maybe it’s okay if everyone dies” is one of the more common initial responses I’ve heard to existential risk.
But I agree with OP that I more regularly hear “people are worried about negative outcomes just because they themselves are depressed” than “people assume positive outcomes just because they themselves are manic” (or some other cognitive bias).
This is helpful data. Two important axes of variation here are:
- Time, where this has fortunately become more frequently discussed in recent years
- Involvement, where I speak a lot with artificial intelligence and machine learning researchers who work on AI safety but not global priorities research; often their motivation was just reading something like Life 3.0. I think these people tend to have thought through crucial considerations less than, say, people on this forum.