Taking this on as a cause area and talking about it can easily backfire (and I would predict it will), i.e. increase the probability of said event occurring.
If Alice is not considering whether to do X (say, creating hell), then Bob actively trying to prevent Alice from doing X can increase the total risk of X. Bob both brings X to Alice's attention when it wasn't there before, and increases Alice's likelihood of endorsing X, due to ingroup-outgroup dynamics in which beliefs are often flipped or inverted.
To justify active opposition to X, you would need X to be not only possible but a likely or default outcome: something with a causal driver strong enough that activism does not itself become the main causal factor. Such considerations should be taken more seriously before adopting tail risks as an EA cause area.