It’d be hard to do without breaking a lot of good heuristics (e.g., don’t lie, don’t kill people).
You could defend the idea that extinction risk reduction is net negative or highly ambiguous in value, even just within EA and adjacent communities. Convincing people not to work on things that are net negative by your lights doesn’t seem to break good heuristics or norms.
(I suspect this explains like half of Émile Torres’ deal.)