I agree with this comment.

I also appreciate the reference to my/Convergence’s post, and agree with how you’ve applied it. But I just want to quickly note that that post doesn’t take a clear stand on:
how benevolent is “sufficiently benevolent”
how that differs for different intelligence improvements
precisely what “benevolence” involves (though we gesture at some likely components, and state that “we essentially mean how well an actor’s moral beliefs or values align with the goal of improving the expected value of the long-term future”)
Relevant paragraph from the post for the first two of those points:
Determining precisely what the relevant “threshold” level of benevolence would be is not a trivial matter, but we think even just recognising that such a threshold likely exists may be useful. The threshold would also depend on the precise type of intelligence improvement that would occur. For example, the same authoritarians or militaries may be “sufficiently” benevolent (e.g., just entirely self-interested, rather than actively sadistic) that improving their understanding of global priorities research is safe, even if improving their understanding of biotech is not.
I think some of this comes down to one’s more general views on differential progress, and on whether “speeding up development” in general is currently beneficial (see Crucial questions for longtermists).
I say this because I want to note that that post doesn’t rule out hypotheses such as “improving the critical thinking of (let’s say) 99.99% of schoolchildren is beneficial, and the slight harm from improving the critical thinking of the last 0.01% (perhaps those predisposed to unusually high levels of malevolent traits) is outweighed by those benefits.”
But I think the key relevance of that post here is that it suggests that:
improving benevolence may be more clearly or more strongly beneficial than improving intelligence
improving intelligence of especially benevolent actors (or improving intelligence and benevolence in tandem, which seems roughly equivalent) may be more clearly or more strongly beneficial than improving intelligence of just a random/general subset of people
(And therefore, long story short, I’d also be particularly excited about an intervention which increases things like empathy, moral circle expansion, inclination towards EA ideas, etc.)
Thanks for these clarifications, Michael.