Yes, it’s possible, though not definite, that I would prefer P4C (or another method of teaching philosophy). Critical thinking alone has no ethical dimension: someone with better critical thinking skills may be better able to grasp important ethical principles, but would they be interested in doing so? Maybe not.
Michael A’s post introducing the benevolence, intelligence, power (BIP) framework seems relevant here. It may be bad to increase the intelligence of actors who aren’t sufficiently benevolent in the first place. That’s why I am interested in the evidence that P4C can improve empathy.
I should note that I don’t necessarily see utilitarianism as the best ethical theory we will ever have, but I do think it’s probably the best one we currently have (although I understand Parfit has some interesting things to say on this in his 2011 book, On What Matters, which I haven’t read). More people studying philosophy increases the probability that we will one day make further ethical progress towards the ‘best’ or ‘true’ ethical theory, if in fact such a thing exists.
I agree with this comment.
I also appreciate the reference to my/Convergence’s post, and agree with how you’ve applied it. But I just want to quickly note that that post doesn’t take a clear stand on:
how benevolent “sufficiently benevolent” is
how that differs for different intelligence improvements
precisely what “benevolence” involves (though we gesture at some likely components, and state that “we essentially mean how well an actor’s moral beliefs or values align with the goal of improving the expected value of the long-term future”)
Here’s the relevant paragraph from the post on the first two of those points:
Determining precisely what the relevant “threshold” level of benevolence would be is not a trivial matter, but we think even just recognising that such a threshold likely exists may be useful. The threshold would also depend on the precise type of intelligence improvement that would occur. For example, the same authoritarians or militaries may be “sufficiently” benevolent (e.g., just entirely self-interested, rather than actively sadistic) that improving their understanding of global priorities research is safe, even if improving their understanding of biotech is not.
I think some of this comes down to one’s more general views on differential progress, and on whether “speeding up development” in general is currently beneficial (see Crucial questions for longtermists).
I say this because I want to note that that post doesn’t rule out hypotheses such as “improving the critical thinking of (let’s say) 99.99% of schoolchildren is beneficial, and the slight harm from improving the critical thinking of the remaining 0.01% (perhaps those predisposed to unusually high levels of malevolent traits) is outweighed by those benefits.”
But I think the key relevance of that post here is that it suggests that:
improving benevolence may be more clearly or more strongly beneficial than improving intelligence
improving intelligence of especially benevolent actors (or improving intelligence and benevolence in tandem, which seems roughly equivalent) may be more clearly or more strongly beneficial than improving intelligence of just a random/general subset of people
(And therefore, long story short, I’d also be particularly excited about an intervention which increases things like empathy, moral circle expansion, inclination towards EA ideas, etc.)
Thanks for these clarifications, Michael!