I think one way we could make the world far better in decades’ time is by making it the case that all major decision makers (politicians, business leaders etc) use ‘will this most improve wellbeing over the long run?’ as their main decision criterion.
A long quibbly tangent
I’d say there’s a >50% chance that this would indeed be good, and that it’s plausible it’d be very good. But it also seems to me plausible that this would be bad or very bad. This is for a few reasons:
You didn’t say what you meant by wellbeing. A decision maker might say “wellbeing” and mean only the wellbeing of humans, or of people in countries like theirs (e.g., predominantly English-speaking liberal democracies), or of people in their country, or of an in-group of theirs within their country (e.g., people with the same political leaning or race as them).
This could be because they explicitly believe that only those people are moral patients, or just because that’s who they implicitly focus on.
If the decision makers do have a narrow subset of all moral patients in mind when they think about increasing wellbeing, that would probably at least reduce the benefits of decision makers having that as their main criterion. It might also make that criterion net harmful, if it means people are consequentialist altruists for one group only, having stripped away the norms and deontological constraints that often help prevent certain bad behaviours.
Maybe this is just a nitpick, as you could just edit your statement to incorporate some sort of impartiality. But then you’d have to grapple with exactly how to do that—do we want the criteria decision makers use to come pre-loaded with our current best guesses about moral patienthood and weights? Or with some particular approach to handling moral uncertainty? Or with some general principles for thinking about how to handle moral uncertainty?
I have an intuition that just making people more consequentialist and more altruistic-in-some-sense, without also making them more rational, reflective, cautious, etc., has a decent chance of being harmful. I think the (overlapping) drivers of this intuition are:
The fact that doing this would move a seemingly important variable into somewhat uncharted territory, so we should start out pretty uncertain about what outcomes it would have, and thus predict a nontrivial chance of fairly bad outcomes
The various potential ways people have suggested naive consequentialism could cause harms (even from a consequentialist perspective)
There seeming to have been historical cases where people were mobilised to do bad things by consequentialist and altruistic-in-some-sense arguments (“for the greater good”)
A sort of Chesterton’s fence / Secrets of Our Success-style argument for thinking very carefully before substantially changing anything that currently seems like a major part of how the world runs (even if it seems at first glance like the consequences of the change would be good)
[The above statements of mine are pretty vague, and I can try to elaborate if that’d be useful.]
So I’d favour thinking more about precisely what sort of changes we want to make to future decision-makers’ values, reasoning, and criteria for decision-making, and doing so before we make any major pushes on those fronts.
And beyond that generic “more research needed” statement, I’d favour trying to package increases in consequentialism and generic altruism with more reflection on moral circles, more reflectiveness in general, various rationality skills and ideas, and probably some other things like that.
The following posts and their comment sections contain some relevant prior discussion:
Everyday Longtermism
Especially the section Safeguarding against naive utilitarianism, which presents a model/graph that I think is very interesting and helpful
Improving the future by influencing actors’ benevolence, intelligence, and power
...but I think all of this might be pretty much just a tangent. That’s because I think we could just change the sentence of yours that I quoted at the start of this comment to make it reflect a broader package of attributes we want to change in future leaders, and your other points would still stand. E.g., teaching at universities could try to inculcate not just consequentialism and generic altruism but also more reflection on moral circles, more reflectiveness in general, various rationality skills and ideas, etc.