Tyler Cowen has recently recommended that EA-ers should exercise more social conservatism in their private lives. No doubt Cowen is right.
Er, I doubt this! And “how should EAs conduct their private lives?” doesn’t really seem like my business, and is the sort of question that strikes me as easy to get wrong. So I’d want to believe in a pretty large effect size here, with a lot of confidence (and ideally some supporting argument!), before I started asserting this as obvious.
(Raising it as a question to think about is another matter.)
While I think, as many people do, that EA should spend more time just getting on with making the world better and less time considering science fiction scenarios and philosophical thought experiments
This is a weird sentence to my eyes. It reads to me like “Of course we all believe that EAs should spend more time just getting on with making the world better, and less time thinking about Hollywood movie scenarios like ‘pandemics’ or adding numbers together.”
Pandemics don’t parse to me as a silly Hollywood thing, and if you disagree, I’d much rather hear specifics rather than just an appeal to fictional evidence (“Hollywood says p, so p is low-probability”). And I could imagine that people are doing too much adding-numbers-together or running-thought-experiments if they enjoy it or whatever, but if you think the very idea of adding numbers together is silly (/ the very idea of imagining scenarios in order to think about them is silly), then I worry that we have very deep disagreements just below the surface. Again, specifics would help here.
Doctors don’t have to consider whether it would be better, all things considered, if their patients recover or if their organs instead were used for donations: in fact, they should not even ask themselves that question.
I think there are three different things going on here, which are important to distinguish:
1. Many doctors think of making the future go well overall as “not their responsibility”, such that even if there were a completely clear-cut way to save thousands of lives at no cost to anyone but the doctor, the doctor might still go “eh, not my job”.
2. Doctors correctly recognize that it’s not worth the time and effort to micro-evaluate so many ordinary decisions they make day-to-day. Expected utility maximization is about triaging your attention and cognition, as much as any other resource—it’s in large part about deciding what topics are most useful to think about.
3. Doctors correctly recognize that it’s just plain wrong to deceive people and let them die after you said you’d help them, for the sake of helping somebody else. (And they recognize that openly telling patients “we’re going to let you die at random if we think your organs will do more good elsewhere” would cause more harm than good, in real life, via drastically reduced trust in doctors; and they recognize that the general policy of routinely lying to and manipulating people in such a drastic way won’t work well either. So ordinary evaluation of consequences is a big part of the story here, even if it’s not the full story.)
Points 2 and 3 seem good to me, but mostly seem to fit fine into garden-variety consequentialism, or maybe consequentialism subject to a few deontology-like prohibitions on specific extremely-unethical actions.
Point 1 seems more relevant to your argument, and appears to me to come down to some combination of:
- People just aren’t maximally altruistic.
- In many cases people are altruistic, but unendorsed laziness, akrasia, and blindly-following-local-hedonic-gradients cause them to lose touch with this value and pursue it less than they’d reflectively want to. (“Someone is just optimizing for what looks Normal or Professional or Respectable, and trying to save the world is not the optimal way to achieve those social goals” is usually a special case of this: the person may not deeply endorse living their life that way, but it’s not salient to them moment-to-moment that they’re not following their values.)
Professional specialization is useful; not every human should try to be a leading expert in moral philosophy or cause prioritization. But that doesn’t mean that doctors don’t have a responsibility to make decisions well if they get themselves into a weird situation where they face a dilemma similar to what heads of state often face. It just means that it doesn’t come up much for the typical doctor in real life.
Bernard Williams famously suggested that the man who is faced with the choice of saving his drowning wife or a stranger is not only justified in saving his wife, but should do so with no thought more sophisticated than “that’s my wife!” because thinking, for example, “that’s my wife and in such situations it’s permissible to save one’s wife” would be one thought too many.
I think the “save my wife” instinct is a really good part of human nature. And since time is of the essence in this hypothetical, I agree that in the moment, for pragmatic reasons, it’s important not to spend too much time deliberating before acting; so EAs will tend to perform better if they trust their guts enough to act immediately in this sort of situation.
From my perspective, this is an obviously good reason to stay in the habit of trusting your gut on various things, as opposed to falling into the straw-rationalist error of needing to carefully deliberate about everything you do. (There are many other reasons it’s crucial to be in touch with your gut.)
That said, I don’t think it’s best for most people to go through life without ever reflecting on their values, asking “Why?”, questioning their feelings and society’s expectations and taboos, etc. And if the reason to be unreflective is just to look less weird, then that seems outright dishonorable and gross to me.
(And I think it would look that way to many others. Following the correct moral code is a huge, huge deal! Many lives are potentially at stake! Choosing not to think about the pros and cons of different moral codes at any point in your life because you want to seem normal is really bad.)
I think the right answer to reach here is to entertain the possibility “Maybe I should value my friends and family the same as strangers?”, think it through, and then reach the conclusion “Nope, valuing my friends and family more than strangers does make more sense, I’ll go ahead and keep doing that”.
Being a true friend is honorable, noble, and good. Unflinchingly, unhesitatingly standing by the ones you love in times of crisis is a thing we should praise.
Going through your entire life refusing to think “one thought too many” and intellectually question whether you’re doing the right thing in all this, not so much. That strikes me either as laziness triumphing over the hard work of being a good person, or as someone deciding they care more about signaling their virtues than about actual outcomes.
(Including good outcomes for your loved ones! Going through life without thinking about hard philosophy questions is not the ideal way to protect the people you care about!)
I think intuitions like Bernard Williams’ “one thought too many” are best understood through the lens of Robin Hanson’s 80,000 Hours interview. Hanson interprets “one thought too many” type reasoning as an attempt to signal that you’re a truer friend, because you’re too revolted at the idea of betraying your friends to apply dispassionate philosophical analysis to the topic at all.
You choose to live your life steering entirely by your initial gut reactions in this regard (and you express disgust at others who don’t do the same), because your brain evolved to use emotions like that as a signal that you’re a trustworthy friend.
(And, indeed, you may be partly employing this strategy deliberately, based on consciously recognizing how others might respond if you seemed too analytical and dispassionate.)
The problem is that in modern life, unlike our environment of evolutionary adaptedness, your decisions can potentially matter a lot more for others, and making the right decision requires a lot more weird and novel chains of reasoning. If you choose to entirely coast on your initial emotional impulse, then you’ll tend to make worse decisions in many ways.
In that light, I think the right response to “one thought too many” worries is just to stop stigmatizing thinking. Signaling friendship and loyalty is a good thing, but we shouldn’t avoid doing a modicum of reflection, science, or philosophical inquiry for the sake of this kind of signaling. Conservatives should recognize that there are times when we should lean into our evolved instincts, and times when we should overrule them; and careful reflection, reasoning, debate, and scholarship is the best way to distinguish those two cases.
Some specific individuals may be bad enough at reflection and reasoning that they’ll foreseeably get worse outcomes if they try to do it, versus just trusting their initial gut reaction. In those cases, sure, don’t try to go test whether your gut is right or wrong.
But I think the vast majority of EAs have more capacity to reflect and reason than that.
I would suggest that the difference derives from the same underlying sense of responsibility. The doctor, parent or lawyer can, quite properly, say “I am only responsible for outcomes of a certain kind, or for the welfare of certain people. The fact that my actions might cause worse outcomes on other metrics or harm to other people is simply not my responsibility.” But the government of a country cannot say that.
I could buy that EA should think more about role-based responsibilities on the current margin; maybe it would help people with burnout and coordination if they thought more about “what’s my job?”, about honest exchanges and the work they’re being paid to do, etc.
Your argument seems to require that role-based thinking play a more fundamental role than it actually does, though. I think “our moral responsibility to strangers is about roles and responsibility” falls apart for three reasons:
1. People correctly intuit that “I was just doing my job” is no excuse for the atrocities committed by rank-and-file Nazis. This seems like a super clear case where most people’s moral intuitions are more humanistic, universal, and “heroic”, rather than being about social roles.
2. The main difference between the Nazi case and ordinary cases seems to be one of scale. But EAs face tough choices on a scale far larger than WW2. Saying “it’s not my job” seems like a very weak excuse here, if you personally spot an opportunity to do enormous amounts of good.
3. There’s no ground truth about what someone’s “role” is, except something like social consensus. But social consensus can be wrong: if society has tasked you with torturing people to death every day, you shouldn’t do it, even if that’s your “role”.
My own approach to all of this is:
- Utilitarianism is false as a description of most humans’ values—we aren’t literally indifferent to improving a spouse’s welfare vs. a stranger’s welfare. We aren’t perfectly altruistic, either.
- Nor, honestly, should we want to be. I like the aspect of human nature where we aren’t completely self-sacrificing, where we take a special interest in our own welfare.
- But the ways that utilitarianism is false tend to be about mundane individual-scale things that we evolved to care about preferentially.
- And these ordinary individual-scale things don’t tend to have much relevance to large-scale decisions. Heads of state are basically never making a decision where their family’s survival directly depends on them disregarding the welfare of the majority of humanity.
- Our idiosyncratic personal values tend to become especially irrelevant to large-scale decisions once we take into account the consequentialist benefits of “being the sorts of people who keep promises and do the job they said they’d do”. Societies work better when people stick to their oaths. If your oath of office involves setting aside your idiosyncratic personal preferences in a circumscribed set of decisions, then you should do that because of the consequentialist value of oath-keeping.
The above points are sufficient to explain the data. I don’t have to be a head of state in order to want societal outcomes to be good for everyone (and not just for my family). People aren’t perfectly altruistic and impartial, but we do care a great deal about strangers’ welfare, which is why this is a key component of morality. And it’s a key component regardless of the role you’re playing, though in practice some jobs involve making much more consequential decisions than other jobs do.
In Crazy Train, McLaughlin quotes a suggestion from Tyler Cowen that utilitarianism is only good as a “mid-scale” theory, i.e. the small country scale I have described, the scale between, on the one hand, the small, personal level of doctors, lawyers and ice cream vans and, on the other, the mega-scale beloved of these kinds of theoretical discussions, consisting of trillions of future humans spread across the galaxy.
I don’t see any reason to think that. If it’s a prediction of the “roles” theory, it seems to be a totally arbitrary one: society happened to decide not to hire anyone for the “save the world” or “make the long-term future go well” jobs, so nobody is on the hook. I don’t think my moral intuitions should depend crucially on whether society forgot to assign an important role to anyone!
If the fire department doesn’t exist, and I see a house on fire, I should go grab buckets, not shrug my shoulders.
The alternative theory I sketched above seems a lot simpler to me, and predicts that we’d care about galaxy-level outcomes for the same reason we care about planet- or country-level ones. People love their family, but the state of policymakers’ family members doesn’t tend to matter much for setting social policy; and smart consequentialists ought to be trustworthy and promise-keeping. These facts make sense of the same moral intuitions as the ice cream, doctor, and policymaker examples, without scrabbling for some weird reason to carve out an exception at really large scales.
(We should probably have more moral uncertainty when we get to really extreme cases, like infinite ethics. But you aren’t advocating for more moral uncertainty, AFAICT; you’re advocating that we concentrate our probability mass on a specific dubious moral theory centered on social roles.)
Maybe I’d find your argument more compelling if I saw any examples of how cluelessness or ‘crazy train’ actually bears on a real-world decision I have to make about x-risk today?
From my perspective, cluelessness and ‘crazy train’ don’t matter for any of my actual EA tactics or strategy, so it’s hard for me to get that worked up about them. Whereas ‘stop caring about large-scale things unless society has told me it’s my Job to do so’ would have large effects on my behavior, and (to my eye) in directions that seem worse on reflection. I’d be throwing out the baby, and there isn’t even any bathwater I thereby get rid of, as far as I can tell.
Why, the conservative asks, should we believe that EA practitioners will achieve outcomes so much better than those of the myriad of well-meaning aid workers who have come before? Are the ‘Bright Young Things’ of EA really so much cleverer, or better-intentioned, or knowledgeable than, say, the Scandinavian or Canadian international development establishments?
Yep, at least somewhat more. (It doesn’t necessarily take a large gap. See Inadequate Equilibria for the full argument.)
I think EAs are pretty often tempted to think “no way could we have arrived at any truths that weren’t already widely considered by experts, and there’s no way that the world’s expert community could have failed to arrive at the truth if they did consider it”.
But a large part of the answer to this puzzle is, in fact, the mistaken “roles” model of morality. (Which is one part of why it would be a serious mistake for EA to center its morality on this model.)
- There’s no one whose job it is to think about how to make civilization go well over the next thousand years.
- There’s no one whose job it is to think about priority-setting for humanity at the highest level.
- Or, of the people whose job is nominally to think about things at that high a level of abstraction, there aren’t enough people of enough skill putting enough effort into figuring out the answer. The relevant networks of thinkers are often small, and the people in those networks are often working under a variety of political constraints that force them to heavily compromise on their intellectual inquiry, and compromise a second time when they report the findings from their inquiry.
- There’s no pre-existing field of generalist technological forecasting. At least, not one with amazing bona fides and stunning expertise that EAs should defer to.
- Etc., etc.
People have said stuff about many of the topics EAs focus on, but often it’s just some cute editorializing, not a mighty edifice of massively vetted expert knowledge. The world really hasn’t tried very hard at most of the things EA is trying to focus on these days. (And to the extent it has, those are indeed the areas where EA isn’t trying to reinvent the wheel, as far as I can tell.)
In short: by exercising the old and well-established virtues of prudence, good judgment and statesmanship. I’m afraid that sounds vague and unhelpful, and a far cry from the kind of quantitative, data-driven, rapidly scalable maximising decision-making processes that EA practitioners would like. But it’s true. These virtues are the best tools that humans have yet found for navigating the cluelessness inherent in making big decisions that affect the future.
The dichotomy here between “good judgment” and being “quantitative” doesn’t make sense to me. It’s pretty easy in practice to assign probabilities to different outcomes and reach decisions that are informed by a cost-benefit analysis.
Often this does in fact look like “do the analysis and integrate it into your decision-making process, but then pay more attention to what your non-formalized brain says about what the best thing to do is”, because your brain tends to have a lot more information than you’re able to plug into any spreadsheet. But the act of trying to quantify costs and benefits is in fact a central and major part of this whole process, if you’re doing it right.
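To make this concrete, here’s a minimal sketch of the kind of back-of-the-envelope quantification I have in mind. It’s in Python, and the intervention names, probabilities, and dollar figures are all invented for illustration; none of this is anyone’s real estimate.

```python
# Toy cost-benefit comparison. All names and numbers are made up.
# Each intervention gets a rough probability of success, a rough value
# if it succeeds, and a cost.
interventions = {
    "intervention_a": (0.30, 1_000_000, 50_000),
    "intervention_b": (0.05, 10_000_000, 50_000),
}

for name, (p_success, value_if_success, cost) in interventions.items():
    expected_benefit = p_success * value_if_success
    print(f"{name}: expected benefit ${expected_benefit:,.0f}, "
          f"benefit/cost ratio {expected_benefit / cost:.1f}")
```

The output is an input to judgment, not a replacement for it: when your gut strongly disagrees with the spreadsheet, that’s evidence the model is missing something, and worth investigating rather than ignoring.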
Or let me put it another way. Perhaps, as Toby Ord has suggested, we are walking along the edge of a precipice which, if badly traversed, will lead to disaster for humankind. What kind of approach is the right one to take to carrying out such an endeavour? Surely there is only one answer: a conservative approach. One that prioritises good judgment, caution and prudence; one that values avoiding negative outcomes well above achieving positive ones. Moreover, not only would such an approach be sensible in its own terms, but it would also help EA to acquire the kind of popular support that would help it achieve its outcomes.
Every time you sprinkle in this “moreover, it would help you acquire more popular support!” aside, it reduces my confidence in your argument. :P Making allies matters, but I worry that you aren’t doing sufficiently careful bookkeeping about the pros and cons of interventions for addressing existential risk, and the separate pros and cons of interventions for making people like you. At some point, humanity has to actually try to solve the problem, and not just play-act at an attempt in order to try to gather more political power. Somewhere, someone has to be doing the actual work.
That said, a lot of your point here sounds to me like the old maxipok rule (in x-risk, prioritize maximizing the probability of OK outcomes)? And the parts that aren’t just maxipok don’t seem convincing to me.
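For concreteness, here’s a tiny sketch of how maxipok can come apart from straight expected-value maximization, with invented action names and numbers: maxipok ranks actions purely by the probability of an OK outcome, even when a riskier action has higher expected value.

```python
# Toy contrast between expected-value maximization and the maxipok rule.
# "OK" = humanity avoids existential catastrophe. All numbers invented.
actions = {
    "cautious_plan":  (0.95, 100),  # (P(OK outcome), value given an OK outcome)
    "risky_moonshot": (0.80, 200),
}

best_by_expected_value = max(actions, key=lambda a: actions[a][0] * actions[a][1])
best_by_maxipok = max(actions, key=lambda a: actions[a][0])

print(best_by_expected_value)  # risky_moonshot: 0.80 * 200 = 160 > 0.95 * 100 = 95
print(best_by_maxipok)         # cautious_plan: 0.95 > 0.80
```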
Lots of good points here—thank you.

I’m happy to discuss moral philosophy. (Genuinely—I enjoyed that at undergraduate level and it’s one of the fun aspects of EA.) Indeed, perhaps I’ll put some direct responses to your points into another reply. But what I was trying to get at with my piece was how EA could make some rough and ready, plausibly justifiable, short cuts through some worrying issues that seemed to be capable of paralysing EA decision-making.
I write as a sympathiser with EA—someone who has actually changed his actions based on the points made by EA. What I’m trying to do is show the world of EA—a world which has been made to look foolish by the collapse of SBF—some ways to shortcut abstruse arguments that look like navel-gazing, avoid openly endorsing ‘crazy train’ ideas, resolve cluelessness in the face of difficult utilitarian calculations and generally do much more good in the world. Your comment “Somewhere, someone has to be doing the actual work” is precisely my point: the actual work is not worrying about mental bookkeeping or thinking about Nazis—the actual work is persuading large numbers of people and achieving real things in the real world, and I’m trying to help with that work.
As I said above, I don’t claim that any of my points above are knock-down arguments for why these are the ultimately right answers. Instead I’m trying to do something different. It seems to me that EA is (or at least should be) in the business of gaining converts and doing practical good in the world. I’m trying to describe a way forward for doing that, based on the world as it actually is. The bits where I say ‘that’s how to get popular support’ are a feature, not a bug: I’m not trying to persuade you to support EA (you’re already in the club!); I’m trying to give EA some tools to persuade other people, and some ways to avoid looking as if EA largely consists of oddballs.
Let me put it this way. I could have added: “and put on a suit and tie when you go to important meetings”. That’s the kind of advice I’m trying to give.