Former username Ikaxas
Vaughn Papenhausen
Let me say this: I am extremely confused, either about what your goals are with this post, or about how you think your chosen strategy for communication is likely to achieve those goals.
Not Jeff, but I agree with what he said, and here are my reasons:
The feedback Jpmos is giving you is time-sensitive (“Since it is still relatively early in the life of this post...”)
The feedback Jpmos is giving you is not actually about what you said. Rather, it’s about the way you’re communicating it: it lets you know that, at least in Jpmos’s case, your chosen method of communication came close to not being effective (unless your goals in writing the post are significantly different from the usual goal of communicating and defending a claim. Admittedly, you say that “I want you to think carefully and spaciously for yourself about what is best, and then do the things that seem best as they come to you from that spacious place”, and maybe that is a significantly different goal from the usual one, but even so, readers are more likely to do that if they get the claim, or at least the topic, of the post up front).
Ooh, I would also very much like to see this post.
Hm. Do you think it would be useful for me to write a short summary of the arguments against taking normative uncertainty into account and post it to the EA forum? (I wrote a term paper last semester arguing against Weatherson, which of course involved reading a chunk of that literature.)
I’d be very interested in hearing more about the views you list under the “more philosophical end” (esp. moral uncertainty) -- either here or on the 80k podcast.
Definitely, I’ll send it along when I design it. Since intro ethics at my institution is usually taught as applied ethics, the basic concept would be to start by introducing the students to the moral catastrophes paper/concept, then go through at least some of the moral issues Williams brings up in the disjunctive portion of the argument to examine how likely they are to be moral catastrophes. I haven’t picked particular readings, though, as I don’t yet know the literatures. Other possible topics: a unit on historical moral catastrophes (e.g. slavery in the South, the Holocaust); a unit on biases related to moral catastrophes; a unit on the psychology of evil (e.g. Baumeister’s work on the subject, which I haven’t read yet); a unit on moral uncertainty; a unit on whether antirealism can escape or accommodate the possibility of moral catastrophes.
Assignment ideas:
Pick the potential moral catastrophe Williams mentions that you think is least likely to actually be one. Now imagine that you are yourself five years from now, and you’ve been completely convinced that it is in fact a moral catastrophe. What convinced you? Write a paper trying to convince your current self that it is a moral catastrophe after all.
Come up with a potential moral catastrophe that Williams didn’t mention, and write a brief (maybe 1-2 pages?) argument for why it is or isn’t one (whatever you actually believe). Further possibility: once these are collected, I observe how many people argued that the one they picked was not a moral catastrophe, and if it’s far over 50%, discuss with the class where that bias might come from (e.g. status quo bias).
This is all still in the brainstorming stage at the moment, but feel free to use any of this if you’re ever designing a course/discussion group for this paper.
Thanks!
I’m entering philosophy grad school now, but in a few years I’m going to have to start thinking about designing courses, and I’m thinking of designing an intro course around this paper. Would it be alright if I used your summary as course material?
David Moss mentioned a “long tradition of viewing ethical theorising (and in particular attempts to reason about morality) sceptically.” Aside from Nietzsche, another very well-known proponent of this tradition is Bernard Williams. Take a look at his page in the Stanford Encyclopedia of Philosophy, and if it looks promising check out his book Ethics and the Limits of Philosophy. You might also check out his essays “Ethical Consistency” (which I haven’t read; in his essay collection Problems of the Self) and “Conflicts of Values” (in Moral Luck). There are probably lots of other essays of his that are relevant that I just don’t know about. Another essay you might read is Steven Lukes’ “Making Sense of Moral Conflict” in his book Moral Conflict and Politics. On the question of whether there can ever be impossible moral demands (that is, situations where all of the available options are morally wrong, potentially because of conflicting moral requirements), one recent book (which I haven’t read, but sounds good) is Lisa Tessman’s Moral Failure: On the Impossible Demands of Morality (see also the SEP article here). Don Loeb has an essay called “Moral Incoherentism,” which despite its title seems to deal with something slightly different from what you’re talking about, but might still be of interest.
The piece that comes the closest to speaking directly to what you’re talking about here, that I know of, is Richard Ngo’s blog post “Arguments for Moral Indefinability”. He also has a post on “realism about rationality” which is probably also related.
On “consistency with our intuitions,” a book to check out might be Michael Huemer’s Ethical Intuitionism, along with the SEP article on ethical intuitionism. Intuitionism isn’t the only metaethical theory that takes consistency with our intuitions as a criterion, of course; David Moss mentioned reflective equilibrium (and I definitely second his recommendation to look into this further), and constructivism also has some of this flavor, for instance. Also check out this paper on Moorean arguments in ethics (“Moorean arguments” in reference to G.E. Moore’s famous “here is one hand” argument).
David Moss also mentioned “hyper-methodism and hyper-particularism.” Another paper that touches on that distinction, and on Moorean arguments (though not specifically in ethics) is Thomas Kelly’s “Moorean Facts and Belief Revision.”
Counterpoint (for purposes of getting it into the discussion; I’m undecided about antinatalism myself): that argument only applies to people who are already alive, and thus not to most of the people who would be affected by the decision whether or not to extend the human species (i.e. those who don’t yet exist). David Benatar argues (podcast, book) that while, as you point out, many human lives may well be worth continuing, those very same lives (he thinks all lives, but that’s more than I need to make this argument) may nevertheless not have been worth starting. If this is the case, then some or all of the lives that would come into existence by preventing extinction may also not be worth starting.
What I was describing wasn’t exactly Pascal’s mugging. Pascal’s mugging is an attempted argument *against* this sort of reasoning; it’s meant to show that such reasoning leads to pathological conclusions (like that you ought to pay the mugger, when all he’s told you is some ridiculous story about how, if you don’t, there’s a tiny chance that something catastrophic will happen). Of course, some people bite the bullet and say that you should just pay the mugger, others claim that this sort of uncertainty reasoning doesn’t actually lead you to pay the mugger, and so on. I don’t really have a thought-out view on Pascal’s mugging myself. The reason what I’m describing is different is that [this sort of reasoning leading you to *not* kill someone] wouldn’t be considered a pathological conclusion by most people (same with buying flood insurance).
Here are two other considerations that haven’t yet been mentioned:
1. EA is supposed to be largely neutral between ethical theories. In practice, most EAs tend to be consequentialists, specifically utilitarians, and a utilitarian might plausibly think that killing one to save ten was the right thing to do (though others in this thread have given reasons why that might not be the case even under utilitarianism), but in theory one could unite EA principles with most ethical systems. So if the ethical system you think is most likely to be correct includes side constraints/deontological rules against killing, then EA doesn’t require you to violate those side constraints in the service of doing good; one can simply do the most good one can do within those side constraints.
2. Many EAs are interested in taking into account moral uncertainty, i.e. uncertainty about which moral system is correct. Even if you think the most likely theory is consequentialism, it can be rational to act as if there is a side constraint against killing if you place some amount of credence in a theory (e.g. a deontological theory) on which killing is always quite seriously wrong. The thought is this: if there’s some chance that your house will be damaged by a flood, it can be worth it to buy flood insurance, even if that chance is quite small, since the damage if it does happen will be very great. By the same token, even if the theory you think is most probable recommends killing in a particular case, it can still be worth it to refrain, if you also place some small credence in another theory on which killing is always seriously wrong (the toy calculation below makes this concrete). Will MacAskill discusses this in his podcast with Rob Wiblin.
Tl;dr: you might think killing one to save ten is wrong because you’re a nonconsequentialist, and this is perfectly compatible with EA. Or, even if you are a consequentialist, and even if you think consequentialism sometimes recommends killing one to save ten, it might still be rational not to kill in those cases, if you place even a small credence in some other theory on which this would be seriously wrong.
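To make the flood-insurance analogy in point 2 concrete, here is a toy expected-value calculation; the numbers are invented purely for illustration. Suppose you have 0.9 credence in a consequentialist theory on which killing the one to save ten is somewhat good (value it at +10), and 0.1 credence in a deontological theory on which killing is gravely wrong (value it at -1,000). Then the expected value of killing is 0.9 × 10 + 0.1 × (-1,000) = -91, while the expected value of refraining is at worst 0.9 × (-10) + 0.1 × 0 = -9. Refraining comes out ahead even though the theory that condemns killing is the one you give much less credence to. (This is just one simple way of aggregating across theories; MacAskill and others discuss the complications.)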
Scott Alexander has actually gotten academic citations, e.g. in Paul Bloom’s book Against Empathy (sadly I don’t remember which article of his Bloom cites), and I get the impression a fair few academics read him.
Hi,
What would be the attitude towards someone who wanted to work with you after undergrad for a year or two, but then go on to graduate school (likely for philosophy in my case), with an eye towards then continuing to work with you or other EA orgs after grad school?
I am a philosopher and thus fundamentally unqualified to answer this question, so take these thoughts with a grain of salt. However:
From my outsider’s perspective, it seems as though AI safety uses a lot of concepts from economics (especially expected utility theory). And if you’re at the grad level in economics, then you probably have a decent math background. So at least many of your skills seem like they would transfer over.
I don’t know how much impact you can expect to have as an AI researcher compared to an economist. But that seems like the kind of question an economist would be well-equipped to work on answering! If you happen to not already be familiar with cause prioritization research, you might consider staying in economics and focusing on it, rather than switching to AI, as cause prioritization is pretty important in its own right.
Similarly, you might focus on global priorities research: https://forum.effectivealtruism.org/posts/dia3NcGCqLXhWmsaX/an-introduction-to-global-priorities-research-for-economists. Last I knew, the Global Priorities Institute was looking to hire more economists; I don’t know if that will still be true when you finish your grad program, but at the very least I expect they’ll still be looking to collaborate with economists at that time.
In other words, it seems like you might have a shot at transitioning (though I am very, very unqualified to assess this), but also there seem to be good, longtermist-relevant research opportunities even within economics proper.