Adding onto this, it’s also generally accepted that you should only do serious translation work into a language that you speak natively. For instance, an English-German bilingual with German as their native language should not translate German content into English, only English content into German. So what you need are not just people who are fluent in English and some other language, but people who have some other language as their native language.
So I am a philosophy grad student with a shallow familiarity with this literature. As I understand them, the people who object to the evo-debunking argue that the evolution stuff is a red herring: basically any causal story about the origins of our moral intuitions would do the same work in the argument, so the empirical details don’t matter. The real work is going on in the philosophical side of the argument, and that, they think, doesn’t hold up. Might post again later with some paper recs.
Let me say this: I am extremely confused, either about what your goals are with this post, or about how you think your chosen strategy for communication is likely to achieve those goals.
So I am a philosopher and thus fundamentally unqualified to answer this question; take these thoughts with a grain of salt. However:
From my outsider’s perspective, it seems as though AI safety uses a lot of concepts from economics (especially expected utility theory). And if you’re at the grad level in economics, then you probably have a decent math background. So at least many of your skills seem like they would transfer over.
I don’t know how much impact you can expect to have as an AI researcher compared to an economist. But that seems like the kind of question an economist would be well-equipped to work on answering! If you happen to not already be familiar with cause prioritization research, you might consider staying in economics and focusing on it, rather than switching to AI, as cause prioritization is pretty important in its own right.
Similarly, you might focus on global priorities research: https://forum.effectivealtruism.org/posts/dia3NcGCqLXhWmsaX/an-introduction-to-global-priorities-research-for-economists. Last I knew the Global Priorities Institute was looking to hire more economists; don’t know if that will still be true when you finish your grad program, but at the very least I expect they’ll still be looking to collaborate with economists at that time.
In other words, it seems like you might have a shot at transitioning (though I am very, very unqualified to assess this), but also there seem to be good, longtermist-relevant research opportunities even within economics proper.
I agree it may be difficult for a utilitarian to fully deceive themselves into giving up their utilitarianism. But here’s an option that might be more feasible: be uncertain about your utilitarianism (you probably already are, and if you aren’t you should be), and act according to a theory that (1) utilitarianism recommends you act according to, and (2) you find independently at least somewhat plausible. This could be a traditional moral theory, or it might even be the result of the moral uncertainty calculation itself.
I’m entering philosophy grad school now, but in a few years I’m going to have to start thinking about designing courses, and I’m thinking of designing an intro course around this paper. Would it be alright if I used your summary as course material?
Hm. Do you think it would be useful for me to write a short summary of the arguments against taking normative uncertainty into account and post it to the EA forum? (Wrote a term paper last semester arguing against Weatherson, which of course involved reading a chunk of that literature.)
Scott Alexander has actually gotten academic citations, e.g. in Paul Bloom’s book Against Empathy (sadly I don’t remember which article of his Bloom cites), and I get the impression a fair few academics read him.
I loved this! A nitpick and a question. First, “pray” and “pray tell” sound a bit out of place to me; they sound more like Shakespeare than Plato.
Second:
Socrates: Huh, I had not previously considered women and slaves...
This isn’t true, at least of the version of Socrates depicted by Plato. Are you trying to imply that this conversation is the origin of Socrates’ egalitarian views on women and slaves in the Republic and the Meno? There are a couple of places where I thought you were implying that the dinner conversation Caplan attends is the conversation depicted in the Republic. Is it just meant to be another similar conversation, with earlier incarnations of Socrates’ views on the tripartite soul and Thrasymachus’ views on justice?
I know this doesn’t solve the actual problem you’re getting at, but here’s a translation of that sentence from philosophese to English. “Pro tanto” essentially means “all else equal”: a “pro tanto” consideration is a consideration, but not necessarily an overriding one. “Public justification” just means justifying policy choices with reasons that would/could be persuasive to the public/to the people they will affect. So the sentence as a whole means something like “While moral uncertainty doesn’t mean that governments (and other institutions) should always justify their decisions to the people, it does mean they should do so when they can.”
This is something I’m dealing with right now, so reading this was helpful. Thanks!
David Moss mentioned a “long tradition of viewing ethical theorising (and in particular attempts to reason about morality) sceptically.” Aside from Nietzsche, another very well-known proponent of this tradition is Bernard Williams. Take a look at his page in the Stanford Encyclopedia of Philosophy, and if it looks promising check out his book Ethics and the Limits of Philosophy. You might also check out his essays “Ethical Consistency” (which I haven’t read; in his essay collection Problems of the Self) and “Conflicts of Values” (in Moral Luck). There are probably lots of other essays of his that are relevant that I just don’t know about. Another essay you might read is Steven Lukes’ “Making Sense of Moral Conflict” in his book Moral Conflict and Politics. On the question of whether there can ever be impossible moral demands (that is, situations where all of the available options are morally wrong, potentially because of conflicting moral requirements), one recent book (which I haven’t read, but sounds good) is Lisa Tessman’s Moral Failure: On the Impossible Demands of Morality (see also the SEP article here). Don Loeb has an essay called “Moral Incoherentism,” which despite its title seems to deal with something slightly different than what you’re talking about, but might still be of interest.
The piece that comes the closest to speaking directly to what you’re talking about here, that I know of, is Richard Ngo’s blog post “Arguments for Moral Indefinability”. He also has a post on “realism about rationality” which is probably also related.
On “consistency with our intuitions,” a book to check out might be Michael Huemer’s Ethical Intuitionism. And of course the SEP article on ethical intuitionism. Though of course intuitionism isn’t the only metaethical theory that takes consistency with our intuitions as a criterion; David Moss mentioned reflective equilibrium—and I definitely second his recommendation to look into this further—and Constructivism also has some of this flavor, for instance. Also check out this paper on Moorean arguments in ethics (“Moorean arguments” in reference to G.E. Moore’s famous “here is one hand” argument).
David Moss also mentioned “hyper-methodism and hyper-particularism.” Another paper that touches on that distinction, and on Moorean arguments (though not specifically in ethics) is Thomas Kelly’s “Moorean Facts and Belief Revision.”
I don’t think only PhD students can apply. On the website it says either philosophy PhD students, or graduates of a philosophy program, can apply. So I assume e.g. early-career professors would also be welcome to apply.
So the terminology here gets used differently by different people, but the view that moral statements can be true or false is usually called “cognitivism”, not “realism” (though there definitely are people who use “realism” for that view). My own personal preference is to define realism as cognitivism plus the metaphysical claim that moral properties are mind-independent (i.e. not grounded in facts about anyone’s moral beliefs or attitudes).
I’d be interested in this. Even though the “generalist researcher” role is well-known, I think it’s easy from the outside to get a distorted picture of the actual content of the job. Aside from this recent post, I don’t know of any write-ups about it off the top of my head (though there could be ones I’m not aware of), and of course multiple write-ups are useful since different people’s situations and experiences will differ.
I had this reaction as well. I can’t speak for OP, but one issue with this is that audio is harder to look back at than writing: it’s harder to skim when you’re just looking for that one thing you think was said but want to be sure of. One solution would be transcription, which could probably be automated, since it wouldn’t have to be perfect, just good enough to let you skim to the part of the audio you’re looking for.
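For what it’s worth, here is a minimal sketch of what that automation could look like, assuming the openai-whisper Python package (`pip install openai-whisper`); the filename is hypothetical, and the point is just that timestamped segments make it easy to skim the text and jump back to the right spot in the audio:

```python
# Rough sketch: transcribe an audio file and print timestamped segments,
# so you can skim the transcript and jump to the matching spot in the audio.
# Assumes the openai-whisper package; "discussion.mp3" is a placeholder filename.
import whisper

model = whisper.load_model("base")            # small model; only needs to be "good enough to skim"
result = model.transcribe("discussion.mp3")

for seg in result["segments"]:
    # Each segment has start/end times (in seconds) and the transcribed text
    print(f"[{seg['start']:7.1f}s–{seg['end']:7.1f}s] {seg['text'].strip()}")
```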
I’d be very interested in hearing more about the views you list under the “more philosophical end” (esp. moral uncertainty) -- either here or on the 80k podcast.
Definitely, I’ll send it along when I design it. Since intro ethics at my institution is usually taught as applied ethics, the basic concept would be to start by introducing the students to the moral catastrophes paper/concept, then go through at least some of the moral issues Williams brings up in the disjunctive portion of the argument to examine how likely they are to be moral catastrophes. I haven’t picked particular readings yet though as I don’t know the literatures yet. Other possible topics: a unit on historical moral catastrophes (e.g. slavery in the South, the Holocaust); a unit on biases related to moral catastrophes; a unit on the psychology of evil (e.g. Baumeister’s work on the subject, which I haven’t read yet); a unit on moral uncertainty; a unit on whether antirealism can escape or accommodate the possibility of moral catastrophes.
Assignment ideas:
Pick one of the potential moral catastrophes Williams mentions that you think is least likely to actually be a moral catastrophe. Now imagine that you are yourself five years from now, and you’ve been completely convinced that it is in fact a moral catastrophe. What convinced you? Write a paper trying to convince your current self that it is a moral catastrophe after all.
Come up with a potential moral catastrophe that Williams didn’t mention, and write a brief (maybe 1-2 pages?) argument for why it is or isn’t one (whatever you actually believe). Further possibility: Once these are collected, I observe how many people argued that the one they picked was not a moral catastrophe, and if it’s far over 50%, discuss with the class where that bias might come from (e.g. status quo bias, etc.).
This is all still in the brainstorming stage at the moment, but feel free to use any of this if you’re ever designing a course/discussion group for this paper.
Pretty sure I would also benefit from reading the appendix.
Not Jeff, but I agree with what he said, and here are my reasons:
1. The feedback Jpmos is giving you is time-sensitive (“Since it is still relatively early in the life of this post...”).
2. The feedback Jpmos is giving you is not actually about what you said; rather, it’s about the way you’re communicating it, letting you know that, at least in Jpmos’s case, your chosen method of communication came close to not being effective. (This assumes your goals in writing the post are the usual goals of someone writing a post, i.e. to communicate and defend a claim. Admittedly, you say that “I want you to think carefully and spaciously for yourself about what is best, and then do the things that seem best as they come to you from that spacious place,” and maybe that is a significantly different goal from the usual one. But even so, readers are more likely to do that if they get the claim, or at least the topic, of the post up front.)