I’m a PhD candidate in philosophy at Princeton University. In summer 2023, I was a Global Priorities Fellow at GPI. I work in normative ethics and Buddhist philosophy, with an eye towards global priorities research. My published work is listed here and my academic site is here.
Wow, this is good—go Claude 3!
Buddhism and pessimism
Hi Andreas! I’m worried that the maximality rule will overgeneralize, implying that little is rationally required of us. Consider the decision whether to have children. There are obvious arguments both for and against from a self-interested point of view, and it isn’t clear exactly how to weigh them against each other. So, plausibly, having children will maximize EU according to at least one probability function in our representor, whereas not having children will maximize EU according to at least one other probability function in our representor. Result via the maximality rule: either choice is rationally permissible. Or consider some interesting public policy problem from the perspective of a benevolent social planner. Given the murkiness of social science research, it seems that, if we’ve gone in for the imprecise credence picture, no one policy will maximize EU relative to every credence function in the representor, in which case many policy choices will be rationally permissible. I wonder if you have thoughts on this?
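For concreteness, here’s the maximality rule as I understand it (my notation, not a quotation from your post): where \(R\) is the agent’s representor (the set of probability functions compatible with her evidence) and \(\mathrm{EU}_P(a)\) is the expected utility of act \(a\) calculated with \(P\),

\[
a \text{ is rationally permissible} \iff \neg\,\exists a' \;\forall P \in R : \mathrm{EU}_P(a') > \mathrm{EU}_P(a).
\]

Since each of having children and not having children maximizes EU relative to some \(P \in R\), neither is strictly outperformed under every \(P\), so the rule permits both.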
[Question] Line-caught carnivorous fish
Hi Michael, thanks for the post! I was really happy to see something like this on the EA Forum. In my view, EAs* significantly overestimate the plausibility of total welfarist consequentialism**, in part due to a lack of familiarity with the recent literature in moral philosophy. So I think posts like this are important and helpful.
* I mean this as a generic term (natural language plurals (usually) aren’t universally quantified).
** This isn’t to suggest that I think there’s some other moral theory that is very plausible. They’re all implausible, as far as I can tell, which is partly why I lean towards anti-realism in meta-ethics.
Thanks for the recs! What’s the LeCun you mention?
I’d love to see Johann Frick (Philosophy, UC Berkeley) on the podcast. Johann is a nonconsequentialist who defends the Procreation Asymmetry and thinks longtermism is deeply misguided. IMO, his recent paper on the Asymmetry is one of the best; he’ll be able to steel-person many philosophical views that challenge common EA commitments; and he’s an engaging speaker.
Thanks for catching this, Bella! I’ve updated the link here and on our syllabus.
Hi Saul, since this is a discussion-based seminar rather than a lecture course, we won’t be recording. However, I plan to teach this course again in the future and may change the format—so future iterations may be recorded.
New Princeton course on longtermism
Hi Joe, thanks for sharing this. I enjoyed it—as I have enjoyed and learned from many of your philosophy posts recently!
A couple things:
1) I’m curious about your thoughts on the role of knowledge in epistemology and decision theory. You write, e.g., ‘Consider the divine commands of the especially-big-deal-meta-ethics spaghetti monster...’. On pain of general skepticism, don’t we get to know that a spaghetti monster is not ‘the foundation of all being’? (I don’t have a strong commitment here, but after talking with a colleague who works in epistemology + decision theory and studied under Williamson, I think this sort of knowledge-first approach is at least worth a serious look.)
2) At risk of being the table-thumping realist, I wanted to press on the nihilist’s response. You write that the nihilist has ‘other deliberative currency available – “wants,” “cares,” “prefers,” “would want,” “would care,” “would prefer,” and so on.’ We then get an example of this style of practical reasoning: ‘“If I untangle the deer from the barbed wire, then it can go free; I want this deer to be able to go free; OK, I will untangle the deer from the barbed wire”.’
The first two sentences don’t in any way support the third (since ‘supports’ is a normative relation, and we’re in nihilism world). The agent could just as well have thought to herself, ‘If I untangle the deer from the barbed wire, then it can go free; I want this deer to be able to go free; OK, I will now read Hamlet.’ There’s nothing worse about this internal dialogue and sequence of action (assuming the agent does then read Hamlet) because, again, nothing is worse than anything else in nihilism world.
You ask, ‘Who set up this court? We would presumably object if the court only accepted shoulds that were made out of e.g. divine commands, or non-natural frosting. So why not accept the currency of every representative?’ I think the realist will want to say: ‘the principled distinction is that in the other worlds there is some sort of normativity, whereas in nihilism world there isn’t. That’s why nihilism doesn’t get a seat at the table.’
As far as I can tell (not being a specialist in metaethics), the best the nihilist can hope for is the “Humean” solution, namely that our natural dispositions will (usually) suffice to get us back in the saddle and keep on with the project of living and pursuing things of “value.” (“...fortunately it happens, that since reason is incapable of dispelling these clouds, nature herself suffices to that purpose, and cures me of this philosophical melancholy and delirium, either by relaxing this bent of mind, or by some avocation, and lively impression of my senses, which obliterate all these chimeras. I dine, I play a game of backgammon, I converse, and am merry with my friends; and when after three or four hours’ amusement, I would return to these speculations, they appear so cold, and strained, and ridiculous, that I cannot find in my heart to enter into them any farther.
“Here then I find myself absolutely and necessarily determined to live, and talk, and act like other people in the common affairs of life” (Treatise 1.4.7.8-10).) But this does nothing to address the question of whether we have reason to do any of those things. It’s just a descriptive forecast about what we will in fact do.
Neff’s book has been huge for my mental health. However, sometimes I find myself applying the self-compassion framework in a way that’s too formulaic, making it feel like a chore. (E.g., ‘Step 1: ask what my best friend would say to me right now. Step 2: remind myself that I’m not the only one experiencing/struggling with [whatever]. Step 3: pause to let myself feel what I’m feeling.’) I’d be interested if she has any tips for making it feel more warm/spontaneous/etc. and less rote.
Thanks, Oliver! And am I reading the website correctly that the fellowship is full-time, such that participants won’t be able to devote any time to their current research agendas (aside from weekends/evenings etc.)?
Will this program recur, or is this a one-off opportunity? (I’m quite interested, but unfortunately unsure whether I can take seven months off my PhD during this particular academic year.)
Really interesting! Do you have anything in mind for goods identified by competing ethical theories that you think would compete with, e.g., the beatific vision for the Christian or nirvana for the Buddhist? (A clear example here would be a valuable update for me.)
+1 on your comment that ‘Giving the right answers for the wrong reasons is still deeply unsatisfying.’ I think this is an underappreciated part of ethical theorizing, and I would even take a stronger methodological stance: getting the right explanatory answers (why we ought to do what we ought to) is just as important as getting the right extensional answers (what we ought to do). If an ethical theory gives you the wrong explanation, it’s not the right ethical theory!
Hi Michael, thanks for your comments! A few replies:
Re: amplification, I’m not sure about this proposal (I’m familiar with that section of the book). From the perspective of a supreme soteriology (e.g. (certain conceptions of) Christianity), attaining salvation is the best possible outcome, full stop. It is, to use MacAskill, Bykvist, and Ord’s terminology, maximally choiceworthy. It therefore seems to me wrong that ‘those other views could be further amplified lexically, too, all ad infinitum.’ To insist that we could lexically amplify a supreme soteriology would be to fail to take it seriously from its own internal perspective. But that is precisely what MacAskill, Bykvist, and Ord’s universal scale account requires us to do.
Of course, I agree that we can amplify other ethical theories that do not, in their standard forms, represent options or outcomes as maximally choiceworthy, such that the amplified theories do represent certain options/outcomes as maximally choiceworthy. But this is rather ad hoc.
Re: the ‘limited applicability’ suggestion, this strikes me as prima facie implausible on abductive grounds (principally parsimony and, to a lesser extent, elegance).
Re: the point that ‘there are other possible infinities that could dominate’: I’m not sure how the term ‘dominate’ is being used here. It’s not the case that other ethical theories which assign infinite choiceworthiness to certain options dominate supreme soteriologies in the game-theoretic usage of ‘dominate’ (on which option A dominates option B iff the outcome associated with A is at least as good as the corresponding outcome associated with B in every state of nature and strictly better in at least one). But if the point is rather simply that MEC does not require all agents—regardless of their credence distribution over descriptive and ethical hypotheses—to become religionists, I agree. To take a simplistic but illustrative example, MEC will tell an agent who has credence = 1 that doing whatever they feel like will generate an infinite quantity of the summum bonum to go ahead and do whatever they feel like.

My thought is just that MEC will deliver sufficiently implausible verdicts to sufficiently many agents to cast serious doubt on its truth qua theory of what we ought to do in response to ethical uncertainty. This is particularly pressing in the context of prudential choice, due to the three factors highlighted in subsection 3.5 above. The points you make in the linked response to the question ‘why not accept Pascal’s Wager?’ are solid, and lead me to think that the extension of my argument from prudence to morality might not be quite as quick as I suggest at the end of the post. But if we can show that MEC is in big trouble in the domain of prudence, that seems to me like evidence against its candidacy in the domain of morality. (I don’t agree with MacAskill, Bykvist, and Ord’s suggestion that, on priors, we should expect the correct way to handle descriptive uncertainty to be more-or-less the correct way to handle ethical uncertainty. The descriptive and the ethical are quite different! But it would be relatively more surprising to me if the correct way to handle prudential uncertainty were wildly different from the correct way to handle moral uncertainty.)
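To pin down the dominance notion invoked above (this is just the standard statewise definition, stated to avoid ambiguity): where \(S\) is the set of states of nature and \(u(X, s)\) is the value of the outcome option \(X\) yields in state \(s\),

\[
A \text{ dominates } B \iff \forall s \in S : u(A, s) \ge u(B, s) \;\text{ and }\; \exists s \in S : u(A, s) > u(B, s).
\]

On this definition, a rival theory’s assigning infinite choiceworthiness to some option doesn’t by itself make that option dominate the one recommended by a supreme soteriology.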
Hi Cam, I’m glad you found the notes useful! Most of these (with The Precipice being an exception) were notes taken from audiobooks. As I was listening, I’d write down brief notes (sometimes as short as a key word or phrase) in the Notes app on my iPhone. Then, once a day or once every couple of days, I’d reference the Notes app to jog my memory and write down the longer item of information in a Gdoc. Then, when I’d finished the book, I’d organize/synthesize the Gdoc into a coherent set of notes with sections etc.
These days I follow a similar system, but use Roam instead of Gdocs. Contrary to what some report, I don’t find that Roam has significantly improved anything for me, though I do like the ability to easily link among documents. As a philosopher, I don’t find this super useful; I think if I were e.g. a historian I would find it a lot more useful.