Hi Toby, thanks so much for posting a transcript of this talk, and for giving the talk in the first place. It has been one of my key desires in community building to see more leaders engaging with the EA community post-FTX. This talk strikes a really good tone, and, I dunno, impact measures are hard, but I suspect it was pretty important for how good the vibes at EAG Bay Area felt overall, and I'm excited for the Forum community to read it.
I overall like the content of the talk, but will be expressing some disagreements. First, though, things I liked about the content: I conceptually love the "strive for excellence" frame, and the way you made "maximization is perilous" much more punchy for me than Holden's post did. I see the core of your talk as being about your dissertation. I overall think it was a valuable contribution to my models.
The core of my disagreement is about how we should think about naive utilitarianism/consequentialism. I'd like to introduce some new terms, which I find helpful when talking to people about this subject:
Myopic consequentialism: e.g. shoplifting to save money. Everyone can pretty easily see how this is going to go wrong pretty fast. It may be slightly nontrivial to work out how ("Nobody can see me! I'm really confident!"), but basically everyone agrees this is the bad kind of utilitarianism.
Base-level consequentialism: My read is that you, and most people who ground their morality in a fundamentally consequentialist way, start from here. It's the consequentialism you see in intro philosophy definitions. Good consequences are good.
Multi-layered consequentialism: We can start adding things onto our base-level consequentialism, to rescue it from the dangers of myopia that come with using unrefined base-level consequentialism as a decision criterion.
I know many people who are roughly consequentialist. Everyone must necessarily use some amount of heuristics to guide decisions, instead of trying to calculate out the consequences of each breath they take. But in my experience, people vary substantially in how much they reify their heuristics into the realm of morality vs. treating them simply as useful ways to help make decisions.
For example, many people have a pretty bright line around lying. Some might elevate that line into a moral dictum. Others might hold it as a "really solid heuristic", maybe even one that they commit to never violating. But lying is a pretty easy case. Underneath it is a dispositional question: "how much of my decision-making is done by thinking directly about consequences?"
My best guess is that Toby is too quick to create ironclad rules for himself, or pays too much attention to how a given decision fits in with too many virtues. But I am really uncertain whether I actually disagree. The disagreement I have is more about "how can we think about and talk about adding layers to our consequentialism", and, crucially, how much respect we can have for someone who is closer to the base level.
I'm not suggesting that we be tolerant of people who lie or cheat or have very bad effects on important virtues in our community. In fact, I want to say that the people in my life who are clearest with themselves and with me about which layers on top of their consequentialism they don't hold sacred are generally very good models of EA virtues, and very easy to coordinate with, because they can take the other values more seriously.