I also don’t see any evidence for the claim of EA philosophers having “eroded the boundary between this kind of philosophizing and real-world decision-making”.
Have you visited the 80,000 Hours website recently?
I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we’re attempting this, but we must recognise that this is an extraordinarily risky business. Even the wisest humans are unqualified for this role. Many of our attempts are 51:49 bets at best—sometimes worth trying, rarely without grave downside risk, never without an accompanying imperative to listen carefully for feedback from the world. And yes—diverse, hedged experiments in overconfidence also make sense. And no, SBF was not hedged anything like enough to take his 51:49 bets—to the point of blameworthy, perhaps criminal negligence.
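(A minimal sketch of the hedging intuition, not drawn from anything in this exchange: repeatedly staking everything on 51:49 bets leads to near-certain ruin, while staking only a small fraction each round typically does not. The 2% stake, the even-money payoff, and the 1,000-round horizon below are illustrative assumptions.)

```python
# Illustrative only: compares two staking policies on repeated even-money bets
# that win with probability 0.51. All parameters are assumptions for the sketch,
# not figures from the discussion above.
import random
import statistics

def median_final_bankroll(stake_fraction, rounds=1000, trials=2000, p_win=0.51, seed=0):
    """Median bankroll (starting at 1.0) after repeatedly staking a fixed fraction."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(rounds):
            stake = bankroll * stake_fraction
            bankroll += stake if rng.random() < p_win else -stake
            if bankroll <= 0.0:  # ruined: nothing left to stake
                break
        finals.append(bankroll)
    return statistics.median(finals)

if __name__ == "__main__":
    # Staking everything each round: a single loss wipes you out, so ruin is near-certain.
    print("all-in each round:", median_final_bankroll(1.0))
    # Staking ~2% (roughly the Kelly fraction for an even-money 51:49 bet):
    # the median outcome survives and grows modestly.
    print("2% of bankroll:   ", median_final_bankroll(0.02))
```

The point is not the particular numbers but that unhedged repetition of slightly-positive bets converts a small edge into near-certain ruin, whereas hedged stakes preserve the ability to keep listening for feedback from the world.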
A notable exception to the “we’re mostly clueless” situation is: catastrophes are bad. This view passes the “common sense” test, and the “nearly all the reasonable takes on moral philosophy” test too (negative utilitarianism is the notable exception). But our global resource allocation mechanisms are not taking “catastrophes are bad” seriously enough. So, EA—along with other groups and individuals—has a role to play in pushing sensible measures to reduce catastrophic risks up the agenda (as well as the sensible disaster mitigation prep).
(Derek Parfit’s “extinction is much worse than 99.9% wipeout” claim is far more questionable—I put some of my chips on this, but not the majority.)
As you suggest, the transform function from “abstract philosophical idea” to “what do” is complicated and messy, and involves a lot of deference to existing norms and customs. Sadly, I think that many people with a “physics and philosophy” sensibility underrate just how complicated and messy the transform function really has to be. So they sometimes make bad decisions on principle instead of good decisions grounded in messy common sense.
I’m glad you shared the J.S. Mill quote.
…the beliefs which have thus come down are the rules of morality for the multitude, and for the philosopher until he has succeeded in finding better
EAs should not be encouraged to grant themselves practical exception from “the rules of morality for the multitude” if they think of themselves as philosophers. Genius, wise philosophers are extremely rare (cold take: Parfit wasn’t one of them).
To be clear: I am strongly in favour of attempts to act on important insights from philosophy. I just think that this is hard to do well. One reason is that there is a notable minority of “physics and philosophy” folks who should not be made kings, because their “need for systematisation” is so dominant as to be a disastrous impediment for that role.
In my other comment, I shared links to Karnofsky, Beckstead and Cowen expressing views in the spirit of the above. From memory, Carl Shulman is in a similar place, and so are Alexander Berger and Ajeya Cotra.
My impression is that more than half of the most influential people in effective altruism are roughly where they should be on these topics, but some of the top “influencers”, and many of the “second tier”, are not.
(Views my own. Sword meme credit: the artist currently known as John Stewart Chill.)
Distinguish:
(i) philosophically-informed ethical practice, vs
(ii) “erod[ing] the boundary between [fantastical thought experiments] and real-world decision-making”
I think that (i) is straightforwardly good, central to EA, and a key component of what makes EA distinctively good. You seem to be asserting that (ii) is a common problem within EA, and I’m wondering what the evidence for this is. I don’t see anyone advocating for implementing the repugnant conclusion in real life, for example.
I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we’re attempting this, but we must recognise that this is an extraordinarily risky business.
I think this is conflating distinct ideas. The “risky business” is simply real-world decision-making. There is no sense to the idea that philosophically-informed decision-making is inherently more risky than philosophically ignorant decision-making. [Quite the opposite: it wasn’t until philosophers raised the stakes to salience that x-risk started to be taken even close to sufficiently seriously.]
Philosophers think about tricky edge cases which others tend to ignore, but unless you’ve some evidence that thinking about the edge cases makes us worse at responding to central cases—and again, I’m still waiting for evidence of this—then it seems to me that you’re inventing associations where none exist in reality.
EAs should not be encouraged to grant themselves practical exception from “the rules of morality for the multitude” if they think of themselves as philosophers.
Of course. The end of the Mill quote is just flagging that traditional social norms are not beyond revision. We may have good grounds for critiquing the anti-gay sexual morality of our ancestors, for example, and so reject such outmoded norms (for everyone, not just ourselves) when we have truly “succeeded in finding better”.
there is a notable minority of “physics and philosophy” folks who should not be made kings, because their “need for systematisation” is so dominant as to be a disastrous impediment for that role.
Do you take yourself to be disagreeing with me here? (Me: “People shouldn’t be kings”. You: “systematizing philosophers shouldn’t be kings!” You realize that my claim entails yours, right?) I’m finding a lot of this exchange somewhat frustrating, because we seem to be talking past each other, and in a way where you seem to be implicitly attributing to me views or positions that I’ve already explicitly disavowed.
My sense is that we probably agree about which concrete things are bad; you perhaps have the false belief that I disagree with you on that, when in fact the only disagreement is about whether philosophy tells us to do the things we both agree are bad (I say it doesn’t). But if that doesn’t match your sense of the dialectic, maybe you can clarify what it is that you take us to disagree about?
[12/15: Edited to tone down an intemperate sentence.]
There is no sense to the idea that philosophically-informed decision-making is inherently more risky than philosophically ignorant decision-making. [Quite the opposite: it wasn’t until philosophers raised the stakes to salience that x-risk started to be taken even close to sufficiently seriously.]
I strongly disagree with this. The key reason is: most of the time, norms that have been exposed to evolutionary selection pressures beat explicit “rational reflection” by individual humans. One of the major mistakes of Enlightenment philosophers was to think it is usually the other way around. These mistakes were plausibly a necessary condition for some of the horrific violence that’s taken place since they started trending.
I often run into philosophy graduates who tell me that relying on intuitive moral judgements about particular cases is “arrogant”. I reply by asking “where do these intuitions come from?” The metaphysical realists say “they are truths of reason, underwritten by the non-natural essence of rationality itself”. The naturalists say: “these intuitions were transmitted to you via culture and genetics, both subject to aeons of evolutionary pressure”. I side with the naturalists, despite all the best arguments for non-naturalism (to my mind, they’re mostly bad!).
One way to think about the 21st century predicament is that we usually learn via trial and error and selection pressures, but in a world with modern technology this dynamic seems unlikely to go well.
it wasn’t until philosophers raised the stakes to salience that x-risk started to be taken even close to sufficiently seriously.
I agree that philosophers, especially Derek Parfit, Nick Bostrom and Tyler Cowen*, have helped get this up the agenda. So too have many economists, astronomers, futurists, etc. Philosophers don’t have a monopoly on identifying what matters in practice—in fact they’re usually pretty bad at this.
Same thing goes if we look at social movements instead of individuals: the anti-nuclear and environmental movements may have done more to get catastrophic risk up the agenda than effective altruism has so far, especially in terms of generating widespread cultural concern and a sense of unease, which certainly warmed up the audience for Bostrom, Parfit, and so on.
The effective altruism movement is (hopefully) only just getting started, and it has already achieved remarkable successes. So I do think we’re on track to play a critical role, and we have Bostrom and Parfit and Ord and Sidgwick and Cowen to thank for that, along with many, many others.
*Those who don’t see Tyler Cowen as fundamentally a philosopher—perhaps one of the greats, certainly better than Parfit (with whom he collaborated early on)—are not following carefully.
I’m not going to respond to the “show me the evidence” requests for now because I’m short on time and it’s hard to do this well. Also: I think you and most readers can probably identify a bunch of evidence in favour of these takes if you take a while to look.
I’m sorry to hear you’re finding this frustrating. Personally I’m enjoying our exchange because it’s giving me a reason to clarify and write down a bunch of things I’ve been thinking about for a long time, and I’m interested to hear what you and others make of them.
On Twitter I suggested we arrange a time to call. Would you be up for this? If yes, send me a DM.