I don’t know how much the FTX collapse is responsible for our current culture. They did cause unbelievable damage, acting extremely unethically, unilaterally, and recklessly in destructive ways. But they did have a world-scale ambition, urgency, and proclivity to actually make things happen in the world that I think central EA orgs and the broader EA community sorely lack, given the problems we’re hoping to solve.
But this is exactly why I don’t want to encourage heroic responsibility (despite the fact that I often take on that mindset myself). Empirically, its track record seems quite bad, and I’d feel that way even if you ignore FTX.
Like, my sense is that something along the lines of heroic responsibility causes people to:
- Predictably bite off more than they can chew, and have massively reduced impact as a result
  - If 100 people each solved 1% of a problem, you’d be in a good place. Instead, 100 EAs with heroic responsibility each try to solve 100% of the problem, each solve 0.01% of it, and you still have 99% left. (And in practice I expect many also move backwards.)
- Leave a genuinely impactful role because they can’t see how it will solve everything (and then go on to something not as good)
- Cut corners due to increased urgency and responsibility, which leads to worse outcomes, because actually those corners were important
- Underestimate the value of conventional wisdom
  - E.g. undervaluing the importance of management, ops, process, and maintenance, because it’s hard to state a clear, legible theory of change for them that is as potentially-high-upside as something like research
- Trick themselves into thinking a bet is worth taking (“if this has even a 1% chance of working, it would be worth it” but actually the chance is more like 0.0001%)
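To put toy numbers on that last failure mode (the payoff and cost figures here are illustrative assumptions of mine, not from the discussion): a bet with payoff $V$ and cost $c$ is worth taking iff

$$p \cdot V > c$$

With, say, $V = \$10\text{M}$ and $c = \$50\text{k}$: the imagined $p = 1\%$ gives $pV = \$100\text{k} > c$, so the bet looks clearly worth taking, while the actual $p = 0.0001\%$ gives $pV = \$10 \ll c$. The 10,000x error in the probability estimate silently flips the verdict.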
To be clear, in some sense these are all failures of epistemics, in that if you have sufficiently good epistemics then you wouldn’t make any of these mistakes even while taking on heroic responsibility. But in practice humans are enough of an epistemic mess that I think it’s better to just not adopt heroic responsibility, and instead err more in the direction of “the normal way to do things”.
While I really like the HPMOR quote, I don’t really resonate with heroic responsibility or with the “Everything is my fault” framing. Responsibility is a helpful social coordination tool, but it doesn’t feel very “real” to me. I try to take the most helpful/impactful actions, even when they don’t seem like “my responsibility” (while being cooperative, not unilateral, and operating under reasonable constraints).
I’m sympathetic to the idea that taking on heroic responsibility causes harm in certain cases, but I don’t see strong enough evidence that it causes more harm than good. The examples of moral courage from my talk all seem like examples of heroic responsibility with positive outcomes. And the converse counterparts to your bullets generally seem more compelling to me:
1) It seems more likely to me that people taking too little responsibility for making the world better off has caused far more harm (like billionaires not doing more to reduce poverty, factory farming, climate change, AI risk, etc., or to improve the media/disinformation landscape and political environment). The harm is just much less visible, since these are mostly failures of omission, not execution errors. It seems obvious to me that the world could be much better off today, and that the trajectory of the future could look much better than it does right now.
2) Not really the converse, but I don’t know of anyone who left an impactful role because they couldn’t see how it would solve everything. I’ve never heard of anyone whose bar for taking on a job is “must be able to solve everything.”
3) I see tons of apathy, greed, laziness, inefficiency, etc. that lead to worse outcomes. The world is on fire in various ways, but the vast majority of people don’t act like it.
4) Overvaluing conventional wisdom also causes tons of harm. How many well-resourced people never question general societal ethical norms (e.g. around the ethics of killing animals for food, how much to donate, or how much weight social impact should get in your career relative to salary)?
5) I’d argue EAs (and humans in general) are much more prone to prioritizing higher-probability/certainty, lower-EV options over higher-EV, lower-probability options (GiveWell donations over pro-global-health USG lobbying or political donations feels like a likely candidate). It’s very emotionally difficult to do something that has a low chance of succeeding. AI safety does seem like a strong counterexample in the EA community, but I’d guess a lot of the community’s prioritization of AI safety, and of the specific work people do, has more to do with intellectual interest and it being high-status in the community than with rigorous impact-optimization.
Two cruxes for whether to err more in the direction of doing things the normal way: 1) How well you expect things to go by default. 2) How easy it is to do good vs. cause harm.
I don’t feel great about 1), and honestly feel pretty good about 2), largely because I think that doing common-sense good things tends to actually be good, and that doing galaxy-brained, ends-justify-the-means things that seem bad to normal people (like committing fraud or violence or whatever) is usually actually bad.
I’m totally on board with “if the broader world thought more like EAs that would be good”, which seems like the thrust of your comment. My claim was limited to the directional advice I would give EAs.
Yeah, fair point. Maybe this is just reference class tennis, but my impression is that a majority of people who consider themselves EAs aren’t significantly prioritizing impact in their career and donation decisions. I agree, though, that for the subset of EAs who do, “heroic responsibility”/going overboard can be fraught.
Some things that come to mind include how often EAs seem to work long hours or on weekends; how willing EAs are to do higher-impact work when salaries are lower, or when it’s less intellectually stimulating or more stressful; how many EAs are willing to donate a large portion of their income; how many EAs think about prioritization and population ethics very rigorously; etc. I’m very appreciative of how much more I see these in the EA world than outside it, and I realize the above are unreasonable things to expect from people.
Perhaps this was mentioned elsewhere here, but if we look for precedents of people doing an enormous amount of good (I can only think of Stanislav Petrov and people making big steps toward curing diseases), they did not, I think, act recklessly. It seems more like they persistently applied themselves to a problem, not forcing an outcome, and aligned closely with others (like those eradicating smallpox). So if one wants a hero mindset, it might be good to emulate actual heroes who we both think did a lot of good and who also reduced the risk of their actions.
I think there are examples supporting many different approaches, and it depends immensely on what you’re trying to do, the levers available to you, and the surrounding context. E.g. in the more bold and audacious, less cooperative direction, Chiune Sugihara or Oskar Schindler come to mind. Petrov doesn’t seem like a clear example in the “non-reckless” direction, and I’d put Arkhipov in a similar boat: they both acted rapidly under uncertainty in a way the people around them disagreed with, and took responsibility for a whole big situation when it would probably have been very easy to tell themselves it wasn’t their job to do anything other than obey orders and go along with the group.
I agree. Reading your comment made me think that it might be interesting — even if just as a small experiment — to map out which historical figures we feel struck the ~right balance between ambition and caution.
I don’t know if it would reveal much, but perhaps reading about a few such people could help me (and maybe others) better calibrate our own mix of drive and risk aversion. I find it easier to internalize these balances through real people and stories than through abstract arguments. And that kind of reflection could perhaps, in only a small way, help prevent future crises of judgment like FTX.