While I really like the HPMOR quote, I don’t really resonate with heroic responsibility or with the “Everything is my fault” framing. Responsibility is a helpful social coordination tool, but it doesn’t feel very “real” to me. I try to take the most helpful/impactful actions, even when they don’t seem like “my responsibility” (while being cooperative, not unilateral, and within reasonable constraints).
I’m sympathetic to the idea that taking on heroic responsibility causes harm in certain cases, but I don’t see strong enough evidence that it causes more harm than good. The examples of moral courage from my talk all seem like examples of heroic responsibility with positive outcomes. The converses of your bullet points also generally seem more compelling to me:
1) It seems more likely to me that people taking too little responsibility for making the world better off has caused far more harm (e.g. billionaires not doing more to reduce poverty, factory farming, climate change, and AI risk, or to improve the media/disinformation landscape and political environment). The harm is just much less visible, since these are mostly failures of omission rather than execution errors. It seems obvious to me that the world could be much better off today, and that the trajectory of the future could look much better than it does right now.
2) Not really a converse, but I don’t know of anyone leaving an impactful role because they couldn’t see how it would solve everything. I’ve never heard of anyone whose bar for taking on a job is “must be able to solve everything.”
3) I see tons of apathy, greed, laziness, and inefficiency that lead to worse outcomes. The world is on fire in various ways, but the vast majority of people don’t act like it.
4) Overvaluing conventional wisdom also causes tons of harm. How many well-resourced people never question general societal ethical norms (e.g. around the ethics of killing animals for food, how much to donate, or how much to prioritize social impact in your career relative to salary)?
5) I’d argue EAs (and humans in general) are much more prone to prioritizing higher-probability, lower-EV options over higher-EV, lower-probability options (GiveWell donations over pro-global-health USG lobbying or political donations feels like a likely candidate; see the toy sketch below). It’s very emotionally difficult to do something that has a low chance of succeeding. AI safety does seem like a strong counterexample in the EA community, but I’d guess that many community members’ prioritization of AI safety, and the specific work people do within it, has more to do with intellectual interest and its high status in the community than with rigorous impact-optimization.
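To make the probability-vs-EV trade-off concrete, here’s a minimal sketch with toy numbers. The probabilities and dollar figures are pure assumptions for illustration, not actual estimates for GiveWell-style donations or lobbying:

```python
# Illustrative expected-value comparison with hypothetical numbers.
# Neither option's figures are real estimates; they only show how a
# low-probability option can come out ahead on EV.

def expected_value(p_success: float, value_if_success: float) -> float:
    """EV of a binary outcome: probability of success times its value."""
    return p_success * value_if_success

# Option A: a near-certain donation (e.g. a GiveWell-style charity).
# Assume $10k of donations very reliably produces ~$10k of benefit.
ev_certain = expected_value(p_success=0.95, value_if_success=10_000)

# Option B: a long shot (e.g. lobbying for more global-health funding).
# Assume a 1% chance of unlocking ~$5M of benefit.
ev_longshot = expected_value(p_success=0.01, value_if_success=5_000_000)

print(f"Near-certain option EV: ${ev_certain:,.0f}")   # $9,500
print(f"Long-shot option EV:    ${ev_longshot:,.0f}")  # $50,000
```

Under these made-up numbers the long shot fails 99% of the time, yet its EV comes out ~5x higher, which is exactly the kind of trade-off I think people find emotionally hard to take.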
Two cruxes for whether to err more in the direction of doing things the normal way: 1) How well you expect things to go by default. 2) How easy it is to do good vs. cause harm.
I don’t feel great about 1), but honestly feel pretty good about 2), largely because I think doing common-sense good things tends to actually be good, while doing galaxy-brained, ends-justify-the-means things that seem bad to normal people (like committing fraud or violence) is usually actually bad.
I’m totally on board with “if the broader world thought more like EAs that would be good”, which seems like the thrust of your comment. My claim was limited to the directional advice I would give EAs.
Yeah, fair point. Maybe this is just reference class tennis, but my impression is that a majority of people who consider themselves EAs aren’t significantly prioritizing impact in their career and donation decisions. I agree that for the subset of EAs who do, “heroic responsibility”/going overboard can be fraught.
Some things that come to mind: how often EAs work long hours or on weekends; how willing EAs are to do higher-impact work when it pays less, is less intellectually stimulating, or is more stressful; how many EAs donate a large portion of their income; how many EAs think rigorously about prioritization and population ethics. I really appreciate how much more of this I see inside the EA world than outside it, though I realize the above are unreasonable things to expect of people.