Thanks for writing this post Victor, I think your context section reflects a really good, truth-seeking attitude to come into this with. From my perspective, it is also always valuable to have thoughtful critiques of key EA ideas. To respond to your points:
1 and 2. I agree that the messaging about maximisation carries the danger of people taking it too far, but I think it is quite defensible as an anchor point. Maybe that caveat should be more present in the handbook, but it is worth saying up front that >95% of EAs’ lives don’t look like those of the extreme naive optimiser in your framing.
I see EA more as “how can you do the most good with X resources”, where it is up to you to determine X in terms of your time, money, career, etc. When phrases begin with “EAs should”, I generally interpret that as “if you want to have more impact, then you should”. I think the moral-demandingness aspect is actually not very present in most EA discourse, and this is likely best for ensuring a healthy community.
EAs are of course human too, and the community, from what I have seen of it, is generally very supportive of people making decisions that are right for themselves when necessary (eg. career breaks, quitting a job which was very impactful, changing jobs to have kids, etc; see this example and read the comments). Even if you are a “hard-core utilitarian”, placing some value on your own happiness, motivation, etc is still good for helping you achieve the best you can. Most EAs live on quite healthy salaries, in nice work environments, with a supportive community. While I don’t deny that there are also mental health issues within the group, I think EA as a movement thus far hasn’t caused many people to be self-sacrificial to the point of harming their wellbeing.
On whether maximisation is a good goal in the first place: the current societal default in most altruistic work is to not consider optimisation or effectiveness at all. This has led to huge amounts of wasted time and money, which has in turn allowed massive amounts of suffering to continue. While your subpoint 5 about uncertainty is true, I think EA’s successes have shown that careful thought and evidence can increase the expected impact you have, hence the value EA places on rationality. Of course people make mistakes, and some projects aren’t successful or might even be net negative, but I think it is reasonable to say that the expected value of your actions is what matters. If you buy that the effectiveness of interventions is roughly heavy-tailed, then you should also expect the best options to be much better than the merely “good” ones, and so it is worth taking a maximisation mindset to capture the most value.
I don’t think saying “the world is a bad place” is a very useful or meaningful claim on its own, but I do think there is so much low-hanging fruit left for making the world much better, and that this is worth drawing attention to. People say things like “the world is bad” (which could be phrased better) because, honestly, a lot of the world just doesn’t care about massive issues like poverty, factory farming, or threats from eg. pandemics or AI, and I think it is somewhat important to draw attention to the status quo being a bit messed up.
3. Ah, your initial point is a classic argument that I think targets something no EA actually endorses. Moral uncertainty and worldview diversification are highly regarded in EA, and I think virtually everyone would immediately reject acts that cause huge suffering today in the hope of increasing future potential, for both moral and epistemic uncertainty reasons.
I think your points regarding the insignificance of today’s events for humanity’s long-term trajectory seem to rely heavily on a view of non-path-dependency. My guess is that how the next couple of centuries go on key issues like AI, international coordination norms, factory farming, and space governance could all significantly affect the long-term expected value of the future. I think ideas of hinginess are good to think about here; see “Hinge of history” on the EA Forum (effectivealtruism.org).
4. I agree it is generally a confusing topic and don’t have anything particularly useful to say, besides wanting to highlight that people in the community are also very unsure. Fwiw, I think most S-risk scenarios people worry about have more to do with digital suffering or astronomical-scale factory farming; I think human-slavery-type situations are also quite unlikely.
Thanks for the clarification about how 1 and 2 may look very different in the EA communities.
I’m not particularly concerned about the thought that people might be out there taking maximization too far; the framing of my observations is more like “well, here’s what going through the EA Handbook may prompt me to think about EA ideas, or about what other EAs may believe.”
After thinking about your reply, I realized I had made a bunch of assumptions based on things that might just be incidental and not strongly connected. I came away with the wrong impression that the EA Handbook is meant to be the most canonical and endorsed collection of EA fundamentals.
Here’s how I ended up there. In my encounters with EA resources, the Handbook is the only introductory “course” I’ve come across, and presumably because it is the only one of its kind, it’s also the only one that’s been promoted to me through multiple mediums. So I assumed it must be the most official introduction, having remained alone in that spot over multiple years; seeing it bundled with EA VP also seemed like an endorsement. I also made the subconscious assumption that, since there’s plenty of alternative high-quality EA writing out there, as well as resources put into producing it, the Handbook as a compilation is probably designed to be the most representative collection of EA meta, otherwise it wouldn’t still be promoted to me the way it has been.
I’ve had almost no interaction with the EA Forum before reading the Handbook, so I had very limited prior context to gauge how “meta” the Handbook is among EA communities, or how meta any of its individual articles are. (Someone has now helpfully provided a bunch of reading material that is also fundamental but offers quite different perspectives.)