To be clear, I don’t think we as a community should be scope insensitive. But here’s the FAQ I would write about this...
Q: Does EA mean I should only work on the most important cause areas?
No! Being in EA means you choose to do good with your life and to think about those choices. We hope that you’ll choose to improve your life / career / donations in more-altruistic ways, and in talking with you we might discover ideas for making your altruistic life even better.
Q: Does EA mean I should do or support [crazy thing X] to improve the world?
Probably not: if it sounds crazy to you, trust your reasoning! However, EA is a big umbrella and we’re nurturing lots of weird ideas; some ideas that seem crazy to one person might make sense to another. We’re committed to reasoning about ideas that might actually help the world even if they sound absurd at first. Contribute to this reasoning process and you might well make a big impact.
Q: Does EA’s “big umbrella” mean I should avoid criticizing people for not reaching their potential, or for not doing as much good as I think they could?
This is very nuanced! You’ll see lots of internal feedback and criticism in EA spaces. We do have a norm against loudly critiquing people’s plans, unsolicited, for not doing enough good, but this is overridden in cases where a) the person has asked for feedback first, or b) the person making the critique has a deep and nuanced understanding of the existing plan, as well as a strong relationship with the recipient of the feedback. Our advice: if you see something you want to critique, ask whether they want feedback before offering it.
Q: What about widely recommended, canonical public posts that list EA priorities and implicitly condemn anything not on the priority list?
...Yeah, this feels like a big part of the problem to me. I think it makes sense to write a standard disclaimer for such posts, saying “there are lots of good things not on this list” (GiveWell had something like this for a while, I think?), but I don’t know whether that is enough.
Q: So is EA scope sensitive or not?
We are definitely scope sensitive. One of the best ways reasoning can help us figure out how to make the world better is by comparing different options, putting numbers on things, or otherwise figuring out why path A is better than path B.
I like this comment, but I also genuinely think that this Q&A would indicate that EA had lost a lot of what I think makes it valuable, and I would likely be much less interested in staying engaged.
Useful input. Can you give a bit more color on your reaction? In particular, is this a disagreement with the core direction being proposed, or just with something I wrote that seems off? (If the latter, I’m not surprised; I wrote this quickly, trying to give the gist. If the former, I’m more surprised and interested in what I’m missing.)
I am not fully sure, and it’s a bit late. Here are some thoughts that came to mind on thinking more about this:
I think I do personally believe that if you actually think hard about impact, few things matter, and also that the world is confusing and lots of stuff turns out to be net-negative (for example, if you take AI X-risk seriously, a lot of work that previously seemed good because it accelerated technological progress suddenly looks quite bad).
And so I don’t even know whether a community that just broadly encourages people to do things that seem ambitious and good ends up net-positive for the world, since the world does strike me as the kind of place with lots of crucial considerations that suddenly invert the sign on various things. I am primarily excited about EA as a place that can collectively orient towards those crucial considerations and create incentives and systems that align with those crucial considerations.
I am also separately excited about a community that just helps people reason better, but one of the key things I would try to get across in such a community is how contingent the goodness of various actions is, and that the world is confusing and heavy-tailed. That makes for a world where you really have to make the right decisions, or you might very well end up having caused great harm, or having missed out on extremely great benefits.
Useful perspective. (I’m excited about this debate because I think you’re wrong, but obviously feel free to stop responding at any time! You’ve already helped me a ton in clarifying my thoughts on this.)
First, what I agree with: I am excited by your last paragraph—my ideal EA community also helps people reason better, and the topics you listed definitely seem like part of the ‘curriculum’. I only think it needs to be introduced gently, and with low expectations (e.g. in my envisioned EA world, the ~bottom 75% of engaged EAs will probably not change their careers).
I even agree with this:
“EA is a place that can collectively orient towards those crucial considerations and create incentives and systems that align with those crucial considerations.”
I have two main disagreements:
1. Most stuff that seems good is good.
2. Siphoning people into AI without supporting the ones left behind leaves a hollow, inauthentic movement.
Most stuff that seems good is good
You wrote:
“lots of stuff turns out to be net-negative”
I don’t really agree with this, but I don’t really expect to make much progress in a debate. I interpret this as you being generally against ‘progress studies’ also? I have a pretty low prior on someone thinking they are working on something useful/innovative/altruistic, putting a lot of thought and effort behind it, and it ending up net-negative.
Siphoning people into AI
A thing that I perceive is: EA is an important onramp into AI safety work. (How does this work? EA is broadly acceptable and incontrovertible, so it gets a lot of people talking about it positively. Then the EA onboarding process is shaped to attract and identify people who are good at working on weird ideas, and it pushes those people into AI.)
(To be clear, I may be misinterpreting you—you didn’t say this explicitly but I kind of get it from the “orient towards those crucial considerations” thing and so I’m addressing it directly.)
This is an okay thing to do on its own, and I think it is a valid reason for the community to exist. But not the only one! I don’t think it will work in the long run unless the community can also exist on its own, independently of being a feeder into important projects. This has worked for a while “under the radar” of the recruitment process, but I expect it to stop working for various reasons.
One major point of the changes I’m proposing is to make that role more explicit, and to make it just one optional way that people can engage with EA.
Thanks for writing!
Can you say a bit more about what you think EA has lost that makes it valuable?