I am not fully sure, and it’s a bit late. Here are some thoughts that came to mind on thinking more about this:
I think I do personally believe if you actually think hard about the impact, few things matter, and also that the world is confusing and lots of stuff turns out to be net-negative (like, I think if you take AI X-risk seriously a lot of stuff that seemed previously good in terms of accelerating technological progress now suddenly looks quite bad).
And so, I don’t even know whether a community that just broadly encourages people to do things that seem ambitious and good ends up net-positive for the world, since the world does indeed strike me as the kind of place with lots of crucial considerations that suddenly invert the sign on various things. I am primarily excited about EA as a place that can collectively orient towards those crucial considerations and create incentives and systems aligned with them.
I am also separately excited about a community that just helps people reason better, but indeed one of the key things I would try to get across in such a community is the contingency of the goodness of various actions in the world, and that the world is confusing and heavy-tailed. That makes for a world where you really have to make the right decisions, or you might very well end up causing great harm, or missing out on extremely great benefits.
Useful perspective. (I’m excited about this debate because I think you’re wrong, but obviously feel free to stop responding anytime! You’ve already helped me a ton in clarifying my thoughts on this.)
First, what I agree with: I am excited by your last paragraph—my ideal EA community also helps people reason better, and the topics you listed definitely seem like part of the ‘curriculum’. I only think it needs to be introduced gently, and with low expectations (e.g. in my envisioned EA world, the ~bottom 75% of engaged EAs will probably not change their careers).
I even agree with this:
EA is a place that can collectively orient towards those crucial considerations and create incentives and systems that align with those crucial considerations
I have two main disagreements:
most stuff that seems good is good
siphoning people into AI without supporting those who remain leaves behind a hollow, inauthentic movement
Most stuff that seems good is good
You wrote:
lots of stuff turns out to be net-negative
I don’t really agree with this, but I don’t really expect to make much progress in a debate. I interpret this as you being generally against ‘progress studies’ as well? I put a pretty low prior on someone thinking they are working on something useful/innovative/altruistic, putting a lot of thought and effort behind it, and it ending up net-negative.
Siphoning people into AI
A thing that I perceive is: EA is an important onramp into AI safety work. (How does this work? EA is broadly acceptable and uncontroversial, so it gets a lot of people talking about it positively. Then the EA onboarding process is shaped to attract and identify people good at working on weird ideas, and pushes those people into AI.)
(To be clear, I may be misinterpreting you—you didn’t say this explicitly but I kind of get it from the “orient towards those crucial considerations” thing and so I’m addressing it directly.)
This is an okay thing to do on its own, and I think it is a valid reason for the community to exist. But not the only one! I don’t think it will work in the long run unless the community can exist on its own, independently of being a feeder into important projects. It has worked for a while because the recruitment pipeline flew “under the radar”; I expect that to stop working for various reasons.
One major point of the changes I’m proposing is to make that pipeline explicit, and to make it one optional way that people can engage with EA.