This seems basically right to me. That said, I thought I'd share some mild pushback, because there are incentives against disagreeing with EA funders (not getting $), and so, when uncertain, it might be worth disagreeing publicly, if only to set some kind of norm and elicit better pushback.
My main uncertainty about all this, beyond what you’ve already mentioned, is that I’m not sure it would’ve been good to “lock in our pitch” at any previous point in EA history (building on your counterargument “we’re still highly uncertain about which strategies are best from an EA perspective, which is a big part of why truth-seeking and patience are important.”).
For example, what if EAs in the early 2010s decided to stop explaining the core principles of EA, and instead made an argument like:
1. Effective charities are, or could very plausibly be, extremely effective.
2. They are effective enough that donating to them is a clear and enormous opportunity to do good.
3. The above is sufficient to motivate people to take high-priority paths, like earning to give. We don't need to emphasise more complicated things like rigorous research, scope sensitivity, and expected value-based reasoning.
This argument probably differs from yours in important respects, but it illustrates the point. If EAs had started using the above argument instead of explaining the core principles of EA, it might have taken a lot longer for the movement to identify x-risks/transformative tech as a top priority. This all seems pretty new in the grand scheme of things, so I'd expect our priorities to keep changing a lot.
But then again, things haven’t changed that much recently, so I’m convinced by:
many EA-first and longtermist-first people are, in practice, primarily concerned about imminent x-risk and transformative technology, have been that way for a while, and (I think) anticipate staying that way.