This seems connected to a perennial question in EA: should organizations be means-focused or ends-focused? That is, should an EA-aligned org focus primarily on its methods or primarily on its outcomes? For example, in community building, an ends-focused approach would suggest growing as large as possible and getting as many people as possible to give effectively, even if you have to lie to them to do it. A means-focused approach would look more like what we have now: a heavy focus on keeping EA true to its values, even at the cost of convincing fewer people to give effectively than could be reached by methods that violate EA values like careful epistemics.
So far, EA orgs seem to have decided to be primarily means-focused, accepting the loss of some gains an ends-focus would make possible, since pursuing them would risk diluting EA’s values and mission. Folks in the community have been pretty vocal when they feel orgs list too close to ends-focus by compromising on EA values. I don’t know whether that will continue, or whether everyone in EA is on board with this choice, but it’s at least what I’ve observed happening. And given that many EAs are consequentialists, I expect we’ll see some version of this conversation for as long as EA exists.