Thank you for sharing this!
My answer to the survey’s question “Given the growing salience of AI safety, how would you like EA to evolve?”:
I think EA is in a great place to influence the direction of AI progress, and many orgs and people should be involved in this project. However, many people in this forum seem to think that the most important contribution of the EA community is influencing this technology, and I think this is mistaken and misleading.
The alternative would be to continue supporting initiatives in this space, including AI safety-specific subcommunities, while also supporting a thriving EA community, one measured by the quality of its thought and decision making and by the number of people actively dedicating a sizable proportion of their resources toward doing the most good they can (in contrast with measuring communities and individuals by their deference to top-down cause prioritization).
I’m reasonably sure that the current wave of orgs and people working on AI safety is strong enough to sustain itself and grow well, and I’m worried about over-optimizing for short timelines.
(Sharing this because I’m uncertain and would be interested in thoughts/pushbacks)