I agree! And this might be a hot take (especially for those who are already deep into AI issues), but I also see the need, first and foremost, to advocate for AI safety within our EA community.
People interacting on this forum do not, in my opinion, give a fully representative picture of EAs: they tend to be very focused on AI, while much of the broader community didn't enter EA for 'longtermist' purposes (as much as I hate that label, which could apply to many causes branded as neartermist) and hasn't made the shift between what they consider highly impactful and CEA's recent turn toward devoting a large share of EA resources to longtermism.
People who have been making career switches and reading about global aid or animal welfare, and who suddenly find out that more than 50 percent of the talks and resources at EA Global are dedicated to AI rather than other causes, are lost. As a community builder, I am in the awkward position of having to explain, and convince many in my local community, that EA's change of focus is for the better. That focus comes from the top, and the top is closely tied to funding decisions (I'm not saying these are the same people, and it's obviously more complex than that, but the shift toward longtermism and AI is indisputable).
This results in many EAs feeling highly skeptical about the new focus. It is good that 80k is making simple videos to explain the risks associated with AI, but I still feel that community epistemics are poor when it comes to justifying this change, despite 80k's very clear website pages about AI safety. The content is there; the outreach, not so much.
And my resulting impression (because it's very hard to get actual numbers to gauge the truth) is that on one side we have AI aficionados, ready to switch careers and already deeply knowledgeable about these topics (usually with the convenient background in STEM, machine learning, etc.), the same people who comment a lot on the forum; and on the other side, the rest of the EA community, which hasn't felt much sense of belonging lately. I was planning to write a post about this, but I still need to clarify my thoughts and sharpen my arguments, as the loose structure of this comment shows.
So I guess my take is: before advocating for AI safety outside the community (or at the same time, though doing it first seems more strategic to me in terms of allocating resources), let's do it inside the community.
Footnote: I know about the Rethink Priorities survey indicating that 70 percent of EAs consider AI safety the most impactful thing to work on (I might be misremembering; not confident at all), but I have my reservations about how representative the survey actually is.
I don't understand why doing outreach specifically to EAs to convince them of this would be an effective focus. It seems neither important, tractable, nor neglected. The people in EA you're talking about are, I think, a small group compared to the general population; they aren't in high-leverage positions to change things relevant to AI; and they are already aware of the topic and have not bought the arguments.
That's because you start from the premise that the majority of the EA community is already convinced and into AI, which I don't think is true at all; the last post about this, showing diagrams of EAs in the community, was based purely on intuition and nothing else.
The vast majority of EAs are highly educated and wealthy people, and their skills are definitely needed in AI. Someone in EA will be brought onto a job in AI much more easily than someone who has only a vague understanding of it or doesn't have the skills. So yes, I do think they are in high-leverage positions, since they already occupy good jobs.
As for buying the arguments: try going against the grain and expressing doubts about how fast AI took over the EA community, how the funding is now distributed, and how it feels to see the vast majority of posts on the EA Forum dedicated to AI. Many of the EAs who think this way are not on the forum and prefer to stand aside, since they don't feel like they belong. I don't want to lose these people. And the fact that I get downvoted to hell every time I dare to say these things is itself basic evidence. Everyone who disagrees with me, please explain why instead of just downvoting; that only reinforces the message of 'this is not an opinion we condone' without any explanation.
I don't assume that they are convinced; I think that they are aware of the issues. They are also a tiny group compared to the general population, so I think you need a far stronger reason than has been suggested to focus on such a small group instead of the public.
And I think you’re misconstruing my position about EA versus AI safety—I strongly agree that they should be separate, as I’ve said elsewhere.
Yeah, I get your point, and factually, sure, it is a small group. I still think that advocating for AI safety within EA would be useful for community cohesion, and that finding qualified people to work on AI is easier within the community than among the general public, given the profile of EAs.
As for being aware of the issues, that is where we disagree. I don't think AI has been introduced to the community in a careful, thoughtful way, with good epistemics. AI very quickly became a topic for specialists and a self-evident priority, to the detriment of other EAs who have a hard time adjusting. Ignoring this will not lead anywhere good and should not be underestimated.