I’ve been musing about some critiques of EA, and one I like is “what’s the biggest thing that we are missing?”
In general, I don’t think we are missing things (lol), but here are my top picks:
It seems possible that we reach out to science-y tech people because they are the most similar to us. While this may genuinely be the cheapest way to get people now, there may be costs to the community in terms of diversity of thought (most sci/tech people are more similar to each other than the general population is).
I’m glad to see more outreach to people in developing nations
It seems obvious that science/tech people have the most to contribute to AI safety, but... maybe not?
Also, science/tech people have a particular racial/gender makeup, and there is a hidden assumption that there isn’t an effective way to reach a broader group. (Personally I hope that a load of resources going into India, Nigeria, Brazil, etc. will go some way here, but I dunno, it still feels like a legitimate question.)
People are scared of what the future might look like if it is shaped only by the views of MacAskill/Bostrom/SBF. Yeah, in fact my (poor) model of MacAskill is scared of this too. But we could do more to surface that we, too, wish a larger group were making these decisions.
We could build better ways for outsiders to feed into decision-making. I read a piece arguing that the effectiveness of community vegan meals is underrated in EA. Now I’m not saying it should be funded, but I was surprised to read that some of these conferences are 5000+ people (iirc). Maybe that genuinely is an oversight. But it’s really hard for high-signal information to get to decision-makers. That really is a problem we could work on. If it’s hard for people who speak EA-ese, how much harder is it for those who speak different community languages, whose concepts seem frustrating to us?
It seems obvious that science/tech people have the most to contribute to AI safety, but... maybe not?
More likely to me is a scenario of diminishing returns. I.e., tech people might be the most important to first order, but there are already a lot of brilliant tech people working on the problem, so one more won’t make much of a difference. Whereas a few brilliant policy people could devise a regulatory scheme that penalises reckless AI deployment, etc., making more of a difference on the margin.
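To make the diminishing-returns point concrete, here’s a toy sketch (the logarithmic-returns assumption and the field sizes are mine, purely illustrative, not a claim about the real numbers):

```python
import math

# Toy model: assume a field's total impact grows logarithmically with
# the number of people working in it (a stand-in for diminishing returns).
def total_impact(n_people: int) -> float:
    return math.log(1 + n_people)

# Marginal impact of the (n+1)th person joining a field of size n.
def marginal_impact(n_people: int) -> float:
    return total_impact(n_people + 1) - total_impact(n_people)

# Hypothetical field sizes: a crowded technical field vs. a small policy field.
print(marginal_impact(300))  # ~0.003 -- the 301st tech person
print(marginal_impact(10))   # ~0.087 -- the 11th policy person
```

Under any concave returns curve like this, the marginal person added to the smaller field contributes more, even if the larger field matters more in total.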
+1 for policy people