I wasn’t actually thinking that the result of prioritization would always be that EAs end up working in the field. I would expect that in many of these intervention areas, it would be more pragmatic to just fund existing organizations.
Ok, this makes a lot of sense and I did not have this framing.
Low quality/low effort comment:
I would expect that in many of these intervention areas, it would be more pragmatic to just fund existing organizations.
For clarity, one way of doing this is how Open Phil makes grants: well-defined cause areas with good governance, which hire extremely high-quality program officers with deep models/research who make high-EV investments. The outcome of this, weighted by dollar, is that relatively few grants go to orgs "native to EA". I don't think you have to mimic the above; it might even be counterproductive and impractical.
I wasn’t actually thinking that the result of prioritization would always be that EAs end up working in the field.
The reason my mind went to a different model of funding was related to my impression/instinct/lizard-brain reaction when I saw your post. Part of that impression went something like:
There’s a “very-online” feel to many of these interventions. For example, “Pre-AGI” and “Data infrastructure”.
“Pre-AGI”. So, like, you mean machine learning, like Google or someone’s side hustle? This boils down to computers in general, since the median computer today uses data and can run ML trivially.
When someone suggests neglected areas, but 1) it turns out to be a buzzy field, 2) there seem to be tortured phrases, and 3) there's money attached, I suspect that something dumb or underhanded is going on.
Like the grant maker is going to look for “pre-AGI” projects, walk past every mainstream machine learning or extant AI safety project, and then fund some curious project in the corner.
Ten months later, we'll get an EA Forum post: "Why I'm concerned about Giving Wisdom".
The above story contains (several) slurs and is not really what I believed.
I think it gives some texture to what some people might think when they see very exciting/trendy fields + money, and why careful attention to founder effects and aesthetics is important.
I'm not sure any of this is new, and I'd guess you've thought about it already.
I agree there are ways for it to go wrong. There's clearly a lot of poorly thought-out stuff out there. Arguably, the motivation to create ML came from a desire to accelerate "wisdom and intelligence", and… I don't really want to accelerate ML right now.
All that said, the risks of ignoring the area also seem substantial.
The clear solution is to give it a go, but to go sort of slowly, and with extra deliberation.
In fairness, AI safety and bio risk research also have severe potential harms if done poorly (and some do, occasionally, even when done well). Now that I think about it, bio at least seems worse in this direction than "wisdom and intelligence"; it's possible that AI is too.