I agree that the existing community (and the EA community) represents much, if not the vast majority, of the value we have now.
I’m also not particularly excited about lifehacking as a target for serious EA funding. I wrote the list to be somewhat comprehensive, and to encourage discussion (like this!), not because I think each area deserves a lot of attention.
I did think about “recruiting” as a wisdom/intelligence intervention. This seems more sensitive to the definition of “wisdom/intelligence” than other things, so I left it out here.
I’m not sure how extreme you’re meaning to be here. Are you claiming something like,
> “All that matters is getting good people. We should only be focused on recruiting. We shouldn’t fund any augmentation, like LessWrong / the EA Forum, coaching, or other sorts of tools. We also shouldn’t expect further returns to things like these.”
> I wrote the list to be somewhat comprehensive, and to encourage discussion (like this!), not because I think each area deserves a lot of attention.
> I’m not sure how extreme you’re meaning to be here. Are you claiming something like, “All that matters is getting good people. We should only be focused on recruiting. We shouldn’t fund any augmentation, like LessWrong / the EA Forum, coaching, or other sorts of tools. We also shouldn’t expect further returns to things like these.”
No, I am not that extreme. It’s not clear anyone should care about my opinion on “Wisdom and Intelligence”, but here it is:
From this list, it seems like there’s a set of interventions that EAs have an advantage in. This probably includes “Software/Hardware”, e.g. promising AI/computer technologies. Also, these domains have somewhat tangible outputs and can accept weirder cultural/social dynamics. This seems like a great place to be open and weird.
Liberalism, culture, and virtue are also really important and should be developed. It also seems good to be ambitious or weird here, but EAs have less of an advantage in this domain. Also, I am worried about the possibility of accidentally canonizing marginal ideas, or creating a place where marginal ideas (e.g. reinventions of psychology) are constantly emitted. That would drive out serious people and could bleed into other areas. It seems like these risks can be addressed by careful attention to founder effects. I am guessing you have thought about this.
> augmentation, like LessWrong / the EA Forum, coaching, or other sorts of tools. We also shouldn’t expect further returns to things like these
My guess is counterintuitive, but it is that these existing institutions, the ones that have shown they have good leaders, should be increased in quality, using large amounts of funding if necessary.
> My guess is counterintuitive, but it is that these existing institutions, the ones that have shown they have good leaders, should be increased in quality, using large amounts of funding if necessary.
I think I agree, though I can’t tell how much funding you have in mind.
Right now we have relatively few strong and trusted people, but lots of cash. Figuring out ways, even unusually extreme ways, of converting cash into either augmenting these people or getting more of them seems fairly straightforward to justify.
I wasn’t actually thinking that the result of prioritization would always be that EAs end up working in the field. I would expect that in many of these intervention areas, it would be more pragmatic to just fund existing organizations.
My guess is that prioritization could be more valuable for deploying money than for deploying EA talent right now, because we just have so much money (in theory).
> I wasn’t actually thinking that the result of prioritization would always be that EAs end up working in the field. I would expect that in many of these intervention areas, it would be more pragmatic to just fund existing organizations.
Ok, this makes a lot of sense and I did not have this framing.
Low quality/low effort comment:
> I would expect that in many of these intervention areas, it would be more pragmatic to just fund existing organizations.
For clarity, one way of doing this is how Open Phil makes grants: well-defined cause areas with good governance that hire extremely high-quality program officers with deep models/research, who make high-EV investments. The outcome of this, weighted by dollar, is that relatively few grants go to orgs “native to EA”. I don’t think you have to mimic the above; it might even be counterproductive and impractical.
> I wasn’t actually thinking that the result of prioritization would always be that EAs end up working in the field.
The reason my mind went to a different model of funding was related to my impression/instinct/lizard brain when I saw your post. Part of the impression went like:
There’s a “very-online” feel to many of these interventions. For example, “Pre-AGI” and “Data infrastructure”.
“Pre-AGI”. So, like, you mean machine learning, like Google or someone’s side hustle? This boils down to computers in general, since the median computer today uses data and can run ML trivially.
When someone suggests neglected areas, but 1) it turns out to be a buzzy field, 2) there seem to be tortured phrases, and 3) there’s money associated, I guess that something dumb or underhanded is going on.
Like the grant maker is going to look for “pre-AGI” projects, walk past every mainstream machine learning or extant AI safety project, and then fund some curious project in the corner.
Ten months later, we’ll get an EA Forum post: “Why I’m concerned about Giving Wisdom”.
The above story contains (several) slurs and is not really what I believed.
I think it gives some texture to what some people might think when they see very exciting/trendy fields + money, and why careful attention to founder effects and aesthetics is important.
I’m not sure this is anything new and I guess that you thought about this already.
I agree there are ways for it to go wrong. There’s clearly a lot of poorly thought-out stuff out there. Arguably, the motivations to create ML come from desires to accelerate “wisdom and intelligence”, and… I don’t really want to accelerate ML right now.
All that said, the risks of ignoring the area also seem substantial.
The clear solution is to give it a go, but to go sort of slowly, and with extra deliberation.
In fairness, AI safety and bio risk research also have severe potential harms if done poorly (and some harms, occasionally, even when done well). Now that I think about it, bio at least seems worse in this direction than “wisdom and intelligence”; it’s possible that AI is too.
I just want to flag that I very much appreciate comments, as long as they don’t use dark arts or aggressive techniques.
Even if you aren’t an expert here, your questions can act as valuable data as to what others care about and think. Gauging the audience, so to speak.
At this point I feel like I have a very uncertain stance on what people think about this topic. Comments help here a whole lot.