Does “wisdom and intelligence” really represent a tractable idea to organize prioritization research around? What other options might be superior?
How promising should we expect the best identifiable interventions in wisdom and intelligence to be?
Rationality-related research, marketing, and community building (CFAR, Astral Codex Ten, LessWrong, Julia Galef, Clearer Thinking)
Lifehacking/biomedical (nootropics, antidepressants, air quality improvements, light therapy, quantified self)
It seems plausible that the observable things that make the rationalist Bay Area community look good come from the aptitude of its members and their intrinsic motivation and self-improvement efforts.
Without this pool of people, the institutions and competences of the community are not special, and it’s not even clear they’re above the baseline of similar communities.
So what?:
Since the community’s quality comes from attracting these people, not creating them, it’s difficult to clone LessWrong or create a new Slate Star Codex to expand the community in other cultures/places (indeed, these sites are already everywhere in some sense).
I am worried that the interventions described will fall short, or do some harm, but mainly make EA look silly.
This comment was originally longer, but it drew on content from an ongoing LessWrong thread that I think most people are aware of. To summarize the point: what seems to be going on is the base-rate level of negative experiences you would expect in a young, male-dominated community focused on intangible goals, which allows marginal, self-appointed leaders to build sources of authority.
Because the exceptional value comes from its people, but the problems come from (sadly) prosaic dynamics, I think it is wise to be concerned that perspectives/interventions aimed at creating a community will conflate the quality of the rationalists with the community’s norms/ideas/content.
Also, mainly because I guess I don’t trust execution, or think that execution is really demanding, I think this critique applies to other interventions, most directly to lifehacking.
Less directly, I think caution is good for other interventions, e.g. “Epistemic Security”, “Cognitive bias research”, “Research management and research environments (for example, understanding what made Bell Labs work)”.
The underlying problem isn’t “woo”; it’s that there are already other people investigating this, and the bar for founders seems high because of the downsides.
> Less directly, I think caution is good for other interventions, e.g. “Epistemic Security”, “Cognitive bias research”, “Research management and research environments (for example, understanding what made Bell Labs work)”.
I’d also agree that caution is good for many of the listed interventions. To me, that seems to be even more of a case for more prioritization-style research though, which is the main thing I’m arguing for.
I agree that the existing community (and the EA community) represents much, if not the vast majority, of the value we have now.
I’m also not particularly excited about lifehacking as a source for serious EA funding. I wrote the list to be somewhat comprehensive, and to encourage discussion (like this!), not because I think each area deserves a lot of attention.
I did think about “recruiting” as a wisdom/intelligence intervention. This seems more sensitive to the definition of “wisdom/intelligence” than other things, so I left it out here.
I’m not sure how extreme you’re meaning to be here. Are you claiming something like,
> “All that matters is getting good people. We should only be focused on recruiting. We shouldn’t fund any augmentation, like LessWrong / the EA Forum, coaching, or other sorts of tools. We also shouldn’t expect further returns to things like these.”
> I wrote the list to be somewhat comprehensive, and to encourage discussion (like this!), not because I think each area deserves a lot of attention.
> I’m not sure how extreme you’re meaning to be here. Are you claiming something like, “All that matters is getting good people. We should only be focused on recruiting. We shouldn’t fund any augmentation, like LessWrong / the EA Forum, coaching, or other sorts of tools. We also shouldn’t expect further returns to things like these.”
No, I am not that extreme. It’s not clear anyone should care about my opinion on “Wisdom and Intelligence”, but I guess this is it:
From this list, it seems like there’s a set of interventions that EAs have an advantage in. This probably includes “Software/Hardware”, e.g. promising AI/computer technologies. These domains also have somewhat tangible outputs and can accept weirder cultural/social dynamics. This seems like a great place to be open and weird.
Liberalism, culture, and virtue are also really important and should be developed. It also seems good to be ambitious or weird here, but EAs have less of an advantage in this domain. I am also worried about the possibility of accidentally canonizing marginal ideas (e.g. reinventions of psychology), or creating a place where they are constantly emitted. This would drive out serious people and could bleed into other areas. It seems like these risks can be addressed by careful attention to founder effects. I am guessing you have thought about this.
> augmentation, like LessWrong / the EA Forum, coaching, or other sorts of tools. We also shouldn’t expect further returns to things like these
My guess is counterintuitive, but it is that these existing institutions, which have shown they have good leaders, should be increased in quality, using large amounts of funding if necessary.
> My guess is counterintuitive, but it is that these existing institutions, which have shown they have good leaders, should be increased in quality, using large amounts of funding if necessary.
I think I agree, though I can’t tell how much funding you have in mind.
Right now we have relatively few strong and trusted people, but lots of cash. Figuring out ways, even unusually extreme ways, of converting cash into either augmenting these people or getting more of them seems fairly straightforward to justify.
I wasn’t actually thinking that the result of prioritization would always be that EAs end up working in the field. I would expect that in many of these intervention areas, it would be more pragmatic to just fund existing organizations.
My guess is that prioritization could be more valuable for directing money than for directing EA talent right now, because we just have so much money (in theory).
> I wasn’t actually thinking that the result of prioritization would always be that EAs end up working in the field. I would expect that in many of these intervention areas, it would be more pragmatic to just fund existing organizations.
Ok, this makes a lot of sense and I did not have this framing.
Low quality/low effort comment:
> I would expect that in many of these intervention areas, it would be more pragmatic to just fund existing organizations.
For clarity, one way of doing this is how Open Phil makes grants: well-defined cause areas, with good governance that hires extremely high-quality program officers with deep models/research, who make high-EV investments. The outcome of this, weighted by dollar, is that relatively few grants go to orgs “native to EA”. I don’t think you have to mimic the above; it might even be counterproductive and impractical.
> I wasn’t actually thinking that the result of prioritization would always be that EAs end up working in the field.
The reason my mind went to a different model of funding was related to my impression/instinct/lizard brain when I saw your post. Part of the impression went like:
There’s a “very-online” feel to many of these interventions. For example, “Pre-AGI” and “Data infrastructure”.
“Pre-AGI”. So, like, you mean machine learning, like Google or someone’s side hustle? This boils down to computers in general, since the median computer today uses data and can run ML trivially.
When someone suggests neglected areas, but 1) it turns out to be a buzzy field, 2) there seem to be tortured phrases, and 3) there’s an association with money, my guess is that something dumb or underhanded is going on.
Like the grant maker is going to look for “pre-AGI” projects, walk past every mainstream machine learning or extant AI safety project, and then fund some curious project in the corner.
10 months later, we’ll get an EA Forum post: “Why I’m concerned about Giving Wisdom”.
The above story contains (several) slurs and is not really what I believed.
I think it gives some texture to what some people might think when they see very exciting/trendy fields + money, and why careful attention to founder effects and aesthetics is important.
I’m not sure this is anything new and I guess that you thought about this already.
I agree there are ways for it to go wrong. There’s clearly a lot of poorly thought-out stuff out there. Arguably, the motivations to create ML came from desires to accelerate “wisdom and intelligence”, and… I don’t really want to accelerate ML right now.
All that said, the risks of ignoring the area also seem substantial.
The clear solution is to give it a go, but to go sort of slowly, and with extra deliberation.
In fairness, AI safety and bio risk research also have severe potential harms if done poorly (and some, occasionally even when done well). Now that I think about it, bio at least seems worse in this direction than “wisdom and intelligence”; it’s possible that AI is too.
Honestly, I think my comment is just focused on “quality control” and preventing harm.
Based on your comments, I think it is possible that I am completely aligned with you.
I just want to flag that I very much appreciate comments, as long as they don’t use dark arts or aggressive techniques.
Even if you aren’t an expert here, your questions can act as valuable data as to what others care about and think. Gauging the audience, so to speak.
At this point I feel like I have a very uncertain stance on what people think about this topic. Comments help here a whole lot.