I’ll let Peter/Marcus/others give the organizational answer, but speaking for myself I’m pretty bullish about having more RP-like organizations. I think there are a number of good reasons for having more orgs like RP (or somewhat different from us), and these reasons seem stronger at first glance than the reasons for consolidation (e.g., reduced communication overhead, PR).
The EA movement has a strong appetite for research consultancy work, and RP is far from sufficient for meeting all the needs of the movement.
RP clones situated slightly differently can be helpful in allowing the EA movement to unlock more talent than RP will be able to.
For example, we are a remote-first/remote-only organization, which in theory means we can hire talent from anywhere. But in practice, many people may prefer working in an in-person org, so an RP clone with a physical location may unlock talent that RP is unable to productively use.
We have a particular hiring bar. It’s plausible to me that having a noticeably higher or lower hiring bar can result in a more cost-effective organization than us.
For example, having a higher hiring bar may allow you to create a small, tight-knit group of supergeniuses pursuing ambitious research agendas.
Having a lower hiring bar may allow you to take larger chances on untapped EA talent and may be better for scalability; also I have a strong suspicion that a lot of needed research work in EA “just isn’t that hard” and if it’s done by less competent people, this frees up other EA researchers to do more important work.
More generally, RP has explicitly or implicitly made a number of organizational decisions for how a research org can be set up, and it’s plausible/likely to me that greater experimentation at the movement level will allow different orgs to learn from each other.
Having RP competitors can help keep us on our toes, and improve quality via the normal good things that come from healthy competition.
Having an RP competitor can help spot-check us and point out our blindspots.
I’m pretty excited about an EA red-teaming institute, and maybe a good home for it is at RP. But even if it is situated at RP, who watches the watchmen? I think it’d be really good for there to be external checks/red-teaming/evaluation of RP research outputs.
Right now, the only org I trust to do this well is Open Phil. But Open Phil people are very busy, so I’d be really excited to see a different org spring up to red-team and evaluate us.
AFAICT from very rough BOTECs on the expected impact of RP’s research, RP’s work is massively cost-effective in expectation (flag: bias). If true, I think there’s a very simple economics argument that marginal cost (including opportunity cost) should equal marginal revenue (expected impact), so in theory we should be excited to see many competitors to RP until marginal cost-effectiveness becomes much lower.
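To make that economics argument concrete, here is a minimal sketch with entirely hypothetical numbers: keep funding additional RP-like orgs while the next org’s expected marginal impact still exceeds its marginal cost, and stop once it doesn’t. The decay rate, costs, and impact units are all made up for illustration.

```python
# Toy model (all numbers hypothetical) of "expand until marginal cost
# equals marginal benefit" for RP-like orgs.

def marginal_impact(n, base=10.0, decay=0.5):
    """Hypothetical expected impact (arbitrary units) of the n-th org,
    assuming diminishing returns as the low-hanging fruit gets picked."""
    return base * decay ** (n - 1)

def orgs_worth_funding(marginal_cost, base=10.0, decay=0.5, cap=100):
    """Count how many orgs clear the bar where marginal impact >= marginal cost."""
    n = 0
    while n < cap and marginal_impact(n + 1, base, decay) >= marginal_cost:
        n += 1
    return n

# With these made-up numbers, impact per successive org is 10, 5, 2.5, 1.25, ...
# At a marginal cost of 2 units, the first three orgs clear the bar.
print(orgs_worth_funding(marginal_cost=2.0))  # → 3
```

The point is just the shape of the argument: the optimal number of orgs depends on how fast marginal impact decays relative to marginal cost, not on any one org’s average cost-effectiveness.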
also I have a strong suspicion that a lot of needed research work in EA “just isn’t that hard” and if it’s done by less competent people, this frees up other EA researchers to do more important work.
I agree with that suspicion, especially if we include things like “Just collect a bunch of stuff in one place” or “Just summarise some stuff” as “research”. I think a substantial portion of my impact to date has probably come from that sort of thing (examples in this sentence from a post I made earlier today: “I’m addicted to creating collections”). It basically always feels like (a) a lot of other people could’ve done what I’m doing and (b) it’s kinda crazy no one had yet. I also sometimes don’t have time to execute on some of my seemingly-very-executable and actually-not-that-time-consuming ideas, and the time I do spend on such things does slow down my progress on other work that does seem to require more specialised skills. I also think this would apply to at least some things that are more classically “research” outputs than collections or summaries are.
But I want to push back on “this frees up other EA researchers to do more important work”. I think you probably mean “this frees up other EA researchers to do work that they’re more uniquely suited for”? I think (and your comment seems to imply you agree?) that there’s not a very strong correlation between importance and difficulty/uniqueness-of-skillset-required—i.e., many low-hanging fruit remain unplucked despite being rather juicy.
Strongly agree with this. While I was working on LEAN and the EA Hub I felt that there were a lot of very necessary and valuable things to do that nobody wanted to do (or fund) because they seemed too easy. But a lot of value is lost, and important things are undermined, if everyone turns their noses up at simple tasks. I’m really glad that since then CEA has significantly built up their local group support. But it’s a perennial pitfall to watch out for.
But I want to push back on “this frees up other EA researchers to do more important work”. I think you probably mean “this frees up other EA researchers to do work that they’re more uniquely suited for”? I think (and your comment seems to imply you agree?) that there’s not a very strong correlation between importance and difficulty/uniqueness-of-skillset-required—i.e., many low-hanging fruit remain unplucked despite being rather juicy.
I think this is probably true. One thing to flag here is that people’s counterfactuals are not necessarily in research. One belief I recently updated towards but haven’t fully incorporated into my decision-making is that for a non-trivial subset of EAs in prominent org positions (particularly STEM-trained, risk-neutral Americans with elite networks), counterfactuals might be more like expected E2G earnings in the mid-7 figures or so* than the low- to mid-6 figures I was previously assuming.
*To be clear, almost all of this EV is in the high-upside things; very few people make 7 figures working jobby jobs.
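The footnote’s arithmetic can be illustrated with a tiny expected-value calculation. The probability, exit size, and salaried-career total below are entirely hypothetical placeholders, chosen only to show how a small chance of a huge outcome can push expected earnings into the mid-7 figures even though the typical outcome is far lower.

```python
# Toy EV calculation (all numbers hypothetical): expected earnings-to-give
# can be mid-7 figures even if almost nobody earns 7 figures in a salaried
# job, because the EV is dominated by a low-probability, high-upside tail.

p_big_exit = 0.03            # hypothetical chance of a ~$100M startup exit
exit_value = 100_000_000     # hypothetical exit size
salaried_career = 2_000_000  # hypothetical lifetime earnings on a salaried path

ev = p_big_exit * exit_value + (1 - p_big_exit) * salaried_career
print(f"${ev:,.0f}")  # ≈ $4,940,000 — mid-7 figures, driven by the 3% tail
```

Note that the median outcome here is still the $2M salaried path; only the mean lands in the mid-7 figures, which is why risk-neutrality matters for the original claim.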
I agree on all points (except the nit-pick in my other comment).
A couple things I’d add:
I think this thread could be misread as “Should RP grow a bunch but no similar orgs be set up, or should RP grow less but other similar orgs be set up?”
If that was the question, I wouldn’t actually be sure what the best answer would be—I think it’d be necessary to look at the specifics, e.g. what are the other org’s specific plans, who are their founders, etc.?
Another tricky question would be something like “Should [specific person] join RP with an eye to helping it scale further, join some org that’s not on as much of a growth trajectory and try to get it onto one, or start a new org aiming to be somewhat RP-like?” Any of those three options could be best depending on the person and on other specifics.
But what I’m more confident of is that, in addition to RP growing a bunch, there should also be various new things that are very/somewhat/mildly RP-like.
Somewhat relatedly, I’d guess that “reduced communication overhead” and “PR” aren’t the main arguments in favour of prioritising growing existing good orgs over creating new ones or growing small potentially good ones. (I’m guessing you (Linch) would agree; I’m just aiming to counter a possible inference.)
Other, stronger arguments (in my view) include that past performance is a pretty good indicator of future performance (despite the protestation of a legion of disclaimers) and that there are substantial fixed costs to creating each new org.
See also this interesting comment thread.
But again, ultimately I do think there should be more new RP-like orgs being started (if started by suitable people with access to good advisors, etc.).