Recently I've been thinking about improving the EA-aligned research pipeline, and I'd be interested in the fund managers' thoughts on that. Some specific questions (feel free to just answer one or two, or to say things about the general topic but not these questions):
In What's wrong with the EA-aligned research pipeline?, I "briefly highlight[ed] some things that I (and I think many others) have observed or believe, which I think collectively demonstrate that the current processes by which new EA-aligned research and researchers are 'produced' are at least somewhat insufficient, inefficient, and prone to error." Do those observations or beliefs ring true to you? Would you diagnose the "problem(s)" differently?
More recently, I "briefly discuss[ed] 19 interventions that might improve [this] situation. I discuss[ed] them in very roughly descending order of how important, tractable, and neglected I think each intervention is, solely from the perspective of improving the EA-aligned research pipeline." Do you think any of those ideas seem especially great or terrible? Would your rank ordering be different to mine?
Do you think there are promising intervention options I omitted?
(No need to read more of those posts than you have the time and interest for. I expect you'd be able to come up with interesting thoughts on these questions without clicking any of those links, and definitely if you just read the summary sections without reading the rest of the posts.)
Re your 19 interventions, here are my quick takes on all of them
Creating, scaling, and/or improving EA-aligned research orgs
Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.
Creating, scaling, and/or improving EA-aligned research training programs
I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor people in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research, eg by mentoring people for an hour or two a week on projects that the mentor finds inside-view interesting (and therefore will be actually bought in to helping with). I think that in situations like this, it's very helpful for the mentor to be judged as Andrew Grove suggests: by the output of their organization + the output of neighboring organizations under their influence. That is, they should treat one of their key goals with their research interns as having the interns do things that the mentor actually thinks are useful. I think that not having this goal makes it much more tempting for the mentors to kind of snooze on the job and not really try to make the experience useful.
Yeah this seems good if you can do it, but I don't think this is that much of the bottleneck on research. It doesn't take very much time to evaluate a grant for someone to do research compared to how much time it takes to mentor them.
My current unconfident position is that I am very enthusiastic about funding people to do research if they have someone who wants to mentor them and be held somewhat accountable for whether they do anything useful. And so I'd love to get more grant applications from people describing their research proposal and saying who their mentor is; I can make that grant in like two hours (30 mins to talk to the grantee, 30 mins to talk to the mentor, 60 mins overhead). If the grants are for 4 months, then I can spend five hours a week and do all the grantmaking for 40 people. This feels pretty leveraged to me and I am happy to spend that time, and therefore I don't feel much need to scale this up more.
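A quick back-of-the-envelope check of that arithmetic (the per-grant hours, grant length, and weekly hours are from the comment; the weeks-per-month conversion is an assumption):

```python
# Back-of-the-envelope check of the grantmaking arithmetic above.
# Figures are taken from the comment; weeks-per-month is an assumed conversion.
hours_per_grant = 0.5 + 0.5 + 1.0   # talk to grantee + talk to mentor + overhead
grant_length_weeks = 4 * 4.33       # ~4 months (assumed ~4.33 weeks/month)
hours_per_week = 5

concurrent_grantees = hours_per_week * grant_length_weeks / hours_per_grant
print(round(concurrent_grantees))   # ~43, consistent with "all the grantmaking for 40 people"
```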
I think that grantmaking capacity is more of a bottleneck for things other than research output.
Scaling Effective Thesis, improving it, and/or creating new things sort-of like it
I don't immediately feel excited by this for longtermist research; I wouldn't be surprised if it's good for animal welfare stuff but I'm not qualified to judge. I think that most research areas relevant to longtermism require high context in order to contribute to, and I don't think that pushing people in the direction of good thesis topics is very likely to produce extremely useful research.
I'm not confident.
The post doesn't seem to exist yet so idk
Increasing and/or improving research by non-EAs on high-priority topics
I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff.
I feel pessimistic; I don't think that this is the bottleneck. I think that people doing research projects without mentors is much worse, and if we had solved that problem, then we wouldn't need this database as much. This database is mostly helpful in the very-little-supervision world, and so doesn't seem like the key thing to work on.
I feel pessimistic, but idk, maybe Elicit is really amazing. (It seems at least pretty cool to me, but idk how useful it is.) Seems like if it's amazing we should expect it to be extremely commercially successful; I think I'll wait to see if I'm hearing people rave about it and then try it if so.
I think this is worth doing to some extent, obviously; my guess is that EAs aren't as into forecasting as they should be (including me, unfortunately). I'd need to know your specific proposal in order to have more specific thoughts.
I think that facilitating junior researchers to connect with each other is somewhat good but doesn't seem as good as having them connect more with senior researchers somehow.
I'm into this. I designed a noticeable fraction of the Triplebyte interview at one point (and delivered it hundreds of times); I wonder whether I should try making up an EA interview.
Seems cool. I think a major bottleneck here is people who are extremely extroverted and have lots of background and are willing to spend a huge amount of time talking to a huge amount of people. I think that the job "spend many hours a day talking to EAs who aren't as well connected as would be ideal for 30 minutes each, in the hope of answering their questions and connecting them to people and encouraging them" is not as good as what I'm currently doing with my time, but it feels like a tempting alternative.
I am excited for people trying to organize retreats where they invite a mix of highly-connected senior researchers and junior researchers to one place to talk about things. I would be excited to receive grant applications for things like this.
I'm not sure that this is better than providing funding to people, though it's worth considering. I'm worried that it has some bad selection effects, where the most promising people are more likely to have money that they can spend living in closer proximity to EA hubs (and are more likely to have other sources of funding), and so the cheapo EA accommodations end up filtering for people who aren't as promising.
Another way of putting this is that I think it's kind of unhealthy to have a bunch of people floating around trying unsuccessfully to get into EA research; I'd rather they tried to get funding to try it really hard for a while, and if it doesn't go well, they have a clean break from the attempt and then try to do one of the many other useful things they could do with their lives, rather than slowly giving up over the course of years and infecting everyone else with despair.
I'm not sure; seems worth people making some materials, but I'd think that we should mostly be relying on materials not produced by EAs.
I am a total sucker for this stuff, and would love to make it happen; I don't think it's a very leveraged way of working on increasing the EA-aligned research pipeline though.
Yeah I'm into this; I think that strong web developers should consider reaching out to LessWrong and saying "hey do you want to hire me to make your site better".
I think Ben Todd is wrong here. I think that the number of extremely promising junior researchers is totally a bottleneck and we totally have mentorship capacity for them. For example, I have twice run across undergrads at EA Global who I was immediately extremely impressed by and wanted to hire (they both did MIRI internships and have IMO very impactful roles (not at MIRI) now). I think that I would happily spend ten hours a week managing three more of these people, and the bottleneck here is just that I don't know many new people who are that talented (and to a lesser extent, who want to grow in the ways that align with my interests).
I think that increasing the number of people who are eg top 25% of research ability among Stanford undergrads is less helpful, because more of the bottleneck for these people is mentorship capacity. Though I'd still love to have more of these people. I think that I want people who are between 25th and 90th percentile intellectual promisingness among top schools to try first to acquire some specific and useful skill (like programming really well, or doing machine learning, or doing biology literature reviews, or clearly synthesizing disparate and confusing arguments), because they can learn these skills without needing as much mentorship from senior researchers and then they have more of a value proposition to those senior researchers later.
This seems almost entirely useless; I don't think this would help at all.
Discovering, writing, and/or promoting positive case studies
Seems like a good use of someone's time.
---------------
This was a pretty good list of suggestions. I guess my takeaways from this are:
I care a lot about access to mentorship
I think that people who are willing to talk to lots of new people are a scarce and valuable resource
I think that most of the good that can be done in this space looks a lot more like "do a long schlep" than "implement this one relatively cheap thing, like making a website for a database of projects".
I wonder whether I should try making up an EA interview
I would be enthusiastic about this. If you don't do it, I might try doing this myself at some point.
I would guess the main challenge is to get sufficient inter-rater reliability; i.e., if different interviewers used this interview to interview the same person (or if different raters watched the same recorded interview), how similar would their ratings be?
I.e., I'm worried that the bottleneck might be something like "there are only very few people who are good at assessing other people" as opposed to "people typically use the wrong method to try to assess people".
(FWIW, at first glance, I'd also be enthusiastic about one of you trying this.)
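A minimal illustration of the inter-rater reliability worry above, with entirely made-up ratings for five hypothetical candidates:

```python
# Toy illustration of inter-rater reliability: two interviewers score the same
# five hypothetical candidates on a 1-5 scale (all numbers are made up).
import numpy as np

rater_a = np.array([4, 2, 5, 3, 3])
rater_b = np.array([3, 2, 4, 4, 2])

exact_agreement = np.mean(rater_a == rater_b)      # share of identical scores
correlation = np.corrcoef(rater_a, rater_b)[0, 1]  # Pearson correlation of scores

print(exact_agreement, round(correlation, 2))      # 0.2 0.66: raters track each other only loosely
```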
I think that increasing the number of people who are eg top 25% of research ability among Stanford undergrads is less helpful, because more of the bottleneck for these people is mentorship capacity.
Sorry, minor confusion about this. By "top 25%," do you mean 75th percentile? Or are you encompassing the full range here?
This seems almost entirely useless; I don't think this would help at all.
I'm pretty surprised by the strength of that reaction. Some followups:
How do you square that with the EA Funds (a) funding things that would increase the amount/quality/impact of EA-aligned research(ers), and (b) indicating in some places (e.g. here) the funds have room for more funding?
Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)?
Do you disagree that the funds have room for more funding?
Do you think increasing available funding wouldn't help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)?
Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
Re 1: I think that the funds can maybe disburse more money (though I'm a little more bearish on this than Jonas and Max, I think). But I don't feel very excited about increasing the amount of stuff we fund by lowering our bar; as I've said elsewhere on the AMA, the limiting factor on a grant to me usually feels more like "is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it" than "is this grant good enough to be worth the money".
I think that the funds' RFMF is only slightly real: I think that giving to the EAIF has some counterfactual impact but not very much, and the impact comes from slightly weird places. For example, I personally have access to EA funders who are basically always happy to fund things that I want them to fund. So being an EAIF fund manager doesn't really increase my ability to direct money at promising projects that I run across. (It's helpful to have the grant logistics people from CEA, though, which makes the EAIF grantmaking experience a bit nicer.) The advantages I get from being an EAIF fund manager are that EAIF seeks applications and so I get to make grants I wouldn't have otherwise known about, and also that Michelle, Max, and Jonas sometimes provide useful second opinions on grants.
And so I think that if you give to the EAIF, I do slightly more good via grantmaking. But the mechanism is definitely not via me having access to more money.
Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)?
I think that it will be easier to increase our grantmaking for things other than supporting EA-aligned researchers with salaries, because this is almost entirely limited by how many strong candidates there are, and it seems hard to increase this directly with active grantmaking. In contrast, I feel more optimistic about doing active grantmaking to encourage retreats for researchers etc.
Do you think increasing available funding wouldn't help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)?
I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
I think that increasing available funding basically won't help at all for causing interventions of the types you listed in your post; all of those are limited by factors other than funding.
(Non-longtermist EA is more funding constrained, of course; there's enormous amounts of RFMF in GiveWell charities, and my impression is that farm animal welfare also could absorb a bunch of money.)
Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too). I think that research on effective giving is particularly useless because I think that projects differ widely in their value, and my impression is that effective giving is mostly going to get people to give to relatively bad giving opportunities.
High Impact Athletes is an EAIF grantee who I feel positive about; I am enthusiastic about them not because they might raise funds but because they might be able to get athletes to influence culture in various ways (eg influencing public feelings about animal agriculture etc). And so I think it makes sense for them to initially focus on fundraising, but that's not where I expect most of their value to come from.
I am willing to fund orgs that attempt to just do fundraising, if their multiplier on their expenses is pretty good, because marginal money has more than zero value and I'd rather we had twice as much money. But I think that working for such an org is unlikely to be very impactful.
I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I'm worried that (at least in my case) this is too anchored on imagining "business as usual, but with more total capital". I'm wondering if most of the expected value of an additional $100B (especially when controlled by a single donor who can flexibly deploy it) comes from "crazy" and somewhat unlikely-to-pan-out options. I.e., things like:
Building an "EA city" somewhere
Buying a majority of shares of some AI company (or of relevant hardware companies)
Being able to spend tens of billions of $ on compute, at a time when few other actors are willing to do so
Buying the New York Times
Being among the first actors settling Mars
(Tbc, I think most of these things would be kind of dumb or impossible as stated, and maybe a "realistic" additional donor wouldn't be open to such things. I'm just gesturing at the rough shape of things which I suspect might contain a lot of the expected value.)
I think that "business as usual but with more total capital" leads to way less increased impact than 20%; I am taking into account the fact that we'd need to do crazy new types of spending.
Incidentally, you can't buy the New York Times on public markets; you'd have to do a private deal with the family who runs it.
Hmm. Then I'm not sure I agree. When I think of prototypical example scenarios of "business as usual but with more total capital" I kind of agree that they seem less valuable than +20%. But on the other hand, I feel like if I tried to come up with some first-principles-based "utility function" I'd be surprised if it had returns that diminish much more strongly than logarithmic. (That's at least my initial intuition; not sure I could justify it.) And if it was logarithmic, going from $10B to $100B should add about as much value as going from $1B to $10B, and I feel like the former adds clearly more than 20%.
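A quick check of the logarithmic-returns intuition above (purely illustrative): under log utility, equal multiplicative jumps in funding add equal utility.

```python
# Under u(funding) = log(funding), a 10x jump adds the same utility
# regardless of the starting point.
import math

gain_1B_to_10B = math.log(10e9) - math.log(1e9)
gain_10B_to_100B = math.log(100e9) - math.log(10e9)

print(gain_1B_to_10B, gain_10B_to_100B)   # both equal ln(10) ≈ 2.303
```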
(I guess there is also the question of what exactly we're assuming. E.g., should the fact that this additional $100B donor appears also make me more optimistic about the growth and ceiling of total longtermist-aligned capital going forward? If not, i.e. if I should compare the additional $100B to the net present expected value of all longtermist capital that will ever appear, then I'm much more inclined to agree with "business as usual + this extra capital adds much less than 20%". In this latter case, getting the $100B now might simply compress the period of growth of longtermist capital from a few years or decades to a second, or something like that.)
OK, on a second thought I think this argument doesn't work because it's basically double-counting: the reason why returns might not diminish much faster than logarithmic may be precisely that new, "crazy" opportunities become available.
Here's a toy model:
A production function roughly along the lines of utility = funding^0.2 * talent^0.6 (this has diminishing returns to funding*talent, but the returns diminish slowly)
A default assumption that longtermism will eventually end up with $30-$300B in funding; let's assume $100B
Increasing the funding from $100B to $200B would then increase utility by 15%.
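A quick check of that toy model's arithmetic (the functional form and exponents are the ones given above; talent is held fixed, so it cancels out of the ratio):

```python
# Toy model from the comment above: utility = funding**0.2 * talent**0.6.
# Holding talent fixed, the talent term cancels when comparing the two scenarios.
u_at_100B = 100e9 ** 0.2
u_at_200B = 200e9 ** 0.2

print(u_at_200B / u_at_100B - 1)   # ≈ 0.149, i.e. roughly the 15% increase stated above
```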
> Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too).
Just wanted to flag briefly that I personally disagree with this:
I think that fundraising projects can be mildly helpful from a longtermist perspective if they are unusually good at directing the money really well (i.e., match or beat Open Phil's last dollar), and are truly increasing overall resources*. I think that there's a high chance that more financial resources won't be helpful at all, but some small chance that they will be, so the EV is still weakly positive.
I think that fundraising projects can be moderately helpful from a neartermist perspective if they are truly increasing overall resources*.
* Some models/calculations that I've seen don't do a great job of modelling the overall ROI from fundraising. They need to take into account not just the financial cost but also the talent cost of the project (which should often be valued at rates vastly higher than are common in the private sector), the counterfactual donations / Shapley value (the fundraising organization often doesn't deserve 100% of the credit for the money raised; some of the credit goes to the donor!), and a ~10-15% annual discount rate (this is the return I expect for smart, low-risk financial investments).
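One way to picture the adjustments in that footnote is a sketch like the following. Every number here is hypothetical, and this is just one plausible way to wire the adjustments together, not a calculation from the comment:

```python
# Hypothetical fundraising-ROI sketch applying the adjustments above:
# credit-adjust the money raised, count talent at its (high) opportunity cost,
# and discount money that is only deployed later. All figures are made up.
raised_per_year = 1_000_000       # $ raised annually
counterfactual_share = 0.5        # credit that goes to the org rather than the donors
financial_cost = 150_000          # annual budget of the fundraising project
talent_cost = 300_000             # opportunity cost of staff time (often >> salaries)
discount_rate = 0.12              # ~10-15%/year, as in the footnote
years_until_deployed = 2          # raised money sits around before being spent

effective_raised = raised_per_year * counterfactual_share / (1 + discount_rate) ** years_until_deployed
multiplier = effective_raised / (financial_cost + talent_cost)

print(round(multiplier, 2))       # ≈ 0.89x here: a headline "raises $1M on $150k" can net out below 1x
```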
I still somewhat share Buck's overall sentiment: I think fundraising runs the risk of being a bit of a distraction. I personally regret co-running a fundraising organization and writing a thesis paper about donation behavior. I'd rather have spent my time learning about AI policy (or, if I was a neartermist, I might say e.g. charter cities, growth diagnostics in development economics, NTD eradication programs, or factory farming in developing countries). I would love it if EAs generally spent less time worrying about money and more about recruiting talent, improving the trajectory of the community, and solving the problems on the object level.
Overall, I want to continue funding good fundraising organizations.
I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
I'm curious how much $s you and others think that longtermist EA has access to right now/will have access to in the near future. The 20% number seems like a noticeably weaker claim if longtermist EA currently has access to $100B than if we currently have access to $100M.
I actually think this is surprisingly non-straightforward. Any estimate of the net present value of total longtermist $$ will have considerable uncertainty because it's a combination of several things, many of which are highly uncertain:
How much longtermist $$ is there now?
This is the least uncertain one. It's not super straightforward and requires nonpublic knowledge about the wealth and goals of some large individual donors, but I'd be surprised if my estimate on this was off by 10x.
What will the financial returns on current longtermist $$ be before they're being spent?
Over long timescales, for some of that capital, this might be "only" as volatile as the stock market or some other "broad" index.
But for some share of that capital (as well as on shorter time scales) this will be absurdly volatile. Cf. the recent fortunes some EAs have made in crypto.
How much new longtermist $$ will come in at which times in the future?
This seems highly uncertain because it's probably very heavy-tailed. E.g., there may well be a single source that increases total capital by 2x or 10x. Naturally, predicting the timing of such a single event will be quite uncertain on a time scale of years or even decades.
What should the discount rate for longtermist $$ be?
Over the last year, someone who has thought about this quite a bit told me first that they had updated from 10% per year to 6%, and then a few months later back again. This makes roughly a sixfold difference (close to an order of magnitude) in the present value of $$ coming in in 50 years.
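A quick check of how much that choice of discount rate matters, assuming simple exponential discounting over a 50-year horizon:

```python
# Present value of $1 arriving in 50 years, at a 6% vs. a 10% annual discount rate.
horizon = 50
pv_at_6 = 1 / 1.06 ** horizon     # ≈ 0.054
pv_at_10 = 1 / 1.10 ** horizon    # ≈ 0.0085

print(round(pv_at_6 / pv_at_10, 1))   # ≈ 6.4x: close to an order of magnitude, as noted above
```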
What counts as longtermist $$? If, e.g., the US government started spending billions on AI safety or biosecurity, most of which goes to things that from a longtermist EA perspective are kind of but not super useful, how would that count?
I think for some narrow notion of roughly "longtermist $$ as 'aligned' as Open Phil's longtermist pot" my 80% credence interval for the net present value is $30B - $1 trillion. I'm super confused how to think about the upper end because the 90th percentile case is some super weird transformative AI future. Maybe I should instead say that my 50% credence interval is $20B - $200B.
Generally my view on this isn't that well considered and probably not that resilient.
Interesting, thanks.
"… my 80% credence interval for the net present value is $30B - $1 trillion. I'm super confused how to think about the upper end because the 90th percentile case is some super weird transformative AI future. Maybe I should instead say that my 50% credence interval is $20B - $200B." [emphases added]
Shouldn't your lower bound for the 50% interval be higher than for the 80% interval? Or is the second interval based on different assumptions, e.g. including/ruling out some AI stuff?
(Not sure this is an important question, given how much uncertainty there is in these numbers anyway.)
Shouldn't your lower bound for the 50% interval be higher than for the 80% interval?
If the intervals were centered, i.e. spanning the 10th to 90th and the 25th to 75th percentile respectively, then it should be, yes.
I could now claim that I wasn't giving centered intervals, but I think what is really going on is that my estimates are not diachronically consistent even if I make them within 1 minute of each other.
I also now think that the lower end of the 80% interval should probably be more like $5-15B.
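A small illustration of why centered intervals nest, using a made-up lognormal belief distribution over net-present longtermist $$ (in $B); the parameters are arbitrary and only chosen to give a median around $100B:

```python
# Centered credence intervals nest: the 25th percentile always sits above the
# 10th percentile, so the 50% interval lies inside the 80% interval.
# The lognormal parameters below are made up for illustration.
from scipy.stats import lognorm

belief = lognorm(s=1.2, scale=100)   # median $100B, fairly wide spread
p10, p25, p75, p90 = belief.ppf([0.10, 0.25, 0.75, 0.90])

print(f"centered 80% interval: ${p10:.0f}B - ${p90:.0f}B")   # ≈ $21B - $465B
print(f"centered 50% interval: ${p25:.0f}B - ${p75:.0f}B")   # ≈ $45B - $225B, strictly inside
```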
I think we roughly agree on the direct effect of fundraising orgs, promoting effective giving, etc., from a longtermist perspective.
However, I suspect I'm (perhaps significantly) more optimistic than you about "indirect" effects from promoting good content and advice on effective giving, promoting it as a "social norm", etc. This is roughly because of the view I state under the first key uncertainty here, i.e., I suspect that encountering effective giving can for some people be a "gateway" toward more impactful behaviors.
One issue is that I think the sign and absolute value of these indirect effects are not that well correlated with the proxy goals such organizations would optimize, e.g., amount of money raised. For example, I'd guess it's much better for these indirect effects if the org is also impressive intellectually or entrepreneurially; if it produces "evangelists" rather than just people who'll start giving 1% as a "hobby", are quiet about it, and otherwise don't think much about it; if it engages in higher-bandwidth interactions with some of its audience; and if, in its communications, it at least sometimes mentions other potentially impactful behaviors.
So, e.g., GiveWell by these lights looks much better than REG, which in turn looks much better than, say, buying Facebook ads for AMF.
(I'm also quite uncertain about all of this. E.g., I wouldn't be shocked if after significant additional consideration I ended up thinking that the indirect effects of promoting effective giving, even in a "good" way, were significantly net negative.)
When I said that the EAIF and LTFF have room for more funding, I didn't mean to say "EA research is funding-constrained" but "I think some of the abundant EA research funding should be allocated here."
Saying "this particular pot has room for more funding" can be fully consistent with the overall ecosystem being saturated with funding.
Do you think increasing available funding wouldn't help with any EA stuff
I think it definitely helps a lot with neartermist interventions. I also think it still makes a substantial* difference in longtermism, including research, but the difference you can make through direct work is plausibly vastly greater (>10x greater).
* Substantial in the sense "if you calculate the expected impact, it'll be huge", not "substantial relative to the EA community's total impact".
When I said that the EAIF and LTFF have room for more funding, I didn't mean to say "EA research is funding-constrained" but "I think some of the abundant EA research funding should be allocated here."
Ah, good point. So is your independent impression that the very large donors (e.g., Open Phil) are making a mistake by not multiplying the total funding allocated to EAIF and LTFF by (say) a factor of 0.5-5?
(I don't think that that is a logically necessary consequence of what you said, but seems like it could be a consequence of what you said + some plausible other premises.
I ask about the very large donors specifically because things you've said elsewhere already indicate you think smaller donors are indeed often making a mistake by not allocating more funding to EAIF and LTFF. But maybe I'm wrong about that.)
I don't think anyone has made any mistakes so far, but they would (in my view) be making a mistake if they didn't allocate more funding this year.
Edit: Hmm, why do you think this? I don't remember having said that.
Actually I now think I was just wrong about that, sorry. I had been going off of vague memories, but when I checked your post history now to try to work out what I was remembering, I realised it may have been my memory playing weird tricks based on your donor lottery post, which actually made almost the opposite claim. Specifically, you say "For this reason, we believe that a donor lottery is the most effective way for most smaller donors to give the majority of their donations, for those who feel comfortable with it."
(Which implies you think that that's a more effective way for most smaller donors to give than giving to the EA Funds right away, rather than after winning a lottery and maybe ultimately deciding to give to the EA Funds.)
I think I may have been kind-of remembering what David Moss said as if it was your view, which is weird, since David was pushing against what you said.
I've now struck out that part of my comment.
FWIW, I agree that your concerns about "Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers" are well worth bearing in mind and that they make at least some versions of this intervention much less valuable or even net negative.
I think that people doing research projects without mentors is much worse, and if we had solved that problem, then we wouldn't need this database as much.
I think I agree with this, though part of the aim for the database would be to help people find mentors (or people/resources that fill similar roles). But this wasn't described in the title of that section, and will be described in the post coming out in a few weeks, so I'll leave this topic there :)
Thanks for this detailed response! Lots of useful food for thought here, and I agree with much of what you say.
Regarding Effective Thesis:
I think I agree that "most research areas relevant to longtermism require high context in order to contribute to", at least given our current question lists and support options.
I also think this is the main reason I'm currently useful as a researcher despite (a) having little formal background in the areas I work in and (b) there being a bunch of non-longtermist specialists who already work in roughly those areas.
On the other hand, it seems like we should be able to identify many crisp, useful questions that are relatively easy to delegate to people (particularly specialists) with less context, especially if accompanied by suggested resources, a mentor with more context, etc.
E.g., there are presumably specific technical-ish questions related to pathogens, antivirals, climate modelling, or international relations that could be delegated to people with good subject area knowledge but less longtermist context.
I think in theory Effective Thesis or things like it could contribute to that
After writing that, I saw you said the following, so I think we mostly agree here: "I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff."
OTOH, in terms of examples of this happening, I think at least Luke Muehlhauser seems to believe some of this has happened for Open Phil's AI governance grantmaking (though I haven't looked into the details myself), based on this post: https://www.openphilanthropy.org/blog/ai-governance-grantmaking
But in any case, I don't see the main value proposition as the direct impact of the theses Effective Thesis guides people towards or through writing. I see the main value propositions as (a) increasing the number of people who will go on to become more involved in an area, get more context on it, and do useful research in it later, and (b) making it easier for people who already have good context, priorities, etc. to find mentorship and other support
Rather than the direct value of the theses themselves
(Disclaimer: This is a quick, high-level description of my thoughts, without explaining all my related thoughts or re-reading Effective Thesis's strategy, impact assessment, etc.)