In a centrally planned economy like this, where the demand is artificially generated by non-market mechanisms, you'll always have either too much supply, too much demand, or a perception of complacency (where we've matched them up just right, but are disappointed that we haven't scaled them both up even more). None of those problems indicate that something is wrong. They just point to the present challenge in expanding this area of research. There will always be one challenge or another.
I think there's something valuable to this point, but I don't think it's quite right.
In particular, I think it implies the only relevant type of "demand" is that coming from funders etc., whereas I'd want to frame this in terms of ways the world could be improved. I'm not sure we could ever reach a perfect world where it seems there's zero room for additional impactful acts, but we could clearly be in a much better world where the room/need for additional impactful acts is smaller and less pressing.
Relatedly, until we reach a far better world, it seems useful to have people regularly spotting what there's an undersupply of at the moment and thinking about how to address that. The point isn't to reach a perfect equilibrium between the resources and then stay there, but to notice which type of resource tends to be particularly useful at the moment and then focus a little more on providing/finding/using that type of resource. (Though some people should still do other things anyway, for reasons of comparative advantage, taking a portfolio approach, etc.) I like Ben Todd's comments on this sort of thing.
In particular, I think it implies the only relevant type of "demand" is that coming from funders etc., whereas I'd want to frame this in terms of ways the world could be improved.
My position is that "demand" is a word for "what people will pay you for." EA exists for a couple of reasons:
Some object-level problems are global externalities, and even governments face a free-rider problem. Others are temporal externalities, where the present time is "free riding" on the future. Still others are problems of oppression, where morally relevant beings are exploited in a way that exposes them to suffering.
Free-rider problems by their nature do not generate enough demand for people to do high-quality work to solve them, relative to the expected utility of the work. This is the problem EA tackled in earlier times, when funding was the bottleneck.
Even when there is demand for high-quality work on these issues, supply is inelastic. Offering to pay a lot more money doesn't generate much additional supply. This is the problem we're exploring here.
The underlying root cause is a lack of self-interested demand for work on these problems, which we are trying to correct for by subsidizing that work.
My position is that "demand" is a word for "what people will pay you for."
This seems reasonable (at least in an econ/business context), but what I was really saying in my comment is that your previous comment seemed to focus on demand and supply, note that they'll pretty much always not be in perfect equilibrium, and say "None of those problems indicate that something is wrong", without noting that the thing that's wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.
I think I sort-of agree with your other two points, but they seem to constrain the focus to "demand" in the sense of "how much will people pay for people to work on this", and "supply" in the sense of "people who are willing and able to work on this if given money", whereas we could also think about things like which non-monetary factors drive various types of people to be willing to take the money to work on these things.
(I'm not sure if I've expressed myself well here. I basically just have a sense that the framing you've used isn't clearly highlighting all the key things in a productive way. But I'm not sure there are actually any interesting, major disagreements here.)
Your previous comment seemed to me to focus on demand and supply and note that they'll pretty much always not be in perfect equilibrium, and say "None of those problems indicate that something is wrong", without noting that the thing that's wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.
In the context of the EA Forum, I don't think it's necessary to specify that these are problems. To state it another way, there are three conditions that could exist (let's say in a given year):
Grantmakers run out of money and aren't able to fund all high-quality EA projects.
Grantmakers have extra money, and don't have enough high-quality EA projects to spend it on.
Grantmakers have exactly enough money to fund all high-quality EA projects.
None of these situations indicate that something is wrong with the definition of "high-quality EA project" that grantmakers are using. In situation (1), they are blessed with an abundance of opportunities, and the bottleneck to doing even more good is funding. In situation (2), they are blessed with an abundance of cash, and the bottleneck to doing even more good is the supply of high-quality projects. In situation (3), they have two bottlenecks, and would need both additional cash and additional projects in order to do more good.
No matter how many problems exist in the world (suffering, death, X-risk), some bottleneck or another will always exist. So the simple fact that grantmakers happen to be in situation (2) does not indicate that they are doing something wrong, or making a mistake. It merely indicates that this is the present bottleneck they're facing.
For the rest, I'd say that there's a difference between "willingness to work" and "likelihood of success." We're interested in the reasons for EA project supply inelasticity. Why aren't grantmakers finding high-expected-value projects when they have money to spend?
One possibility is that potential founders, and the teams to work with them, aren't motivated to do this work by the monetary and non-monetary rewards on the table. Perhaps if this were addressed, we'd see an increase in supply.
An alternative possibility is that high-quality ideas/teams are rare right now, and can't be had at any price grantmakers are willing or able to pay.
I think it's not especially useful to focus on the division into just those three conditions. In particular, we could also have a situation where vetting is one of the biggest constraints, and even if we're not in that situation, vetting is still a constraint: it's not just about the number of high-EV projects (with a competent and willing team etc.) and the number of dollars, but also whether the grantmakers can find the high-EV projects and discriminate between them and lower-EV ones.
Relatedly, there could be a problem of grantmakers giving to things that are "actually relatively low EV" (in a way that could've been identified by a grantmaker with more relevant knowledge and more time, or using a better selection process, or something like that).
So the simple fact that grantmakers happen to be in situation (2) does not indicate that they are doing something wrong, or making a mistake.
I think maybe there's been some confusion where you're thinking I'm saying grantmakers have "too high a bar"? I'm not saying that. (I'm agnostic on the question, and would expect it differs between grantmakers.)
Yeah, I am worried we may be talking past each other somewhat. My takeaway from the grantmaker quotes from FHI/OpenPhil was that they don't feel they have room to grow in terms of determining the expected value of the projects they're looking at. Very prepared to change my mind on this; I'm literally just going from the quotes in the context of the post to which they were responding.
Given that assumption (that grantmakers are already doing the best they can at determining the EV of projects), I think my three categories do carve nature at the joints. But if we abandon that assumption and suppose that grantmakers could improve their evaluation process, and might discover that they've been neglecting to fund some high-EV projects, then that would be a useful thing for them to discover.
Oh, I definitely don't think that grantmakers are already doing the best that could be done at determining the EV of projects. And I'd be surprised if any EA grantmaker thought that that was the case, and I don't think the above quotes say that. The three quotes you gave are essentially talking about what the biggest bottleneck is, and saying that maybe the biggest bottleneck isn't quite "vetting", which is not the same as the claim that there'd be zero value in increasing or improving vetting capacity.
Also note that one of the three quotes still focuses on a reason why vetting may be inadequate: "as a grantmaker, you often do not have the domain experience, and need to ask domain experts, and sometimes macrostrategy experts… Unfortunately, the number of people with final authority is small, their time precious, and they are often very busy with other work."
I also think that "doing the best they can at determining EV of projects" implies that the question is just whether the grantmakers' EV assessments are correct. But what's often happening is more like: they either don't hear about a project at all, or (in a sense) they don't really make an EV assessment, because a very quick heuristic/intuitive check suggested either that the EV was low or simply that the EV of the project would be hard to assess (such that the EV of the grantmaker looking into it would be low).
I think there's ample evidence that these things happen, and it's obvious that they would happen, given the huge array of projects that could be evaluated, how hard they are to evaluate, and how there are relatively few people doing those evaluations and (as Jan notes in the above quote) there is relatively little domain expertise available to them.
(None of this is intended as an insult to grantmakers. I'm not saying they're "doing a bad job", but rather making the very weak and common-sense claim that they aren't already picking only and all of the highest-EV projects, partly because there aren't enough grantmakers to do all the evaluations, partly because some projects don't come to their attention, partly because some projects haven't yet gained sufficiently credible signals of their actual EV, etc. Also, none of this is saying they should simply "lower their bar".)
For one of very many data points suggesting that there is room to improve how much money can be spent and what it is spent on, and that grantmakers agree, here's a quote from Luke Muehlhauser of Open Phil regarding their AI governance grantmaking:
Unfortunately, it's difficult to know which "intermediate goals" we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI. Would tighter regulation of AI technologies in the U.S. and Europe meaningfully reduce catastrophic risks, or would it increase them by (e.g.) privileging AI development in states that typically have lower safety standards and a less cooperative approach to technological development? Would broadly accelerating AI development increase the odds of good outcomes from transformative AI, e.g. because faster economic growth leads to more positive-sum political dynamics, or would it increase catastrophic risk, e.g. because it would leave less time to develop, test, and deploy the technical and governance solutions needed to successfully manage transformative AI? For those examples and many others, we are not just uncertain about whether pursuing a particular intermediate goal would turn out to be tractable; we are also uncertain about whether achieving the intermediate goal would be good or bad for society, in the long run. Such "sign uncertainty" can dramatically reduce the expected value of pursuing some particular goal, often enough for us to not prioritize that goal.
As such, our AI governance grantmaking tends to focus on…
…research that may be especially helpful for learning how AI technologies may develop over time, which AI capabilities could have industrial-revolution-scale impact, and which intermediate goals would, if achieved, have a positive impact on transformative AI outcomes, e.g. via our grants to GovAI.
[and various other things]
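The "sign uncertainty" point in the quote above can be made concrete with a toy expected-value calculation (hypothetical numbers, not from Open Phil): even a modest chance that a goal backfires can wipe out most of the value of pursuing it.

```python
# Toy sketch of sign uncertainty (illustrative numbers only).
# An intermediate goal helps with probability p_good and backfires otherwise.

def expected_value(p_good: float, benefit: float, harm: float) -> float:
    """EV of pursuing a goal that yields `benefit` with probability
    p_good, and costs `harm` with probability (1 - p_good)."""
    return p_good * benefit - (1 - p_good) * harm

# Confident the goal helps: EV is close to the full benefit (~90 here).
confident = expected_value(p_good=0.95, benefit=100, harm=100)

# Genuine sign uncertainty: EV collapses toward zero (~10 here),
# often below the bar for prioritizing the goal at all.
uncertain = expected_value(p_good=0.55, benefit=100, harm=100)

print(confident, uncertain)
```

The function and numbers are invented for illustration; the point is only that EV scales with how far `p_good` sits from 50%, not with the size of the benefit alone.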
So this is a case where a sort of "vetting bottleneck" could be resolved by more grantmakers, grantmakers with more relevant expertise, or grantmaking-relevant research. And I think that's clearly the case in probably all EA domains (though note that I'm not claiming this is the biggest bottleneck in all domains).