In particular, I think it implies the only relevant type of “demand” is that coming from funders etc., whereas I’d want to frame this in terms of ways the world could be improved.
My position is that “demand” is a word for “what people will pay you for.” EA exists for a couple reasons:
Some object-level problems are global externalities, and even governments face a free rider problem. Others are temporal externalities, and the present time is “free riding” on the future. Still others are problems of oppression, where morally-relevant beings are exploited in a way that exposes them to suffering.
Free-rider problems by their nature do not generate enough demand for people to do high-quality work to solve them, relative to the expected utility of the work. This is the problem EA tackled in earlier times, when funding was the bottleneck.
Even when there is demand for high-quality work on these issues, supply is inelastic. Offering to pay a lot more money doesn’t generate much additional supply. This is the problem we’re exploring here.
The underlying root cause is lack of self-interested demand for work on these problems, which we are trying to subsidize to correct for the shortcoming.
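To make the inelasticity point above concrete, here's a minimal sketch (Python, with made-up numbers purely for illustration) of a constant-elasticity supply curve: when the elasticity of supply is well below 1, even doubling what funders offer to pay generates only a small increase in the amount of high-quality work supplied.

```python
# Illustrative only: made-up numbers, constant-elasticity supply curve.
# quantity_supplied = base_quantity * (price / base_price) ** elasticity

def quantity_supplied(price, base_price=1.0, base_quantity=100.0, elasticity=0.2):
    """Constant-elasticity supply curve; elasticity=0.2 means quite inelastic."""
    return base_quantity * (price / base_price) ** elasticity

q_before = quantity_supplied(1.0)   # at the current pay level
q_after = quantity_supplied(2.0)    # after funders double what they offer
print(f"Supply rises from {q_before:.0f} to {q_after:.0f} "
      f"({(q_after / q_before - 1) * 100:.0f}% more) when pay doubles.")
# With elasticity 0.2, doubling pay buys only ~15% more high-quality work.
```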
My position is that “demand” is a word for “what people will pay you for.”
This seems reasonable (at least in an econ/business context), but what I was really getting at in my comment is that your previous comment seemed to me to focus on demand and supply, note that they’ll pretty much never be in perfect equilibrium, and say “None of those problems indicate that something is wrong”, without noting that the thing that’s wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.
I think I sort-of agree with your other two points, but they seem to constrain the focus to “demand” in the sense of “how much will people pay for people to work on this” and “supply” in the sense of “people who are willing and able to work on this if given money”, whereas we could also think about things like what non-monetary factors drive various types of people to be willing to take the money to work on these things.
(I’m not sure if I’ve expressed myself well here. I basically just have a sense that the framing you’ve used isn’t clearly highlighting all the key things in a productive way. But I’m not sure there are actually any interesting, major disagreements here.)
Your previous comment seemed to me to focus on demand and supply, note that they’ll pretty much never be in perfect equilibrium, and say “None of those problems indicate that something is wrong”, without noting that the thing that’s wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.
In the context of the EA forum, I don’t think it’s necessary to specify that these are problems. To state it another way, there are three conditions that could exist (let’s say in a given year):
Grantmakers run out of money and aren’t able to fund all high-quality EA projects.
Grantmakers have extra money, and don’t have enough high-quality EA projects to spend it on.
Grantmakers have exactly enough money to fund all high-quality EA projects.
None of these situations indicate that something is wrong with the definition of “high quality EA project” that grantmakers are using. In situation (1), they are blessed with an abundance of opportunities, and the bottleneck to do even more good is funding. In situation (2), they are blessed with an abundance of cash, and the bottleneck to do even more good is the supply of high-quality projects. In situation (3), they have two bottlenecks, and would need both additional cash and additional projects in order to do more good.
No matter how many problems exist in the world (suffering, death, X-risk), some bottleneck or another will always exist. So the simple fact that grantmakers happen to be in situation (2) does not indicate that they are doing something wrong, or making a mistake. It merely indicates that this is the present bottleneck they’re facing.
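As a toy restatement of the three conditions above (purely illustrative; the numbers are hypothetical and this isn't anything grantmakers actually compute), note that the classification always returns some bottleneck; none of the three outcomes by itself says the bar is wrong.

```python
# Toy restatement of the three conditions above (hypothetical numbers).

def bottleneck(funding_available, cost_of_high_quality_projects):
    """Return which bottleneck a grantmaker faces in a given year."""
    if funding_available < cost_of_high_quality_projects:
        return "situation (1): funding is the bottleneck"
    if funding_available > cost_of_high_quality_projects:
        return "situation (2): the supply of high-quality projects is the bottleneck"
    return "situation (3): both funding and project supply are bottlenecks"

# Whatever the inputs, *some* bottleneck comes back.
print(bottleneck(10_000_000, 14_000_000))
print(bottleneck(10_000_000, 6_000_000))
print(bottleneck(10_000_000, 10_000_000))
```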
For the rest, I’d say that there’s a difference between “willingness to work” and “likelihood of success.” We’re interested in the reasons for EA project supply inelasticity. Why aren’t grantmakers finding high-expected-value projects when they have money to spend?
One possibility is that the people who could found projects, and the teams who could work on them, aren’t sufficiently motivated to do so by the monetary and non-monetary rewards on the table. Perhaps if this were addressed, we’d see an increase in supply.
An alternative possibility is that high-quality ideas/teams are rare right now, and can’t be had at any price grantmakers are willing or able to pay.
I think it’s not especially useful to focus on the division into just those three conditions. In particular, we could also have a situation where vetting is one of the biggest constraints, and even if we’re not in that situation, vetting is still a constraint: it’s not just about the number of high-EV projects (with a competent and willing team etc.) and the number of dollars, but also about whether the grantmakers can find the high-EV projects and discriminate between them and lower-EV ones.
Relatedly, there could be a problem of grantmakers giving to things that are “actually relatively low EV” (in a way that could’ve been identified by a grantmaker with more relevant knowledge and more time, or using a better selection process, or something like that).
So the simple fact that grantmakers happen to be in situation (2) does not indicate that they are doing something wrong, or making a mistake.
I think maybe there’s been some confusion where you’re thinking I’m saying grantmakers have “too high a bar”? I’m not saying that. (I’m agnostic on the question, and would expect it differs between grantmakers.)
Yeah, I am worried we may be talking past each other somewhat. My takeaway from the grantmaker quotes from FHI/OpenPhil was that they don’t feel they have room to grow in terms of determining the expected value of the projects they’re looking at. Very prepared to change my mind on this; I’m literally just going from the quotes in the context of the post to which they were responding.
Given that assumption (that grantmakers are already doing the best they can at determining the EV of projects), I think my three categories do carve nature at the joints. But if we abandon that assumption and assume that grantmakers could improve their evaluation process, and might discover that they’ve been neglecting to fund some high-EV projects, then that would be a useful thing for them to discover.
Oh, I definitely don’t think that grantmakers are already doing the best that could be done at determining the EV of projects. And I’d be surprised if any EA grantmaker thought that that was the case, and I don’t think the above quotes say that. The three quotes you gave are essentially talking about what the biggest bottleneck is, and saying that maybe the biggest bottleneck isn’t quite “vetting”, which is not the same as the claim that there’d be zero value in increasing or improving vetting capacity.
Also note that one of the three quotes still focuses on a reason why vetting may be inadequate: “as a grantmaker, you often do not have the domain experience, and need to ask domain experts, and sometimes macrostrategy experts… Unfortunately, the number of people with final authority is small, their time precious, and they are often very busy with other work.”
I also think that “doing the best they can at determining EV of projects” implies that the question is just whether the grantmakers’ EV assessments are correct. But what’s often happening is more like they either don’t hear about something, or (in a sense) they “don’t really make an EV assessment”, because a very quick heuristic/intuitive check suggested the EV was low, or simply that the EV of the project would be hard to assess (such that the EV of the grantmaker looking into it would be low).
I think there’s ample evidence that these things happen, and it’s obvious that they would happen, given the huge array of projects that could be evaluated, how hard they are to evaluate, how relatively few people are doing those evaluations, and how (as Jan notes in the above quote) relatively little domain expertise is available to them.
(None of this is intended as an insult to grantmakers. I’m not saying they’re “doing a bad job”, but rather simply making the very weak and common-sense claim that they aren’t already picking only and all the highest-EV projects, partly because there aren’t enough grantmakers to do all the evaluations, partly because some projects don’t come to their attention, partly because some projects haven’t yet gained sufficient credible signals of their actual EV, etc. Also, none of this is saying they should simply “lower their bar”.)
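One way to make the "EV of the grantmaker looking into it" point concrete is a toy value-of-information calculation (a sketch with hypothetical numbers, not a model any grantmaker actually uses): if a quick heuristic check suggests a project is unlikely to clear the bar, the expected payoff of a full evaluation can easily be less than the evaluator time it costs.

```python
# Toy value-of-information sketch (hypothetical numbers throughout).

def ev_of_evaluating(p_fundable, value_if_funded, evaluation_cost):
    """Expected value of doing a full evaluation, net of evaluator time.

    Assumes the evaluation only pays off when it surfaces a project
    worth funding; otherwise the evaluator's time is simply spent.
    """
    return p_fundable * value_if_funded - evaluation_cost

# A project a quick heuristic check says is promising:
print(ev_of_evaluating(p_fundable=0.30, value_if_funded=50, evaluation_cost=5))   # 10.0
# A project the same check says is unlikely to clear the bar:
print(ev_of_evaluating(p_fundable=0.05, value_if_funded=50, evaluation_cost=5))   # -2.5
```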
As one of very many data points suggesting that there is room to improve how much money can be spent and what it is spent on, and suggesting that grantmakers agree, here’s a quote from Luke Muehlhauser of Open Phil regarding their AI governance grantmaking:
Unfortunately, it’s difficult to know which “intermediate goals” we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI. Would tighter regulation of AI technologies in the U.S. and Europe meaningfully reduce catastrophic risks, or would it increase them by (e.g.) privileging AI development in states that typically have lower safety standards and a less cooperative approach to technological development? Would broadly accelerating AI development increase the odds of good outcomes from transformative AI, e.g. because faster economic growth leads to more positive-sum political dynamics, or would it increase catastrophic risk, e.g. because it would leave less time to develop, test, and deploy the technical and governance solutions needed to successfully manage transformative AI? For those examples and many others, we are not just uncertain about whether pursuing a particular intermediate goal would turn out to be tractable — we are also uncertain about whether achieving the intermediate goal would be good or bad for society, in the long run. Such “sign uncertainty” can dramatically reduce the expected value of pursuing some particular goal, often enough for us to not prioritize that goal.
As such, our AI governance grantmaking tends to focus on…
…research that may be especially helpful for learning how AI technologies may develop over time, which AI capabilities could have industrial-revolution-scale impact, and which intermediate goals would, if achieved, have a positive impact on transformative AI outcomes, e.g. via our grants to GovAI.
[and various other things]
So this is a case where a sort of “vetting bottleneck” could be resolved by more grantmakers, by grantmakers with more relevant expertise, or by grantmaking-relevant research. And I think that’s clearly the case in probably all EA domains (though note that I’m not claiming this is the biggest bottleneck in all domains).
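The "sign uncertainty" point in the quote above can be made concrete with a toy expected-value calculation (illustrative numbers only, not Open Phil's actual model): even a moderate probability that achieving an intermediate goal turns out to be harmful can wipe out most of its expected value, which is part of why such goals get deprioritized rather than simply funded harder.

```python
# Toy illustration of "sign uncertainty" (all numbers made up).

def expected_value(p_good, benefit_if_good, harm_if_bad):
    """EV of achieving an intermediate goal whose sign is uncertain."""
    return p_good * benefit_if_good - (1 - p_good) * harm_if_bad

print(expected_value(1.00, benefit_if_good=100, harm_if_bad=80))  # 100.0: no sign uncertainty
print(expected_value(0.70, benefit_if_good=100, harm_if_bad=80))  #  46.0: 30% chance it backfires
print(expected_value(0.55, benefit_if_good=100, harm_if_bad=80))  #  19.0: most of the EV is gone
```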