If you had $1B, and you weren’t allowed to give it to other grantmakers or fund prioritisation research, where might you allocate it?
$1B is a lot. It also gets really hard to spend if I don’t get to distribute it to other grantmakers. Here are some really random guesses. Please don’t hold me to this; I have thought about this topic some, but not under these specific constraints, so some of my ideas will probably be dumb.
My guess is I would identify the top 20 people who seem to be doing the best work on long-term-future stuff and give each of them at least $10M, which would allow each of them to reliably build an exoskeleton of support around themselves and increase their output.
My guess is that I would then invest a good chunk more into scaling up LessWrong and the EA Forum, and make it so that I could distribute funds to researchers working primarily on those forums (while building a system for peer evaluation to keep researchers accountable). My guess is this could consume another $100M over the next 10 years or so.
I expect it would take me at least a decade to distribute that much money. I would definitely continue taking in applications from people for organizations and projects, and just straightforwardly scale up LTFF spending of the same type, which I think could take another $40M over the next decade.
I think I would spend a substantial amount of money on prizes for people who seem to have done obviously really good things for the world. Giving $10M to Sci-Hub seems worth it. Maybe giving $5M to Daniel Ellsberg as a prize for his lifetime achievements. There are probably more people in this reference class who seem to me to have done heroic things but haven’t been even remotely well enough rewarded (it seems obvious that I would have wanted Einstein to die with at least a few million in the bank, so righting wrongs of that reference class seems valuable, though Einstein did at least get a Nobel Prize). My guess is one could spend another $100M this way.
It seems pretty plausible that one should consider buying a large newspaper with that money and optimizing it for actual careful analysis, without the need for ads. This seems pretty hard, but I also really don’t like the modern news landscape, and it doesn’t take that much money to run even a large newspaper like the Washington Post, so I think this is pretty doable. But it has the potential to take a good chunk of the $1B, so I am pretty unsure whether I would do it, even if you forced me to make a call right now (for reference, the Washington Post was acquired for $250M).
I would of course also pay my fair share toward all the good organizations that currently get funded by Open Phil. My guess is that would take about $100M over the next decade.
I would probably keep a substantial chunk in reserve for worlds where some kind of quick pivotal action is needed that requires a lot of funds. Like, I don’t know, a bunch of people pooling money for a last-minute acquisition of DeepMind or something to prevent an acute AI risk threat.
If I had the money right now I would probably pay someone to run a $100K-$1M study of the effects of Vitamin D on COVID. It’s really embarrassing that we don’t have more data on that yet, even though it has such a large effect.
Maybe I would try something crazy, like getting permission to establish a new city in some U.S. state, making it into a semi-libertarian utopia, and getting all the good people to move there? But that sure doesn’t seem like it would straightforwardly work out. Also, it seems like it would cost substantially more than $1B.
I’m really surprised by the prizes idea; I think things like the Future of Life Award are good, but if I got $1B I would definitely not think about spending potentially $100M on similar awards as an EA endeavor. Can you say more about this? Why do you think this is so valuable?
It seems to me that one of the biggest problems with the world is that only a small fraction of people who do a really large amount of good get rewarded much for it. It seems likely that this prevents many people from trying to do much good with their lives.
My favorite way of solving this kind of issue is with impact certificates, which have a decent amount of writing on them, and you can think of the above as just buying about $100M of impact certificates for the relevant people (in practice I expect that if you get a good impact certificate market going, which is a big if, you could productively spend substantially more than $1B).
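To make the idea slightly more concrete, here is a minimal toy sketch of what “buying impact certificates” could amount to. The mechanics, class names, and the `recipient` labels are illustrative assumptions rather than a description of any existing market; the only figures carried over from the discussion above are the Sci-Hub and Ellsberg amounts and the rough $100M budget.

```python
from dataclasses import dataclass

@dataclass
class ImpactCertificate:
    """A claim on credit for past work, purchased retroactively by a funder."""
    project: str
    recipient: str
    price_usd: int

# Retroactive purchases: the funder pays for work whose value is already apparent,
# rather than betting on proposals up front. Amounts are the ones floated above.
purchases = [
    ImpactCertificate(project="Sci-Hub", recipient="its creator", price_usd=10_000_000),
    ImpactCertificate(project="lifetime achievements", recipient="Daniel Ellsberg", price_usd=5_000_000),
]

budget = 100_000_000  # the rough prize/certificate budget guessed at above
spent = sum(c.price_usd for c in purchases)
print(f"Spent ${spent:,} of a ~${budget:,} retroactive-funding budget")
```

The design point is simply that payment happens after the value is demonstrated, which is what distinguishes this from ordinary grantmaking.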
The cop-out answer of course is to say we’d grow the fund team or, if that isn’t an option, we’d all start working full-time on the LTFF and spend a lot more time thinking about it.
If there’s some eccentric billionaire who will only give away their money right now to whatever I personally recommend, then off the top of my head:
For any long-termist org that (a) I’d usually want to fund at a small scale, and (b) whose leadership’s judgement I’d trust, I’d give them as much money as they can plausibly make use of in the next 10 years. I expect that even organisations that are not usually considered funding-constrained could probably produce 10-20% extra impact if they invested twice as much in their staff (let them rent really close to the office, pay for PAs or other assistants to save time, etc.).
I also think there can be value in having an endowment: it lets the organisation make longer-term plans, can raise the organisation’s prestige, and some things (like creating a professorship) often require endowments.
However, I do think there are some cases where it can be negative: some organisations benefit a lot from the accountability of donors, and being too well-funded can attract the wrong people, as with the resource curse. So I’d be selective here, but more in terms of “do I trust the board and leadership with a blank cheque?” than “at a detailed level, do I think this org is doing the most valuable work?”
I’d also be tempted to throw a lot of money at interventions that seem shovel-ready and robustly positive, even if they wouldn’t normally be something I’d be excited about. For example, I’d feel reasonably good about funding the CES for $10-20m, and probably making similar-sized grants to the Nuclear Threat Initiative, etc.
This is more speculative, but I’d be tempted to try to become the go-to angel investor or VC fund for AI startups. I think I’m in a reasonably good position for this now, being an AI researcher and also having a finance background, and having a billion dollars would help out here.
The goal wouldn’t be to make money (which is good, since most VCs don’t seem to do that well!). But being an early investor gives a lot of leverage over a company’s direction. Industry is a huge player in fundamental AI research, and in particular I’d put roughly 85% on the first transformative AI being developed by an industry lab, not academia. Having a board seat and early insight into a start-up that is about to develop the first transformative AI seems hugely valuable. Of course, there’s no guarantee I’d manage this: perhaps I’d miss that startup, or a national lab or pre-existing industrial lab (Google/Facebook/Huawei/etc.) develops the technology first. But start-ups are responsible for a big fraction of disruptive technology, so it’s a reasonable bet.
What’s your all-things-considered probability that the first transformative AI (defined by your lights) will be developed by a company that, as of December 2020, either a) does not exist or b) has not gone through Series A?
(Don’t take too much time on this question, I just want to see a gut check plus a few sentences if possible).
About 40%. This includes startups that later get acquired, where the acquiring company would not have been the first to develop transformative AI if the acquisition had not taken place. I think that is probably my modal prediction: the big tech companies are effectively huge VCs themselves, and their infrastructure provides a comparative advantage over a startup trying to do it entirely solo.
I think I put around 40% on it being a company that already exists (and has gone through Series A), and 20% on “other” (academia, national labs, etc.).
Conditioning on transformative AI being developed in the next 20 years, my probability for a new company developing it is a lot lower, maybe 20%. So part of this is just me not expecting transformative AI particularly soon, plus tech company half-lives plausibly being quite short. Google is only 21 years old!
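As a rough sanity check on how these numbers fit together, here is a minimal sketch of the implied bookkeeping. The 30% probability of transformative AI arriving within 20 years is purely an assumed placeholder (it isn’t stated anywhere above); the other figures are the ones given in this answer.

```python
# Stated estimates (unconditional) for who develops the first transformative AI:
p_new_company = 0.40   # company that doesn't exist / is pre-Series A as of Dec 2020
p_existing = 0.40      # already-established company
p_other = 0.20         # "other": academia, national labs, etc.
assert abs(p_new_company + p_existing + p_other - 1.0) < 1e-9

# Stated conditional estimate, plus one assumed input:
p_new_given_soon = 0.20  # P(new company | TAI within 20 years), stated above
p_soon = 0.30            # ASSUMPTION: P(TAI within 20 years); not given in the text

# Law of total probability:
#   p_new_company = p_new_given_soon * p_soon + p_new_given_later * (1 - p_soon)
p_new_given_later = (p_new_company - p_new_given_soon * p_soon) / (1 - p_soon)
print(f"Implied P(new company | TAI takes more than 20 years) ~ {p_new_given_later:.2f}")
```

With these particular numbers the implied conditional comes out around 0.49, which illustrates the point: the longer transformative AI takes, the more room there is for companies that don’t exist yet.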
Thanks a lot, really appreciate your thoughts here!