$1B is a lot. It also gets really hard if I don’t get to distribute it to other grantmakers. Here are some really random guesses. Please don’t hold me to this; I have thought about this topic some, but not under these specific constraints, so some of my ideas will probably be dumb.
My guess is I would identify the top 20 people who seem to be doing the best work around long-term-future stuff, and give each of them at least $10M, which would allow each of them to reliably build an exoskeleton around them and increase their output.
My guess is that I would then invest a good chunk more into scaling up LessWrong and the EA Forum, and make it so that I could distribute funds to researchers working primarily on those forums (while building a system for peer evaluation to keep researchers accountable). My guess is this could consume another $100M over the next 10 years or so.
I expect it would take me at least a decade to distribute that much money. I would definitely continue taking in applications for organizations and projects from people and kind of just straightforwardly scale up LTFF spending of the same type, which I think could take another $40M over the next decade.
I think I would spend a substantial amount of money on prizes for people who seem to have done obviously really good things for the world. Giving $10M to Sci-Hub seems worth it. Maybe giving $5M to Daniel Ellsberg as a prize for his lifetime achievements. There are probably more people in this reference class who seem to me to have done heroic things but haven’t been even remotely well enough rewarded (like, it seems obvious that I would have wanted Einstein to die with at least a few million in the bank, so righting wrongs of that reference class seems valuable, though Einstein did at least get a Nobel Prize). My guess is one could spend another $100M this way.
It seems pretty plausible that one should consider buying a large newspaper with that money and optimizing it for actual careful analysis without the need for ads. This seems pretty hard, but also, I really don’t like the modern news landscape, and it doesn’t take that much money to even run a large newspaper like the Washington Post, so I think this is pretty doable. But I do think it has the potential to take a good chunk of the $1B, so I am pretty unsure whether I would do it, even if you were to force me to make a call right now (for reference, the Washington Post was acquired for $250M).
I would of course just pay my fair share of funding for all the good organizations that currently get funded by Open Phil. My guess is that would take about $100M over the next decade.
I would probably keep a substantial chunk in reserve for worlds where some kind of quick pivotal action is needed that requires a lot of funds. Like, I don’t know, a bunch of people pooling money for a last-minute acquisition of DeepMind or something to prevent an acute AI risk threat.
If I had the money right now I would probably pay someone to run a $100K-$1M study of the effects of Vitamin D on COVID. It’s really embarrassing that we don’t have more data on that yet, given how large the potential effect is.
Maybe I would try to do something crazy like try to get permission to establish a new city in some U.S. state that I would try to make into a semi-libertarian utopia and get all the good people to move there? But like, that sure doesn’t seem like it would straightforwardly work out. Also, seems like it would cost substantially more money than $1B.
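To make the rough arithmetic explicit, here is a quick back-of-the-envelope tally of the guesses above (the amounts are just the figures I threw out, the city idea is excluded since it would blow past the budget, and the reserve is whatever is left over; treat this as an illustration, not a real budget):

```python
# Back-of-the-envelope tally of the allocations sketched above (all figures in $M).
# These are the rough guesses from the text, not a real budget.
allocations = {
    "top ~20 people ($10M each)": 20 * 10,
    "LessWrong / EA Forum scaling + forum-based researchers": 100,
    "scaled-up LTFF-style grants": 40,
    "prizes for past heroic work": 100,
    "buying a large newspaper (WaPo went for ~$250M)": 250,
    "fair share of existing Open Phil-funded orgs": 100,
    "Vitamin D / COVID study": 1,
}

total_committed = sum(allocations.values())
reserve = 1000 - total_committed  # whatever is left for quick pivotal action

for name, amount in allocations.items():
    print(f"{name}: ${amount}M")
print(f"committed: ${total_committed}M, reserve: ~${reserve}M")
```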
I think I would spend a substantial amount of money on prizes for people who seem to have done obviously really good things for the world. Giving $10M to Sci-Hub seems worth it. Maybe giving $5M to Daniel Ellsberg as a prize for his lifetime achievements. There are probably more people in this reference class who seem to me to have done heroic things but haven’t been even remotely well enough rewarded (like, it seems obvious that I would have wanted Einstein to die with at least a few million in the bank, so righting wrongs of that reference class seems valuable, though Einstein did at least get a Nobel Prize). My guess is one could spend another $100M this way.
I’m really surprised by this; I think things like the Future of Life award are good, but if I got $1B I would definitely not think about spending potentially $100M on similar awards as an EA endeavor. Can you say more about this? Why do you think this is so valuable?
It seems to me that one of the biggest problems with the world is that only a small fraction of the people who do a really large amount of good get rewarded much for it. It seems likely that this prevents many people from trying to do much good with their lives.
My favorite way of solving this kind of issue is with Impact Certificates, on which there is a decent amount of existing writing. You can think of the above as just buying about $100M of impact certificates for the relevant people (in practice I expect that if you get a good impact certificate market going, which is a big if, you could productively spend substantially more than $1B).
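For concreteness, here is a minimal sketch of what a single retroactive purchase on an impact certificate market could look like. The class, field, and buyer names are made up for illustration; this isn’t a description of any existing system:

```python
# Minimal sketch of recording a retroactive purchase of an impact certificate.
# All names and fields here are illustrative assumptions, not an existing API.
from dataclasses import dataclass, field


@dataclass
class ImpactCertificate:
    project: str           # the completed work the certificate represents
    owner: str             # current holder; starts as whoever did the work
    purchase_history: list = field(default_factory=list)

    def buy(self, buyer: str, price_musd: float) -> None:
        """Record a retroactive purchase: the buyer pays the current owner
        for credit for impact that has already happened."""
        self.purchase_history.append((self.owner, buyer, price_musd))
        self.owner = buyer


# The prize ideas above, framed as certificate purchases:
scihub = ImpactCertificate(project="Sci-Hub", owner="Sci-Hub team")
scihub.buy(buyer="prize fund", price_musd=10.0)

ellsberg = ImpactCertificate(project="Pentagon Papers release", owner="Daniel Ellsberg")
ellsberg.buy(buyer="prize fund", price_musd=5.0)
```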