The cop-out answer, of course, is to say we’d grow the fund team or, if that isn’t an option, we’d all start working full-time on the LTFF and spend a lot more time thinking about it.
If there’s some eccentric billionaire who will only give away their money right now to whatever I personally recommend, then off the top of my head:
For any long-termist org (a) that I’d usually want to fund at a small scale and (b) whose leadership’s judgement I’d trust, I’d give them as much money as they can plausibly make use of in the next 10 years. I expect that even organisations not usually considered funding-constrained could probably produce 10-20% extra impact if they invested twice as much in their staff (letting them rent really close to the office, paying for PAs or other assistants to save time, etc.).
I also think there can be value in having an endowment: it lets the organisation make longer-term plans, can raise its prestige, and some things (like creating a professorship) often require endowments.
However, I do think there are some cases where it can be negative: some organisations benefit a lot from the accountability of donors, and being too well-funded can attract the wrong people, as with the resource curse. So I’d be selective here, but more in terms of “do I trust the board and leadership with a blank cheque?” than “at a detailed level, do I think this org is doing the most valuable work?”
I’d also be tempted to throw a lot of money at interventions that seem shovel-ready and robustly positive, even if they wouldn’t normally be something I’d be excited about. For example, I’d feel reasonably good about funding the CES at $10-20m, and making similar-sized grants to the Nuclear Threat Initiative, etc.
This is more speculative, but I’d be tempted to try to become the go-to angel investor or VC fund for AI startups. I think I’m in a reasonably good position for this now, as an AI researcher with a finance background, and having a billion dollars would help here.
The goal wouldn’t be to make money (which is good, since most VCs don’t seem to do that well!), but being an early investor gives a lot of leverage over a company’s direction. Industry is a huge player in fundamental AI research; in particular, I’d put around 85% on the first transformative AI being developed by an industry lab rather than academia. Having a board seat and early insight into a start-up that is about to develop the first transformative AI seems hugely valuable. Of course, there’s no guarantee I’d manage this: perhaps I’d miss that startup, or a national lab or pre-existing industrial lab (Google/Facebook/Huawei/etc.) would develop the technology first. But start-ups are responsible for a big fraction of disruptive technology, so it’s a reasonable bet.
What’s your all-things-considered probability that the first transformative AI (defined by your lights) will be developed by a company that, as of December 2020, either (a) does not exist or (b) has not gone through a Series A?
(Don’t take too much time on this question; I just want a gut check plus a few sentences if possible.)
About 40%. This includes startups that are later acquired, where the acquiring company would not have been the first to develop transformative AI without the acquisition. That acquisition route is probably my modal prediction: the big tech companies are effectively huge VCs themselves, and their infrastructure gives them a comparative advantage over a startup trying to do it entirely solo.
I’d put around 40% on it being a company that already exists (and has gone through a Series A), and 20% on “other” (academia, national labs, etc.).
Conditional on transformative AI being developed in the next 20 years, my probability for a new company developing it is a lot lower, maybe 20%. So part of this is just me not expecting transformative AI particularly soon, combined with tech-company half-lives being plausibly quite short. Google is only 21 years old!
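To make explicit how the unconditional 40% and the conditional 20% fit together, here is a minimal consistency check using the law of total probability. The 20-year timeline probability below is an assumed placeholder purely for illustration; the answer above doesn’t state one.

```python
# A sketch consistency check for the stated probabilities, not part of the
# original answer. Only p_soon is assumed; the other two numbers are from above.

p_new_overall = 0.40     # P(first TAI from a new / pre-Series-A company)
p_new_given_soon = 0.20  # same probability, conditional on TAI within 20 years
p_soon = 0.25            # ASSUMED P(TAI within 20 years), placeholder value

# Law of total probability:
#   p_new_overall = p_new_given_soon * p_soon + p_new_given_later * (1 - p_soon)
p_new_given_later = (p_new_overall - p_new_given_soon * p_soon) / (1 - p_soon)
print(f"Implied P(new company | TAI after 20+ years) ~= {p_new_given_later:.2f}")
# ~= 0.47: on longer timelines today's incumbents are more likely to have
# decayed (short tech-company half-life), so a new company becomes more likely.
```

Whatever value p_soon takes, the implied longer-timeline probability comes out above the 20% conditional figure, which matches the stated reasoning: the unconditional 40% is driven by the longer-timeline worlds.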
Thanks a lot, really appreciate your thoughts here!