This is an interesting post, and I’m glad you wrote it.
People are driven by incentives, which can be created with cash in a variety of ways
I agree with these ways, but I think it’s quite hard to manage incentives properly. You mention DARPA, but DARPA is a major bureaucracy comprised of people aligned to their own incentive structures, and it is ultimately part of the most powerful organization in the world (the US government). Nothing like that exists in AI safety—not even close. Money would certainly help with this, but it can’t just be straightforwardly turned into good research.
Where is that money going to come from? How are they going to fine-tune the latest models when OpenAI is spending billions?
It’s unclear to me that having EA people start an AI startup is more tractable than convincing other people that the work is worth funding. It would certainly be faster to convince people who already have money now than to create people who might make money later. I don’t have a strong opinion on this, but it doesn’t seem wholly justified.
It’s frustratingly difficult to predict what will actually be useful for AI Safety...But money is flexible. It’s hard to imagine a world where another billion doesn’t come in handy.
I don’t see how the flexibility of money makes any difference? Isn’t it frustratingly difficult to predict which uses of money will actually be useful for AI safety? In that case, you still have the same problem.
It’s unclear to me that having EA people start an AI startup is more tractable than convincing other people that the work is worth funding
Yeah, this is unclear to me too. But you can encourage lots of people to pursue earn-to-give paths (maybe a few will succeed). Not many people are in a position to persuade funders, and more people having that as an explicit goal seems dangerous.
Also, as an undergraduate student with short timelines, the startup path seems like a better fit.
I don’t see how the flexibility of money makes any difference? Isn’t it frustratingly difficult to predict which uses of money will actually be useful for AI safety? In that case, you still have the same problem.
I have to make important career decisions right now. It’s hard to know what will be useful in the future, but it seems likely that money will be. I could have made that point clearer.