Disagree. The natural no-Anthropic counterfactual is one in which Amazon invests billions into an alignment-agnostic AI company. On this view, Anthropic is levying a tax on investor interest in AI, where the tax pays for alignment. I’d put this tax at 50% (a rough order-of-magnitude figure).
If Anthropic were solely funded by EA money, and didn’t capture unaligned tech funds, this would be worse. Potentially far worse, since Anthropic’s impact would then have to be measured against the best alternative altruistic use of the money.
I suppose you see this Amazon investment as evidence that Anthropic is profit-motivated, or likely to become so. That’s possible, but you’d need to explain what further factors outweigh the above. My vague impression is that outside investment rarely accidentally costs existing stakeholders control of privately held companies. Is there evidence on this point?
I think the modal no-Anthropic counterfactual does not have an alignment-agnostic AI company that’s remotely competitive with OpenAI, which means there’s no external target for this Amazon investment. It’s not an accident that Anthropic was founded by former OpenAI staff who were substantially responsible for OpenAI’s earlier GPT scaling successes.
What do you think the bottleneck for this alternate AI company’s competitiveness would be? If it’s talent, why is it insurmountable? E.g. what would prevent them from hiring away people from the current top labs?
There are alternatives: x.AI and Inflection. Arguably, they only got going because the race was pushed to fever pitch by Anthropic splitting from OpenAI.
It seems more likely to me that they would have gotten started anyway once ChatGPT came out. That said, I was interpreting the counterfactual as Anthropic declining to partner with Amazon, rather than Anthropic never having existed.
I’m not sure they would’ve ramped up quite so quickly (i.e. attracting massive investment) if it weren’t for the race heating up with Anthropic entering. Either way, it’s all bad; it’s just a question of which is worse.
This is assuming that Anthropic is net positive even in isolation. They may be doing some alignment research, but they are also pushing the capabilities frontier. They are either corrupted by money and power, or hubristically think that they can actually save the world by following their strategy, rather than just end it. Regardless, they are happy to gamble hundreds of millions of lives (in expectation) without any democratic mandate. Their “responsible scaling” policy is anything but responsible (the phrase is basically an oxymoron at this stage, when AGI is on the horizon and alignment is so far from being solved).