If this is true, I will update even further in the direction of the creation of Anthropic being a net negative for the world.
Amazon is a massive multinational driven almost solely by profit, and it will continuously push for more and more while paying less and less attention to safety.
It surprised me a bit that Anthropic would allow this to happen.
Disagree. The natural no-Anthropic counterfactual is one in which Amazon invests billions into an alignment-agnostic AI company. On this view, Anthropic is levying a tax on AI interest, where the tax pays for alignment. I’d put this tax at 50% (a rough order-of-magnitude number).
If Anthropic were solely funded by EA money and didn’t capture unaligned tech funds, this would be worse. Potentially far worse, since Anthropic’s impact would have to be measured against the best alternative altruistic use of that money.
I suppose you see this Amazon investment as evidence that Anthropic is profit-motivated, or likely to become so. This is possible, but you’d need to explain what further factors outweigh the above. My vague impression is that outside investment rarely accidentally costs existing stakeholders control of privately held companies. Is there evidence on this point?
I think the modal no-Anthropic counterfactual does not have an alignment-agnostic AI company that’s remotely competitive with OpenAI, which means there’s no external target for this Amazon investment. It’s not an accident that Anthropic was founded by former OpenAI staff who were substantially responsible for OpenAI’s earlier GPT scaling successes.
What do you think the bottleneck for this alternate AI company’s competitiveness would be? If it’s talent, why is it insurmountable? E.g. what would prevent them from hiring away people from the current top labs?
There are alternatives—x.AI and Inflection. Arguably they only got going because the race was pushed to fever pitch by Anthropic splitting from OpenAI.
It seems more likely to me that they would have gotten started anyway once ChatGPT came out, although I was interpreting the counterfactual as Anthropic declining to partner with Amazon, rather than Anthropic not existing.
I’m not sure they would’ve ramped up quite so quickly (i.e. attracted massive investment) if it weren’t for the race heating up with Anthropic entering. Either way, it’s all bad; it’s just a question of which is worse.
This is assuming that Anthropic is net positive even in isolation. They may be doing some alignment research, but they are also pushing the capabilities frontier. They are either corrupted by money and power, or hubristically think that they can actually save the world following their strategy, rather than just end it. Regardless, they are happy to gamble hundreds of millions of lives (in expectation) without any democratic mandate. Their “responsible scaling” policy is anything but (it’s basically an oxymoron at this stage, when AGI is on the horizon and alignment is so far from being solved).
Yeah, I’m not sure how much this is good news, or about the level of interference and vested interests that will inevitably come up.
I was going to reply to this comment, but after seeing the comments here, I’ve decided to abstain from sharing information on this specific post. The confidence that people here have about this being bad news, rather than uncertain news, indicates very dangerous levels of incompetence, narrow-mindedness, and even unfamiliarity with race dynamics (e.g. how one of the main risks of accelerating AI, even early on, comes from the creation of executives and AI engineers who neurotically pursue AI acceleration).
NickLaing is just one person, and if one person doesn’t have a complete picture, that’s not a big deal; that’s random error, and it happens to everyone. When a dozen or more people each have an incomplete picture and confidently take aggressive stances against Anthropic, that’s a very serious issue. I now have a better sense of why Yudkowsky became apprehensive about writing about AI publicly, or why Dustin Moskovitz throws his weight behind Anthropic and insists that they’re the good guys. If the people here would like to attempt to develop a perspective on race dynamics, they can start with the Yudkowsky-Christiano debate, which is balanced, or Yudkowsky’s List of Lethalities and Christiano’s response. Johnswentworth just put up a great post relevant to the topic. Or just read Christiano’s response or Holden’s Cold Takes series; the important thing here isn’t balance, it’s having any perspective at all on race dynamics before you decide whether to tear into Anthropic’s reputation.
Downvoted this because I think that in general, you should have a very high bar for telling people that they are overconfident, incompetent, narrow-minded, aggressive, contributing to a “very serious issue,” and lacking “any perspective at all.”
This kind of comment predictably chills discourse, and I think that discursive norms within AI safety are already a bit sketch: these issues are hard to understand, and so the barrier to engaging at all is high, and the barrier to disagreeing with famous AI safety people is much, much higher. Telling people that their takes are incompetent (etc) will likely lead to fewer bad takes, but, more importantly, risks leading to an Emperor Has No Clothes phenomenon. Bad takes are easy to ignore, but echo chambers are hard to escape from.
This makes sense and it changed my mind; rudeness should stay on LessWrong, where Bayes points rule the scene. Also, at the time I’m leaving this comment, the distribution of support on this page has shifted such that the ratio of opposition to the deal versus uncertainty about the deal is less terrible; it was pretty bad when I wrote this comment.
I still think that people are too harsh on Anthropic, and that has consequences. I was definitely concerned as well when I first found out about this; Amazon plays hardball, and is probably much more capable of doing cultural investigations and appearing harmless than Anthropic thinks. NickLaing’s comment might have been far more carefully worded than I thought. But at the same time, if Dustin opposes the villainization of Anthropic and Yudkowsky is silent on the matter, it seems like mobbing Anthropic is the wrong move, with serious real-life consequences.
I consider this sort of “oh, I have a take but you guys aren’t good enough for it” type perspective deeply inappropriate for the Forum—and I say that as someone who is considerably less “anti-Anthropic” than some of the comments here.
That’s plausibly good for community-building, but from an infosec perspective, you don’t really know what kinds of people are reading the comments, or what kind of person they will be in a year or so. In an extreme scenario, people could start getting turned. But the more likely outcome is that people hired by various bigcorps (and possibly intelligence agencies) are using the EA Forum for open-source intelligence; this is far more prevalent than most people think.
Hey Trevor, thanks for the reply. Personally I think the downvoting is a bit harsh. It’s true that I’m not an AI expert in any sense, and that this is a hot take without a deep look into the situation. You aren’t wrong there.
To be fair on myself, I didn’t take an aggressive stance on Anthropic, I just said that I was updating more towards them being net negative.
I do agree there is enormous uncertainty here, but I think that should mean we are less harsh on hot takes from all ends of the spectrum, and more willing to engage with a wide range of perspectives.
I don’t agree with this: “When a dozen or more people each have an incomplete picture and confidently take aggressive stances against Anthropic, then that’s a very serious issue.”
For me this isn’t a “very serious issue”; it should just give you an idea of what many people’s initial reactions are, and show you the arguments you need to refute or add nuance to. Why is this so serious?
I don’t think it’s at all obvious whether this development is good or bad (though I would lean towards bad), but both here and on LessWrong you have not made a coherent attempt to support your argument. Your concept of “redundancy” in AI labs is confusing and the implied connection to safety is tenuous.
Sorry to nitpick, but I think this specific sentence isn’t true at all; my concept of “redundancy” wasn’t confusing and the implied connection to safety isn’t tenuous.