Anthropic has been getting flak from some EAs for distancing itself from EA. I think some of the critique is fair, but overall, I think that the distancing is a pretty safe move.
Compare this to FTX. SBF wouldn’t shut up about EA; he made it a key part of his self-promotion. I think he did this largely out of self-interest, as it arguably helped the FTX brand at the time.
I know that at that point several EAs were privately upset about this. They saw him as using EA for PR, and thus creating a key liability that could come back and bite EA.
And come back and bite EA it did, about as badly as one could have imagined.
So back to Anthropic. They’re taking the opposite approach, maintaining about as much distance from EA as they semi-honestly can. I expect this is good for Anthropic, especially given EA’s reputation post-FTX.
And I think it’s probably also safe for EA.
I’d be a lot more nervous if Anthropic were trying to tie its reputation to EA. I could easily see Anthropic having a scandal in the future, and it’s also pretty awkward to tie EA’s reputation to an AI developer.
To be clear, I’m not saying that people from Anthropic should actively lie or deceive, so I have mixed feelings about their recent quotes in Wired. But big-picture, I feel decent about their general stance of keeping distance. To me, this seems likely to be in the interest of both parties.
I hope my post was clear enough that distance itself is totally fine (and you give compelling reasons for that here). What seems bad for all concerned is ~implicitly denying present knowledge or past involvement in order to get that distance. The speaker looks shifty, and EA looks like something toxic you want to dodge.
Responding to a direct question by saying “We’ve had some overlap and it’s a nice philosophy for the most part, but it’s not a guiding light of what we’re doing here” seems like it strictly dominates.
I agree.
I didn’t mean to suggest your post suggested otherwise—I was just focusing on another part of this topic.
Do you think that distancing is ever not in the interest of both parties? If so, what is special about Anthropic/EA?
(I think it’s plausible that the answer is that distancing is always good; the risks of tying your reputation to someone always outweigh the benefits. But I’m not sure.)
Arguably, around FTX, the association looked net-positive for a while. EA and FTX both had strong brands at the time, and there were worlds in which the risk of failure was low.
I think it’s generally quite tough to get this right, though. I believe that traditionally, charities are reluctant to get their brands associated with large companies, due to the risks/downsides. We don’t often see partnerships between companies and charities (or, say, highly ideological groups); I think one reason why is that it’s rarely in the interests of both parties.
Typically, companies want to tie their brands to the very top charities, if any. But EA now has a reputational challenge, so I’d expect that few companies/orgs want to touch “EA” as a thing.
Arguably influencers are often a safer option—note that EA groups like GiveWell and 80k are already doing partnerships with influencers. As in, there’s a decent variety of smart YouTube channels and podcasts that run advertisements for 80k/GiveWell. I feel pretty good about much of this.
Arguably influencers are crafted in large part to be safe bets. As in, they’re strongly incentivized not to go crazy, and they have limited risks to worry about (given they represent very small operations).
This feels different to me. In most cases, there is a cultural understanding of the advertiser–ad seller relationship that limits the reputational risk. (I have not seen the “partnerships” in question, but assume there is money flowing in one direction and promotional consideration in the other.) To be sure, activists will demand that companies pull their ads from a certain TV show when it does something offensive, stop sponsoring a certain sports team, and so on. However, I don’t think consumers generally hold prior ad spend against a brand when it promptly cuts the relationship upon learning of the counterparty’s new and problematic conduct.
In contrast, people will perceive something like FTX/EA or Anthropic/EA as a deeper relationship rather than a mostly transactional relationship involving the exchange of money for eyeballs. Deeper relationships can have a sense of authenticity that increases the value of the partnership—the partners aren’t just in it for business reasons—but that depth probably increases the counterparty risks to each partner.