Anthropic is planning to grow. They’re aiming to be one of the “top players”, competitive with OpenAI and DeepMind, working with a similar level of advanced models. They have received outside investment, because keeping up with the state of the art is expensive, and it’s only going to get more so. They’ve recently been hiring for a product team, in order to get more red-teaming of models and eventually have more independent revenue streams.
I think Anthropic believes that this is the most promising route to making AGI turn out well for humanity, so it’s worth taking the risk of being part of the competition and perhaps contributing to accelerating capabilities. Alternatively stated, Anthropic leadership believes that you can’t solve the problem of aligning AGI independently from developing AGI.
To be serious: I don’t feel it when I think of the phrase “Google just invested in Anthropic to advance AI safety”. I just don’t feel it.
I don’t know why. Maybe because of how Google handled its ethics team? Or when they said “we’re not gonna be doing weapons” and then, like, started doing it? Judging from their character as inferred from their previous actions, it seems rather likely that they just want their own chatbot, to show everyone how smart they are (regardless of the consequences).
A prof once told me how he sees the ML field: people there don’t do it for “humanity” or “knowledge”; they do it because they want to show that their stuff is superior to someone else’s, and to show off.
Not everyone’s like this, of course, but ML/tech has this vibe—the people from the front row at school who don’t know anything about the real world and instead try to impress the teacher, living off petty drama among the same few people in the front row.
There are a lot of people like this in ML.
I’m saying this as someone who used to be one of those people.
To sum up, here’s my personal story as someone who was in the field, and, as in another reply, I invite you to form your own understanding based on whatever you like.
I can’t convince you; I only have a personal story as a beginning AIS researcher. I don’t have the statistics and expected-value calculations people here seem to want.
So rather than a specific claim about specific activities being done by Anthropic, would you say that:
from your experiences, it’s very common for people to join the arms race under the guise of safety
you think by default, we should assume that new AI Safety companies are actually joining the arms race, until proven otherwise
the burden of proof should essentially rest on Anthropic to show that they are really doing AI Safety stuff?
Given the huge potential profits from advancing AI capabilities faster than other companies and my priors on how irrational money makes people, I’d support that view.
My crux here is whether or not I think Anthropic has joined the arms race.
Why do you believe that it has?
See for example this summary of someone who spent quite a lot of time trying to understand and pass the ITT of Anthropic’s strategy: https://www.lesswrong.com/posts/MNpBCtmZmqD7yk4q8/my-understanding-of-anthropic-strategy
Do you believe in it?
It just seems weird if someone said “to be safe from a deadly disease, what we really need is to develop it as soon as we can”.
I get that the metaphor has holes; it just seems a bit “out there”.
I’d say that “to have safe AGI, we need to do AGI engineering as fast as possible” is a very extraordinary claim.
It requires very extraordinary evidence to support it.
My position, which is “can we ask them to explain it”, seems like a very ordinary claim to me.
So it doesn’t require much evidence at all.
Here we have Microsoft’s CEO saying they’re “gonna make Google dance”, with my comments about how he sounds like a comic-book villain:
https://twitter.com/sergia_ch/status/1624438579412799488?s=20
Thank you