Here we have Microsoft’s CEO saying they’re “gonna make Google dance”, along with my comments on how he sounds like a comic-book villain:
https://twitter.com/sergia_ch/status/1624438579412799488?s=20
To be serious, though: I don’t buy it when I think of the phrase “Google just invested in Anthropic to advance AI safety”. I just don’t buy it.
I don’t know why. Maybe because of how Google handled its ethics team? Or when they said “we’re not gonna be doing weapons” and then went and did it anyway? Judging from the character inferred from their previous actions, it seems quite likely that they simply want their own chatbot, to show everyone how smart they are (regardless of the consequences).
A professor once told me how he sees the ML field: people there don’t do it for “humanity” or “knowledge”; they do it to show that their stuff is superior to someone else’s, and to show off.
Not everyone’s like this, of course, but ML/tech has this vibe: the kids from the front row at school who don’t know anything about the real world and instead try to impress the teacher, living off the petty drama among the same few people in the front row.
There are a lot of people like this in ML.
I’m saying this as someone who used to be one of them.
To sum up: this is my personal story as someone who was in the field, and, as in another reply, I invite you to form your own understanding based on whatever you like.
I can’t convince you; all I have is a personal story as a beginning AI safety researcher. I don’t have the statistics and expected-value calculations people here seem to want.
Thank you
So rather than a specific claim about specific activities at Anthropic, would you say that:
- from your experience, it’s very common for people to join the arms race under the guise of safety;
- by default, we should assume that new AI safety companies are actually joining the arms race until proven otherwise;
- the burden of proof should essentially rest on Anthropic to show that they are really doing AI safety work?
Given the huge potential profits from advancing AI capabilities faster than other companies, and my priors on how irrational money makes people, I’d support that view.