In the original conception of the unilateralist’s curse, the problem arose from epistemically diverse actors/groups having different assessments of how risky an action was.
The mistake was made by the people with the rosiest assessment of the action's risk, who took the action by themselves, in disregard of others' assessments.
What I want more people in AI Safety to be aware of is that there are many other communities out there who think that what “AGI” labs are doing is super harmful and destabilising.
We’re not the only community concerned. Many epistemically diverse communities are looking at the actions of “AGI” labs and saying that this has to stop.
Unfortunately, in the past, core people in our community inadvertently supported the start-up of these labs. Those were actions they chose to take by themselves.
If anything, the unilateralist actions were taken by the accelerationists, tacitly supported by core AI Safety folks who gave labs like DeepMind, OpenAI, and Anthropic the leeway to take those actions.
The rest of the world did not consent to this.
Remmelt, I agree. I think EA funders have been way too naive in thinking that, if they just supported the right sort of AI development, with due concern for ‘alignment’ issues, they could steer the AI industry away from catastrophe.
In hindsight, this seems to have been a huge strategic blunder, and the big mistake was underestimating the corporate incentives and individual hubris that drive unsafe AI development despite any good intentions of funders and founders.
This is an incisive description, Geoff. I couldn’t put it better.
I’m confused about what the two crosses on your comment are for.
Maybe the people who disagreed can clarify.