From a cooperativeness perspective, people probably should not unilaterally create for-profit AGI companies.
(Note: Anthropic is a for-profit company that raised $704M according to Crunchbase, and is looking for engineers who want to build “large scale ML systems”, but I wouldn’t call them an “AGI company”.)
Well, I wouldn’t say that MIRI decided not to send drafts to DM etc. out of revenge, to punish them for making a strategic decision that seems extremely bad to me. What I’d say is that the norm ‘savvy people freely talk about mistakes they think AGI orgs are making, without a bunch of friction’ tends to save the world more often than the norm ‘savvy people are unusually cautious about criticizing AGI orgs’ does.
Indeed, I’d say this regardless of whether it was a good idea for someone to found the relevant AGI orgs in the first place. (I think it was a bad idea to create DM and to create OpenAI, but I don’t think it’s always a bad idea to make an AGI org; holding that it’s always a bad idea would be tantamount to saying that humanity should never build AGI.)
And we aren’t helplessly locked into the more world-destroying norm just because we think other people expect us to follow it; we can notice the problem and act to fix it, rather than reinforcing a norm that isn’t good. The pool of people who need to deliberately adopt the more-reasonable norm is not actually that large; it’s a smallish professional network, not a giant slice of society.