I understand that this topic gets people excited, but commenters are confusing a Pause policy with a Pause movement with the organisation called PauseAI.
Commenters are also confusing ‘should we give PauseAI more money?’ with ‘would it be good if we paused frontier models tomorrow?’
I’ve never seen a topic in EA get a subsection of the community so out of sorts. It makes me extremely suspicious.
I think it is reasonable to assume that we should only give PauseAI more money if two necessary conditions hold: (1) pausing AI is desirable, and (2) PauseAI’s methods are relatively likely to achieve that outcome, conditional on having the resources to pursue it. Many of the comments suggest that neither assumption is clear to a good number of forum participants. In fact, I think disagreement with (2) in particular is worth stressing.
I strongly agree. Almost all of the criticism in this thread seems to start from assumptions about AI that are very far from those held by PauseAI. This thread really needs to be split up to factor that out.
As an example: if you don’t think shrimp can suffer, then that’s a strong argument against the Shrimp Welfare Project. But that criticism doesn’t belong in the same thread as a discussion of whether the organisation is effective, because the two questions are so different.