Some people have criticised the timing. I think there’s some validity to this, but the trigger has been pulled and cannot be unpulled. You might say that we could try to write another similar letter a bit further down the track, but it’s hard to get people to do the same thing twice and even harder to get people to pay attention.
So I guess we really have the choice of getting behind this or not. I think we should get behind it, as I see this letter as really opening up the Overton window. It would be a mistake to wait for a hypothetical, perfectly timed letter to sign instead of signing what we have in front of us.
I think it’s great timing. I’ve been increasingly thinking that now is the time for a global moratorium. In fact, I was up until the early hours drafting a post on why we need such a moratorium! Great to wake up and see this :)
I expect there will be much more public discussion on regulating AI, and much more political willingness to do ambitious things about it, in the coming years as economic and cultural impacts become more apparent. So my instinctive reaction is to be wary of investing significant reputation in something (potentially) not sufficiently well thought through.
Also, it’s not a binary of signing vs. not signing. E.g. risk reducers can also enter the discussion the letter has sparked and make constructive suggestions about what would contribute more to long-term safety.
(Trying to understand the space better, not being accusatory.)
How is it that there is not a well-thought-out response right now?
E.g. it seems that it has probably been clear to people in AI safety / governance for some time that there would be moments when the Overton window shifts and making certain demands becomes more feasible than at other times, so I am surprised there isn’t a letter like this that is more thought through and endorsed by the people who are unhappy with the current one.
Good question. I’m still relatively new to thinking about AI governance, but I would guess that two pieces of the puzzle are:
a) Broader public advocacy has not been particularly prioritized so far:
- there’s uncertainty about what concretely to advocate for, and still a lot of (perceived) need for nuance in the more concrete ideas that do exist
- there are other, more targeted forms of advocacy, such as talking to policymakers directly or talking to leaders in ML about risk concerns
b) There are not enough people working on AI governance to be that prepared for things:
- not sure what the numbers are, but e.g. it seems like at least a few key sub-topics in AI governance rely on the work of only 1-2 extremely busy people
Also, the letter just came out. I wouldn’t be very surprised if a few more experienced people publish responses laying out their thinking a bit, especially if the letter gathers a lot of attention.
Why is it considered bad timing?
Some people are worried that this will come off as “crying wolf”.
Have there been important “crying wolf” cases in real life? About societal issues? I mean, yes, it’s a possibility, but the alternatives seem so much worse.
How do we know when we are close enough to the precipice for other people to see it and call for stopping the race towards it? General audiences have lately been talking about how surprised they are by AI, so the timing seems perfect to me.
Also, if people get used to benefiting from and working with narrow, safe AIs, they could set themselves against stopping or slowing them down.
Even if more people could agree to decelerate in the future, it would take more time to stop or slow down significantly with more stakeholders moving at higher speed. And of course, by then we would be closer to the precipice than if we had started the deceleration earlier.