I expect there will be much more public discussion on regulating AI, and much more political willingness to do ambitious things about AI, in the coming years as economic and cultural impacts become more apparent. So I'm instinctively wary of investing significant reputation in something that is (potentially) not sufficiently well thought through.
Also, it’s not a binary of signing vs. not signing. E.g. risk reducers can also enter the discussion sparked by the letter and make constructive suggestions about what would contribute more to long-term safety.
(Trying to understand the space better, not being accusatory.)
How is it that there is not a well-thought-out response right now?
E.g. it has probably been clear to people in AI safety / governance for some time that the Overton window would widen at some point, making certain demands more feasible than at other times. So I am surprised there isn’t a letter like this that is more thought through, endorsed by the people who are unhappy with the current letter.
Good question. I’m still relatively new to thinking about AI governance, but I would guess two pieces of the puzzle are:
a) broader public advocacy has not been particularly prioritized so far
- there’s uncertainty about what concretely to advocate for, and still a lot of (perceived) need for nuance in the more concrete ideas that do exist
- there are more targeted forms of advocacy available, such as talking to policymakers directly or talking to leaders in ML about risk concerns
b) there are not enough people working on AI governance to be prepared for moments like this
- I’m not sure what the numbers are, but it seems like at least a few key sub-topics in AI governance rely on the work of just 1-2 extremely busy people
Also, the letter just came out. I wouldn’t be very surprised if a few more experienced people publish responses laying out their thinking a bit, especially if the letter gathers a lot of attention.