Good question. I’m still relatively new to thinking about AI governance, but I would guess that two of the puzzle pieces are:
a) broader public advocacy has not been particularly prioritized so far
- there’s uncertainty about what concretely to advocate for, and the more concrete ideas that do exist are still (perceived to be) in need of a lot of nuance
- there are more targeted forms of advocacy available, such as talking to policymakers directly or raising risk concerns with leaders in ML
b) there are not enough people working on AI governance to be well prepared for events like this
- I’m not sure what the numbers are, but it seems like at least a few key sub-topics in AI governance rely on the work of only one or two extremely busy people
Also, the letter just came out. I wouldn’t be very surprised if a few more experienced people publish responses laying out their thinking, especially if the letter gathers a lot of attention.