Thanks! Here's another recent example:
https://mobile.twitter.com/fchollet/status/1473656408713441285
AGI concerns are outside the Overton window and are often considered actively harmful. The narrative that “the whole debate about existential risks AI poses to humanity in the far-off future is a huge distraction” (as illustrated in this post: https://www.skynettoday.com/editorials/dont-worry-agi/) is widespread in the AI policy community.
In this situation, actors who raise AGI concerns additionally risk being portrayed as working against the public interest.
I worry people will wrongly conclude they are not a good fit after these exercises. Regulatory texts such as the AI Act are written in complicated language, and their logic is hard to follow. It takes time, for everyone. Even hearings refer to a lot of context that takes time to get used to. So please don’t think “oh, I’m too stupid for this.”