Thank you for the write-up. This was very helpful in getting a better understanding of the reactions from the academic field.
Don't start with X-risk or alignment; start with a technical problem statement such as "uncontrollability" or "interpretability" and work from there.
Karl von Wendt makes a similar point in Let's talk about uncontrollable AI, where he argues "that we talk about the risks of 'uncontrollable AI' instead of AGI or superintelligence". His aim is "to raise awareness of the problem and encourage further research, in particular in Germany and the EU". Do you think this could be a better framing? Do you think some framings might be better suited to particular cultural contexts, like Germany, or does that seem negligible?
I have talked to Karl about this and we both had similar observations.
I'm not sure whether this is a cultural thing, but most of the PhDs I talked to came from Europe. I think it also depends on the actor within the government; for example, I could imagine defense people being more open to existential risk as a serious threat. I have no experience in governance, so this is highly speculative, and I would defer to people with more experience.