Thank you for the write-up. This was very helpful in getting a better understanding of the reactions from the academic field.
Don’t start with X-risk or alignment; start with a technical problem statement such as “uncontrollability” or “interpretability” and work from there.
Karl von Wendt makes a similar point in Let’s talk about uncontrollable AI, where he argues “that we talk about the risks of ‘uncontrollable AI’ instead of AGI or superintelligence”. His aim is “to raise awareness of the problem and encourage further research, in particular in Germany and the EU”. Do you think this could be a better framing? Do you think some framings might be better suited to particular cultural contexts, like Germany, or does that seem negligible?
I have talked to Karl about this and we both had similar observations.
I’m not sure whether this is a cultural thing, but most of the PhDs I talked to came from Europe. I think it also depends on the actor within the government; for example, I could imagine defense people being more open to existential risk as a serious threat. I have no experience in governance, so this is highly speculative and I would defer to people with more experience.