I’m a fan of the profile, especially the section “What do we think are the best arguments we’re wrong?”. I thought this was well done and clearly explained.
One important category I don’t remember seeing covered is broader arguments against existential risk being a priority at all. E.g. in my experience with 16–18 year olds in the UK, a very common response to Will MacAskill’s TED talk (which they saw during the application process) was disagreement that the future was actually on track to be positive (and hence worth saving).
More anecdotally, something I’ve encountered in numerous conversations, with these people and others, is that they don’t expect to be able to stay motivated working on this problem (e.g. because it feels more abstract and less visceral than other plausible priorities).
Maybe you didn’t cover these because they apply to work on x-risks generally, rather than to AI safety specifically?