Re 80K’s 2017 take on the risk level: You could also say that the AI safety field is crazy and people in it are very wrong, as part of a case for lower risk probabilities. There are some very unhealthy scientific fields out there. Also, technology forecasting is hard. A career-evaluating group could investigate a field like climate change, decide that researchers in the field are very confused about the expected impact of climate change, but still think it’s an important enough problem to warrant sending lots of people to work on the problem. But in that case, I’d still want 80K to explicitly argue that point, and note the disagreement.
I previously complained about this on LessWrong.
I think there is a tenable view that considers an AI catastrophe less likely than AI safety researchers do, but that isn’t committed to anything nearly as strong as the field being “crazy” or people in it being “very wrong”:
We might simply think that people are more likely to work on AI safety if they consider an AI catastrophe more likely. When considering their beliefs as evidence, we’d then need to correct for that selection effect.
[ETA: I thought I should maybe add that even the direction of the update doesn’t seem fully clear; it depends on assumptions about the underlying population. E.g., if we think that everyone’s credence is produced by an unbiased but noisy process, then people with high credences will self-select into AI safety because of noise, and we should think the ‘correct’ credence is lower than what they say. On the other hand, if we think that people differ in how they form their beliefs, then it could at least be the case that some people are simply better at predicting AI catastrophes, or quicker to pick up ‘warning signs’; if AI risk is in fact high, we would then see a ‘vanguard’ of people self-selecting into AI safety early who would also have systematically more accurate beliefs about AI risk than the general population.]
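As a minimal sketch of the first scenario only (unbiased-but-noisy credences plus a selection threshold; every parameter value below is invented for illustration, and the choice to put the noise on the log-odds scale is my own modeling assumption, not something from the comment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative, made-up numbers.
true_p = 0.05        # assumed 'correct' probability of catastrophe
n_people = 100_000
noise_sd = 1.0       # noise on the log-odds scale
threshold = 0.10     # people join AI safety if their credence exceeds 10%

# Each person's credence is an unbiased-but-noisy perturbation of the
# true value on the log-odds scale.
true_logodds = np.log(true_p / (1 - true_p))
credences = 1 / (1 + np.exp(-(true_logodds + rng.normal(0, noise_sd, n_people))))

# Self-selection: only people with high credences enter the field.
field = credences[credences > threshold]

print(f"true probability:                   {true_p:.3f}")
print(f"mean credence, whole population:    {credences.mean():.3f}")
print(f"mean credence, self-selected field: {field.mean():.3f}")
```

Under these assumptions the field’s average credence overshoots the true probability, so reading researchers’ stated credences at face value would bias the estimate upward; under the second scenario (some people genuinely form more accurate beliefs), the same gap would be partly informative rather than a pure selection artifact.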
(I am sympathetic to “I’d still want 80K to explicitly argue that point, and note the disagreement”, though I haven’t checked to what extent they might do that elsewhere.)
Yeah, I like this correction.
Though in the world where the credible range of estimates is 1-10%, and 80% of the field believed the probability was >10% (my prediction from upthread), that would start to get into ‘something’s seriously wrong with the field’ territory from my perspective; that’s not a small disagreement.
(I’m assuming here, as I did when I made my original prediction, that they aren’t all clustered around 15% or whatever; rather, I’d have expected a lot of the field to give a much higher probability than 10%.)