There are a few obvious flaws in interpreting the survey as is:
Sample size and sampling bias: a 13% response rate amounts to roughly 700 people surveyed, which is fairly small (see the sketch after this list). Secondly, the MIRI and AI Impacts logos on the front page of the survey document introduce a bias into who takes the survey; very likely it is mostly people familiar with LessWrong etc. who have heard of these organizations. Here's why a lot of serious AI researchers don't engage with MIRI et al.:
MIRI hasn't shipped a SoTA result in AI alignment or AI research in the last 5-6 years.
A quick look at the publications on their website shows they don't publish at top ML conferences (ICML, ICLR, NeurIPS, etc.). LessWrong "research" is not research; it is usually philosophy, not backed by experiments ("thought experiments" don't count).
Appeal to authority fallacy: a lot of people saying something doesn't make it true, so I'd advise people not to confuse "AI research is bad" with "a proportion of surveyed people think AI research could be bad". Some of the moral outrage in the comment section treats the fact that some people feel this way as evidence of de facto truth, when in reality these are contested claims.
Modeling the future: human beings are notoriously bad at modeling the future. Imagine if we had run a survey in October among EAs about FTX's health. Not only is modeling the future hard, but modeling the far future is exponentially harder, and existential risk analyses are often incomplete because:
New improvements in AI safety research are not accounted for in these projections.
The multiagent dynamics of a world with multiple AIs are not modeled in catastrophic-scenario projections.
Multiagent dynamics with governments/stakeholders are not modeled.
Phased deployment (accelerate, then align to the use case, then accelerate again, as we are doing today) is also not modeled. AI deployment is currently also accelerating alignment research, because alignment is needed to build a useful product: a gaslighting chatbot is a bad product compared to a harmless, helpful one.
New research produces knowledge that was previously unknown.
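To make the response-rate arithmetic in the first point concrete, here is a minimal back-of-the-envelope sketch in Python; the 13% rate and ~700 respondents are the figures quoted above, and the implied contact pool is derived from them rather than taken from the survey report:

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
response_rate = 0.13   # quoted response rate (taken from the comment, an approximation)
respondents = 700      # approximate number of completed surveys

contacted = respondents / response_rate
print(f"Implied number of researchers contacted: ~{contacted:.0f}")
# -> roughly 5,400 people, of whom ~87% did not respond; the concern
#    above is about who those non-respondents are, not the raw count.
```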
Anthropic's write-up, afaik, is a nuanced take and may be a reasonable starting point for an informed and calibrated view of AI research.
I usually don't engage with AI takes here because it is a huge echo chamber, but these are my two cents!
Disagree-voting for the following reasons:
700 people is a substantial sample size (see the margin-of-error sketch at the end of this list).
I think sampling bias will likely be small/inconsequential, given the sample size.
Ideally, yes, not having any logos would be great, but if I got an email asking me, “fill out this survey, and trust me, I am from a legitimate organization,” I probably wouldn’t fill out the survey.
“very likely it is just the people familiar with lesswrong etc who has heard of these organizations”
Hard to say if this is true.
It is wrong to assume that familiarity with LW would lead people to answer a certain way. However, I can imagine people who dislike LW/similar would also be keen to complete the survey.
The survey questions don’t seem to prime respondents one way or the other.
These 700 people did publish in ICML/NeurIPS; they have some degree of legitimacy.
Nitpicky, but "a proportion of ML researchers who have published in top conferences such as ICML and NeurIPS think AI research could be bad" is probably a more accurate statement. I agree that this doesn't make the statements they are commenting on true; however, I think their opinion has a lot of value.
I think Metaculus does a decent job at forecasting, and their forecasts are updated based on recent advances, but yes, predicting the future is hard.
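On the sample-size point above, here is a minimal sketch of the standard 95% margin-of-error calculation for a simple random sample of ~700, using the worst-case proportion p = 0.5. It quantifies sampling error only, not the non-response or self-selection bias discussed in the parent comment:

```python
import math

n = 700   # approximate number of respondents
p = 0.5   # worst-case proportion (maximizes the standard error)
z = 1.96  # z-score for a 95% confidence level

margin = z * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/- {margin * 100:.1f} percentage points")
# -> about +/- 3.7 points, assuming simple random sampling; this says
#    nothing about bias from who chose to respond.
```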