Disagree-voting for the following reasons:
700 people is a substantial sample size.
I think sampling bias will likely be small/inconsequential, given the sample size.
Ideally, yes, not having any logos would be great, but if I got an email asking me, "fill out this survey, and trust me, I am from a legitimate organization," I probably wouldn't fill out the survey.
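As a rough sanity check on the sample-size point (assuming simple random sampling, which is exactly what the thread is debating), the worst-case 95% margin of error for a proportion estimated from 700 respondents can be computed like this; the function name and defaults here are just illustrative:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion from n respondents.

    Assumes simple random sampling; p=0.5 maximizes the variance term,
    so this is the most conservative estimate.
    """
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(700)
print(f"{moe:.3f}")  # roughly 0.037, i.e. about +/- 3.7 percentage points
```

So even under conservative assumptions, pure sampling noise at n = 700 is only a few percentage points; the live question is non-response bias, not sample size.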
"very likely it is just the people familiar with lesswrong etc who has heard of these organizations"
Hard to say if this is true.
It would be wrong to assume that familiarity with LW leads people to answer a certain way. And in the other direction, I can imagine people who dislike LW and similar communities being just as keen to complete the survey.
The survey questions don't seem to prime respondents one way or the other.
These 700 people did publish in ICML/NeurIPS; they have some degree of legitimacy.
Nitpicky, but "a proportion of ML researchers who have published in top conferences such as ICML and NeurIPS think AI research could be bad" is probably a more accurate statement. I agree that this doesn't make the statements they are commenting on true; however, I think their opinion carries a lot of weight.
I think Metaculus does a decent job at forecasting, and their forecasts are updated based on recent advances, but yes, predicting the future is hard.