[Not meant to express an overall view.] I don’t think you mention the time of the respondents as a cost of these surveys, but I think it can be one of the main costs. There’s also risk of survey fatigue if EA researchers all double down on surveys.
Strong upvote for two good points that, in retrospect, I feel should’ve been obvious to me!
In light of those points as well as what I mentioned above, my new, quickly adjusted bottom-line view would be that:
People considering running these surveys should take into account the cost and the risk you mention.
I probably still think most EA research organisations should run such a survey at least once.
In many cases, it may make the most sense to just send the survey to some particular group of people, or to post it somewhere more targeted to the researcher's audience than the EA Forum as a whole. This would reduce the risk of survey fatigue somewhat, since not all of these surveys would be publicised to basically all EAs.
In many cases, it may make sense for the survey to be even shorter than mine.
In many cases, it may make sense to run the survey only once, rather than something like annually.
Probably few or no individual researchers working at organisations that are themselves running surveys should run their own, relatively publicly advertised individual surveys (even at a different time to the org’s survey).
This is because those individuals’ surveys would probably provide relatively little marginal value, while still having roughly the same time costs and survey-fatigue risk.
But maybe this doesn’t hold if the org only does a survey once, and the researcher is considering running a survey more than a year later.
And maybe it doesn’t hold for surveys sent out in a more targeted manner.
Even among individual researchers who work independently, or whose org isn’t running surveys, probably relatively few should run their own, relatively publicly advertised individual surveys.
The exceptions may tend to be those who wrote a large number of outputs, on a wide range of topics, for relatively broad audiences. (For the reasons alluded to in my parent comment.)
I could definitely imagine shifting my views on this again, though.
This all seems reasonable to me, though I haven’t thought much about my overall take.
I think the details matter a lot for “Even among individual researchers who work independently, or whose org isn’t running surveys, probably relatively few should run their own, relatively publicly advertised individual surveys”
A lot of people might get a lot of the value from a fairly small number of responses, which would minimise costs and negative externalities. I even think it’s often possible to close a survey after a certain number of responses.
A counterargument is that the people who respond earliest might be unrepresentative. But for a lot of purposes, it’s not obvious to me that you need a representative sample. “Among the people who are making the most use of my research, how is it useful?” can be pretty informative on its own.
A lot of people might get a lot of the value from a fairly small number of responses, which would minimise costs and negative externalities.
Agreed.
This sort of thing is part of why I wrote “relatively publicly advertised”, and added “And maybe it doesn’t hold for surveys sent out in a more targeted manner.” But good point that someone could run a relatively publicly advertised survey and then just close it after a small-ish number of responses; I hadn’t considered that option.