I realize that maybe the other people here in the thread have so little trust in the survey designers that they’re worried that, if they answer with the low-probability, higher-EV option, the survey designers will write takeaways like “more EAs are in favor of donating to speculative AI risk.”
I’m one of the people who agreed with @titotal’s comment, and it was because of something like this.
It’s not that I’m worried per se that the survey designers will write a takeaway that puts a spin on this question (last time they just reported it neutrally). It’s more that I expect this question[1] to be taken by other orgs/people as a proxy metric for the EA community’s support for hits-based interventions. And because of the practicalities of how information is acted on, the subtlety of the question’s wording might be lost in the process (e.g. in an organisation, someone might raise the issue at some point, but it would eventually end up as a number in a spreadsheet or BOTEC, and there is no principled way to adjust for the issue that titotal describes).
I wonder if, next time, the survey makers could write something to reassure us that they’re not going to use any results out of context or with an unwarranted spin (esp. in cases like this one, where the question relates to a big ‘divide’ within EA but is worded as an abstract thought experiment).
[1] And one other about supporting low-probability/high-impact interventions.
That makes sense; I understand that concern.