I made a comment on Wiblin’s Facebook thread; I’m repeating it here so it doesn’t get lost there:
FWIW, I think the “cause” category is confusing because some of the “causes” are best understood as ends in themselves (animal welfare, environment, AI, non-AI far future, poverty), while two of the causes (politics, rationality) are about which means you want to use but don’t mention your desired ends. The final two causes (cause prioritisation, meta) could be understood as either means or ends (you might be uncertain about the ends). On this analysis, the question isn’t asking people to choose between the same sorts of options, so it isn’t ideal.
To improve the survey, you could have one question on the ends people are interested in and another on their preferred means of reaching them (e.g. politics, charity, research). You could also ask about people’s population axiology, at least roughly (“I want to make people happy, not make happy people” vs “I want to maximise the total happiness in the history of the universe”). People might support near-term causes even though they’re implicitly total utilitarians, because they’re sceptical about the likelihood of X-risks or about our ability to avert them. I’ve often wanted to know how common this is.
There’s a discussion about the most informative way to slice and dice the cause categories in next year’s survey here: https://www.facebook.com/robert.wiblin/posts/796476424745?comment_id=796476838915
Thanks, I replied there.