That's interesting. I guess I'm expecting so much diversity in responses that one fixed-response question would probably raise more questions than it answered (i.e. "which second-order consideration?"). An alternative would be to send out a short survey afterwards to a randomised group of voters from across the spectrum. Depending on the content of people's comments, maybe we could also categorise them and do some kind of basic analysis (i.e. without sending a survey out).
Makes sense. One use case for me is that I'd be more inclined to defer to community judgment based on certain grounds than on others when allocating my own (much more limited!) funds.
E.g., if perspective X already gets a lot of weight from major funders, or if I think I'm in a fairly good position to weigh X relative to others, then I'd probably defer less. On the other hand, there are some potential cruxes on which various factors point toward more deference.
The specific statement I was reacting to was that people might vote based on their views about what happens after a singularity. For various reasons, I would not be inclined to defer to GH/animal welfare funding splits that were premised on that kind of reasoning. (Not that the reasoning is somehow invalid; it's just not the kind of data that would materially update how I donate.)