Hi James, I do think it would be interesting to see what a true global citizen’s assembly with complete free rein would decide. I would prefer that the experiment were not done with Open Phil’s money as the opportunity cost would be very high. A citizen’s assembly with longtermist aims would also be interesting, but would be different to what is proposed in the article. Pre-setting the aims of such an assembly seems undemocratic.
I would be pretty pessimistic about convincing lots of people of something like longtermism in a citizen’s assembly—at least I think funding for things like AI, engineered viruses and nuclear war would fall a fair amount. The median global citizen is strongly religious, probably holds strong nationalist and socialist beliefs (per the literature on voter preferences in rich countries, which probably also holds in poorer countries), is unwilling to pay high carbon taxes, is homophobic, etc.
For what it’s worth, I wasn’t genuinely saying we should hold a citizen’s assembly to decide what we do with all of Open Phil’s money; I just thought it was an interesting thought experiment. I’m not sure I agree that pre-setting the aims of an assembly is undemocratic, however, as surely all citizen’s assemblies need an initial question to start from? That seems to have been the case for previous assemblies (climate, abortion, etc.).
To play devil’s advocate, I’m not sure your points about the average global citizen being homophobic, religious, socialist, etc., actually matter that much when it comes to people deciding where they should allocate funding for existential risk. I can’t see any relationship between beliefs about which existential risks are the most severe and attitudes towards queer people, religion, or willingness to pay carbon taxes (assuming the pot of funding they allocate is fixed and doesn’t affect their taxes).
Also, I don’t think you’ve given much convincing evidence, beyond your intuition, that a citizen’s assembly would lead to funding for key issues falling a fair amount vs decisions by OP program officers. I can’t say I have much evidence myself, except that the studies (1, 2, and 3 to a degree) provided in the report would suggest the exact opposite: a diverse group of actors performs better than a higher-ability solo actor. In addition, if we base the success of the citizen’s assembly on how well it matches our current decisions (e.g. the same amount of biorisk, nuclear and AI funding), I think we’re missing the point a bit. That assumes our current allocation is already perfect, which I think is a central challenge of the paper above: the allocation is probably perfect according to a select few people, but that by no means makes it actually true.
I’m not sure your points about the average global citizen being homophobic, religious, socialist, etc., actually matter that much when it comes to people deciding where they should allocate funding for existential risk
I vaguely remember reading something about religious people worrying less about extinction, but I don’t remember whether that was just intuition or an actual study. They may also be predisposed to care less about certain kinds of risk, e.g. not worrying about AI as they perceive it to be impossible.
(these are pretty minor points though)