Just read the paper and you are correct, my questions do differ. I should just make a post of my own about this I guess.
Firstly, I am skeptical that the future is best represented by creating special institutions. If people lose trust that their government cares about their interests, the risks to democracy and state capacity are large, and introducing a new interest group endangers that trust. The alternative to directly representing the future is to ask which institutional arrangements produce policies most beneficial to future people. They acknowledge a similar critique on page 15.
Futures assemblies: the analogy to the Irish assemblies is interesting. However, the Irish assemblies were selected for each issue separately, not for life. Here are several reasons members shouldn't be selected for life.
1. Selecting representatives for life greatly increases the benefits to actors that capture members.
2. Factions that by chance are under-represented in the futures assembly must wait a long time for a change, so exiting becomes a more appealing option for them.
3. Are we sure why the Irish assembly members chose to think the issue through? One possible explanation is that, because they were only asked to make one decision, selecting an ideology and selecting a position on the issue were equally cognitively expensive (they didn't have to think that hard either way). If that hypothesis is true, then when we ask the same set of assembly members to make many decisions, they might realize that adopting an ideology makes each decision easier while feeling just as "right" afterward.
Well, that's enough for now; I should get back to work. Thanks for sending the paper, and hopefully I can write a full post on it.
I guess Tyler, Will, etc. are approaching governance from a general and highly idealised perspective in discussing hypothetical institutions.
In contrast, folks like GovAI are approaching things from a more targeted and only moderately idealised perspective. I expect a bunch of their questions will relate to how to bring existing institutions to bear on mitigating AI risks. Do your questions also differ from theirs?
The question of how best to represent the interests of future persons is a good core question. My problem is more with their method of answering it. There's a great tradition of political philosophers asking "what would be the ideal institution according to X moral philosophy?" and then designing an institution backward from that. I consider this approach both crowded and low-leverage (John and MacAskill are closer to a middle position).
The alternative is to look at how institutions work in practice and then judge them against different ethical objectives, which is somewhat more neglected. I also think this second approach is more effective, so writing on the same questions as John and MacAskill could add real value.
If I have time I will take a look at GovAI.