The objective of the contest isn’t to farm prestige or credibility with some hypothetical third party; it’s to inform OpenPhil’s work, and it seems very likely that OpenPhil is by far the best judge of that.
The goal of the contest is to surface novel considerations that could influence our views on AI timelines and AI risk.
There is a growing consensus among social scientists that diversity of approaches and perspectives is essential to reaching truth and avoiding bias. Most theorists now believe that deliberation among homogeneous groups is likely to lead to groupthink, polarization, extremism, and other forms of failed group deliberation. They emphasize that the risk is especially strong in self-selecting groups, as well as in groups where discourse is heavily concentrated on the internet.
Many social scientists would tend to think that effective altruists fit most of the risk factors for failures of group deliberation. They would tend to think that if effective altruists are concerned with finding the truth about risks posed by future developments in artificial intelligence, effective altruists would do well to draw from a wider range of perspectives and approaches. They would tend to see developments such as this contest as primarily confirmatory, unlikely to substantially shift group views and quite likely to reinforce them. They would suggest that those developments could be redesigned in a more truth-seeking way by incorporating a broader range of perspectives in deliberation.
These seem like arguments for OpenPhil to hire people with a broad range of perspectives, and to solicit contest submissions from a broad range of people, but not to adjust the judges. It doesn’t benefit OpenPhil at all if, having put e.g. a social conservative on the board of judges, an entry wins by appealing to her with arguments that OpenPhil does not find compelling. OpenPhil is uniquely qualified to judge what arguments it finds informative.
It might be worth considering whether the goal of this contest is to produce arguments that OpenPhil finds compelling and informative, or to produce arguments that are compelling and informative.
These would not be arguments in favor of the conclusion that a broader range of perspectives is a useful way to produce arguments that OpenPhil finds compelling and informative. The best way for OpenPhil to produce arguments that OpenPhil finds compelling and informative would be to select judges exclusively from its own membership, and that is what they have done.
They would instead be arguments in favor of the conclusion that a broader range of perspectives is a useful way to produce arguments that actually are compelling and informative, as well as to avoid a number of known biases and failure modes in group deliberation.
What procedure would you recommend for how Open Phil chooses between allocating money to AI versus allocating it to other causes? Would you recommend essentially the same procedure for each of the following?
- A university deciding whether to fund a new department
- A local council deciding which budget cuts to make, after an unexpected loss of central government funding
- A CEO setting corporate strategy
Ordinarily, a philanthropic foundation offering a prize meant to advance scientific understanding of some topic X would put at most one or two of its own members on the prize panel. The rest of the panel would be composed of leading scientists, academics, industry professionals, and perhaps a few policymakers. They might also consider inviting leaders of relevant foundations. Most members of the panel would be chosen for specific expertise in topic X combined with broad respect and experience within their fields, although a few panelists might be chosen to represent generalist constituencies (for example, a university president). Members would typically be at mid- or late-career stages, and have substantial research records of their own as well as the esteem of their peers. They might, as appropriate, draw on a broader pool of peer reviewers or nominators in early rounds of the selection process.
Prizes would typically be broadly advertised, and left open for a sufficient period to allow original research (at least six months). They would encourage submissions of a standard length for original research contributions, rather than discouraging submissions greater than 5,000 words. If the focus was solely on the individual piece of submitted work, the review process would be double- or triple-blinded and announced as such.
I could go on, but I take it that all of the above are fairly standard.
Point taken: I have a better idea of what you mean when you make it concrete in that way.