I recently wrote a comment criticizing the Metaculus definition of AGI. I don’t think it is a good starting point for bets as I believe that a system that is clearly not a general intelligence could pass all conditions listed in the question. There is a risk that either the question is resolved based on a non-AGI system that technically fulfills the Metaculus definition (favoring people with shorter timelines), or that the question is never resolved due to its ambiguity (favoring those with longer timelines).
Hi. I agree with the points you make in that comment. However, the question from Metaculus I mention in the last section of the post is about superintelligent AI, and the operationalisation of this does require a very high level of intelligence and generality.
“Superintelligent Artificial Intelligence” (SAI) is defined for the purposes of this question as an AI which can perform any task humans can perform in 2021, as well or superior to the best humans in their domain. The SAI may be able to perform these tasks themselves, or be capable of designing sub-agents with these capabilities (for instance the SAI may design robots capable of beating professional football players which are not successful brain surgeons, and design top brain surgeons which are not football players). Tasks include (but are not limited to): performing in top ranks among professional e-sports leagues, performing in top ranks among physical sports, preparing and serving food, providing emotional and psychotherapeutic support, discovering scientific insights which could win 2021 Nobel prizes, creating original art and entertainment, and having professional-level software design and AI design capabilities.
You are right, I mistook which Metaculus question you linked to. However, it seems that even that question is somewhat ill-defined due to referencing the “weak AGI” definition of the other question. For that reason alone, I wouldn’t bet large sums of money on it. But it is not as problematic as the weak AGI question itself.
My bet does not depend on Metaculus’ definition of “weak AGI”. I rely on Metaculus’ definition of SAI, which is given in a question about the time from “weak AGI” until SAI, but the bet I suggested concerns only the date of SAI.
Your bet proposal talks about the Metaculus question “resolving non-ambiguously”. Since that question asks about the duration between “weak AGI” and “superintelligent AI”, it may be impossible to resolve it non-ambiguously even if SAI is invented, because the definition of weak AGI is itself ambiguous. This might discourage people who believe in short SAI timelines from accepting the bet.
The bet is neutral for both parties if the Metaculus question resolves ambiguously: in that case, no money would change hands. A higher probability of an ambiguous resolution decreases the expected value of the bet for both parties, but this could be mitigated by increasing the stakes.
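To illustrate the expected-value point, here is a minimal sketch with hypothetical probabilities and stakes (the function name and the numbers are mine, not taken from any Metaculus question): an ambiguous resolution simply voids the bet, so it scales the conditional expected value by (1 − p_ambiguous), and scaling the stakes by 1 / (1 − p_ambiguous) restores the original expected value.

```python
# Minimal sketch (hypothetical numbers): how ambiguous resolution scales the
# expected value of a bet, and how raising the stakes can compensate.

def expected_value(p_win, stake_win, stake_lose, p_ambiguous):
    """EV for one party when an ambiguous resolution voids the bet."""
    p_resolves = 1 - p_ambiguous
    # Conditional on a non-ambiguous resolution, this party wins stake_win
    # with probability p_win and loses stake_lose otherwise.
    ev_conditional = p_win * stake_win - (1 - p_win) * stake_lose
    return p_resolves * ev_conditional

# Example: a party who thinks they win 60% of the time, betting 100 vs 100.
base = expected_value(p_win=0.6, stake_win=100, stake_lose=100, p_ambiguous=0.0)
with_ambiguity = expected_value(0.6, 100, 100, p_ambiguous=0.5)
# Doubling the stakes when p_ambiguous = 0.5 restores the original EV.
rescaled = expected_value(0.6, 200, 200, p_ambiguous=0.5)
print(base, with_ambiguity, rescaled)  # 20.0 10.0 20.0
```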