Hi. I agree with the points you make in that comment. However, the question from Metaculus I mention in the last section of the post is about superintelligent AI, and the operationalisation of this does require a very high level of intelligence and generality.
“Superintelligent Artificial Intelligence” (SAI) is defined for the purposes of this question as an AI which can perform any task humans can perform in 2021, as well or superior to the best humans in their domain. The SAI may be able to perform these tasks themselves, or be capable of designing sub-agents with these capabilities (for instance the SAI may design robots capable of beating professional football players which are not successful brain surgeons, and design top brain surgeons which are not football players). Tasks include (but are not limited to): performing in top ranks among professional e-sports leagues, performing in top ranks among physical sports, preparing and serving food, providing emotional and psychotherapeutic support, discovering scientific insights which could win 2021 Nobel prizes, creating original art and entertainment, and having professional-level software design and AI design capabilities.
You are right, I mistook which Metaculus question you linked to. However, it seems that even that question is somewhat ill-defined due to referencing the “weak AGI” definition of the other question. For that reason alone, I wouldn’t bet large sums of money on it. But it is not as problematic as the weak AGI question itself.
My bet does not depend on Metaculus’ definition of “weak AGI”. I rely on Metaculus’ definition of SAI given in a question about the time from “weak AGI” until SAI. However, the bet I suggested is just about the date of SAI.
Your bet proposal talks about the Metaculus question “resolving non-ambiguously”. Since the question is about the duration of time between “weak AGI” and “superintelligent AI”, it is possible that it cannot be resolved “non-ambiguously” even if SAI is invented, because the definition of weak AGI is ambiguous. This might discourage people who believe in short SAI timelines from accepting the bet.
The bet is neutral for both parties if the Metaculus question resolves ambiguously. In this case, no transfer of money would happen. A higher probability of the question resolving ambiguously decreases the expected value of the bet for both parties, but this could be mitigated by increasing the potential benefits.
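To make the last point concrete, here is a minimal sketch of the expected-value calculation, using hypothetical stakes and probabilities rather than the figures of the actual bet. It treats an ambiguous resolution as contributing zero to the expected value (no money changes hands), and shows how a larger payout can compensate for the ambiguity risk:

```python
def expected_value(p_win, p_ambiguous, stake, payout):
    """EV of the bet for one party, assuming an ambiguous resolution
    means no money changes hands (it contributes zero to the EV)."""
    p_lose = 1 - p_win - p_ambiguous
    return p_win * payout - p_lose * stake

# Hypothetical numbers: stake 100, payout 150, and a 50% chance of
# winning conditional on a non-ambiguous resolution.
print(expected_value(0.5, 0.0, 100, 150))    # no ambiguity risk
print(expected_value(0.4, 0.2, 100, 150))    # 20% ambiguity lowers the EV
print(expected_value(0.4, 0.2, 100, 162.5))  # a larger payout restores it
```

With a 20% chance of ambiguous resolution, both the win and loss probabilities shrink proportionally, so the expected value shrinks too; raising the payout from 150 to 162.5 brings it back to its no-ambiguity level in this toy example.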