[Question] Are there superforecasts for existential risk?

The closest thing I could find was the Metaculus Ragnarök Question Series, but I’m not sure how to interpret it because:

  • The answers seem inconsistent (e.g. a 1% chance of >95% of humans being killed by 2100, but a 2% chance of humans going extinct by 2100, even though extinction entails that more than 95% of humans are killed, so the second probability should be no higher than the first; see the sketch after this list). Maybe this isn’t all that problematic but I’m not sure

  • The incentives for accuracy seem weird. These questions only resolve by 2100, and, if there is a catastrophe, nobody will care about their Brier score. Again, this might not be a problem but I’m not sure

  • The ‘community prediction’ (the median) was much higher than the ‘Metaculus prediction’ (some weighted combination of each user’s prediction; see the second sketch below). Is that because more accurate forecasters were less worried about existential risk, or because whatever makes someone a good near-term forecaster also makes them underestimate existential risk?
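
To make the first worry concrete: extinction entails that more than 95% of humans are killed, so a coherent set of forecasts should satisfy P(extinction by 2100) ≤ P(>95% killed by 2100). Here is a minimal sketch of that check in Python; the 1% and 2% figures are the ones quoted above, and the check itself is just a monotonicity constraint:

```python
# Coherence check for nested catastrophe questions.
# Assumption: "human extinction" is a strict subset of ">95% of humans killed",
# so its probability cannot exceed the broader event's probability.

forecasts = {
    ">95% of humans killed by 2100": 0.01,  # figure quoted above
    "human extinction by 2100": 0.02,       # figure quoted above
}

p_broad = forecasts[">95% of humans killed by 2100"]
p_narrow = forecasts["human extinction by 2100"]

if p_narrow > p_broad:
    print(f"Incoherent: P(extinction) = {p_narrow:.0%} exceeds P(>95% killed) = {p_broad:.0%}")
else:
    print("Coherent: the narrower event is no more probable than the broader one.")
```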

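On the third point, a median and a weighted combination of the same forecasts can diverge substantially. The sketch below is purely illustrative: the individual forecasts and the weights are made up, and Metaculus’s actual weighting scheme is more involved and not reproduced here. It just shows that if the forecasters with the best track records happen to give lower numbers, a weighted aggregate lands well below the median.

```python
import statistics

# Hypothetical individual probability forecasts for a catastrophe question.
predictions = [0.01, 0.02, 0.05, 0.10, 0.10, 0.15, 0.20, 0.30]

# Made-up track-record weights: here the forecasters giving lower numbers
# are assumed to have scored better on previously resolved questions.
weights = [3.0, 3.0, 2.0, 1.0, 1.0, 0.5, 0.5, 0.5]

community = statistics.median(predictions)
weighted = sum(w * p for w, p in zip(weights, predictions)) / sum(weights)

print(f"median ('community prediction'-style aggregate): {community:.1%}")  # 10.0%
print(f"weighted mean (hypothetical weights): {weighted:.1%}")              # ~6.2%
```

Whether such a gap reflects better calibration or a systematic blind spot among good near-term forecasters is exactly the open question.
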
Related: here’s a database of existential risk estimates, and here’s a list of AI-risk prediction market question suggestions.

I wonder whether questions around existential risk would be better estimated by a smaller group of forecasters than by a prediction market or something like Metaculus (for the above reasons and others).
