Delegate a forecast

EDIT: We’ve stopped answering questions for now; sorry if we didn’t get to your question! We’re still really interested in what kinds of questions people want forecasted, feedback on how useful it is to delegate forecasts, and Elicit as a tool, so feel free to keep commenting with these thoughts. We also forecasted questions on the LessWrong version of this post.

Hi everyone! We, Ought, have been working on Elicit, a tool to express beliefs in probability distributions. This is an extension of our previous work on delegating reasoning. We’re experimenting with breaking down the reasoning process in forecasting into smaller steps and building tools that support and automate these steps.

In this specific post, we’re exploring the dynamics of Q&A with distributions by offering to make a forecast for a question you want answered. Our goal is to learn:

  1. Whether people would appreciate delegating predictions to a third party, and what types of predictions they want to delegate

  2. Whether a distribution can more efficiently convey information (or convey different types of information) than text-based interactions

  3. Whether conversing in distributions isolates disagreements or assumptions that may be obscured in text

  4. How to translate the questions people care about or think about naturally into more precise distributions (and what gets lost in that translation)
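As a rough illustration of point 2 above, here is a minimal sketch of what answering a question with a distribution (rather than text) can look like. This is not Elicit's actual interface or API; it just uses SciPy, and the question and parameters are hypothetical, to show how a single distribution encodes both a central estimate and tail uncertainty:

```python
from scipy import stats

# Hypothetical question: "How many hours will this project take?"
# A text answer might say "probably around 30 hours, could blow up."
# A lognormal distribution encodes that same belief, including the
# right-skewed tail, in one object (parameters chosen for illustration):
forecast = stats.lognorm(s=0.5, scale=30)  # median of 30 hours

# Quantiles summarize the forecast compactly: a 10th/50th/90th
# percentile triple conveys both the central estimate and the spread.
q10, q50, q90 = forecast.ppf([0.1, 0.5, 0.9])
print(f"10%: {q10:.1f}h, median: {q50:.1f}h, 90%: {q90:.1f}h")
```

A reader who disagrees can respond with their own parameters (say, a wider `s`), which localizes the disagreement to a specific part of the distribution instead of a vague verbal hedge.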

We also think that making forecasts is quite fun. In that spirit, you can ask us (mainly Amanda Ngo and Eli Lifland) to forecast any continuous question that you want answered. Just make a comment on this post with a question, and we’ll make a distribution to answer it.

Some examples of questions you could ask:

We’ll spend <=1 hour on each one, so you should expect about that much rigor and information density. If there’s context on you or the question that we won’t be able to find online, you can include it in the comment to help us out.

We’ll answer as many questions as we can from now until Monday 8/3. We expect to spend about 10-15 hours on this, so we may not get to all the questions. We’ll post our distributions in the comments below. If you disagree or think we missed something, you can respond with your own distribution for the question.

We’d love to hear people’s thoughts and feedback on outsourcing forecasts, providing beliefs as probability distributions, or Elicit generally as a tool. If you’re interested in more of what we’re working on, you can also check out the competition we’re currently running on LessWrong to amplify Rohin Shah’s forecast on when the majority of AGI researchers will agree with safety concerns.