It's fairly context dependent, but I generally remain a fan.
There's a mix of ancillary issues:
There could be a "why should we care what you think?" worry if EA estimates diverge from consensus estimates, although I imagine folks tend to gravitate to neglected topics etc.
There might be less value in "relative to self-ish" accounts of resilience: major estimates in a front-facing report I'd expect to be fairly resilient, and so less "might shift significantly if we spent another hour on it".
Relative to some quasi-ideal seems valuable though, e.g. "Our view re. X is resilient, but we have a lot of Knightian uncertainty, so we're only 60% sure we'd be within an order of magnitude of X as estimated by a hypothetical expert panel/liquid prediction market/etc."
There might be better or worse ways to package this, given people are often sceptical of any quantitative assessment of uncertainty (at least in some domains). Perhaps something like "subjective confidence intervals" (cf.), although these aren't perfect.
But ultimately, if you want to tell someone an important number you aren't sure about, it seems worth taking pains to be precise, both about it and its uncertainty.
I tend to agree. This feels a bit like a "be the change you want to see in the world" thing. Ordinary communication norms would push us towards just using verbal claims like "likely", but for the reasons you mention, I pretty strongly think we should quantify and accept any short-term weirdness hit.