I’ve included a series of forecasting questions, in case people excited about forecasting on global catastrophic risks want a fast-feedback way to test their data gathering or calibration.
(Note that there is a relation between these questions—the sum of the last three probabilities is twice the first)
People offering forecasting questions like this is really cool, but is there any way to resolve these questions later and give people track records? Or at that point are we just re-inventing Metaculus too much?
Probably a question for Aaron Gertler / the EA Dev team. Semi-relatedly, is there a way to tag Aaron? That might be another good feature.
You can tag me with a quick DM for now — totally fine if you just literally send the URL of a comment and nothing else, if you want to optimize for speed/ease.
Tagging users to ping them is a much-discussed feature internally, with an uncertain future.
edit: Feature already exists, thanks Ruby!
Another feature request: Is it possible to make other people’s predictions invisible by default and then reveal them if you’d like? (Similar to how blacked-out spoilers work, which you can hover over to see the text.)
I wanted to add a prediction but then noticed that I heavily anchored on the previous responses and didn’t end up doing it.
There’s a user setting that lets you do this.
I agree this would be good to see!
I’m also interested in people’s predictions had the codes been anonymous (not personalized). In that case, individual reputational risk would be low, so it would mostly be a matter of community reputational risk, and we’d learn more about whether EAs or LWers would stab each other in the back (well, inconvenience each other) if they could get away with it.
I mean, having a website shut down is also annoying.
As of this comment: 40%, 38%, 37%, 5%. I haven’t taken into account time passing since the button appeared.
With 395 total codebearer-days, a launch has occurred once. This means that, with 200 codebearers this year, the Laplace prior for any launch happening is 40% (1 − (1 − 1/396)^200). The number of participants is roughly midway between 2019 (125 codebearers) and 2020 (270 codebearers), so averaging like this is probably fine.
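For concreteness, here’s a minimal sketch of that calculation. The 1/396 per-codebearer-day rate and the 200 codebearers are the numbers from above; nothing else is assumed:

```python
# Minimal sketch of the Laplace-style estimate above. The 1/396
# per-codebearer-day launch rate and the 200 codebearers come from the
# comment; this just reproduces the arithmetic.
rate = 1 / 396                # one launch observed in 395 codebearer-days
codebearers = 200             # rough average of 2019 (125) and 2020 (270)
p_any_launch = 1 - (1 - rate) ** codebearers
print(f"P(any launch) = {p_any_launch:.0%}")  # ~40%
```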
I think there’s a 5% chance that there’s a launch but no MAD: Peter Wildeford has publicly committed to MAD and himself puts the chance at 5%, and he knows himself best.
I think the EA Forum is a little bit, but not vastly, more likely to initiate a launch: the EA Forum hasn’t done Petrov Day before, and qualitatively people seem to be having a bit more fun and irreverence over here. So I’m giving 3% of the no-MAD probability to the EA Forum staying up and 2% to LessWrong staying up.
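To make the bookkeeping explicit, here’s a sketch of how I read the four headline forecasts fitting together; which number maps to which site is my interpretation of the questions, not something stated outright:

```python
# Sketch of how the four headline forecasts (40%, 38%, 37%, 5%) fit
# together; the mapping of numbers to sites is my interpretation.
p_launch = 0.40                    # Laplace-style estimate above
p_no_mad = 0.05                    # a launch happens but isn't reciprocated
p_mad = p_launch - p_no_mad        # 0.35: mutual destruction, both sites down

# Split the 5% no-MAD mass by who launched first:
p_ea_stays_up = 0.03               # EA Forum launches, LessWrong doesn't retaliate
p_lw_stays_up = 0.02               # LessWrong launches, EA Forum doesn't retaliate

p_lw_down = p_mad + p_ea_stays_up  # 0.38
p_ea_down = p_mad + p_lw_stays_up  # 0.37

# The relation noted above: the last three probabilities sum to twice the first.
assert abs((p_lw_down + p_ea_down + p_no_mad) - 2 * p_launch) < 1e-9
```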
Also, the reference class of launches doesn’t fully represent the current situation: the last launch was more of a self-destruct, whereas this time a launch harms another website/community, which seems like a stronger deterrent. So I think the prior should be somewhat below 40%.
There is a chance that MAD is taken off the table: per my request, Peter’s launch codes may be invalidated.
My forecasts just before the button appeared: 25%, 17%, 19%, 14%.