i’ll add $250, with exactly the same commentary as austin :)
to the extent that others are also interested in contributing to the prize pool, you might consider making a manifund page. if you’re not sure how to do this or just want help getting started, let me (or austin/rachel) know!
also, you might adjust the “prize pool” amount at the top of the metaculus page — it currently reads “$0.”
[epistemic status: i’ve spent about 5-20 hours thinking by myself and talking with rai about my thoughts below. however, i spent fairly little time actually writing this, so the literal text below might not map to my views as well as other comments of mine.]
IMO, Sentinel is one of the most impactful uses of marginal forecasting money.
some specific things i like about the team & the org thus far:
nuno’s blog is absolutely fantastic — deeply excellent; there are few blogs i’d recommend more highly
rai is responsive (both in terms of time and in terms of feedback) and extremely well-calibrated across a variety of interpersonal domains
samotsvety is, far and away, the best forecasting team in the world
sentinel’s weekly newsletter is my ~only news source
why would i seek anything but takes from the best forecasters in the world?
i think i’d be willing to pay at least $5/week for this, though i expect many folks in the EA community would be happy to pay 5x-10x that. their blog is currently free (!!)
i’d recommend skimming whatever their latest newsletter was to get a sense of the content/scope/etc
linch’s piece sums up my thoughts around strategy pretty well
i have the highest crux-uncertainty and -elasticity around the following, in (extremely rough) order of impact on my thought process:
do i have higher-order philosophical commitments that swamp whatever Sentinel does? (for ex: short timelines, animal suffering, etc)
will Sentinel be able to successfully scale up?
conditional on Sentinel successfully forecasting a relevant GCR, will Sentinel successfully prevent or mitigate the GCR?
will Sentinel be able to successfully forecast a relevant GCR?
how likely is the category of GCRs that sentinel might mitigate to actually come about? (vs. no GCRs, or GCRs that are totally unpredictable/unmitigable)