You could make a bet about whether PauseAI will have any salient successes, or will otherwise be able to point to evidence that it achieved a reduction in existential risk of, say, half a basis point, over the next five years, according to an external judge such as myself.
No offense to forecasting, which is good and worthwhile, but I think trying to come up with a bet in this case is a guaranteed time suck that will muddy the waters instead of clarifying them. Unfortunately, there are very few crisp, falsifiable hypotheses that would make good bets and that also get at the cruxes of whether it’s better to donate to PauseAI or to animal welfare, given that that isn’t already clear to Vasco.
@Holly Elmore ⏸️ 🔸[1], here is an alternative bet I am open to. If, by the end of 2028, Metaculus’ question about superintelligent AI:
Resolves with a date, I transfer to you 10 k 2025-January-$.
Does not resolve, you transfer to me 10 k 2025-January-$.
Resolves ambiguously, nothing happens.
The resolution date of the bet can be moved to make it good for you. I think the bet above would be neutral for you in terms of purchasing power if your median date of superintelligent AI, as defined by Metaculus, were the end of 2028, and the probability of me paying you if you win (p1) were the same as the probability of you paying me if I win (p2). Under your views, I think p2 is slightly higher than p1, because extinction risk is higher if you win than if I win. So it makes sense for you to move the resolution date of the bet a little later to account for this.
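As a rough illustration (my own sketch, not part of the proposed terms), the neutrality condition can be written as an expected-value calculation. Here q is my label for the probability that the Metaculus question resolves with a date by the deadline, and p1 and p2 are the payment probabilities defined above:

```python
def expected_value_to_you(q, p1, p2, stake=10_000):
    """Expected value of the bet to you, in January-2025 dollars.

    q     probability the Metaculus question resolves with a date by the deadline
          (0.5 if your median date of superintelligent AI is exactly the deadline)
    p1    probability I actually pay you if you win
    p2    probability you actually pay me if I win
    Ambiguous resolutions are ignored, since nothing happens in that case.
    """
    return q * p1 * stake - (1 - q) * p2 * stake

print(expected_value_to_you(0.5, 0.9, 0.9))  # 0.0: neutral when q = 0.5 and p1 = p2
print(expected_value_to_you(0.5, 0.8, 0.9))  # -500.0: if p1 < p2, you need q > 0.5,
                                             # i.e. a later resolution date, to break even
```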
https://x.com/ilex_ulmus/status/1776724461636735244
That is a perspective you could inhabit, but it also seems to contradict the vibe of “Hmm, I wonder what we would bet on”.
Well, if someone has a great suggestion, that’s the objection it has to overcome.
I am tagging you because I clarified the bet a little.