Thanks, Jan. I think it is very unlikely that AI companies with frontier models will seek the technical assistance of MIRI in the way you described in your 1st operationalisation. So I believe a bet which would only resolve in this case has very little value. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any bet we could make that would be good for both of us under our own views, considering that we could invest our money, and that you could take loans?
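For concreteness, here is a minimal sketch (an illustration, not either commenter's actual proposal) of why such a bet can look positive in expectation to both sides under their own views once investment returns are counted: the long-timelines party lends the stake and wants a repayment premium above market returns, while the short-timelines party only really cares about repayment in worlds where AGI has not arrived. The credences, stake, premium, and seven-year horizon below are hypothetical placeholders, not anyone's actual numbers.

```python
# Toy model of a timelines bet: the long-timelines party L transfers the stake to
# the short-timelines party S today; if no AGI by the deadline, S repays the stake
# times a premium. All numbers are hypothetical placeholders.

P_SHORT_S = 0.6         # hypothetical credence of the short-timelines party S
P_SHORT_L = 0.1         # hypothetical credence of the long-timelines party L
STAKE = 10_000          # $ transferred from L to S today
PREMIUM = 2.0           # multiple of the stake S repays L at resolution if timelines were long
YEARS = 7               # hypothetical resolution horizon
MARKET = 1.07 ** YEARS  # growth factor if the money were simply invested instead

def ev_long(p_short: float) -> float:
    """EV for L, who pays the stake now and is repaid with a premium if no AGI by the deadline."""
    repaid = STAKE * PREMIUM
    forgone = STAKE * MARKET  # what L could have earned by investing instead
    return (1 - p_short) * repaid + p_short * 0.0 - forgone

def ev_short(p_short: float) -> float:
    """EV for S, who receives and invests the stake now; repayment only matters in no-AGI worlds."""
    invested = STAKE * MARKET
    repaid = STAKE * PREMIUM
    return p_short * invested + (1 - p_short) * (invested - repaid)

print(f"EV for the long-timelines party under their own view: {ev_long(P_SHORT_L):+,.0f} $")
print(f"EV for the short-timelines party under their own view: {ev_short(P_SHORT_S):+,.0f} $")
```

In this toy model both expected values come out positive only because each side discounts the scenario the other considers likely; under shared probabilities the arrangement is zero-sum, so at most one side can beat simply investing.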
I was considering hypothetical scenarios of the type "imagine this offer from MIRI arrived, would a lab accept"; clearly MIRI is not making the offer because the labs don't have good alignment plans, and they are obviously high integrity enough to not be corrupted by relatively tiny incentives like $3b.
I would guess there are ways to operationalise the hypotheticals, and to try to have, for example, Dan Hendrycks guess what xAI would do, given that he is an advisor there.
With your bets about timelines: I did an 8:1 bet with Daniel Kokotajlo against AI 2027 being as accurate as his previous forecast, so I am not sure which side of the "confident about short timelines" bet you expect me to take. I'm happy to bet on some operationalization of your overall thinking and posting about the topic of AGI being bad, e.g. something like "the 3 smartest available AIs in 2035 compare everything we wrote in 2026 on EAF, LW and Twitter about AI and judge who was more confused, overconfident and miscalibrated".
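As a side note on the odds arithmetic (an illustration, not from the thread): at 8:1, the party staking 8 units to win 1 breaks even at a probability of 8/9, and the party staking 1 to win 8 breaks even at 1/9, roughly 11%.

```python
# Breakeven probabilities implied by 8:1 odds (pure arithmetic; no claim about
# which side of the Kokotajlo bet was which).
stake_big, stake_small = 8, 1
print(stake_big / (stake_big + stake_small))    # ~0.889: breakeven for the side risking 8 to win 1
print(stake_small / (stake_big + stake_small))  # ~0.111: breakeven for the side risking 1 to win 8
```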
I was considering hypothetical scenarios of the type "imagine this offer from MIRI arrived, would a lab accept"
When would the offer from MIRI arrive in the hypothetical scenario? I am sceptical of an honest endorsement from MIRI today being worth 3 billion $, but I do not have a good sense of what MIRI will look like in the future. I would also agree a foolproof AI safety certification is, or will be, worth more than 3 billion $, depending on how it is defined.
With your bets about timelines: I did an 8:1 bet with Daniel Kokotajlo against AI 2027 being as accurate as his previous forecast, so I am not sure which side of the "confident about short timelines" bet you expect me to take.
I was guessing I would have longer timelines. What is your median date of superintelligent AI as defined by Metaculus?