Thanks for sharing. Are you open to a bet like the one I linked above, but with a resolution date of mid 2029? I should note that some have argued it would be better for people with your views to instead ask banks for loans (see comments in the post about my bet).
If I had ten grand (or one) to throw around I’d be putting that into my org or donating it to an AI Safety org. Do you think there are ways that a bet could be more useful than a donation for AI Safety? I’m struggling to see them.
Hi Yanni,
I propose bets like this to increase my donations to animal welfare interventions, as I do not think their marginal cost-effectiveness will go down that much over the next few years.
Ah ok that makes sense :)
And you don’t mind taking money from AI safety causes to fund that? Or maybe you think that is a really good thing?
I guess AI safety interventions are less cost-effective than GiveWell’s top charities, whereas I estimate:
Broiler welfare and cage-free campaigns are 168 and 462 times as cost-effective as GiveWell’s top charities.
The Shrimp Welfare Project is 64.3 k times as cost-effective as GiveWell’s top charities.
I think I’ll pass for now but I might change my mind later. As you said, I’m not sure if betting on ASI makes sense given all the uncertainty about whether we’re even alive post-ASI, the value of money, property rights, and whether agreements are upheld. But thanks for offering, I think it’s epistemically virtuous.
Also I think people working on AI safety should likely not go into debt for security clearance reasons.
@Nikola[1], here is an alternative bet I am open to that you may prefer. If, until the end of 2029, Metaculus’ question about superintelligent AI:
Resolves with a date, I transfer to you 10 k 2025-January-$.
Does not resolve, you transfer to me 10 k 2025-January-$.
Resolves ambiguously, nothing happens.
The resolution date of the bet can be moved such that it would be good for you. I think the bet above would be neutral for you in terms of purchasing power if your median date of superintelligent AI as defined by Metaculus was the end of 2029, and the probability of me paying you if you win (p1) was the same as the probability of you paying me if I win (p2). Under your views, I think p2 is slightly higher than p1 because of higher extinction risk if you win than if I win. So it makes sense for you to move the resolution date of the bet a little bit forward to account for this. Your median date of superintelligent AI is mid 2029, which is 6 months before my proposed resolution date, so I think the bet above may already be good for you (under your views).
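The neutrality argument above can be sketched numerically. This is a minimal illustration, not part of the original bet terms; the probability values are hypothetical placeholders, and the ambiguous-resolution case (where nothing happens) is ignored since it contributes zero either way.

```python
def expected_value(p_resolve, p1, p2, stake=10_000):
    """Expected 2025-January-$ for the person betting that ASI arrives.

    p_resolve: their probability the Metaculus question resolves with a
               date by the bet's resolution date (i.e. they win)
    p1: probability the other side actually pays if they win
    p2: probability they actually pay if they lose
    """
    return p_resolve * p1 * stake - (1 - p_resolve) * p2 * stake

# Neutral case: median ASI date equals the resolution date (p_resolve = 0.5)
# and p1 == p2, so the expected transfer is zero.
print(expected_value(0.5, 0.9, 0.9))   # 0.0

# If p2 > p1 (higher extinction risk when the ASI side wins), the bet tilts
# against them at p_resolve = 0.5, which is why moving the resolution date
# later (raising p_resolve) can restore neutrality.
print(expected_value(0.5, 0.8, 0.9))   # negative
print(expected_value(0.6, 0.9, 0.9))   # positive
```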
I am tagging you because I clarified the bet a little.