Whilst I think AGI is 0-5 years away and p(doom|AGI) is ~90%
Thanks for sharing your thoughts, Greg!
Assuming you believe there is a 75 % chance of AGI within the next 5 years, the above suggests your median time from now until doom is 3.70 years (= 0.5*(5 - 0)/0.75/0.9, taking the time until AGI to be uniformly distributed over the next 5 years). Is your median time from now until human extinction also close to 3.70 years? If so, we can set up a bet similar to the one between Bryan Caplan and Eliezer Yudkowsky:
I send you 10 k 2023-$ in the next few days.
If humans do not go extinct within 3.70 years, or by the end of 2027 for simplicity, you send me 19 k 2023-$.
The expected profit is as follows (the arithmetic is sketched in code below):
For you, 500 2023-$ (= 10*10^3 - 0.5*19*10^3).
For me, 9.00 k 2023-$ (= -10*10^3 + 19*10^3), as I think the chance of humans going extinct by the end of 2027 is basically negligible. I would guess around 10^-7 per year.
The expected profit is quite positive for both of us, so we would agree on the bet as long as my (your) marginal earnings after losing 10 k 2023-$ (19 k 2023-$) would still go towards donations, which arguably have little in the way of diminishing returns.
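For concreteness, here is a minimal sketch of the arithmetic above (it only checks the stated numbers; the probabilities are the assumptions made in this thread, not established estimates):

```python
# Check of the bet arithmetic; the probabilities are the commenters'
# assumptions, not established estimates.
p_agi_5y = 0.75          # assumed chance of AGI within 5 years
p_doom_given_agi = 0.9   # assumed chance of doom given AGI

# If the time until AGI is uniform over [0, 5] years, then
# P(doom by t) = (t / 5) * p_agi_5y * p_doom_given_agi, which hits 0.5 at:
t_median = 0.5 * 5 / (p_agi_5y * p_doom_given_agi)
print(f"Median time until doom: {t_median:.2f} years")  # 3.70

stake, payout = 10_000, 19_000  # 2023-$
print(stake - 0.5 * payout)  # 500: Greg's expected profit at ~50% doom by end of 2027
print(-stake + payout)       # 9000: Vasco's expected profit, treating doom risk as ~0
```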
I guess my marginal earnings after losing 10 k 2023-$ would still go towards donations, so I am happy to take the bet.
Hi Vasco, sorry for the delay getting back to you. I have actually had a similar bet offer up on X for nearly a year (offering to go up to $250k), with only one taker for ~$30 so far! My one is: you give x now and I give 2x in 5 years, which is pretty similar. Anyway, happy to go ahead with what you've suggested.
I would donate the $10k to PauseAI (I would say $10k to PauseAI in 2024 is much greater EV than $19k to PauseAI at the end of 2027).
[BTW, I have tried to get Bryan Caplan interested too, to no avail; if anyone is in contact with him, please ask him about it.]
As much as I may appreciate a good wager, I would feel remiss not to ask if you could get a better result for the amount of home equity at risk by getting a HELOC and having a bank be the counterparty. Maybe not at lower dollar amounts, due to fixed costs/fees, but likely so nearer the $250K point, especially with the expectation that interest rates will go down later in the year.
I don't have a stable income, so I can't get bank loans. (I have tried to get a mortgage for the property before and failed: they don't care if you have millions in assets, all they care about is your income[1], and I just have a relatively small, irregular rental income (Airbnb). But I can get crypto-backed smart contract loans, and do have one out already on Aave, which I could extend.)
Also, the signalling value of the wager is pretty important too, imo. I want people to put their money where their mouth is if they are so sure that AI x-risk isn't a near-term problem. And I want to put my money where my mouth is too, to show how serious I am about this.
I think this is probably because they don't want to go through the hassle of actually having to repossess your house, so if that seems at all likely, they won't bother with the loan in the first place.
Thanks for following up, Greg! Strongly upvoted. I will try to understand how I can set up a contract describing the bet with your house as collateral.
Could you link to the post on X you mentioned?
I will send you a private message with Bryan's email.
Definitely seek legal advice in the country and subdivision (e.g., US state) where Greg lives!
You may think of this as a bet, but I'll propose an alternative possible paradigm: it may be a plain old promissory note backed by a mortgage. That is, a home-equity loan with an unconditional balloon payment in five years. Don't all contracts in which one party must perform in the future include a necessarily implied clause that performance is not necessary in the event that the human race goes extinct by that time? At least, I don't plan on performing any of my future contractual obligations if that happens . . . .
So even assuming this wouldn't be unenforceable as gambling, it might run afoul of the rules for mortgage lending (e.g., because the implied interest rate [~14.4%?] is seen as usurious, or because it didn't comply with local or national laws regulating mortgage lending). That is a pretty regulated industry in general. It would definitely need to follow all the formalities for secured lending against real property: we require those formalities to make sure the borrower knows what he is getting into, and to give notice to other would-be lenders that they would be further back in line for repayment.
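The bracketed rate depends on when the clock starts, which the thread does not pin down; here is a minimal sketch of the implied annualized rate under a few assumed horizons:

```python
# Implied annualized interest rate on "10 k now for 19 k at the end of 2027".
# The start date is not pinned down in the thread, so a few illustrative
# horizons are tried (these are assumptions, not figures from the thread).
for years in (3.7, 4.0, 4.5, 4.8):
    rate = (19_000 / 10_000) ** (1 / years) - 1
    print(f"{years:.1f} years -> {rate:.1%}")
# 3.7 years -> 18.9%, 4.0 -> 17.4%, 4.5 -> 15.3%, 4.8 -> 14.3%;
# a horizon of roughly 4.8 years reproduces the ~14.4% figure above.
```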
I should also note that it is pretty difficult in many places to force a sale of someone's primary residence if you hold certain types of security interests (as opposed to, e.g., a primary mortgage). So you might be holding a lien that doesn't have much practical value unless/until Greg decides to sell and there is value left after paying off whoever is ahead in line for payment. Again, I can only advise seeking legal counsel in the right jurisdiction.
The off-the-wall thought I have is that Greg might be able to get around some difficulties by delivering a promissory note backed by a recorded security interest to an unrelated charity. But at the risk of sounding like a broken record, everyone would need legal advice from someone licensed in the jurisdiction before embarking on any approach in this rather unusual and interesting scenario.
Thanks for sharing your thoughts, Jason!
Cool, thanks. I link to one post in the comment above. But see also.
Thanks! Could you also clarify where your house is, whether you live there or elsewhere, and how much cash you expect to have by the end of 2027 (feel free to share the 5th percentile, median and 95th percentile)?
It's in Manchester, UK. I live elsewhere: renting currently, but shortly moving into another owned house that is currently being renovated. (I've got a company managing the would-be-collateral house as an Airbnb, so no long-term tenants either.) Will send you more details via DM.
Cash is a tricky one, because I rarely hold much of it. I'm nearly always fully invested, but that includes plenty of liquid assets like crypto. Net worth wise, in 2027, assuming no AI-related craziness, I would expect it to be in the 7-8 figure range (5-95%: maybe $500k-$100M).
Update. I bet Greg Colbourn 10 k€ that AI will not kill us all by the end of 2027.
Greg can presumably also just take out a loan? I think this will likely dominate the bet you proposed, given that your implied interest rates are very high.
The bet might be nice symbolism though.
As I say above, I've been offering a similar bet for a while already. The symbolism is a big part of it.
I can currently only take out crypto-backed loans, which have been quite high interest lately (I don't have a stable income, so can't get bank loans or mortgages), and have considered this but not done it yet.
Thanks for the suggestion, Ryan. As a side note, I would be curious to know how my comment could be improved, as I see it was downvoted. I guess it is too adversarial.
I feel like there is a nice point there, but I am not sure I got it. By taking a loan, Greg would lose purchasing power in expectation (meanwhile, I have replaced "$" by "2023-$" in my comment), but he would gain it by taking the bet. People still take loans because they may value additional purchasing power now more than in the future, but this is why I said the bet would only make sense if my and Greg's marginal earnings would continue to go towards donations if we lost the bet. To ensure this, I would consider the bet a risky investment, and move some of my investments from stocks to bonds to offset at least part of the increase in risk. Even then, I would want to set up an agreement with signatures from both of us and a 3rd party before going ahead with the bet.
Yes, I think symbolism would plausibly dominate the benefits for Greg.
The key thing is that you don't have to pay off loans if we're all dead. So all loans are implicitly bets about whether society will collapse by some point.
Re risk, as per my offer on X, I'm happy to put my house up as collateral if you can be bothered to get the paperwork done. Otherwise, happy to just trade on reputation (you can trash mine publicly if I don't pay up).
(I didn't downvote it.)
Would be interested to see your reasoning for this, if you have it laid out somewhere. Is it mainly because you think it's ~impossible for AGI/ASI to happen in that time? Or because it's ~impossible for AGI/ASI to cause human extinction?
I have not engaged much with AI risk, but my views about it are informed by considerations in the 2 comments in this thread. Mammal species usually last 1 M years, and I am not convinced by arguments for extinction risk being much higher (I would like to see a detailed quantitative model), so I start from a prior of 10^-6 extinction risk per year. Then I guess the risk is around 10 % as high as that, because humans currently have tight control of AI development.
To be consistent with 10^-7 extinction risk, I would guess a 0.1 % chance of gross world product growing at least 30 % in 1 year until 2027 (due to bottlenecks whose effects are not well modelled in Tom Davidson's model), and a 0.01 % chance of human extinction conditional on that.
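A minimal sketch checking that these guesses are internally consistent (every input below is a guess stated in this thread, not an established estimate):

```python
# Consistency check of the guesses above.
prior = 1e-6            # annual extinction risk prior from mammal species lifetimes
control_factor = 0.10   # guess: risk ~10% of the prior, given tight human control of AI
print(prior * control_factor)  # ~1e-07 per year, the headline figure

p_fast_growth = 1e-3         # 0.1% chance of >=30% GWP growth in a year until 2027
p_doom_given_growth = 1e-4   # 0.01% chance of human extinction conditional on that
print(p_fast_growth * p_doom_given_growth)  # ~1e-07, matching the stated risk
```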
Interesting. Obviously I don't want to discourage you from the bet, but I'm surprised you are so confident based on this! I don't think the prior from mammal species duration is really relevant at all, when for 99.99% of the last 1 M years there hasn't been any significant technology. Perhaps more relevant is Homo sapiens wiping out all the less intelligent hominids (and many other species).
On the question of priors, I liked AGI Catastrophe and Takeover: Some Reference Class-Based Priors. It is unclear to me whether extinction risk has increased in the last 100 years. I estimated an annual nuclear extinction risk of 5.93*10^-12, which is way lower than the prior for wild mammals of 10^-6.
I see in your comment on that post, you say "human extinction would not necessarily be an existential catastrophe" and "So, if advanced AI, as the most powerful entity on Earth, were to cause human extinction, I guess existential risk would be negligible on priors?". To be clear: what I'm interested in here is human extinction (not any broader conception of "existential catastrophe"), and the bet is about that.
Agreed.
See my comment on that post for why I don't agree. I agree nuclear extinction risk is low (but probably not that low)[1]. ASI is really the only thing that is likely to kill every last human (and I think it is quite likely to do that given it will be way more powerful than anything else[2]).
But to be clear, global catastrophic / civilisational collapse risk from nuclear is relatively high (these often get conflated with "extinction").
Not only do I think it will kill every last human, I think it's quite likely it will wipe out all known carbon-based life.
I upvoted this offer. I have an alert for bet proposals on the forum, and this is the first genuine one I've seen in a while.
"I think AGI is 0-5 years away" != "I am certain AGI will happen within five years." I think it is best read as implying somewhere between 51 and 100% confidence, at least standing alone. Depending on where it is set, you probably should offer another ~12-18 months.
Nice point, Jason! I have adjusted the numbers above to account for that. I have also replaced "$" by "2023-$" to account for inflation.