Unless you plan to spend all of your money before you would owe money back
This would not be good for you unless you were an immoral sociopath with no concern for the social opprobrium that results from not honouring the bet.
Or unless you’re betting on high rates of returns to capital
There is some element of this for me (I hope to more than 2x my capital in worlds where we survive). But it’s not the main reason.
The main reason it's good for me is that it helps reduce the likelihood of doom. That is my main goal for the next few years. If the interest this bet is getting leads even one more person to take near-term AI doom as seriously as I do, then that's a win. Also, $x to PauseAI now is worth >>$2x to PauseAI in 2028.
you can probably borrow cheaply. E.g. if you have $2X in investments, you can sell them, reinvest $X at 2x leverage (keeping the same $2X of market exposure), and effectively borrow the other $X.
This is not without risk (of being margin called in a 50% drawdown)[1]. Otherwise, why wouldn't people be doing this as standard? I've not really heard of anyone doing it.
[1] And it could also be costly in borrowing fees for the leverage.
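To make the sell-and-releverage trick and the margin-call risk concrete, here is a minimal sketch. The dollar figures and the 25% maintenance-margin rule are illustrative assumptions; real brokers' terms and borrowing fees differ.

```python
# Sketch of the "sell $2X, reinvest $X at 2x leverage" borrowing trick
# and why a deep drawdown triggers a margin call. All figures are
# hypothetical; real margin rules and borrowing fees vary by broker.

X = 10_000                  # dollars; original holdings are $2X = $20,000

# Sell the $2X of holdings, keep $X as free cash, and reinvest the
# other $X at 2x leverage: same $2X market exposure, financed by $X of
# equity plus an $X broker loan. Net effect: you've borrowed $X.
equity, loan = X, X
exposure = equity + loan    # $2X of market exposure again

MAINTENANCE = 0.25          # assumed 25% maintenance-margin requirement

for drawdown in (-0.25, -0.40, -0.50):
    position = exposure * (1 + drawdown)
    eq = position - loan    # equity after the move
    required = MAINTENANCE * position
    status = "MARGIN CALL" if eq < required else "ok"
    print(f"drawdown {drawdown:+.0%}: equity ${eq:,.0f} "
          f"vs required ${required:,.0f} -> {status}")

# At -50%, equity hits $0 (2X * 0.5 - X), so the position would be
# force-liquidated well before the drawdown got that far.
```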
I think it's slightly bad when people publicly make negative-EV (on a financial level) bets that are framed as object-level epistemic decisions, when in reality they are primarily hoping to make up for the negative financial EV via PR/marketing benefits for their cause[1]. The general pattern is one of several pathologies that I was worried about re: prediction markets, especially low-liquidity ones.
But at least this particular example is unusually public, so I commend you for that.
[1] An even more hilariously unsound example is this Balaji/James bet (https://www.forbes.com/sites/brandonkochkodin/2023/05/02/balaji-srinivasan-concedes-bet-that-bitcoin-will-reach-1-million-in-90-days/?sh=2d43759d76c6).
I really wish we didn't break the fourth wall on this, but EA can't help itself:
"The phrase that comes closest to describing this phenomenon is: 'The Disclosive Corruption of Motive'.
This phrase, coined by philosopher Bernard Williams, suggests that revealing too much about our motivations or reasons for acting can actually corrupt or undermine the very motives we initially had.
Williams argued that some motives are inherently 'opaque' and that excessive transparency can damage their moral value. By revealing too much, we can inadvertently transform our actions from genuine expressions of care or kindness into mere calculations, thereby diminishing their moral worth."
Agree with this. I think doing weird signaling stuff with bets worsens the signal that bets provide about people's actual epistemic states.
But I don't even think it's negative financial EV (see above: I'm 50% on not having to pay it back at all because of doom, and I also think the EV of my investments is >2x over the timeframe).
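As a rough sanity check on that claim, here is a minimal EV sketch. The $10k stake and the 2.2x growth multiplier are illustrative assumptions; the 50% credence is the one stated above.

```python
# Rough EV sketch of the bet from the borrower's side.
# Illustrative assumptions, not the actual bet terms:
#   X       - amount received now
#   p_doom  - borrower's credence that repayment is never owed (doom)
#   growth  - expected multiplier on invested capital over the period

X = 10_000          # hypothetical stake
p_doom = 0.5        # the 50% stated above
growth = 2.2        # an assumed ">2x" investment multiplier

# If doom: the X is kept (and spent beforehand); nothing is repaid.
# If survival: X grows to X * growth, and 2X is repaid.
ev = p_doom * X + (1 - p_doom) * (X * growth - 2 * X)

print(f"expected financial value to borrower: ${ev:,.0f}")
# 0.5 * 10,000 + 0.5 * (22,000 - 20,000) = $6,000 > 0 under these
# assumptions, matching the claim that the bet isn't negative EV.
```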
I remain a non-doomer (and have recently been considering such bets), but support this comment. I don't think the above criticisms make sense, though with a couple of caveats:
1) Zach Stein-Perlman's suggestion above about borrowing seems reasonable in general. If your response is that it's high risk, then making a bet is de facto asking the bettor to shoulder that risk for you.
2) 'This would not be good for you unless you were an immoral sociopath with no concern for the social opprobrium that results from not honouring the bet.' I know you were responding to his 'can't possibly be good for you' comment (emph mine), but I don't see why this isn't rational behaviour if you think the world is going to end in <4 years. From a selfish perspective: is it selfishly rational to worry about a couple of years of reduced reputation vs extinction beyond that? And from an altruistic perspective: if you think the world is almost certainly doomed, that the counterfactual world in which we survive is extremely +EV, and that spending the extra money could move the needle on preventing doom, it seems crazy not to just spend it and figure out the reputational details on the slim chance we survive.
The second is one of the main sources of counterparty risk that makes me wary of such bets: it seems like it would be irrational for anyone to accept them with me in good faith.
it seems crazy not to just spend it and figure out the reputational details on the slim chance we survive.
I think it's maybe 60% doomed. Even if I thought it was 90%+ doomed, it's this kind of attitude that has got us into this whole mess in the first place! People burning the commons for short-term gain is directly leading to massive amounts of x-risk.
Reading the Eliezer thread, I think I agree with him that there’s no obvious financial gain for you if you hard-lock the money you’d have to pay back.
I don’t follow this comment. You’re saying Vasco gives you X now, 2X to be paid back after k years. You plan to spend X/2 now, and lock up X/2, but somehow borrow 3/(2X) money now, such that you can pay the full amount back in k years? I’m presumably misunderstanding—I don’t see why you’d make the bet now if you could just borrow that much, or why anyone would be willing to lend to you based on money that you were legally/technologically committed to giving away in k years.
One version that makes more sense to me is planning to pay back in installments, on the understanding that you'd be making enough money to do so at the agreed rate, though a) that comes with obviously increased counterparty risk, and b) it still doesn't make much sense if your moneymaking strategy is investing money which you have rather than selling services/labour, since, again, it seems irrational for you to have any money at the end of the k-year period.
Where I say "some of which I borrow against now (with 100% interest over 5 years)", I'm referring to the bet.
I don’t follow this comment. You’re saying Vasco gives you X now, 2X to be paid back after k years. You plan to spend X/2 now, and lock up X/2, but somehow borrow 3/(2X) money now, such that you can pay the full amount back in k years?
Nitpick. (3/2) X, not 3/(2 X).
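For concreteness, here is the arithmetic being corrected, as a minimal sketch (hypothetical X; terms as described above, i.e. receive X now and owe 2X after k years):

```python
# Worked arithmetic behind the nitpick (hypothetical X).
# Terms: receive X now, owe 2X after k years.

X = 10_000
owed = 2 * X            # repayment due at year k

spent_now = X / 2       # half spent immediately
locked_up = X / 2       # half hard-locked toward repayment

shortfall = owed - locked_up
print(f"shortfall to cover by year k: ${shortfall:,.0f}")
# 2X - X/2 = (3/2) X = $15,000 with these numbers: (3/2) X, not 3/(2X).
```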
If one expects investments to grow more (in real terms) than the product (cost-effectiveness of altruistic spending conditional on survival) × (probability of survival) will decrease, it makes sense to invest as much as possible now, and then donate as much as possible later. Funders of altruistic interventions should try to equalise that product across years (otherwise, they should move their spending from the worst to the best years).
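A minimal sketch of that condition under simple assumptions (the symbols g, c, p and all numbers below are mine, purely illustrative):

```python
# Sketch: when does "invest now, donate later" beat donating now?
# Illustrative symbols (mine, not the commenter's):
#   g    - gross real return on investments per year
#   c[t] - cost-effectiveness of spending in year t, given survival
#   p[t] - probability of surviving to year t

g = 1.07                       # assumed 7% real return
c = {0: 1.00, 4: 0.50}         # cost-effectiveness halves by year 4
p = {0: 1.00, 4: 0.50}         # 50% chance of surviving to year 4

# Survival-weighted value of a dollar allocated to year t, letting it
# compound at g until it is spent:
def value(t):
    return (g ** t) * c[t] * p[t]

print(f"spend now:   {value(0):.3f}")   # 1.000
print(f"spend later: {value(4):.3f}")   # 1.07**4 * 0.5 * 0.5 ~= 0.328
# Here growth (~31% over 4 years) does not outpace the fall in c*p
# (a 75% drop), so spending now wins; flip the numbers and it reverses.
# An interior optimum equalises g**t * c[t] * p[t] across years, which
# is the commenter's condition with investment growth folded in.
```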
Nice points, Sasha!
On the 1st point, I think one should borrow money from banks until the conditions for borrowing additional money become only as good as those of the available bets, and then get money from both sources afterwards. Refusing a bet which is beneficial relative to nothing because there are loans with better conditions suggests one should be asking for more loans.
On the 2nd point, I wondered about the possibility of Greg not fulfilling the bet in order to decrease AI risk further, but I believe the world will look roughly the same as now in terms of risk. So I expect Greg will not be much more worried than he is now, and will therefore fulfil the bet.
As previously referred to, I can’t get bank loans (no stable income).