For anyone who wants to bet on doom:
I claim it can’t possibly be good for you
Unless you plan to spend all of your money before you would owe money back
People seem to think what matters is ∫bankroll, when what actually matters is ∫consumption: money you hold but never get to spend does you no good.
Or unless you’re betting on high rates of returns to capital, not really on doom
Good news: you can probably borrow cheaply. E.g. if you have $2X in investments, you can sell them, invest $X at 2x leverage, and effectively borrow the other $X.
This would not be good for you unless you were an immoral sociopath with no concern for the social opprobrium that results from not honouring the bet.
There is some element of this for me (I hope to more than 2x my capital in worlds where we survive). But it’s not the main reason.
The main reason it’s good for me is that it helps reduce the likelihood of doom. That is my main goal for the next few years. If the attention this is getting leads even one more person to take near-term AI doom as seriously as I do, then that’s a win. Also, $x to PauseAI now is worth >>$2x to PauseAI in 2028.
This is not without risk (of being margin called in a 50% drawdown; see the sketch below)[1]. Otherwise, why wouldn’t people be doing this as standard? I’ve not really heard of anyone doing it.
And it could also be costly in borrowing fees for the leverage.
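For concreteness, a minimal sketch of that arithmetic in Python, in units of $X. The idealised margin rule (no maintenance buffer, no borrowing fees) is my simplification; a real broker would margin-call the position well before equity hits zero:

```python
# Minimal sketch of the "borrow via leverage" trick, in units of $X.
# Assumes an idealised margin account with no maintenance buffer or
# borrowing fees; a real broker would margin-call earlier than this.

def borrow_via_leverage(holdings: float, leverage: float = 2.0):
    """Sell $2X of investments, re-invest half at `leverage`x.

    Returns (cash_freed, market_exposure, wipeout_drawdown).
    """
    reinvested = holdings / 2             # $X goes back into the market
    exposure = reinvested * leverage      # at 2x leverage, exposure is still $2X
    cash_freed = holdings - reinvested    # the other $X is effectively borrowed
    # Equity = exposure - loan, so it hits zero once the position
    # falls by 1/leverage (a 50% drawdown at 2x).
    wipeout_drawdown = 1 / leverage
    return cash_freed, exposure, wipeout_drawdown

cash, exposure, dd = borrow_via_leverage(holdings=2.0)  # start with $2X
print(f"cash freed: {cash}X, exposure: {exposure}X, "
      f"equity wiped out at a {dd:.0%} drawdown")
# cash freed: 1.0X, exposure: 2.0X, equity wiped out at a 50% drawdown
```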
I think it’s slightly bad when people publicly make negative EV (on a financial level) bets that are framed as object-level epistemic decisions, when in reality they are primarily hoping to make up for the negative financial EV via PR/marketing benefits for their cause[1]. The general pattern is one of several pathologies that I was worried about re: prediction markets, especially low-liquidity ones.
But at least this particular example is unusually public, so I commend you for that.
An even more hilariously unsound example is this Balaji/James bet (https://www.forbes.com/sites/brandonkochkodin/2023/05/02/balaji-srinivasan-concedes-bet-that-bitcoin-will-reach-1-million-in-90-days/?sh=2d43759d76c6).
I really wish we didn’t break the fourth wall on this, but EA can’t help itself:
“The phrase that comes closest to describing this phenomenon is: ‘The Disclosive Corruption of Motive’.
This phrase, coined by philosopher Bernard Williams, suggests that revealing too much about our motivations or reasons for acting can actually corrupt or undermine the very motives we initially had.
Williams argued that some motives are inherently “opaque” and that excessive transparency can damage their moral value. By revealing too much, we can inadvertently transform our actions from genuine expressions of care or kindness into mere calculations, thereby diminishing their moral worth.”
Agree with this. I think doing weird signaling stuff with bets worsens the signal that bets provide about people’s actual epistemic states.
But I don’t even think it’s negative financial EV (see above: I’m 50% on not having to pay it back at all because of doom, and I also think the EV of my investments is >2x over the timeframe).
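For concreteness, the arithmetic behind that claim as I read it. The bet structure (receive $x now, owe $2x at the deadline only if there’s no doom) is my inference from context; the 50% p(doom) and >2x investment growth are the numbers stated above:

```python
# Rough EV sketch of the bet above. The structure (receive $x now,
# repay $2x at the deadline only if there is no doom) is inferred from
# context; p_doom = 0.5 and the >2x growth factor are the commenter's
# stated numbers, not mine.

def bet_ev(x: float, p_doom: float = 0.5, growth: float = 2.0) -> float:
    """Expected value, at the deadline, of taking the $x-now side."""
    future_value = x * growth                  # the $x received, invested until then
    expected_repayment = (1 - p_doom) * 2 * x  # $2x owed only in survival worlds
    return future_value - expected_repayment

print(bet_ev(x=1.0))              # 2.0 - 1.0 = +1.0x: positive even at exactly 2x growth
print(bet_ev(x=1.0, growth=2.5))  # 2.5 - 1.0 = +1.5x: higher growth only helps
```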