Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)
Greg_Colbourn
It’s no secret that AI Safety / EA is heavily invested in AI. It is kind of crazy that this is the case, though. As Scott Alexander said:
Imagine if oil companies and environmental activists were both considered part of the broader “fossil fuel community”. Exxon and Shell would be “fossil fuel capabilities”; Greenpeace and the Sierra Club would be “fossil fuel safety”—two equally beloved parts of the rich diverse tapestry of fossil fuel-related work. They would all go to the same parties—fossil fuel community parties—and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.
This is how AI safety works now.
Going to flag that a big chunk of the major funders and influencers in the EA/Longtermist community have personal investments in AGI companies, so this could be a factor in the lack of funding for work aimed at slowing down AGI development. I think that, as a community, we should be divesting (and investing in PauseAI instead!)
Species aren’t lazy (those who are—or would be—are outcompeted by those who aren’t).
The pets scenario is basically an existential catastrophe by other means (who wants to be a pet that is a caricature of a human, the way a pug is a caricature of a wolf?). And obviously so is the torture/dystopia one (i.e. not an “OK outcome”). What mechanism would allow us to get alignment right on the first try?
This seems like a very unstable equilibrium. All that is needed is for one of the experts to be as good as Ilya Sutskever at AI engineering; given its speed and the millions of instances that can be run at once, it could get past that bottleneck in short order and foom to ASI.
It would also need to stop all other AGIs that are less cautious, and be ahead of them when self-improvement becomes possible. That seems unlikely given current race dynamics. And even if this does happen, unless the AGI was very aligned to humanity it still spells doom for us, due to its speed advantage and its different substrate needs (i.e. its ideal operating environment isn’t survivable for us).
o1 is further evidence that we are living in a short-timelines world and that p(doom) is high: a global stop to frontier AI development, until there is consensus on x-safety, is our only reasonable hope.
One high-leverage thing people could do right now is encourage letter-writing to California’s Governor Newsom requesting that he sign SB 1047. This would set a much-needed precedent for enabling US federal legislation, and then global regulation.
Sent a letter
No. I can only get crypto-backed loans (e.g. Aave). I’m currently paying ~10% interest; there is no guarantee rates won’t go above 15% over 5 years, and there is counterparty risk to my collateral.
But I don’t even think it’s negative financial EV (see above): I’m 50% on not having to pay it back at all because of doom, and I also think the EV of my investments is >2x over the timeframe.
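To make that arithmetic concrete, here is a minimal sketch of the expected-value reasoning, in Python. All the numbers (a hypothetical $10k stake repaid at 2x) are illustrative assumptions, not the actual bet terms.

```python
# A minimal sketch of the expected-value reasoning above.
# All numbers are illustrative assumptions, not the actual bet terms.

p_doom = 0.5           # assumed probability the repayment is never owed
received_now = 10_000  # $ received upfront (hypothetical stake)
owed_later = 20_000    # $ owed in worlds where we survive (2x the stake)
growth_multiple = 2.0  # assumed return multiple on the invested stake

# In survival worlds: the grown stake minus the repayment.
ev_survive = received_now * growth_multiple - owed_later
# In doom worlds: the stake is kept (spent/donated) and nothing is repaid.
ev_doom = received_now

expected_value = (1 - p_doom) * ev_survive + p_doom * ev_doom
print(f"Expected financial value: ${expected_value:,.0f}")  # -> $5,000
```

Under these assumptions the bet is positive financial EV even before counting the non-financial upside (signalling, funding PauseAI now rather than later).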
I mean, in terms of signalling, it’s not great to bet against people (or people from a community) who are basically on your side, i.e. who think AI x-risk is a problem, just not that big a problem; as opposed to people who think the whole thing is nonsense and are actively hostile to you and dismissive of your concerns.
I’ve been getting a few offers from EAs recently. I might accept some. What I’d really like to do though is bet against an e/acc.
Unless you plan to spend all of your money before you would owe money back
This would not be good for you unless you were an immoral sociopath with no concern for the social opprobrium that results from not honouring the bet.
Or unless you’re betting on high rates of return to capital
There is some element of this for me (I hope to more than 2x my capital in worlds where we survive). But it’s not the main reason.
The main reason it’s good for me is that it helps reduce the likelihood of doom. That is my main goal for the next few years. If the interest this is getting gets even one more person to take near-term AI doom as seriously as I do, then that’s a win. Also, $x to PauseAI now is worth >>$2x to PauseAI in 2028.
you can probably borrow cheaply. E.g. if you have $2X in investments, you can sell them, invest $X at 2X leverage, and effectively borrow the other $X.
This is not without risk (of being margin called in a 50% drawdown; see the sketch below)[1]. Otherwise, why wouldn’t people be doing this as standard? I’ve not really heard of anyone doing it.
1. ^ And it could also be costly in borrowing fees for the leverage.
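To illustrate why the margin-call risk bites at exactly a 50% drawdown with 2x leverage, here is a minimal sketch; the $100k starting capital and the drawdown levels are assumptions for illustration.

```python
# A minimal sketch of the margin-call risk with 2x leverage.
# Illustrative numbers; real brokers apply maintenance margins earlier.

own_capital = 100_000               # the $X of your own money
leverage = 2.0
exposure = own_capital * leverage   # $2X of market exposure
borrowed = exposure - own_capital   # the effectively-borrowed $X

for drawdown in (0.10, 0.25, 0.50):
    portfolio_value = exposure * (1 - drawdown)
    equity = portfolio_value - borrowed
    print(f"{drawdown:.0%} drawdown -> equity ${equity:,.0f} "
          f"({equity / own_capital:.0%} of starting capital)")
# At a 50% drawdown the loss equals all of your own capital,
# so in practice the position is liquidated well before that point.
```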
Unfortunately, it seems as though bets like this[1] (for significant sums of money) might be truly unprecedented. I’m still working on establishing workable mechanisms for trust / ensuring the payout (but I think having it in terms of donations makes things easier).
1. ^ Peer-to-peer, between people who don’t know each other.
Yes. Another consideration is that I expect my high-risk investing strategy to return >200% over the time in question, in worlds where we survive (all it would take is 1/10 of my start-up investments to blow up, for example, or crypto going up another >2x).
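As a sanity check on that claim, here is a minimal sketch under assumed numbers: ten equal start-up stakes, one winner at an assumed 30x, the rest going to zero. The 30x multiple is my illustrative assumption, not a figure from the comment.

```python
# A minimal sanity check of the ">200% return" claim under assumed numbers:
# ten equal start-up stakes, nine return ~0, one "blows up" at an assumed 30x.

n_investments = 10
stake_each = 1.0        # normalised stake per start-up
winner_multiple = 30.0  # assumed multiple for the single winner

total_staked = n_investments * stake_each
final_value = winner_multiple * stake_each  # the other nine return ~0

print(f"Portfolio multiple: {final_value / total_staked:.1f}x")  # -> 3.0x
# A 3x portfolio is a 200% return, from one 30x winner out of ten.
```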
Note that I made an exception on timelines for this bet, to get it to happen as the first of its kind. I’d be more comfortable making it 5 years from the date of the bet. Anyway, I’m open to doing $10k, especially if it’s done in terms of donations (which makes things easier in terms of trust, I think). But I’ll also say that I think the signalling value would be higher if it was against someone who wasn’t EA or concerned about AI x-risk.
Do you know a way of securing such a bet against a house in a watertight way? I’ve been told by someone who has consulted a lawyer that such a civil contract would not be enforceable if enforcement became necessary.
So far I am in discussion with a couple of others for similar amounts to the bet in the OP, but the problem of guaranteeing my payout is proving quite a difficult one[1]. I guess making it direct donations makes things a lot easier.
1. ^ My preferred method is trading on my public reputation (which I value more highly than the bet amounts), but I can’t expect people to take my word for it.
It could also be done by drawing up a contract with my house as collateral, but I’ve been told that this likely wouldn’t be enforceable.
Then there is escrow, but the escrow agent needs to be trusted, the funds are tied up for the duration, and I prefer to hold crypto, which complicates things.
Then there are crypto smart contracts, but these need to be trusted too, and they must be massively overcollateralised to factor in deep market drawdowns (with the opportunity cost that brings); see the sketch below.
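To give a feel for how much overcollateralisation a smart-contract bet would need, here is a minimal sketch; the $20k payout and the drawdown levels are assumptions for illustration.

```python
# A minimal sketch of smart-contract overcollateralisation.
# Assumed figures: a $20k payout secured by volatile crypto collateral.

payout_owed = 20_000  # $ the contract must be able to pay out

for max_drawdown in (0.50, 0.70, 0.80):
    # Collateral must still cover the payout after the worst assumed drawdown.
    required_collateral = payout_owed / (1 - max_drawdown)
    print(f"survives a {max_drawdown:.0%} drawdown: lock "
          f"${required_collateral:,.0f} "
          f"({required_collateral / payout_owed:.1f}x overcollateralised)")
```

Guarding against an 80% drawdown means locking 5x the payout, which is where the opportunity cost mentioned above comes from.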
There are massive conflicts of interest. We need a divestment movement within AI Safety / EA.