Greg_Colbourn
Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)
I haven't done any research (hence asking here now :)). I'd guess 1 week is more of a sweet spot, but we have hosted weekend events at CEEALAR before. In the last year, CEEALAR has hosted retreats for a few orgs (ALLFED, Orthogonal, [another AI Safety org], PauseAI (upcoming)) and a couple of bootcamps (ML4G), all of which we charged for, so we know there is at least some demand. Because CEEALAR hosts grantees long term, it isn't able to run long events (e.g. 10-12 week courses or start-up accelerators), so demand there is untested; but given the cost competitiveness, I think there would be some.
Re fanciness: this is aimed especially at the budget-conscious (cost-effectiveness-minded). Costs would be 3-10x less than is typical for UK venues. And there would be the added bonus of having an EA community next door.
Points against for me:
- The hassle of the purchase and getting it up and running (on top of lots of other things I’ve got going on already).
- Short timelines could make it all irrelevant (unless we get a Pause on AGI).
- If it doesn’t work out and I end up selling the building again, it could end up quite a bad investment relative to the counterfactual (of holding crypto). [This goes both ways though.]
(EA) Hotel dedicated to events, retreats, and bootcamps in Blackpool, UK?
I want to try to gauge what the demand for this might be. Would you be interested in holding or participating in events in such a place? Or in working to run them? Examples of hosted events could be: workshops, conferences, unconferences, retreats, summer schools, coding/data science bootcamps, EtG accelerators, EA charity accelerators, intro to EA bootcamps, AI Safety bootcamps, etc.
This would be next door to CEEALAR (the building is potentially coming on the market), but most likely run by a separate, but close, limited company (which would charge, and funnel profits to CEEALAR, but also subsidise use where needed). Note that being in Blackpool, in a low-cost building, means the rates charged by such a company would be significantly lower than elsewhere in the UK (e.g. £300/day for use of the building: 15 bedrooms, and communal space downstairs to match that capacity; see the rough per-person arithmetic below). Maybe think of it as Wytham Abbey, but at the other end of the Monopoly board: only 1% of the cost! (A throwback to the humble beginnings of EA?)

From the early days of the EA Hotel (when we first started hosting unconferences and workshops), I have thought that it would be good to have a building dedicated to events, bootcamps and retreats, where everyone is in and out as a block, so as to minimise overcrowding during events, and inefficient use of the building either side of them (from needing it mostly empty for the events); CEEALAR still suffers from this with its event hosting. The yearly calendar could be filled with, e.g., 4 10-12 week bootcamps/study programmes, punctuated by 4 1-3 week conferences or retreats in between.
This needn't happen straight away, but if I don't get the building now, the option will be lost for years. Having it next door in the terrace means the building can be effectively joined to CEEALAR, making logistics much easier (another option for the building would be a further expansion of CEEALAR proper[1]). Note that this is properly viewed as an investment in a time-limited opportunity, and shouldn't be seen as fungible with donations (to CEEALAR or anything else); if nothing happens I can just sell the building again and recoup most or all of the costs (selling shouldn't be that difficult, given property prices are rising again in the area due to a massive new development in the town centre).
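To make the cost claim concrete, here is a rough sketch of the implied per-person rate (the comparison range is my assumption, chosen to match the 3-10x claim above):

```python
# Rough per-person cost implied by the proposed rate.
# The "typical UK venue" range is an illustrative assumption, not a quote.
day_rate_gbp = 300      # proposed rate for the whole building, per day
capacity = 15           # bedrooms, with communal space to match

per_person_night = day_rate_gbp / capacity   # = £20 per person per night
typical_low, typical_high = 60, 200          # assumed range for comparable UK residential venues

print(f"£{per_person_night:.0f} per person per night")
print(f"Assumed typical venues at £{typical_low}-£{typical_high} would be "
      f"{typical_low / per_person_night:.0f}-{typical_high / per_person_night:.0f}x more")
```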
[1] CEEALAR has already expanded once. When I bought the second building the timing also wasn't ideal, but it never is; I didn't want to lose the option value.
Congrats Holden! Just going to quote you from a recent post:
> There's a serious (>10%) risk that we'll see transformative AI within a few years.
> In that case it's not realistic to have sufficient protective measures for the risks in time.
> Sufficient protective measures would require huge advances on a number of fronts, including information security that could take years to build up and alignment science breakthroughs that we can't put a timeline on given the nascent state of the field, so even decades might or might not be enough time to prepare, even given a lot of effort.
> If it were all up to me, the world would pause now
Please don't lose sight of this in your new role. Public opinion is on your side here, and PauseAI are gaining momentum. It's possible for this to happen; please push for it! (And reduce your conflict of interest if possible!)
Idk, I'd put GPT-5 at ~1% x-risk, or risk of crossing the point of no return (unacceptably high).
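For scale (my own back-of-envelope, not a figure from the thread), even a 1% extinction risk is enormous in expectation:

$$0.01 \times 8 \times 10^{9}\ \text{people} = 8 \times 10^{7}\ \text{expected deaths}$$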
> in the long run
What if we don’t have very long? You aren’t really factoring in the time crunch we are in (the whole reason that PauseAI is happening now is short timelines).
I see in your comment on that post, you say “human extinction would not necessarily be an existential catastrophe” and “So, if advanced AI, as the most powerful entity on Earth, were to cause human extinction, I guess existential risk would be negligible on priors?”. To be clear: what I’m interested in here is human extinction (not any broader conception of “existential catastrophe”), and the bet is about that.
See my comment on that post for why I don't agree. I agree nuclear extinction risk is low (but probably not that low)[1]. ASI is really the only thing that is likely to kill every last human (and I think it is quite likely to do that, given it will be way more powerful than anything else[2]).
Interesting. Obviously I don't want to discourage you from the bet, but I'm surprised you are so confident based on this! I don't think the prior from mammal species duration is relevant at all when, for 99.99% of the last 1M years, there hasn't been any significant technology. Perhaps more relevant is Homo sapiens wiping out all the less intelligent hominids (and many other species).
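To spell out the prior in question (a rough sketch, using the commonly cited ~1M year average mammal species lifespan):

$$P(\text{extinction in a given year}) \approx \frac{1}{10^{6}\ \text{years}} = 10^{-6}\ \text{per year}$$

A base rate of this order seems to be what the 10^-7/year figure below is anchored to, and it comes entirely from a world without significant technology.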
> I think the chance of humans going extinct by the end of 2027 is basically negligible. I would guess around 10^-7 per year.
Would be interested to see your reasoning for this, if you have it laid out somewhere. Is it mainly because you think it’s ~impossible for AGI/ASI to happen in that time? Or because it’s ~impossible for AGI/ASI to cause human extinction?
I don't have a stable income, so I can't get bank loans (I've tried to get a mortgage on the property before and failed; they don't care if you have millions in assets, all they care about is your income[1], and I have only a relatively small, irregular rental income from Airbnb). But I can get crypto-backed smart contract loans, and I already have one out on Aave, which I could extend.
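For anyone unfamiliar with how these loans work, here is a rough sketch of the arithmetic (the LTV and liquidation parameters are illustrative assumptions, not Aave's live numbers):

```python
# Back-of-envelope for a crypto-collateralised loan.
loan_usd = 10_000        # amount to borrow (e.g. to fund the bet)
max_ltv = 0.70           # assumed maximum loan-to-value at origination
liq_threshold = 0.80     # assumed liquidation threshold

min_collateral = loan_usd / max_ltv   # ≈ $14,286 of crypto locked up
# Fractional collateral price drop that triggers liquidation when
# borrowing at the maximum LTV:
drop_to_liquidation = 1 - max_ltv / liq_threshold   # = 12.5%

print(f"Collateral needed: ~${min_collateral:,.0f}")
print(f"Price drop to liquidation: {drop_to_liquidation:.1%}")
# In practice you'd borrow well below max LTV to leave a safety margin.
```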
The signalling value of the wager is pretty important too, imo. I want people to put their money where their mouth is if they are so sure that AI x-risk isn't a near-term problem. And I want to put my money where my mouth is, to show how serious I am about this.
[1] I think this is probably because they don't want the hassle of actually having to repossess your house, so if that seems at all likely they won't bother with the loan in the first place.
It's in Manchester, UK. I live elsewhere: renting currently, but shortly moving into another house I own that is currently being renovated (I've got a company managing the would-be-collateral house as an Airbnb, so there are no long-term tenants either). Will send you more details via DM.
Cash is a tricky one, because I rarely hold much of it; I'm nearly always fully invested, though that includes plenty of liquid assets like crypto. Net-worth-wise in 2027, assuming no AI-related craziness, I would expect to be in the 7-8 figure range (5-95% interval maybe $500k-$100M).
Re risk, as per my offer on X, I'm happy to put my house up as collateral if you can be bothered to get the paperwork done. Otherwise I'm happy to just trade on reputation (you can trash mine publicly if I don't pay up).
As I say above, I’ve been offering a similar bet for a while already. The symbolism is a big part of it.
I can currently only take out crypto-backed loans, which have had quite high interest rates lately (I don't have a stable income, so I can't get bank loans or mortgages). I've considered this but not done it yet.
Hi Vasco, sorry for the delay getting back to you. I have actually had a similar bet offer up on X for nearly a year (offering to go up to $250k), with only one taker, for ~$30, so far! Mine is: you give x now and I give 2x in 5 years, which is pretty similar. Anyway, happy to go ahead with what you've suggested.
I would donate the $10k to PauseAI (I would say $10k to PauseAI in 2024 has much greater EV than $19k to PauseAI at the end of 2027; see the sketch below).
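To make the comparison concrete, here is a rough sketch of the bet's implied return (dates approximate; the ~3.5 year horizon is my assumption):

```python
# Implied annualised return of the bet, assuming ~3.5 years from
# mid-2024 to the end of 2027 (my assumption).
stake_now = 10_000      # Vasco pays $10k now, donated to PauseAI
payout_later = 19_000   # I pay $19k at the end of 2027 if we're still here
years = 3.5

implied_annual_return = (payout_later / stake_now) ** (1 / years) - 1
print(f"Implied annualised return: {implied_annual_return:.1%}")  # ≈ 20%

# The EV claim above amounts to: a dollar to PauseAI now is worth more
# than 1.9 dollars to PauseAI at the end of 2027, because short timelines
# make early advocacy count for more.
```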
[BTW, I have tried to get Bryan Caplan interested too, to no avail—if anyone is in contact with him, please ask him about it.]
I'd say it's more than a vague intuition: it follows from alignment/control/misuse/coordination not being (close to) solved, and from ASI being much more powerful than humanity. I think it could even be formalised. "AGIs will be helping us on a lot of tasks", "collusion is hard" and "people will get more scared over time" aren't anywhere close to overcoming it, imo.
More like: some people did share their concerns, but those they shared them with didn't do anything about it (because of worry about bad PR, but maybe also as a kind of "ends justify the means" thing re his money going to EA; the latter might actually have been the larger effect).
Maybe half the community sees it that way, but not the half with all the money and power, it seems. There aren't (yet) large resources being put into playing the "outside game", and there hasn't been anything in the way of EA leadership (OpenPhil, 80k) admitting the error, afaik.
I'm going to be lazy and tag a few people: @Joey @KarolinaSarek @Ryan Kidd @Leilani Bellamy @Habryka @IrenaK. Not expecting a response, but if you are interested, feel free to comment or DM.