Greg_Colbourn
Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)
Well, the bottom line is extinction, for all of us. If the COIs block enough people from taking sufficient action before it’s too late, then that’s what happens. The billions of EA money left in the bank as foom-doom hits will be useless; it might as well never have been accumulated in the first place.
I’ll also note that there are plenty of other potential good investments out there. Crypto has gone up about as much as AI stocks in general over the last year, and some coins (e.g. SOL) have gone up much more than NVDA. There are promising start-ups in many non-AI areas. (Join this group to see more[1]).
To answer your bottom two questions:
1. I think avoiding stock-market-wide index funds is probably going too far (as they are neutral about AI—if AI starts doing badly, e.g. because of regulation, then the composition of the index fund will change to reflect this).
2. I wouldn’t recommend this as a strategy, unless they are already on their way down and heavy regulation looks imminent.

[1] But note that people are still pitching the likes of Anthropic in there! I don’t approve of that.
Yes, but the COIs extend to altruistic impact too. Like: which EA EtG-er wouldn’t want to be able to give away a billion dollars? Having AI stocks in your DAF still biases you toward supporting the big AI companies, and against trying to stop AGI/ASI development altogether (when that may well be the highest-impact thing to do, even if it means you never get to give away a billion dollars).
The concern is mainly COIs, then bad PR. The direct demand shift could still be important though, if it catalyses further demand shift (e.g. divestment from apartheid South Africa eventually snowballed into having a large economic effect).
“I donate to AI safety and governance” [but not enough to actually damage the bottom lines of the big AI companies I’ve invested in.]
“Oh, no, of course I intend to sell all my AI stock at some point, and donate it all to AI Safety.” [Just not yet; it’s “up only” at the moment!]
“Yes, my timelines stretch into the 2030s.” [Because that’s when I anticipate that I’ll be rich from my AI investments.]
“I would be in favour of a Pause, if I thought it was possible.” [And if I could sell my massive amounts of non-publicly-traded Anthropic stock at another 10x gain from here, first.]
(It’s bad because it creates a conflict of interest!)
> the wealth of many donors to AI safety is pretty correlated with AI stocks
Unpopular opinion (at least in EA): it not only looks bad, but it is bad that this is the case. Divest!
AI safety donors investing in AI capabilities companies is like climate change donors investing in oil companies or animal welfare donors investing in factory farming (sounds a bit ridiculous when put like that, right? Regardless of mission hedging arguments).
Thanks for the feedback. Would events where people share rooms (e.g. having some dorm rooms) be something you would consider? Also, it would be possible to have some flexibility with the number of rooms, given CEEALAR’s 30 rooms next door and 100+ more rooms in other hotels/guest houses within 50m.
We do. It’s used for both. It could just be used for events/retreats, but I’m unsure whether that would push CEEALAR in too “for profit” a direction if it’s run by CEEALAR as such. (Currently the second building is still owned by me, with exclusive usage rights given to CEEALAR for free; but my intention has been to gift it to CEEALAR, and that may happen soon.)
People might be neglecting measures that would help in very short timelines (e.g. transformative AI in under 3 years), though that might be because most people are unable to do much in these scenarios.
There’s a lot that can be done, especially in terms of public and political advocacy. PauseAI is really gaining momentum now as a hub for the slow/Pause/Stop AGI/ASI movement (which is largely independent of EA). Lots of projects are happening in the Discord; see here for a roadmap of what they could do with more funding.
^ I’m going to be lazy and tag a few people: @Joey @KarolinaSarek @Ryan Kidd @Leilani Bellamy @Habryka @IrenaK. Not expecting a response, but if you are interested, feel free to comment or DM.
I’ve not done any research (but I’m asking here, now :)). I guess a week is more of a sweet spot, but we have hosted weekend events at CEEALAR before. In the last year, CEEALAR has hosted retreats for a few orgs (ALLFED, Orthogonal, [another AI Safety org], PauseAI (upcoming)) and a couple of bootcamps (ML4G), all of which we have charged for. So we know there is at least some demand. Because it hosts grantees long term, CEEALAR isn’t able to run long events (e.g. 10-12 week courses or start-up accelerators), so demand there is untested. But given the cost competitiveness, I think there would be some demand.
Re fanciness: this is especially aimed at the budget-conscious (i.e. the cost-effectiveness-conscious). Costs would be a third to a tenth of what is typical for UK venues. And there would be another bonus of having an EA community next door.
Points against for me:
- The hassle of the purchase and getting it up and running (on top of lots of other things I’ve got going on already).
- Short timelines could make it all irrelevant (unless we get a Pause on AGI).
- If it doesn’t work out and I end up selling the building again, it could end up quite a bad investment relative to the counterfactual (of holding crypto). [This goes both ways though.]
(EA) Hotel dedicated to events, retreats, and bootcamps in Blackpool, UK?
I want to try and gauge what the demand for this might be. Would you be interested in holding or participating in events in such a place? Or in working to run them? Examples of hosted events could be: workshops, conferences, unconferences, retreats, summer schools, coding/data science bootcamps, EtG accelerators, EA charity accelerators, intro to EA bootcamps, AI Safety bootcamps, etc.
This would be next door to CEEALAR (the building is potentially coming on the market), but most likely run by a separate, but close, limited company (which would charge, and funnel profits to CEEALAR, but also subsidise use where needed). Note that being in Blackpool in a low cost building would mean that the rates charged by such a company would be significantly less than elsewhere in the UK (e.g. £300/day for use of the building: 15 bedrooms and communal space downstairs to match that capacity). Maybe think of it as Wytham Abbey, but at the other end of the Monopoly board: only 1% of the cost! (A throwback to the humble beginnings of EA?)

From the early days of the EA Hotel (when we first started hosting unconferences and workshops), I have thought that it would be good to have a building dedicated to events, bootcamps and retreats, where everyone is in and out as a block, so as to minimise overcrowding during events, and inefficiencies in usage of the building either side of them (from needing it mostly empty for the events); CEEALAR still suffers from this with its event hosting. The yearly calendar could be filled up with e.g. four 10-12 week bootcamps/study programs, punctuated by four 1-3 week conferences or retreats in between.
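(A quick back-of-envelope on the headline rate, using only the figures above and assuming full occupancy; this is my own rough arithmetic, not a quoted price breakdown:

$$\frac{£300/\text{day}}{15\ \text{bedrooms}} = £20\ \text{per bedroom per day}$$

Even at half occupancy that is £40 per bedroom per day, still far below what UK event venues typically charge.)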
This needn’t happen straight away, but if I don’t get the building now, the option will be lost for years. Having it next door in the terrace means that the building can be effectively joined to CEEALAR, making logistics much easier (and another option for the building could be a further expansion of CEEALAR proper[1]). Note that this is properly viewed as an investment to take advantage of a time-limited opportunity, and shouldn’t be seen as fungible with donations (to CEEALAR or anything else); if nothing happens I can just sell the building again and recoup most or all of the costs (selling shouldn’t be that difficult, given property prices are rising again in the area due to a massive new development in the town centre).
[1] CEEALAR has already expanded once. When I bought the second building the timing also wasn’t ideal, but it never is; I didn’t want to lose the option value.
Congrats Holden! Just going to quote you from a recent post:
> There’s a serious (>10%) risk that we’ll see transformative AI within a few years.
> In that case it’s not realistic to have sufficient protective measures for the risks in time.
> Sufficient protective measures would require huge advances on a number of fronts, including information security that could take years to build up and alignment science breakthroughs that we can’t put a timeline on given the nascent state of the field, so even decades might or might not be enough time to prepare, even given a lot of effort.
> If it were all up to me, the world would pause now
Please don’t lose sight of this in your new role. Public opinion is on your side here, and PauseAI is gaining momentum. It’s possible for this to happen; please push for it! (And reduce your conflict of interest if possible!)
Idk, I’d put GPT-5 at ~1% x-risk, or risk of crossing the point of no return (unacceptably high).
> in the long run
What if we don’t have very long? You aren’t really factoring in the time crunch we are in (the whole reason that PauseAI is happening now is short timelines).
I see in your comment on that post, you say “human extinction would not necessarily be an existential catastrophe” and “So, if advanced AI, as the most powerful entity on Earth, were to cause human extinction, I guess existential risk would be negligible on priors?”. To be clear: what I’m interested in here is human extinction (not any broader conception of “existential catastrophe”), and the bet is about that.
See my comment on that post for why I don’t agree. I agree nuclear extinction risk is low (but probably not that low)[1]. ASI is really the only thing that is likely to kill every last human (and I think it is quite likely to do that given it will be way more powerful than anything else[2]).
Interesting. Obviously I don’t want to discourage you from the bet, but I’m surprised you are so confident based on this! I don’t think the prior from mammal species duration is really relevant at all, when for 99.99% of the last 1M years there hasn’t been any significant technology. Perhaps more relevant is Homo sapiens wiping out all the less intelligent hominids (and many other species).
See also: Call for Attorneys for OpenAI Employees and Ex-Employees