Former CTO and co-founder of earn-to-give fintech Mast.
Henry Stanley
Flock – work in public with friends (beta testers wanted)
I might be missing the point, but I'm not sure I see the parallels with FTX.
With FTX, EA orgs and the movement more generally relied on the huge amount of funding that was coming down the pipe from FTX Foundation and SBF. When all that money suddenly vanished, a lot of orgs and orgs-to-be were left in the lurch, and the whole thing caused a huge amount of reputational damage.
With the AI bubble popping… I guess some money that would have been donated by e.g. Anthropic early employees disappears? But it's not clear that that money has been "earmarked" in the same way the FTX money was; it's much more speculative and I don't think there are orgs relying on receiving it.
OpenPhil presumably will continue to exist, although it might have less money to disburse if a lot of it is tied up in Meta stock (though I don't know that it is). Life will go on. If anything, slowing down AI timelines will probably be a good thing.
I guess I don't see how EA's future success is contingent on AI being a bubble or not. If it turns out to be a bubble, maybe that's good. If it turns out not to be a bubble, we sure as hell will have wanted to be on the vanguard of figuring out what a post-AGI world looks like and how to make it as good for humanity as possible.
For effect, I would have pulled in a quote from the Reddit thread on akathisia rather than just linking to it.
Akathisia is an inner restlessness that is as far as I know the most extreme form of mental agitation known to man. This can drive the sufferer to suicide [...] My day today consisted of waking up and feeling like I was exploding from my skin, I had an urge that I needed to die to escape. [...] I screamed, hit myself, threw a few things and sobbed. I can't get away from it. My family is the only reason why I'm alive. [...] My CNS is literally on fire and food is the last thing I want. My skin burns, my brain on fire. It's all out survival.
Indeed; seems more like founding to give.
if I had kept at it and pushed harder, maybe the project would have got further… but I don't think I actually wanted to be in that position either!
I think this is a problem with for-profit startups as well. Most of the time they fail. But sometimes they succeed (in the sense of "not failing" rather than breakout success, which is far rarer), and in that case you're stuck with the thing to see it through to an exit.
I enjoyed this, and I miss bumping into you on the stairs at house parties!
Honestly, I kind of hated doing [GTP].
Are you willing to share why you hated it?
people who have strong conviction in EA start with a radical critique of the status quo (e.g. a lot of things like cancer research or art or politics or volunteering with lonely seniors seem a lot less effective than GiveWell charities or the like, so we should scorn them), then see the rationales for the status quo (e.g. ultimately, society would start to fall apart if it tried to divert too many resources to GiveWell charities and the like by taking them away from everything else), and then come full circle back around to some less radical position
I agree that we probably shouldn't just defund all arts/cancer/old people charities overnight, but there are lots of causes that plausibly "deserve" way less funding on the margin which would be better spent by GiveWell without society falling apart.
I take a Chesterton's fence sorta view here, where I imagine a world which has zero arts funding and maybe that ends up being impoverished in a hard-to-quantify way, and that seems worth avoiding. But for the time being I'm happy to tell people to stop donating to Cancer Research UK and send it to AMF instead.
One thing that occurs to me (as someone considering a career pivot) is the case where someone isn't committed to a specific cause area. Here you talk about someone who is essentially choosing between EtG for AI safety or doing AI safety work directly.
But in my case, I'm considering a pivot to AI safety from EtG – but currently I exclusively support animal welfare causes when I donate. Perhaps this is just irrational on my part. My thinking is that I'm unlikely, given my skillset, to be any good at doing direct work in the animal welfare space, but I consider it the most important issue of our time. I also think AI safety is important and timely, but I might actually have the potential to work on it directly, hence considering the switch.
So in some cases there's a tradeoff of donations foregone in one area vs direct work done in another, which I guess is trickier to model.
I wonder why this hasn't attracted more upvotes – seems like a very interesting and high-effort post!
Spitballing – I guess there's so much math here that many people (including me) won't be able to fully engage with the key claims of the post, which limits the surface area of people who are likely to find it interesting.
I note that when I play with the app, the headline numbers don't change for me when I change the parameters of the model. Maybe a bug?
Not an answer to your question, but I also think most futures will be net negative for similar reasons, so it's not just you!
I can start by giving my own answer to this (things I might do with my time):
travel widely and without real goals/timelines (e.g. Interrailing without too many pre-defined stops; just go where your heart takes you, and if you like a place then stay longer)
perhaps directed a little towards where I have friends, where there are EA hubs that are likely to provide fruitful social interactions
do Pieter Levels' 12 startups in 12 months (maybe with Claude Code this could be 12 startups in 12 weeks, who knows) - spend some time building side projects for the sake of it
these could either be money-making or EA-focused, or just fun
my last startup was found-to-give so making money is still a motivation, although I feel less called to the grind these days
do a silent meditation retreat, and build a daily meditation habit
fix health problems (mental and physical) - spend money and time on this
explore living somewhere new – spend a month in the Bay, Berlin, or other cities I'd like to live in
write regularly – I looked into doing Inkhaven and am going to run a free online version of it for the month of November
[Question] How to spend a sabbatical?
Are you two talking about different Sams?
IIRC this was basically the thesis behind the EA Hotel (now CEEALAR) - a low-cost space for nascent EAs to do a bunch of thinking without having to worry too much about the basics.
More broadly, this is also a benefit of academic tenure – being able to do your research without having to worry about finding a job (although of course getting funding is still the bottleneck and a big force in directing where research effort goes).
Surely both things can be true at once – that it's been historically very useful and also a shame that it's available to so few?
It's not the ideal movement (i.e. not what we'd design from scratch), but it's the closest we've got
Interested to hear what such a movement would look like if you were building it from scratch.
Well spotted, thank you!
Probably (even just Amazon price differences – I haven't looked elsewhere). The 6200 is £18, and one set of filters £11; the 4251 is £20. Maybe it's a false economy – I'm just thinking about cost savings if you wanted to buy a handful of masks for family.
You do address the FTX comparison (by pointing out that it won't make funding dry up), that's fair. My bad.
But I do think you're making an accusation of some epistemic impropriety that seems very different from FTX – getting FTX wrong (by not predicting its collapse) was a catastrophe, and I don't think it's the same for AI timelines. Am I missing the point?