AI safety + community health
pete
I began seeking counseling and mental health care when my timelines collapsed (shortened by ~20y over the course of a few months). It was like receiving a terminal diagnosis, complete with the uncertainty and the relative isolation of the suffering. Antidepressants helped. I am still saving for retirement but spending more freely on quality of life than I have before. I’m also throwing more parties with loved ones, and donating exclusively to x-risk reduction in addition to pivoting my career to AI.
It could be that I love this because it’s what I’m working on (raising safety awareness in corporate governance) but what a great post. Well structured, great summary at the end.
Unrelated to the broader issue of EA’s lack of demographic diversity, there are several groups for various religions in EA (and other demographic groups / coalitions, like parents). Not sure where to find a centralized list off the top of my head.
Beautiful writing (which I really appreciate, and think we should be more explicit about promoting). I see that AI risk isn’t mentioned here and am curious how that factors into your general sense of the promising future.
We are EAs because we share the experience of being bothered—by suffering, by pandemic risk, by our communities’ failure to prioritize what matters. Before EA, many of us were alone in these frustrations and lacked the support and resources to pursue our dreams of helping. I remember what it was like before EA and I’m never going back. Thank you, each of you, for bringing something beautiful into the world.
Absolutely, profoundly net negative. By EV what I mean is “harm.”
I think this comment helped this post gain attention and made me more likely to engage. Thank you, Markus, for encouraging us to pay attention.
In the coming months, it could be valuable to estimate the total EV of this fraud so we can 1) map actions to consequences and 2) more deeply understand what happened here.
That might include:
-EV of total x-risk reduction FTX was expected to fund (% reduction * number of future lives spared)
-Loss of life among direct victims
-Reputation damage to EA and subsequent loss of funding, talent, and reduced growth
-Reduced efficacy of existing EA orgs due to damaged trust, reduced resources, and trauma
Some of these will be difficult to estimate. But I think we should know what we lost, and those engaged in fraud should know what it cost.
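For anyone who wants to make this concrete, here is a minimal back-of-envelope scaffold in Python mirroring the categories above. Every variable name is illustrative and every value is a zeroed placeholder, not an estimate; all terms would also need to be converted to a common unit before the sum means anything.

```python
# Rough scaffold for the loss categories listed above.
# All values are placeholders set to zero, not estimates; express every
# term in a common unit (e.g. expected future lives lost, or
# dollar-equivalents) before summing.

# 1) Foregone x-risk reduction FTX was expected to fund:
#    % reduction * number of future lives spared
pct_xrisk_reduction_foregone = 0.0   # placeholder: absolute reduction in x-risk foregone
future_lives_at_stake = 0            # placeholder: future lives in scope
foregone_xrisk_value = pct_xrisk_reduction_foregone * future_lives_at_stake

# 2) Direct harm to victims of the fraud
direct_victim_harm = 0.0             # placeholder

# 3) Reputation damage: lost funding, talent, and reduced growth
reputation_damage = 0.0              # placeholder

# 4) Reduced efficacy of existing EA orgs (trust, resources, trauma)
reduced_org_efficacy = 0.0           # placeholder

total_loss = (foregone_xrisk_value + direct_victim_harm
              + reputation_damage + reduced_org_efficacy)
print(f"Total estimated loss (common unit): {total_loss}")
```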
I like the sentiment but disagree. We have to know how far this goes. This is a system failure, not just an individual failure.
I cleared my calendar yesterday just to grieve. I relived losing family to the pandemic, thinking of all the resources lost for pandemic prevention.
It’s true that there’s work to be done, but grief is heavy, and if we don’t deal with it honestly we’ll pay for it later.
This is a significant community tragedy and will be emotionally traumatic, both for people who were involved directly and for others.
In addition to the direct impacts, many of us may experience:
-Loss of trust
-Loss of our idea(s) of the future
-Loss of feelings of safety
-Difficulty articulating what we have lost / minimizing our experience in light of those who have lost more
Others may experience:
-Relived grief from other authority figure failures (for example, I lost family to the pandemic and am now viscerally reliving that experience, in light of reduced resources to future pandemic prevention)
-Loss of community / pulling away from community
-Acute mental health crisis
I’d like to challenge those of us who are treating this dispassionately to check in with ourselves: if you are having trouble clicking away from the updates, if you find yourself distracted or weirdly paralyzed, take note. That’s likely grief.
It sounds like this isn’t the feedback you’re hoping for, and that sucks, but I think people aren’t sold on your model specifically. Check out Charity Entrepreneurship as an example of a nonprofit incubator for innovative / unusual ideas!
I’m sorry, and I really wish you guys the best of luck! It’s super competitive and many great orgs don’t clear the hurdle.
For transparency, though, I personally focus on and donate to organizations closer to what 80,000 Hours is talking about, because I think huge public health threats have an outsized impact on poverty and wellbeing.
I’m also surprised to see this: lots and lots of EAs focus on wellbeing and reducing global poverty (see GiveWell for a helpful summary). Obviously reducing the risk of nuclear war, etc., has implications for poverty, but try GiveWell for a more direct focus.
Generally speaking, organizations that do well in EA clear a higher bar of rigor on theory of change: for example, being able to show an ROI comparable to one of GiveWell’s top recommended charities, or having some sort of global multiplier effect (e.g., reducing the risk of future pandemics).
Your organization seems off to a great start and will probably continue to thrive in the social impact community. If you’d like to learn more about what EAs tend to care about, take a look at the problem profiles on 80000hours.org.
David, let’s connect—also a management consultant exploring a pivot to AI!
One element of personal fit that’s not mentioned is the choice to have kids / become a primary caregiver for someone — see bessieodell’s great post. Current impact calculations don’t include this by default, which I think creates a cultural undercurrent of “real EAs don’t factor caregiving into their careers.”
Is this in addition to the more frequent polls (compared to the previous annual ones) already being run by 80k?
Great job, Rocky and signatories. Statements are not programs, but neither are they nothing. They take a ton of courage and hard work to write. Proud of everyone who engaged in good faith to put this forward and to strengthen EA as a community.