Thinking, writing, and tweeting from Berkeley, California. Previously, I ran programs at the Institute for Law & AI, worked on the one-on-one advising team at 80,000 Hours in London, and worked as a patent litigator at Sidley Austin in Chicago.
Mjreard
I think having some non-EA housemates would lead to talking about EA *more* because you’d have to give more context and field more questions on what you’ve been up to. I find EA-like topics come up relatively rarely in my all-rat house.
like weekly house meetings or monthly brunch
Hoping you accidentally mixed up the frequency of these two events
Mark Zuckerberg’s roommate ✅
Guy who played Mark Zuckerberg in the movie ✅
Actual Mark Zuckerberg when?
I appreciate the effort and ambition you’re putting into this and endorse you doing the kind of outreach you’re most excited about. That said, I doubt this is nearly as valuable as it looks on paper, so groups shouldn’t default to replicating it.
So what we have here is a pledge that says that when you enter the workforce and have a steady income, you will donate 1% of your income to charities that you care about.
[emphasis added]
Based on this and the absence of meaningful follow-up, I’d guess these pledges are worth ~5% of 300 high-touch pledges.
It seems like people are going to get an email from GWWC at some point in the future (maybe not even that?) which may or may not successfully remind them of this brief interaction, which may or may not motivate them to click through to the site, which is quite unlikely to convince anyone to donate to a highly effective charity.
Shifting some portion of your efforts to follow-up seems like the right move. Getting one real EA 1% pledger up to 5%, for example, would be worth 80 of these pledges and seems doable.
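A rough sketch of the arithmetic I have in mind (assuming, per my guess above, that each of these pledges is worth ~5% of a high-touch 1% pledge):

$$\text{one of these pledges} \approx 0.05 \times (\text{high-touch } 1\% \text{ pledge})$$

$$\text{raising one pledger from } 1\% \text{ to } 5\% = 4 \times (\text{high-touch } 1\% \text{ pledge}) \approx \frac{4}{0.05} = 80 \text{ of these pledges}$$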
I avoided opening this post because I was worried it’d have the sort of “we’re entitled to Anthropic’s money” vibe I’ve gotten from some other posts, but I’m happy to have been proven wrong. This is a very clear outline of the present problem(s) EA/AIS are facing with creating projects that are worth funding.
I would have predicted the positive press and basically think this would “work” today if these conditions were met:
- a charismatic criminal (art thieves! maybe hackers like Anonymous)
- a ransom demand made to a powerful, disliked entity (governments, specific well-known billionaires)
- a well-known cause that’s widely regarded as worthy (hurricane/typhoon relief, childhood cancer research, etc.)
I agree with you on the overall downsides, though. This sets a bad precedent that will be misused by many and burn a ton of social trust, which is ultimately more important.
Seems like AGI will lead to ASI and ASI will show us more valuable ways to use all the land and matter that currently support animal suffering. The ways we use those probably won’t involve animals or suffering at all.
Good breakdown. I agree on 2 & 3 being promising too. One of the first event models I came up with for my project was EA reading + a sermon or constructive debate related to the reading. It’s not cultish if there are no rites/titles/statements of faith/garb/iconography.
I mean defending America from Donald Trump and his forces who are currently waging war against America.
We might just fully agree. I don’t think EA ever offered career-long professional benefits to people specializing in specific cause areas that outweighed what cause-specific conferences offer (but please come teach community builders/members/young people about your work at EAG).
I think EA has always been for:
1. figuring out where you want to specialize, and
2. building/maintaining your knowledge and motivation around the world’s needs generally
The first is professionally relevant early in your career (or for generalists looking to lateral), but not so much later. The second is personal/social/intellectual, and perhaps a broad way that a specialist can give back by helping people working on the first thing.
If it is settled that AI is the thing to do, maybe point one has become irrelevant. I dispute this,[1] but less so than point two, which I think has strong independent value.
It may also be helpful context that I personally am not an expected utility maximizer. I’m doing my project because I want people to engage with EA arguments and then do what they want to do with them, as opposed to doing what I want them to do in a more superficial sense.
- ^
For example, it may just be critical to understand what else might be ITN in the world to understand why AI is important, or to think clearly about what its implications for welfare are. If those other problems aren’t really in the room or fully explored, it’s easy to miss crucial considerations. Similarly, what do we mean by “AI” and “settled?” Lots of EA epistemology can help here. Relatedly, the moral context of everything happening in the world can provide motivation that might otherwise be lacking.
At the core of my project is the idea that people can disagree and still cooperate.
I agree with you that the people who currently control the talent infrastructure that flows from CG, i.e., CEA and 80k, have for the most part become uninterested in views on cause prio that don’t buy into the TAI hypothesis. They are not, however, completely uninterested. As you say, they invite people working on global health, animals, and other causes to EAG; they support groups which discuss and invite speakers on these topics.
I understand that this is not much support in material terms compared to AI and this does stack the deck against non-AI causes for people making EA career choices. The question is what you want to do with it.
For my part, I am choosing to leverage that small amount of support to strengthen free-ranging discourse about how to do the most good. The bar may be higher for non-AI projects and people to get CG funding as they emerge from this discourse, but I don’t think it is insurmountable. Further, I hope people who engage with my project will use it as a launching pad to reach out to non-CG funders and non-”EA” collaborators for their non-AI projects.
Both of these will be more challenging, but I personally resolve to support people doing what they endorse doing based on how thoughtful and ambitious they are, where I measure neither of those things in terms of how much they agree with me or my funders in substance. It’ll be tough re bias, but idk, liberalism conquered the world last century, maybe it’ll do it again this century.
We’re looking at this more differently than I thought. The question “how does EA meet the needs of people with different worldviews” is strange to me. EA should be the place you go to *form* your worldview, by learning about and comparing different perspectives. Whatever has caused this framing to seem tricky/unnatural is the thing I’m pushing back against.
I have a similar take on TAI skepticism, with some added (perhaps excessively charitable) concerns around how economic value gets created in the first place and what hurdles there are between current AI systems and creating that value.
Yes. I am interested in outcomes at least to the extent that I will regard myself as having failed a sanity check if very few people go on to do ambitious, impactful work after engaging with my events/programs/groups. But I am firmly committed to being permissive about what counts here. Thoughtfulness about high impact is the bar, not my EV calculation of impact.
To put it in the form of a critique, I think too many community building programs adopt metrics like “number of participants who go into roles at AIM, MATS, GovAI, etc.” and that this is too prescriptive and discourages people from really forming their own world models in an EA context.
My metric is whether I’m impressed with the pushback I get on my takes when I go into these spaces or whether I’m learning new and plausibly very important things about big problems.
The framing of your question suggests EA’s role is to prescribe actions. I think EA is centrally a question and a set of abstract tools for understanding the world’s needs. Using those tools will take different people in different directions. I want to support people using the tools well and I resolve not to judge how well people use the tools based on the specific conclusions they draw.
In particular, I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis. Likewise, I think that many people who wish to do good in the world reject the TAI hypothesis for bad reasons that they would regret on reflection. Where do people go to correct these errors and build better models of which actions are best to take all things considered?
I don’t know of a better venue than the best pockets of the current EA community. I want to make those pockets bigger!
I think your read is basically right. Thinking explicitly and granularly about the direct chain from your actions to last-mile impact and being sensitive to perturbations of that measurement is one area I think many current orgs over-invest in. I believe it is inconsistent with the processes that created those orgs in the first place (which I’m now trying to replicate without much focus on the direct, measurable outputs).
The biggest issue I see is people spending up to 20% of their best hours getting bogged down in metrics and explicit planning when they could be spending much of that time doing things they’re excited about which they’ve done a quick sense-check on.
I think this is one of the great strengths of liberal, big-tent projects. Support plausibly great people all playing to their strengths. Some of them will disappoint and under-perform your hyper-planned model, sure, but the over-performers will more than make up for it. I want to embody this principle in my org and the groups we support.
One tension that CEA laudably attempts to navigate is that EA is actually not self-recommending. There are worlds where dwelling on prioritization and personal-morality questions just isn’t that impactful. We may live in such a world given the urgency of addressing transformative AI and other matters.
My read is that CEA feels compelled to take views and allocate resources based on these considerations. In part, it’s important to them that users of their programs take jobs or actions from a specific subset of jobs/actions in order to count as “successes” by CEA’s lights.
My tack is to really tie myself to the mast regarding getting people to engage with EA ideas for their own sake. We’ll pursue this with vigor and be intellectually challenging, but when it comes to what people *do* with these ideas, the chips will fall where they may. I anticipate that I’ll pay my impact bills this way, but I’m not maximizing impact. I’m maximizing EA ideas.
I thought you might catch that last one. I hope you took it personally.
Effective Altruism Will Be Great Again
I’ll need to reread Scott’s post to see how reductive it is,[1] but negotiation and motivated cognition here do feel like a slightly lower level of abstraction, in the sense that they are composed of different kinds (and proportions) of conflicts and mistakes. The dynamics you discuss here follow pretty intuitively from the basic conflict/mistake paradigm.
This is still great analysis and a useful addendum to Scott’s post.
- ^
actually pretty reductive on a skim, but he does have a savings clause at the end: “But obviously both can be true in parts and reality can be way more complicated than either.”
The most charitable explanation of the tension here is that people just disagree with you about what is most impactful. I appreciate your transparency in considering whether aesthetics and nostalgia for a previous era of EA might be driving your unease.
Ultimately, it is better to debate the merits of specific interventions than general vibes. I think even the Anthropic folks would agree that, e.g., moving to SF is purely instrumental to some more specific theory of change that may or may not have merit.