Thinking, writing, and tweeting from Berkeley, California. Previously, I ran programs at the Institute for Law & AI, worked on the one-on-one advising team at 80,000 Hours in London, and practiced as a patent litigator at Sidley Austin in Chicago.
Mjreard
Seems like AGI will lead to ASI and ASI will show us more valuable ways to use all the land and matter that currently support animal suffering. The ways we use those probably won’t involve animals or suffering at all.
Good breakdown. I agree on 2 & 3 being promising too. One of the first event models I came up with for my project was an EA reading + a sermon or constructive debate related to the reading. It’s not cultish if there are no rites/titles/statements of faith/garb/iconography.
I mean defending America from Donald Trump and his forces who are currently waging war against America.
We might just fully agree. I don’t think EA ever offered career-long professional benefits to people specializing in specific cause areas that outweigh those of cause-specific conferences (but please come teach community builders/members/young people about your work at EAG).
I think EA has always been for:
figuring out where you want to specialize, and
building/maintaining your knowledge and motivation around the world’s needs generally
The first is professionally relevant early in your career (or for generalists looking to lateral), but not so much later. The second is personal/social/intellectual and perhaps a broad way that a specialist can give back by helping people working on the first thing.
If it is settled that AI is the thing to do, maybe point one has become irrelevant. I dispute this,[1] but less so than point two, which I think has strong independent value.
It may also be helpful context that I personally am not an expected utility maximizer. I’m doing my project because I want people to engage with EA arguments and then do what they want to do with them, as opposed to doing what I want them to do in a more superficial sense.
[1] For example, it may just be critical to understand what else might be ITN in the world to understand why AI is important, or to think clearly about what its implications for welfare are. If those other problems aren’t really in the room or fully explored, it’s easy to miss crucial considerations. Similarly, what do we mean by “AI” and “settled?” Lots of EA epistemology can help here. Relatedly, the moral context of everything happening in the world can provide motivation that might otherwise be lacking.
At the core of my project is the idea that people can disagree and still cooperate.
I agree with you that the people who currently control the talent infrastructure that flows from CG, i.e. CEA and 80k, have for the most part become uninterested in views on cause prio that don’t buy into the TAI hypothesis. They are not, however, completely uninterested. As you say, they invite people working on global health, animals, and other causes to EAG; they support groups which discuss and invite speakers on these topics.
I understand that this is not much support in material terms compared to AI and this does stack the deck against non-AI causes for people making EA career choices. The question is what you want to do with it.
For my part, I am choosing to leverage that small amount of support to strengthen free-ranging discourse about how to do the most good. The bar may be higher for non-AI projects and people to get CG funding as they emerge from this discourse, but I don’t think it is insurmountable. Further, I hope people who engage with my project will use it as a launching pad to reach out to non-CG funders and non-”EA” collaborators for their non-AI projects.
Both of these will be more challenging, but I personally resolve to support people doing what they endorse doing based on how thoughtful and ambitious they are, where I measure neither of those things in terms of how much they agree with me or my funders in substance. It’ll be tough re bias, but idk, liberalism conquered the world last century, maybe it’ll do it again this century.
We’re looking at this more differently than I thought. The question “how does EA meet the needs of people with different worldviews” is strange to me. EA should be the place you go to *form* your worldview, by learning about and comparing different perspectives. Whatever has caused this framing to seem tricky/unnatural is the thing I’m pushing back against.
I have a similar take on TAI skepticism, with some added (perhaps excessively charitable) concerns around how economic value gets created in the first place and what hurdles there are between current AI systems and creating that value.
Yes. I am interested enough in outcomes that I will regard myself as having failed a sanity check if very few people go on to do ambitious, impactful work after engaging with my events/programs/groups. But I am very committed to being permissive about what counts here. Thoughtfulness about high impact is the bar, not my EV calculation of impact.
To put it in the form of a critique, I think too many community building programs adopt metrics like “number of participants who go into roles at AIM, MATS, GovAI, etc.” and that this is too prescriptive and discourages people from really forming their own world models in an EA context.
My metric is whether I’m impressed with the pushback I get on my takes when I go into these spaces or whether I’m learning new and plausibly very important things about big problems.
The framing of your question suggests EA’s role is to prescribe actions. I think EA is centrally a question and a set of abstract tools for understanding the world’s needs. Using those tools will take different people in different directions. I want to support people using the tools well and I resolve not to judge how well people use the tools based on the specific conclusions they draw.
In particular, I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis. Likewise, I think that many people who wish to do good in the world reject the TAI hypothesis for bad reasons that they would regret on reflection. Where do people go to correct these errors and build better models of which actions are best to take all things considered?
I don’t know of a better venue than the best pockets of the current EA community. I want to make those pockets bigger!
I think your read is basically right. Thinking explicitly and granularly about the direct chain from your actions to last-mile impact and being sensitive to perturbations of that measurement is one area I think many current orgs over-invest in. I believe it is inconsistent with the processes that created those orgs in the first place (which I’m now trying to replicate without much focus on the direct, measurable outputs).
The biggest issue I see is people spending up to 20% of their best hours getting bogged down in metrics and explicit planning when they could be spending much of that time doing things they’re excited about which they’ve done a quick sense-check on.
I think this is one of the great strengths of liberal, big-tent projects. Support plausibly great people all playing to their strengths. Some of them will disappoint and under-perform your hyper-planned model, sure, but the over-performers will more than make up for it. I want to embody this principle in my org and the groups we support.
One tension that CEA laudably attempts to navigate is that EA is actually not self-recommending. There are worlds where dwelling on prioritization and personal-morality questions just isn’t that impactful. We may live in such a world given the urgency of addressing transformative AI and other matters.
My read is that CEA feels compelled to take views and allocate resources based on these considerations. In part, it’s important to them that users of their programs take jobs or actions from a specific subset of jobs/actions in order to count as “successes” by CEA’s lights.
My tack is to really tie myself to the mast regarding getting people to engage with EA ideas for their own sake. We’ll pursue this with vigor and be intellectually challenging, but when it comes to what people *do* with these ideas, the chips will fall where they may. I anticipate that I’ll pay my impact bills this way, but I’m not maximizing impact. I’m maximizing EA ideas.
I thought you might catch that last one. I hope you took it personally.
Effective Altruism Will Be Great Again
I’ll need to reread Scott’s post to see how reductive it is,[1] but negotiation and motivated cognition here do feel like a slightly lower level of abstraction, in the sense that they are composed of different kinds (and proportions) of conflicts and mistakes. The dynamics you discuss here follow pretty intuitively from the basic conflict/mistake paradigm.
This is still great analysis and a useful addendum to Scott’s post.
[1] Actually pretty reductive on a skim, but he does have a savings clause at the end: “But obviously both can be true in parts and reality can be way more complicated than either.”
Slides are quite good. Maybe this is somewhat played out, but liberal influencers should take up a tack of commenting on how right-wing influencers’ claims just don’t square with people’s everyday lives. Like “have you really seen cartels murdering people in your neighborhood? Or is that just something people on the internet are talking about?”
I agree. Basically anyone not in a politically sensitive role (this category is broader than it might intuitively seem) should be looking to make large donations in this area now and others should be reaching out to EAs focused on US politics if they feel well equipped to run or contribute to a high leverage project.
Unfortunately there is no AMF/GiveDirectly for politics, and most things you can donate to are very poorly leveraged. Likewise, it is hard to both scope a leveraged project and execute well on it. I know of one general exception at the moment, which I’m happy to recommend privately.
I’m also happy to speak to anyone who intends to devote considerable money or work resources to this and pass them along to the people doing the best work here if that makes sense.
Over the last decade, we should have invested more in community growth at the expense of research.
It’s hard to be very confident on this question because it would mean second-guessing a pretty marked success, but it does seem like 1) we’re short of the absolute talent/power threshold big problems demand and 2) energy/talent/resources have been sucked out of good growth engines multiple times in the past decade.
I agree this is quite bad practice in general, though see my other comment for why I think these are not especially bad cases.
A central error in these cases is assuming audiences will draw the wrong inferences from your true view and do bad things because of that. As far as I can tell, no one has full enough command of the epistemic dynamics here to be able to say that with confidence and then act on it. If you aren’t explicit and transparent about your reasoning, people can make any number of assumptions; others can poke holes in your less-than-fully-endorsed claim and undermine it or your credibility; and people can use that to justify all kinds of things.
You need to trust that your audience will understand your true view or that you can communicate it properly. Any alternative assumption is speculation whose consequences you should feel more, not less, responsible for, since you decided to mislead people for the sake of the consequences rather than simply being transparent and letting the audience take responsibility for how they react to what you say.
I think people who do the bad version of this often have this ~thought experiment in mind: “my audience would rather I tell them the thing that makes their lives better than the literal content of my thoughts.” As a member of your audience, I agree. I don’t, however, agree with the subtly altered, but more realistic version of the thought experiment: “my audience would rather I tell them the thing that I think makes their lives better than the literal content of my thoughts.”
I agree that people should be doing a better job here. As you say, you can just explain what you’re doing and articulate your confidence in specific claims.
The thing you want to track is confidence*importance. MacAskill and Ball do worse than Piper here. Both of them were making fundamental claims about their primary projects/areas of expertise, and all claims in those two areas are somewhat low confidence, so people adjust their expectations for that.
MacAskill and Ball both have defenses too. In MacAskill’s case, he’s got a big body of other work that makes it fairly clear DGB was not a comprehensive account of his all-things-considered views. It’d be nice to clear up the confusion by stating how he resolves the tension between different works of his, but the audience can also read them and resolve the tension for themselves. The specific content of William MacAskill’s brain is just not the thing that matters, and it’s fine for him to act that way as long as he’s not being systematically misleading.
Ball looks worse, but I wouldn’t be surprised if he alluded to his true view somewhere public and he merely chose not to emphasize it so as to better navigate an insane political environment. If not, that’s bad, but again there’s a valid move of saying “here are some rationales for doing X” that doesn’t obligate you to disclose the ones you care most about, though this is risky business and a mild negative update on your trustworthiness.
Contrarian marketing like this seems like it would only work well if the thing being opposed was extremely well known, which I don’t think Veganuary is.
I would have predicted the positive press and basically think this would “work” today if these conditions were met:
charismatic criminal (art thieves! maybe hackers like Anonymous)
ransom made to a powerful, disliked entity (governments, specific well-known billionaires)
for a well-known cause that’s widely regarded as worthy (hurricane/typhoon relief, childhood cancer research, etc.)
I agree with you on the overall downsides though. This sets a bad precedent that will be misused by many and burn a ton of social trust that is ultimately more important.