Thinking, writing, and tweeting from Berkeley, California. Previously, I ran programs at the Institute for Law & AI, worked on the one-on-one advising team at 80,000 Hours in London, and practiced as a patent litigator at Sidley Austin in Chicago.
Mjreard
No on appalled; no on oversaturated; yes on being clear that AIS projects are looking for the ultra-talented, but be mindful of how hard it is to be well calibrated on who is ultra-talented, including yourself.
In my experience, a sizable majority of applicants in pools this large are both plainly unqualified and don’t understand the mission of AIS orgs. You shouldn’t assume most, or even many, resemble you or other people on the EA Forum.
Everyone should be open to not being a fit for many projects, and to the idea that better candidates are out there. I wish, for the world’s sake, that I become unhirable!
Interesting exchange there. I agree that the vision should be to have EA so in-the-water that most people don’t realize they’re doing “Effective Altruism.” I’m very uncertain about how you get from here to there. I doubt it makes sense to shrink or downplay the existing EA community. My intuition is you want to scale up the memes at every level. Right now we’re selling everything to buy more AI safety memes. It’s going okay, but it’s risky and I’m conscious of the costs to everything else.
Specifically inspired by Mechanize’s piece on technological determinism. It seems overstated, but I wonder what the altruistic thing to do would be if they were right.
I don’t think it’s wise to redefine what should count as an ultra-processed food to suit your agenda. When people talk about UPFs, they’re clearly concerned about chemical additives like emulsifiers, stabilizers, colorings, and artificial sweeteners. As unlike wild chickens as today’s broiler chickens are, they don’t contain those things and don’t pose the same health risks that those things do. In that light, I found this discrediting:
> For now I’ll just note that people are confused about processing — and the “experts” are part of the problem. That same 2024 survey of 10,000 Europeans, run by the EU- and FAO-backed EIT Food group, categorized chicken as “unprocessed” and plant-based chicken as “ultra-processed.” But both products come from processing plants. And both start as soybeans, corn, and additives. The main difference is that the chicken version includes an extra layer of “unnatural” processing — inside the stomach of a Franken-chicken confined in an animal factory.
I hope others don’t repeat it. It may well be that the general concept of UPFs is nonsense, and if so, people should argue for that, or argue that Beyond/Impossible doesn’t qualify under any reasonable definition. But people know what they have in mind when they use the term, and they’ll know you’re not respecting their concerns when you try to be clever like this.
12 Theses on EA
As Huw says, the video comes first. I think this puts almost anything you’d be excited about off the table. Factory farming is a really aversive topic for people, and people are quite opposed to large-scale WAS interventions. The intervention featured in the video he did make wasn’t chosen at random: people like charismatic megafauna.
The relationship between how fun your movement is for participants and its overall effectiveness is non-linear. You need to offer selfish rewards for (most) people to join. Offer too few and you’re going to have a small, ineffective movement, no matter how good your ideas are or how high the quality of the few people you do have.
I agree that entertaining EAs has no terminal value, but it has huge instrumental value. Few people seem to be trying to make EA interesting and fun on that score. People’s main experience with EA seems to be getting pitched on jobs and then not getting them. Not fun!
There’s real work to be done getting people excited to earn to give and spread the ideas in their spare time. Done well enough, it can even improve your direct work talent pool!
I’m surprised at the “on during presentations, off during 1:1s” advice. My intuition is the opposite, because of the volume of droplets and aerosols directed right at you by a speaking person in a 1:1. That seems more dangerous than sitting in a quiet room with many people lightly breathing through their noses, not directed at anyone. If you do all your 1:1s outside, I can see how this flips, but maybe you should say the recommendation depends on that.
This is assuming you go to 3-4 presentations and have ~20 1:1s.
The real solution, of course, is for ASB to provide us with 500 of those chlorine misters.
Free-Wheeling, Personality-Driven EA Conversations
I keep coming back to Yeats on this topic:
“The best lack all conviction while the worst are filled with passionate intensity”
I think the exceptionally truth-seeking, analytical, and quantitative nature of EA is virtuous, but those virtues too easily translate into a culture of timidity if you don’t consciously promote boldness.
Conceptually, Julia Galef talks about pairing social confidence with epistemic humility in The Scout Mindset. It doesn’t come naturally, but it is possible and valuable when done well.
Right now I think Nicholas Decker is a great embodiment of this ethos. He says what he thinks without fear or social hesitation. He’s not always right, and he flagrantly runs afoul of what’s considered socially acceptable or what a PR consultant would tell him to do, but there’s no mistaking his good nature and his self-assurance that he’s on the right side of history, because generally *he is.* He doesn’t make the perfect the enemy of the good or excessively play it safe to avoid criticism.
A “bring on the haters” attitude is in fact more welcoming and trust-inducing than words carefully crafted to minimize criticism, because it defeats the concern that you’re hiding something. And come on, friends: the stuff you’re “hiding” in EA’s case – veganism, shrimp, future generations, etc. – is nothing to be ashamed of. And when you soft-pedal it, you’re endorsing the social sanction on these things as weird. Fuck that. Hit back. With grace. And pride. Fire the PR consultant in your head.
Amsterdam seems like the best one to look into given your background. EA Netherlands has long seemed strong to me, and they regularly hold EAGxes there. Prague and Berlin will generally be quite strong too.
I’d be doing less good with my life if I hadn’t heard of effective altruism
Subjectively, I think I have done a lot more good because of EA, but I have doubts about the potentially negative sign of AI safety work (have we contributed to AI hype? Delayed beneficial AI? Something else?) and about cluelessness. On cluelessness, my alternative was a very causally minimal life of ~never leaving the house. I still don’t leave the house, but lots of what I send out of the house over the internet is more effectual than it would otherwise be.
Re commuter schools, the argument seems just as strong in principle, because the would-be organizer’s opportunity cost is proportionately lower. In practice, if that organizer is reading this post, there’s a good chance they’re a big-enough fish in a small-enough pond that they should focus on their individual development, so your point might hold nonetheless.
Maybe something that spans all the cruxes here is that there are very low-effort ways to run a group and capture a big part of the value. If no one else is doing it, it’s well worth it to text the 3-4 interested people you know and substitute a group meeting for a general hangout once a month.
Points 4-6 seem like compelling reasons to discount the intersection of AI and animals work (which is what this post is addressing), because AI won’t be changing what’s important for animals very much in those scenarios. I don’t think the post makes any comment on the value of current, conventional animal welfare work in absolute terms.
> We wanted to start our new channel with a compelling story that viewers can sink their teeth into, and that a wide audience would have reason to watch, even if they don’t yet know who we are or trust our viewpoints yet. (We think a video about “Why AI might pose an existential risk”, for example, might depend more on pre-existing trust to succeed.)
Can you say more about this? I largely have the opposite intuition: that presenting a specific set of empirical predictions (indeed “a story”) requires more – rather than less – trust in the presenter as compared to a more abstract model with its assumptions and alternative explanations explicitly stated.
Really nice post, and it completely resonates with my experience.
Dealing with squeaky wheels is especially challenging once you’re in a position where it’s known you have lots of good connections. There are just a lot of people very committed to their own advancement for its own sake who will nominally jump through whatever hoops you put in front of them even though you doubt they will put the connections they want to good use for the world. Getting good at saying no without being mean or cagey is a real skill.
The Tiny Flicker of Altruism
Thank you for praising my new hobby, Ozzie.
As an expansion on point 7: consistent, legible, self-motivated output on topics that matter is a huge signal of value in intellectual work. A problem with basically all hiring is that the vast majority of people just want to “get through the day” in their work rather than push for excellence (or even just improvement). Naturally, in hiring, you’re trying to select for people who will care intrinsically about the quality × quantity of their outputs. There are few stronger signals of that than someone consistently doing something that looks like the work in their personal time for ~no (direct) reward.
Also, if you’re worried you’re not good enough, you’re probably right, but the only way to get good is to start writing bad stuff and make it better. I wrote the first post of my meh blog on this topic to keep me going. It’s sort of helped.
So much big picture, so few details
Yep. I agree there’s too great a default towards optimism and that some people are wasting their time as a result.
Based on the numbers I saw when I worked on hiring, I’d say something like 125-200 of the GovAI applicants were determined to work in AIS, properly conceived, as the primary thing they wanted to do. FIG is harder to guess, but I wouldn’t be surprised if they just got added to some popular lists of random/”easy” internship opportunities and got flooded with totally irrelevant applications.