Doing calls, lead gen, application reviewing, and public speaking for the 80k advising team
Mjreard
Your feedback for Actually After Hours: the unscripted, informal 80k podcast
It seems like some discussion of s-risks is called for, since they appear to be assumed away here even though many longtermists are concerned about them.
“Leadership” and “eco-systems” sound very nice as far as they’re described here, but I find this post unhelpful as a guide to what “EA” should do.
Assuming this post is addressing EA funders – rather than the collection of diverse, largely uncoordinated people, organizations, and perspectives that ‘EA’ is – is the claim that funders should open 20 of these offices? Who do they pay to do that and to apply the “high standards” for early membership? What are the standards? Should people have models of the world that distinguish good/bad opportunities and big/small ones? At what point does answering these questions become too analytical?
“Find all the smart altruistic people, point them to each other, give them some money, and let them do what they want” sounds nice, but aren’t there hundreds or thousands of organizations interested in funding various projects, not least the whole VC industry? My sense is that not analyzing why you might be the funder of last resort, at least a little bit, is a recipe to crash and burn very quickly. $1m/yr/office could feed a handful of people and keep the lights on, but it’s not scaling any projects. “EA” doesn’t have enough money to last long without a lot of analysis, and it’s only been around for ~10 years.
People with diverse, niche interests and moxie have had really outsized influence on the world. It’s easy to say “go find them,” but the ones who will actually make a difference are very few and far between, and it takes some analysis to find them. There are a million people in Port-au-Prince and probably hundreds of discernible perspectives on how to make things better there; multiply that across other localities. The Future Fund has 30 categories of ideas they want to pursue. Maybe that’s “too small,” but they’re largely unaddressed and really big in scale. If they wanted to count all wins as equal, I don’t doubt they could rack up a lot of very concrete wins and cool stories, but that seems to be what… all the rest of philanthropy is doing. And I’m glad they are!
There’s an undergrad econ thing where burning a dollar lowers the price level for all other dollar holders and increases their welfare, but everyone thinks you can do better than that by being more discerning. So just saying “more causes/ideas!” isn’t really helpful without some limiting principle.
EAGx Boston 2018 Postmortem
This interview with Obama that Allan Dafoe once pointed to is pretty instructive on these questions. On reflection, reasonable government actors see the case; it’s just really hard to prioritize given short-run incentives (“maybe in 20 years the next generation will see things coming and deal with it”).
My basic model is that government actors are all tied up by the stopping problem. If you ever want to do something good, you need to make friends and win the next election. The voters and potential allies are even more short-termist than you. Availability bias explains why people would care about private nuclear weapons. Superintelligence codes as dinner party/dorm room chat. It will sell books, but it’s not action-relevant.
Nice punchy writing! I hope this sparks some interesting, good faith discussions with classmates.
I think a powerful thing to bring up re earning to give is how it can strictly dominate some other options. e.g. a 4th or 5th year biglaw associate could very reasonably fund two fully paid public defender positions with something like 25-30% of their salary. A well-paid plastic surgeon could fund lots of critical medical workers in the developing world with less.
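Here’s a rough sketch of that arithmetic; every dollar figure below is an illustrative assumption of mine, not a number from any source:

```python
# Illustrative BOTEC; every figure below is an assumption for the sake of the example.
biglaw_salary = 450_000        # assumed rough 4th/5th-year big-firm associate compensation
donation_fraction = 0.28       # the 25-30% range mentioned above
public_defender_cost = 60_000  # assumed all-in salary for one public defender position

donation = biglaw_salary * donation_fraction
positions_funded = donation / public_defender_cost

print(f"Donating ${donation:,.0f}/yr funds roughly {positions_funded:.1f} public defender positions")
```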
One important thing to keep in mind when you have these chats is that there are better options; they’re just harder to carve out and evaluate. One toy model I play with is entrepreneurship. Most people inclined towards working for social good have a modesty/meekness about them where they just want to be a line-worker standing shoulder-to-shoulder with people doing the hard work of solving problems. This suggests there might be a dearth of prosocially motivated people looking to build, scale, and, importantly, sell novel solutions.
As you point out, there are a lot of rich people out there. Many/most of them just want to get richer, sure, but lots of them have foundations or would fund exciting/clever projects with exciting leaders, even if there wasn’t enormous (or any) profitability in it. The problem is a dearth of good prosocial ideas – which Harvard students seem well positioned to spin up: you have four years to just think and learn about the world, right? What projects don’t exist yet but should? Figure it out instead of soldiering away for existing things.
Curious if you’ve seen or could share BOTECs on the all-in cost per retreat?
Naïvely, people like to benchmark 5% of property value per year as the all-in cost of ownership alone (so ~$750k/yr here? Really not sure how this scales to properties like Wytham).
I wonder how that compares to the savings in variable retreat costs. Like, if you had 20 retreats/yr, are you saving (close to?) $37,500 per retreat in avoided venue costs (assuming $750k is the right number)? Accommodation for 25 people for 4 nights in Oxford could plausibly be ~$20k itself, so it seems like with a given number of retreats or attendees you could get quite close, but the numbers matter here.
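For concreteness, this is the kind of comparison I have in mind; the property value and rental figures are assumptions for illustration, not actuals for any particular venue:

```python
# Illustrative BOTEC; the numbers are assumptions, not actuals for any particular venue.
property_value = 15_000_000                    # implied by ~$750k/yr at the 5% benchmark
annual_ownership_cost = 0.05 * property_value  # the 5%-of-value all-in ownership benchmark

retreats_per_year = 20
ownership_cost_per_retreat = annual_ownership_cost / retreats_per_year  # ~$37.5k

rental_cost_per_retreat = 20_000  # assumed accommodation for ~25 people, 4 nights in Oxford

print(f"Amortized ownership cost: ~${ownership_cost_per_retreat:,.0f} per retreat")
print(f"Avoided rental cost:      ~${rental_cost_per_retreat:,.0f} per retreat")
```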
For what it’s worth, I think you shouldn’t worry about the first two bullets. The way you as an individual or EA as a community will have big impact is through specialization. Being an excellent communicator of EA ideas is going to have way bigger and potentially compounding returns than your personal dietary or donation choices (assuming you aren’t very wealthy). If stressing about the latter takes away from the former, that seems like a mistake to be worried about.
I also shouldn’t comment without answering the question:
I balk at thorny or under-scoped research/problems that could be very valuable
It feels aversive to dig into something without a sense of where I’ll end up or whether I’ll even get anywhere
If there’s a way I can bend what I already know/am good at into the shape of the problem, I’ll do that instead
One way this happens is that I only seek out information/arguments/context that are legible to me – specifically more big-picture/social-science-oriented things like Holden, Joe Carlsmith, or Carl Shulman – even though understanding whether the technical aspects of AI alignment/evals make sense is a bigger and unduly under-explored crux for understanding what matters
I fail to be a team player in a lot of ways.
I have my own sense of what my team/org’s priorities should be
I expect others around me to intuit and adopt these priorities with minimal or no communication
When we don’t agree or reach consensus and there’s a route for me to avoid resolving the tension, I take the avoidant route. Things that I don’t think are important, but others do, don’t happen
If you haven’t, you should talk to the 80k advising team with regard to feedback. We obviously aren’t the hiring orgs ourselves, but I think we have reasonable priors on how they read certain experiences and proposals. We’ve also been through a bunch of EA hiring rounds ourselves and spoken to many, many people on both sides of them.
If the lives of pests are net negative,* I think a healthy attitude is to frame your natural threat/disgust reaction to them as useful. The pests you see now are a threat to all the future pests they will create. Preventing the suffering of those future creatures requires that the first ones don’t live to create them. Our homes are fertile breeding grounds for enormous suffering. I think creating these potential breeding grounds gives us a responsibility to prevent them from realizing that potential.
I take the central (practical) lesson of this post to be that that responsibility should spark some urgency to act and overcome guilt when we notice the first moth or mouse. We’ve already done the guilty thing of creating this space and not isolating it. The only options left are between more suffering and less.
Thank you for the post!
*I mean this broadly, to include both cases where their lives are net negative in the intervention-never scenario and (more likely) scenarios like these, where the ~inevitable human intervention might make them that way.
Perhaps surprisingly (and perhaps not as relevant to this audience): take cause prioritization seriously, or more generally, have clarity about your ultimate goals/what you’ll look to to know whether you’ve made good decisions after the fact.
It’s very common that someone wants to do X, I ask them why, they give an answer that doesn’t point to their ultimate priorities in life, I ask them “why [thing you pointed to]?”, and they more or less draw a blank/fumble around uncertainly. Granted, it’s a big question, but it’s your life – have a sense of what you’re trying to do at a fundamental level.
Tiny nit: I didn’t and don’t read much into the 80k comment on liking nice apartments. It struck me as the easiest way to disclose (imply?) that he lived in a nice place without dwelling on it too much.
Don’t be too fixated on instant impact. Take good opportunities as they come of course, but people are often drawn towards things that sound good/ambitious for the problems of the moment even though they might not be best positioned to tackle those things and might burn a lot of future opportunities by doing so. Details will vary by situation of course.
Speaking just to the little slice of the world I know:
Using a legal research platform (e.g. Westlaw, LexisNexis, Casetext) could be really helpful with several of these. If you’re good at thinking up search terms and analogous products/actors/circumstances (3D-printed firearms, banned substances, and squatting on patents are good examples here), there’s basically always a case where someone wasn’t happy someone else was doing X, so they hired lawyers to figure out which laws were implicated by X and filed a suit/indicted someone, usually on multiple different theories/laws.
The useful part is that courts will then write opinions on the theories which provide a clear, lay-person explanation of the law at issue, what it’s for, how it works, etc. before applying it to the facts at hand. Basically, instead of you having to stare at some pretty abstract, high-level words in a statute/rule and imagine how they apply, a lot of that work has already been done for you, and in an authoritative, citable way. Because cases rely on real facts and circumstances, they also make things more concrete for further analogy-making to the thing you care about.
Downside is these tools seem to cost at least ~$150/mo, but you may be able to get free access through a university or find other ways to reduce this. Google Scholar case law is free, but pretty bad.
I was confused by this:
There were lots of explicit statements that no, of course we did not mean that and of course we do not endorse any of that, no one should be doing any of that. And yes, I think everyone means it. But it’s based on, essentially, unprincipled hacks on top of the system, rather than fixing the root problem, and the smartest kids in the world are going to keep noticing this.
You seem to be saying that Sam was among the smartest and so saw that these were unprincipled hacks and ignored them, yet the rest of the post goes into great detail on how profoundly stupid Sam was for buying into such a naive, simplistic world model that very predictably failed on its own terms. I would expect the smartest people in the world to make better predictions and use better models.
I interpret EA as common sense morality+. Abiding by common sense morality will often leave you with a lot of time and energy left over. If you want to do more good, you should use those to increase welfare, and do so in a scientific and scope-sensitive way. Is that clearly not EA or an unprincipled hack?
Openness to working in existential risk mitigation is not a strict requirement for having a call with us, but it is our top priority and the broad area we know and think most about. EA identity is not at all a requirement outside the very broad bounds of wanting to do good and being scope-sensitive about that good. Accordingly, I think it’s worth the 10 minutes to apply if you 1) have read/listened to some 80k content and found it interesting, and 2) have some genuine uncertainty about your long-run career. I think 1) + 2) describe a broad enough range of people that I’m not worried about our potential user base being too small.
So, depending on how you define EA, I might be fine with our current messaging. If people think you need to be a multiple-EAG attendee who wears the heart-lightbulb shirt all the time to get a call, that would be a problem and I’d be interested to know what we’re doing to send that message. When I look at our web content and YouTube ads for example, I’m not worried about being too narrow.
1-3 seem good for generating more research questions like ASB’s, but the narrower research questions are ultimately necessary to get to impact. 4-8 seem like things EA is over-invested in relative to what ASB lays out here, not that more couldn’t be done there.
People are often surprised that full-time advisors only do ~400 calls/year as opposed to something like 5 calls/day (i.e. ~1,300/yr). For one thing, my BOTEC on the average focus time for an individual advisee is 2.25 hours (between call prep, the call itself, post-call notes/research on new questions, introduction admin, and answering follow-up emails). Beyond that, we have to keep up with what’s going on in the world and the job markets we track, as well as skilling up as generalist advisors. There are also more formal systems we need to contribute to, like marketing, impact assessment, and maintaining the systems that get us all the information we use to help advisees and keep that 2.25 hours at 2.25 hours.
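To put rough numbers on where the hours go (the total working-hours figure below is my own assumption, just for illustration):

```python
# Illustrative sketch of advisor time; the working-hours figure is an assumption.
hours_per_advisee = 2.25       # prep + call + notes/research + intro admin + follow-ups
calls_per_year = 400

advisee_facing_hours = hours_per_advisee * calls_per_year   # ~900 hours/yr

working_hours_per_year = 46 * 40   # assumed ~46 working weeks at 40 hours/week
remaining_hours = working_hours_per_year - advisee_facing_hours

print(f"Advisee-facing time: ~{advisee_facing_hours:.0f} hours/yr")
print(f"Left for keeping up with the world, upskilling, and org systems: ~{remaining_hours:.0f} hours/yr")
```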
As I understand it, [part of] Anthropic’s theory of change is to be a meaningful industry player so its safety agenda can become a potential standard to voluntarily emulate or adopt in policy. Being a meaningful industry player in 2024 means having desirable consumer products and advertising them as such.
It’s also worth remembering that this is advertising. Claiming to be a little bit better on some cherry picked metrics a year after GPT-4 was released is hardly a major accelerant in the overall AI race.
I appreciate the care and detail here, but would guess that wild animals dwarf everything considered here and present a much more difficult + important question.
How bad forests are per unit of land vs. the corn/soy/wheat fields or cattle ranches that have been replacing them seems like a key question.