Building out research fellowships and public-facing educational programming for lawyers
80k will sponsor conference trips for 10 people who refer others to 80k advising
Things downstream of OpenPhil are in the 90th+ percentile of charity pay, yes, but why do people work in the charity sector? Either because they believe in the specific thing (i.e. they are EAs) or because they want the warm glow of working for a charity. Non-EA charities offer more warm glow, but maybe there’s a corner of “is a charity” and “pays well for a charity even though people in my circles don’t get it” that appeals to some. I claim it’s not many, and that EA jobs are hard to discover for the even smaller population of people who have preferences like these and are highly competent.
Junior EA roles sometimes pay better than market alternatives in the short run, but I believe high-potential folks will disproportionately track lifetime earnings rather than short-run pay and do something that builds better career capital.
I’d guess the biggest con is adverse selection. Why is this person accepting a below-market wage at a low-conventional-status organization?
An EA might be the most conventionally talented candidate because they’re willing to take the role despite these things.
Agree with the analysis and quite likely to take the Off the Clock suggestion. Thank you!
I think your current outlook should be the default for people who engage on the forum or agree with the homepage of effectivealtruism.com. I’m glad you got there and that you feel (relatively) comfortable about it. I’m sorry that the process of getting there was so trying. It shouldn’t be.
It sounds like the tryingness came from a social expectation to identify as capital ‘E’, capital ‘A’ upon finding resonance with the basic ideas, and that identifying that way implied an obligation to support and defend every other EA person and project.
I wish EA were a question not of identity, but of behavior. Which actions are you choosing to take and why? Your reasons can draw on many perspectives at once and none needs to dominate. Even the question of “should I do meta EA work/advocacy?” can be taken this way, and I think something like it is what’s at stake in the “should I identify as EA?” question.
I personally do meta work and feel free to criticize particular EA-identifying people and organizations. I also feel free to quit this work altogether and be quieter about cause-spanning features of EA. If I quit and got quieter, I’d feel like that’s just another specific thing I did, rather than some identity line in the sand I’ve crossed. Maybe it’s because my boss sucked; maybe it’s because I felt compelled to focus on tax policy instead of GCRs, but it wouldn’t be because of my general feelings of worthiness or acceptance by the EA monolith.
A lot of what I’m saying is couched in the language of personal responsibility on your part for the kinds of identity questions you ask yourself, but I want to be clear that the salience of the identity question is also socially created by people asking questions in the direction of fidelity to ideas and their logical limits as a sort of status test, e.g. which utilitarian bullets you will or won’t bite. As much as we shouldn’t preoccupy ourselves with such questions, we shouldn’t preoccupy others with them either.
Your feedback for Actually After Hours: the unscripted, informal 80k podcast
If you haven’t, you should talk to the 80k advising team with regard to feedback. We obviously aren’t the hiring orgs ourselves, but I think we have reasonable priors on how they read certain experiences and proposals. We’ve also been through a bunch of EA hiring rounds ourselves and spoken to many, many people on both sides of them.
I think you’ve failed to think on the margin here. I agree that the broad classes of regulation you point to here have *netted out* badly, but this says little about what the most thoughtful and determined actors in these spaces have achieved.
Classically, Germany’s early 2000s investments in solar R&D had enormous positive externalities on climate, and the people who pushed for those didn’t have to also support restricting nuclear power. The option space for them was not “the net-bad energy policy that emerged” vs “libertarian paradise;” it was: “the existing/inevitable bad policies with a bet on solar R&D” vs “the existing/inevitable bad policies with no bet on solar R&D.”
I believe most EAs treat their engagement with AI policy as researching and advocating for narrow policies tailored to mitigate catastrophic risk. In this sense, they’re acting as an organized/expert interest group motivated by a good (and, per some polls, even popular) view of the public interest. They are competing with, rather than complementing, the more selfishly motivated interest groups seeking the kind of influence the oil & gas industry sought in the climate context. On your model of regulation, this seems like a wise strategy, perhaps the only viable one. Again, the alternative is not no regulation, but regulation that leaves out the best, most prosocial ideas.
To the extent you’re trying to warn EAs not to indiscriminately cheer any AI policy proposal on the assumption that it will help with x-risk, I agree with you. I don’t, however, agree that this reflects how they’re treating the issue.
Tiny nit: I didn’t and don’t read much into the 80k comment on liking nice apartments. It struck me as the easiest way to disclose (imply?) that he lived in a nice place without dwelling on it too much.
Yes, in general it’s good to remember that people are far from 1:1 substitutes for each other for a given job title. I think the “1 into 2” reasoning is a decent intuition pump for how wide the option space becomes when you think laterally, though, and that lateral thinking of course shouldn’t stop at earning to give.
A minor, not-fully-endorsed, object-level point: I think people who do ~one-on-one service work like (most) doctors and lawyers are much less likely to 10x the median than e.g. software engineers. With rare exceptions, their work just isn’t that scalable, and in many cases output is a linear return to effort. I think this might be especially true in public defense, where you sort of wear prosecutors down over a volume of cases.
Looks like the UK hardcover release isn’t until 21 May, but it’s available on Kindle? Is that right?
If the lives of pests are net negative,* I think a healthy attitude is to frame your natural threat/disgust reaction to them as useful. The pests you see now are a threat to all the future pests they will create. Preventing the suffering of those future creatures requires that the first ones don’t live to create them. Our homes are fertile breeding grounds for enormous suffering. I think creating these potential breeding grounds gives us a responsibility to prevent them from realizing that potential.
I take the central (practical) lesson of this post to be that that responsibility should spark some urgency to act and overcome guilt when we notice the first moth or mouse. We’ve already done the guilty thing of creating this space and not isolating it. The only options left are between more suffering and less.
Thank you for the post!
*I mean this broadly, to include both the case where their lives are net negative in the intervention-never scenario and the (more likely) scenarios like these, where ~inevitable human intervention might make them that way.
Nice punchy writing! I hope this sparks some interesting, good faith discussions with classmates.
I think a powerful thing to bring up re earning to give is how it can strictly dominate some other options. For example, a 4th- or 5th-year biglaw associate could very reasonably fund two fully paid public defender positions with something like 25-30% of their salary. A well-paid plastic surgeon could fund lots of critical medical workers in the developing world with less.
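For concreteness, here’s a minimal back-of-envelope sketch of that dominance point in Python. The salary, donation fraction, and public-defender cost are illustrative assumptions I’ve picked to match the rough claim, not sourced figures.

```python
# Illustrative back-of-envelope check of the "strict dominance" point above.
# All figures are assumptions chosen to match the rough claim, not sourced data.

biglaw_total_comp = 450_000    # assumed 4th/5th-year biglaw salary + bonus
donation_fraction = 0.275      # middle of the 25-30% range
public_defender_cost = 60_000  # assumed annual salary for one public defender role

annual_donation = biglaw_total_comp * donation_fraction
positions_funded = annual_donation / public_defender_cost

print(f"Annual donation: ${annual_donation:,.0f}")               # ~$124,000
print(f"Public defender roles funded: {positions_funded:.1f}")   # ~2.1
```

The point isn’t the exact ratio, just that plausible inputs get you to roughly two funded roles; swap in real salary and cost figures to check the claim for a specific case.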
One important thing to keep in mind when you have these chats is that there are better options; they’re just harder to carve out and evaluate. One toy model I play with is entrepreneurship. Most people inclined towards working for social good have a modesty/meekness about them where they just want to be a line worker standing shoulder-to-shoulder with people doing the hard work of solving problems. This suggests there might be a dearth of people with this outlook looking to build, scale, and, importantly, sell novel solutions.
As you point out, there are a lot of rich people out there. Many/most of them just want to get richer, sure, but lots of them have foundations or would fund exciting/clever projects with exciting leaders, even if there weren’t enormous (or any) profitability in it. The problem is a dearth of good prosocial ideas – which Harvard students seem well positioned to spin up: you have four years to just think and learn about the world, right? What projects could exist that need to? Figure it out instead of soldiering away for existing organizations.
Curious if you’ve seen or could share botecs on the all-in cost per retreat?
Naïvely, people like to benchmark 5% of property value per year as the all-in cost of ownership alone (so ~$750k/yr here? Really not sure how this scales to properties like Wytham).
I wonder how that compares to the savings in variable retreat costs. If you had 20 retreats/yr, are you saving (close to?) $37,500 per retreat (assuming $750k is the right number)? Accommodation for 25 people for 4 nights in Oxford could plausibly be ~$20k itself, so it seems like with a given number of retreats or attendees you could get quite close, but the numbers matter here.
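To make the comparison explicit, here’s a minimal sketch of that BOTEC in Python; the implied property value, the 5% heuristic, and the per-person-night accommodation rate are assumptions carried over from the comment, not actual figures for Wytham or any specific venue.

```python
# Rough sketch of the venue-ownership vs rented-accommodation comparison above.
# Property value, the 5% ownership-cost heuristic, and the accommodation rate
# are assumptions from the comment, not verified data.

property_value = 15_000_000                  # implied by ~$750k/yr at 5%
ownership_cost_rate = 0.05                   # "5% of property value per year"
annual_ownership_cost = property_value * ownership_cost_rate        # $750k/yr

retreats_per_year = 20
ownership_cost_per_retreat = annual_ownership_cost / retreats_per_year  # $37.5k

# Variable cost avoided: accommodation for 25 people, 4 nights in Oxford (~$20k)
people, nights, rate_per_person_night = 25, 4, 200
avoided_accommodation = people * nights * rate_per_person_night

print(f"Ownership cost per retreat:   ${ownership_cost_per_retreat:,.0f}")
print(f"Accommodation cost avoided:   ${avoided_accommodation:,.0f}")
print(f"Share covered by that saving: {avoided_accommodation / ownership_cost_per_retreat:.0%}")
```

With these particular inputs, accommodation savings alone cover roughly half of the allocated ownership cost, so the conclusion hinges on the retreat count and on whether the 5% heuristic actually applies to a property like this.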
For what it’s worth, I think you shouldn’t worry about the first two bullets. The way you as an individual or EA as a community will have a big impact is through specialization. Being an excellent communicator of EA ideas is going to have way bigger and potentially compounding returns than your personal dietary or donation choices (assuming you aren’t very wealthy). If stressing about the latter takes away from the former, that seems like a mistake to be worried about.
I also shouldn’t comment without answering the question:
- I balk at thorny or under-scoped research/problems that could be very valuable.
  - It feels aversive to dig into something without a sense of where I’ll end up or whether I’ll even get anywhere.
  - If there’s a way I can bend what I already know/am good at into the shape of the problem, I’ll do that instead.
  - One way this happens is that I only seek out information/arguments/context that are legible to me, specifically more big-picture/social-science-oriented things like Holden, Joe Carlsmith, or Carl Shulman, even though understanding whether the technical aspects of AI alignment/evals make sense is a bigger and unduly under-explored crux for understanding what matters.
- I fail to be a team player in a lot of ways.
  - I have my own sense of what my team/org’s priorities should be.
  - I expect others around me to intuit and adopt these priorities with no or minimal communication.
  - When we don’t agree or reach consensus and there’s a route for me to avoid resolving the tension, I take it. Things that I don’t think are important, but others do, don’t happen.
I think this is a version of a more general form of motivated reasoning where one seeks out a variable in an argument which is:
- imprecise,
- ambiguous,
- dependent on multiple other hard-to-track variables, or
- a variable over which they can claim unique knowledge (here, ‘what I am good at personally and how good at it I am’),

and which they can then ratchet up to the maximum value for things they want to believe and down to the minimum value for things they don’t want to believe.
I noticed this acutely in the comments on the 80k/Rational Animations crossover video, namely things like “If you become a doctor, you don’t know how many life-saving situations you run into” (imprecision about likelihoods) or “Dr. Nalin couldn’t have achieved what he did without the help of many others, down to the bricklayers and garbagemen who provided the essentials he needed to focus” (ambiguity/dependencies about credit).
Finding low-confrontation ways to point such things out seems valuable. Maybe *The Scout Mindset* remains the best work here.
It is scary and painful for people to admit they were mistaken, especially about their basic narratives concerning what’s valuable or what they intended to do with their lives. I’d guess highlighting that truth-seeking is a broader, more-endorsed narrative – that also implies lots of changing your mind – is one way to shake people out of these more contingent narratives.
I think this characterizes the disagreement between pause advocates and Anthropic as it stood before the Claude 3 release, with some pause-advocacy-favorable assumptions about the politics of maintaining one’s position in the industry. Full-throated, public pause advocacy doesn’t seem like a good way to induce investment in your company, for example.
More broadly, I think Anthropic, like many, hasn’t come to final views on these topics and is still developing them, probably with more information and talent than most alternatives by virtue of being a well-funded company.
As I understand it, [part of] Anthropic’s theory of change is to be a meaningful industry player so its safety agenda can become a potential standard to voluntarily emulate or adopt in policy. Being a meaningful industry player in 2024 means having desirable consumer products and advertising them as such.
It’s also worth remembering that this is advertising. Claiming to be a little bit better on some cherry-picked metrics a year after GPT-4 was released is hardly a major accelerant in the overall AI race.
Too high. I thought there were huge scaling barriers based on something Linch wrote ~2 years ago. Maybe that’s wrong or has been retracted.
Leopold’s implicit response as I see it:
- Convincing all stakeholders of high p(doom) such that they take decisive, coordinated action is wildly improbable (“step 1: get everyone to agree with me” is the foundation of many terrible plans and almost no good ones).
- Still improbable, but less wildly, is the idea that we can steer institutions towards sensitivity to risk on the margin and that those institutions can position themselves to solve the technical and other challenges ahead.
Maybe the key insight is that both strategies walk on a knife’s edge. While Moore’s law, algorithmic improvement, and chip design hum along at some level, even a little breakdown in international willpower to enforce a pause/stop can rapidly convert to catastrophe. Spending a lot of effort to get that consensus also has a high opportunity cost in terms of steering institutions in the world where the effort fails (and it is very likely to fail).
Leopold’s view more straightforwardly makes a high risk bet on leaders learning things they don’t know now and developing tools they can’t foresee now by a critical moment that’s fast approaching.
I think it’s accordingly unsurprising that confidence in background doom is the crux here. In Leopold’s 5% world, the first plan seems like the bigger risk. In MIRI’s 90% world, the second does. Unfortunately, the error bars are wide here and the arguments on both sides seem so inextricably priors-driven that I don’t have much hope they’ll narrow any time soon.