These are the articles I sent one friend interested in EA: https://forum.effectivealtruism.org/posts/KvLyxHcwCforffpkC/introduction-to-effective-altruism-reading-list.
How much ML/CS knowledge is too much? For someone working in AI Policy, do you see diminishing returns to becoming a real expert in ML/CS, to the point where you could work directly as a technical person? Or is that level of expertise very useful for policy work?
How useful is general CS knowledge vs ML knowledge specifically?
What impactful career paths do you think law school might prepare you particularly well for, besides ETG and AI Policy? If an EA goes to law school and discovers they don’t want to do ETG or AI Policy, where should they look next?
Do these options mostly look like “find something idiosyncratic and individual that’s impactful”, or do you see any major EA pipelines that could use tons of lawyers?
[Didn’t mean to comment this]
For the average law school grad, what specific knowledge is most important to develop for working in AI Policy?
How to implement ML? A conceptual understanding of the history of ML? Math, like linear algebra? Coding or computer science more generally? Considerations around AI forecasting & AI risk? Current work on AI policy or technical safety? Histories of revolutionary technologies?
Are the best AI Policy opportunities concentrated in San Francisco? Or are there comparable opportunities in e.g. Washington DC, New York, Boston, Chicago, LA?
How much would your career suffer if you weren’t willing to live in the Bay Area?
If you go to a top law school, how difficult is it to work in AI Policy afterwards? Can virtually anyone with a top law degree get into the relevant roles? Or do you also need to distinguish yourself in other ways—technical understanding of AI, past policy research, etc?
One tractable and useful line of research at this point could be summarizing histories of colonizations and frontier explorations, and extrapolating lessons for space colonization.
Has there been any work here?
Strongly agreed. In order to make myself understood to a broad audience online, I find I have to be much more sincere, straightforward, and kind than I would be in real life.
Personally, I really appreciate when other people online go out of their way to say positive things about me or my thoughts, particularly when it’s right before they disagree with or criticize me—feeling affirmed keeps me from getting defensive.
Online, you’re simultaneously speaking to many different people who come to the discussion with very different perceptions of background and context. I try to write accessibly, so that no matter your level of understanding of context and background assumptions, you can read what I’m saying and interpret it how I intended.
It’s a lot more work, and it feels weird to write with a different personality than I live, but I think it helps many more people understand what I really mean.
(I’m not trying to criticize redmoonsoaring’s comment, or say it fails to do these things, I’m just going on a tangent about communicating online)
Thanks a ton for these responses Michelle, very helpful. Hopefully I’ll be able to get back to you soon with some more questions and clarifications.
Thanks so much for doing this, your answers are really informative. :)
Here’s a bunch of questions, all of them tied together, so feel no obligation to answer all (or any) of them.
Which individual parts of advising do you think are the most and least valuable? You listed these components above; which are most critical?
discussing cause prioritisation, suggesting career options the person hadn’t yet considered, helping rank options, providing encouragement to apply for things where the person might be too diffident, making introductions, giving more information / context on specific roles or organisations, recommending particular resources, brainstorming a concrete plan / next steps.
Can you say more about the relative value, within advising, of you being “a sounding board” and “helping people think through a fundamentally difficult and personal decision”, compared to you “hav[ing] a bunch of information [advisees] don’t”?
My underlying question is whether I (and other EAs) should spend much more time concretely planning my career than I am. (See here for my background thoughts.) If advising is valuable because it forces people to sit down and seriously plan their careers, then people could get the same value by planning on their own time. On the other hand, if the value of advising is something unique to 80k—information, insights, abilities, connections—then people probably can’t replicate the success of advising alone.
In general, do you think most EAs aren’t spending enough time on concrete career planning? In your opinion, how much of the benefit of advising could be achieved by someone independent of 80k by seriously researching and planning for a day?
Do you actually use the A/B/Z career planning tool described here? Is that out of date? Do you think that’s a very good way to plan your career, or might you suggest others?
Thank you both very much, I will do that, and I almost definitely wouldn’t have without your encouragement.
If anyone has more thoughts on the topic, please comment or reach out to me, I’d love to incorporate them into the top-level post.
I used to expect 80,000 Hours to tell me how to have an impactful career. Recently, I’ve started thinking it’s basically my own personal responsibility to figure it out. I think this shift has made me much happier and much more likely to have an impactful career.
80,000 Hours targets the most professionally successful people in the world. That’s probably the right idea for them—giving good career advice takes a lot of time and effort, and they can’t help everyone, so they should focus on the people with the most career potential.
But, unfortunately for most EAs (myself included), the nine priority career paths recommended by 80,000 Hours are some of the most difficult and competitive careers in the world. If you’re among the 99% of people who are not Google programmer / top half of Oxford / Top 30 PhD-level talented, I’d guess you have slim-to-none odds of succeeding in any of them. The advice just isn’t tailored for you.
So how can the vast majority of people have an impactful career? My best answer: A lot of independent thought and planning. Your own personal brainstorming and reading and asking around and exploring, not just following stock EA advice. 80,000 Hours won’t be a gospel that’ll give all the answers; the difficult job of finding impactful work falls to the individual.
I know that’s pretty vague, much more an emotional mindset than a tactical plan, but I’m personally really happy I’ve started thinking this way. I feel less status anxiety about living up to 80,000 Hours’s recommendations, and I’m thinking much more creatively and concretely about how to do impactful work.
More concretely, here are some ways you can do that:
Think of easier versions of the 80,000 Hours priority paths. Maybe you’ll never work at OpenPhil or GiveWell, but can you work for a non-EA grantmaker reprioritizing their giving to more effective areas? Maybe you won’t end up in the US Presidential Cabinet, but can you bring attention to AI policy as a congressional staffer or civil servant? (Edit: I forgot, 80k recommends congressional staffing!) Maybe you won’t run operations at CEA, but can you help run a local EA group?
The 80,000 Hours job board actually has plenty of jobs that aren’t on their priority paths, and I think some of them are much more accessible for a wider audience.
80,000 Hours tries to answer the question “Of all the possible careers people can have, which ones are the most impactful?” That’s the right question for them, but the wrong question for an individual. For any given person, I think it’s probably much more useful to ask, “What potentially impactful careers could I plausibly enter, and of those, which are the most impactful?” Start with what you already have—skills, connections, experience, insights—and think outwards from there: how can you transform what you already have into an impactful career?
There are tons of impactful charities out there. GiveWell has identified some of the top few dozen. But if you can get a job at the 500th most effective charity in the world, you’re still making a really important impact, and it’s worth figuring out how to do that.
Talk to people working on the most important problems who aren’t in the top 1% of professional success—seeing how people like you have an impact can be really motivating and informative.
Personal donations can be really impactful—not earning to give millions in quant trading, just donating a reasonable portion of your normal-sized salary, wherever it is that you work.
Convincing people you know to join EA is also great—you can talk to your friends about EA, or attend/help out at a local EA group. Converting more people to EA just multiplies your own impact.
Don’t let the fact that Bill Gates saved a million lives keep you from saving one. If you put some hard work into it, you can make a hell of a difference to a whole lot of people.
Faunalytics does a lot of research and data-oriented writing about animal rights. I’m a student with little to no credentials, yet Faunalytics helped me analyze twelve years of their proprietary polling data on animal rights and write up a post about it. I was very happy with the experience—it’s a useful bit of EA-aligned work, a great skills boost for me, and a nice resume item.
If you want to get involved with them, check out their website and email asking to do some volunteer work. They’ll probably ask you to do some menial/boring (yet still important) work at first to prove you’re serious about following through on your commitment. If you do that well, I’d bet they’d be happy to give you some kind of data project to take care of.
Happy to answer any questions!
Freakonomics, Stephen Dubner and Steven Levitt—Very fun, cool little stories about economics; not super educational, but it sparks an interest
Naked Economics, Charles Wheelan—The best intro I’ve read to standard economic ideas, fun and easy to read
Poor Economics, Banerjee and Duflo—A deep dive on how some anti-poverty interventions are radically more effective than others, and how details matter a lot. Pretty dry and you’ll forget most of the content, but the best case for evidence-based altruism I’ve read
Justice, Michael Sandel—Great intro to moral philosophy, covers all the major schools of thought with tons of fun anecdotes and thought experiments
That’s very interesting. Seems like evidence that EA might not be inherently more appealing to students at top schools, but rather that EA’s current composition is a product of circumstance and chance.
Hm. Definitely more a personal impression, and I should’ve qualified that as “it seems to me”. But I’d also bet on it being true.
Data point #1, people who took the 2018 EA Survey are twice as likely as the average American to hold a bachelor’s degree, and 7x more likely to hold a Ph.D. Maybe they’re getting these degrees from less competitive schools, but that seems less likely than the alternative.
Data point #2, a quick Google search reveals that all eight Ivy League universities have EA clubs. On the other hand, at the 5 most populous US universities, each with 5-10x more students than the average Ivy, only Ohio State and Texas A&M have any online indication of an EA club, and both of these online pages have zero content posted on them.
Anecdotally, I go to an unranked state university with >50k students. There’s no EA club, and I haven’t met anyone who’s ever heard of EA.
I think there’s a lot of potential for EA to be a lot more mainstream, but in its current state where top recommended careers include Machine Learning PhDs, Economics PhDs, and quant trading, it can be very hard to appeal to the vast majority of people.
What kinds of high schools did you generally target? Did you specifically target your efforts at schools that are feeders for top universities?
Though I wish EA were more diverse, it’s simply true that students at top universities have far more interest in EA than the average population. I’d imagine this holds true in high schools: The kids who end up running Berkeley EA are the ones who’d love to read Peter Singer in high school.
An interesting data point is that the current Director of Operations at Open Philanthropy, Beth Jones, was previously the Chief Operating Officer of the Hillary Clinton 2016 campaign.
On the other hand, the four operations associates most recently hired by OpenPhil have impressive but not overwhelmingly intimidating backgrounds. I’d like to know how many applied for those four positions.