@Sudhanshu Kasewa has enlisted me for this one!
I think earning to give is a really strong option and indeed the best option for many people.
Lack of supply is definitely an issue, though it can be helped by looking for impactful opportunities outside of "EA orgs" per se. I don't know if this is your scenario, but this is often a problem. Knowing nothing about a person's situation and location, I'd prompt:
Can you look for roles in government in order to work on impactful policy?
Are there start-ups or other learning opportunities you could apply to in order to develop career capital?
Is there a niche in the world you're suited to fill? Could you find a co-founder and start filling it? I say a bit more here.
A clarification: We would not post roles if we thought they were net harmful and were hoping that somebody would counterfactually do less harm. I think that would be too morally fraught to propose to a stranger.
Relatedly, we would not post a job where we thought that, to have a positive impact, you'd have to do the job badly.
We might post roles if we thought the average entrant would make the world worse, but a job board user would make the world better (due to the EA context our applicants typically have!). No cases of this come to mind immediately though. We post our jobs because we consider them promising opportunities to have a positive impact in the world, and expect job board users to do even more good than the average person.
Hi Geoffrey,
I'm curious to know which roles we've posted that you consider to be capabilities development; our policy is not to post capabilities roles at the frontier companies. We do aim to post jobs that can meaningfully contribute to safety and aren't just safety-washing (our views are discussed in much more depth here). Of course, we're not infallible, so if people see particular jobs they think are safety in name only, we always appreciate that being raised.
I strongly agree with @Bella's comment. I'd like to add:
I encourage job-seekers to not think of EA jobs as the one way to have impact in one's career. Almost all impactful roles are not at EA orgs. On the 80,000 Hours job board we try to find the most promising ones, but we won't catch everything!
Our co-worker Laura González Salmerón has a great talk on this topic.
Even if the movement is not talent-constrained, the problems we're trying to solve are talent-constrained. The world still needs way more people working on catastrophic risks, animal welfare, and global health, whether or not there are EA organisations hiring for roles.
Earning to give remains a wonderful option.
If your strategy is to just apply to open hiring rounds, such as through job ads that are listed on the 80,000 Hours job boards, you are cutting your chances of landing a role by ~half. It's hard to know the exact figure, but I wouldn't be surprised if as many as 30-50% of paid roles in the movement aren't being recruited through traditional open hiring rounds …
This is my impression as well, though heavily skewed by experience level. I'd estimate that >80% of senior "hires" in the movement occur without a public posting, and something like 20% of junior hires.
As an aside and as ever though, I'd encourage people to not get attached to finding a role "in the movement" as a marker of impact.
I really appreciated reading this. It captured a lot of how I feel when I think about having taken the pledge. It's astounding. I think it's worth celebrating, and assuming the numbers add up, I think it's worth grappling with the immensity of having saved a life.
Hey Manuel,
I would not describe the job board as currently advertising all cause areas equally, but yes, the bar for jobs not related to AI safety will be higher now. As I mention in my other comment, the job board is interpreting this changed strategic focus broadly to include biosecurity, nuclear security, and even meta-EA work; we think all of these have important roles to play in a world with a short timeline to AGI.
In terms of where we'll be raising the bar, this will mostly affect global health, animal welfare, and climate postings, specifically in terms of the effort we put into finding roles in these areas. With global health and animal welfare, we're lucky to have great evaluators like GiveWell and great programs like Charity Entrepreneurship to help us find promising orgs and teams. It's easy for us to share these roles, and I remain excited to do so. However, part of our work involves sourcing new roles and evaluating borderline roles, and much of this time will shift into more AI-safety-focused work.
Cause-neutral job board: it's possible! I think our change makes space for other boards to expand. I also think this creates something of a trifecta, to put it very roughly: the 80k job board with our existential risk focus, Probably Good with a more global health focus, and Animal Advocacy Careers with an animal welfare focus. It's possible that effort put into a cause-neutral board could be better spent elsewhere, given that there's already coverage split between these three.
I want to extend my sympathies to friends and organisations who feel left behind by 80k's pivot in strategy. I've talked to lots of people about this change in order to figure out how the job board best fits into it. In one of these talks, a friend put it in a way that captures my own feelings: I hate that this is the timeline we're in.
I'm very glad 80,000 Hours is making this change. I'm not glad that we've entered the world where this change feels necessary.
To elaborate on the job board changes mentioned in the post:
We will continue listing non-AI-related roles, but will be raising our bar. Some cause areas we still consider relevant to AGI (for example, pandemic preparedness). For others, we still think the top roles could benefit from talented people with great fit, so we'll continue to post those roles.
We'll be highlighting some roles more prominently. Even among the roles we post, we think the best roles can be much more impactful than others. Based on conversations with experts, we have some guesses about which roles these are, and we want to feature them a little more strongly.
Become conversational in Spanish so I can talk to my fiancée's family easily.
Work out ten times per month (3x/week with leeway)
Submit 12 short stories about transformative AI to publishers this year.
More details here. Ongoing mission: get a literary agent for my novel!
One example I can think of with regard to people "graduating" from philosophies is the idea that people can graduate out of arguably "adolescent" political philosophies like libertarianism and socialism. Often this looks like people realizing society is messy and that simple political philosophies don't do a good job of capturing and addressing this.
However, I think EA as a philosophy is more robust than the above: there are opportunities to address the immense suffering in the world and to address existential risk, some of these opportunities are much more impactful than others, and it's worth looking for and then executing on these opportunities. I expect this to be true for a very long time.
In general I think effective giving is the best opportunity for most people. We often get fixated on the status of directly working on urgent problems, which I think is a huge mistake. Effective giving is a way to have a profound impact, and I don't like to think of it as something just "for mere mortals". I think there's something really amazing about people giving a portion of their income every year to save lives and improve health, and I think doing so makes you as much an EA as somebody whose job itself is impactful.
Hi there, I'd like to share some updates from the last month.
Text as of our last update (July 5):
OpenAI is a leading AI research and product company, with teams working on alignment, policy, and security. We recommend specific positions at OpenAI that we think may be high impact. We do not necessarily recommend working at other jobs at OpenAI. You can read more about considerations around working at a leading AI company in our career review on the topic.
Text as of today:
OpenAI is a frontier AI research and product company, with teams working on alignment, policy, and security. We post specific opportunities at OpenAI that we think may be high impact. We do not necessarily recommend working at other positions at OpenAI. You can read concerns about doing harm by working at a frontier AI company in our career review on the topic. Note that there have also been concerns around OpenAI's HR practices.
The thinking behind these updates has been:
We continue to get negative updates concerning OpenAI, so it's good for us to update our guidance accordingly.
While it's unclear exactly what's going on with the NDAs (are they cancelled or are they not?), it's pretty clear that it's in the interest of users to know there's something they should look into with regard to HR practices.
We've tweaked the language to "concerns about doing harm" instead of "considerations" for all three frontier labs, to indicate more strongly that these are potentially negative considerations to weigh before applying.
We don't go into much detail, both for length and so that people don't glaze over them; my guess is that the current text is the right length for people to notice it and then look into it more via our newly updated AI company article and the Washington Post link.
This is thanks to discussions within 80k and thanks to some of the comments here. While I suspect, @Raemon, that we still don't align on important things, I nonetheless appreciate the prompt to think this through more, and I believe it has led to improvements!
I interpreted the title to mean "Is it a good idea to take an unpaid UN internship?", and it took a bit to realize that isn't the point of the post. You might want to change the title to be clear about what part of the unpaid UN internship is the questionable part!
Update: We've changed the language in our top-level disclaimers: example. Thanks again for flagging! We're now thinking about how to best minimize the possibility of implying endorsement.
(Copied from reply to Raemon)
Yeah, I think this needs updating to something more concrete. We put it up while "everything was happening", but I've neglected to change it, which is my mistake; I'll probably prioritize fixing it over the next few days.
Re: whether OpenAI could make a role that doesn't feel truly safety-focused: there have been, and continue to be, OpenAI safety-ish roles that we don't list because we lack confidence they're safety-focused.
For the alignment role in question, I think the team description given at the top of the post gives important context for the role's responsibilities:
OpenAI's Alignment Science research teams are working on technical approaches to ensure that AI systems reliably follow human intent even as their capabilities scale beyond human ability to directly supervise them.
With the above in mind, the role responsibilities seem fine to me. I think this is all pretty tricky, but in general, I've been moving toward looking at this in terms of the teams:
Alignment Science: Per the above team description, I'm excited for people to work there. On the question of what evidence would shift me: this would change if the research they release doesn't match the team description.
Preparedness: I continue to think it's good for people to work on this team, as per the description: "This team … is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models."
Safety Systems: I think roles here depend on what they address. I think the problems listed in their team description include problems I definitely want people working on (detecting unknown classes of harm, red-teaming to discover novel failure cases, sharing learning across industry, etc.), but it's possible that we should be more restrictive in which roles we list from this team.
I don't feel confident giving a probability here, but I do think there's a crux around my not expecting the above team descriptions to be straightforward lies. It's possible that the teams will have limited resources to achieve their goals, and with the Safety Systems team in particular, I think there's an extra risk of safety work blending into product work. However, my impression is that the teams will continue to work on their stated goals.
I do think it's worthwhile to think of some evidence that would shift me against listing roles from a team:
If a team doesn't publish relevant safety research within something like a year.
If a team's stated goal is updated to have less safety focus.
Other notes:
We're actually in the process of updating the AI company article.
The top-level disclaimer: Yeah, I think this needs updating to something more concrete. We put it up while "everything was happening", but I've neglected to change it, which is my mistake; I'll probably prioritize fixing it over the next few days.
Thanks for diving into the implicit endorsement point. I acknowledge this could be a problem (and if so, I want to avoid it or at least mitigate it), so I'm going to think about what to do here.
Hi, I run the 80,000 Hours job board. Thanks for writing this out!
I agree that OpenAI has demonstrated a significant level of manipulativeness, and I have lost confidence in them prioritizing existential safety work. However, we don't conceptualize the board as endorsing organisations. The point of the board is to give job-seekers access to opportunities where they can contribute to solving our top problems or build career capital to do so (as we write in our FAQ). Sometimes these roles are at organisations whose mission I disagree with, because the role nonetheless seems like an opportunity to do good work on a key problem.
For OpenAI in particular, we've tightened up our listings since the news stories a month ago, and are now only posting infosec roles and direct safety work, a small percentage of the jobs they advertise. See here for the OAI roles we currently list. We used to list roles that seemed more tangentially safety-related, but because of our reduced confidence in OpenAI, we limited the listings further to only roles that are very directly on safety or security work. I still expect these roles to be good opportunities to do important work. Two live examples:
Even if we were very sure that OpenAI was reckless and did not care about existential safety, I would still expect them to not want their model to leak out to competitors, and importantly, we think it's still good for the world if their models don't leak! So I would still expect people working on their infosec to be doing good work.
These still seem like potentially very strong roles with the opportunity to do very important work. We think it's still good for the world if talented people work in roles like this!
This is true even if we expect them to lack political power and to play second fiddle to capabilities work, and even if that makes them less good opportunities than those at other companies.
We also include a note on their "job cards" on the job board (also DeepMind's and Anthropic's) linking to the Working at an AI company article you mentioned, to give context. We're not opposed to giving more or different context on OpenAI's cards and are happy to take suggestions!
I find the Leeroy Jenkins scenario quite plausible, though in this world it's still important to build the capacity to respond well to public support.
There are some great replies here from career advisors. I'm not one, but I want to mention that I got into software engineering without a university degree. I'm hesitant to recommend software engineering as the safe and well-paying career it once was, but I think learning how to code is still a great way to quickly develop useful skills without requiring a four-year degree!