Could 80,000 Hours make it clear on their job board which roles they think are valuable only for career capital and aren’t directly impactful? It could be as simple as adding a quick boilerplate statement to the job details, such as:
We think this role could be a great way to develop relevant career capital, although other opportunities would be better for directly making an impact.
Perhaps this suggestion is unworkable for various reasons. But I think it’s easy for people to assume that, because a job is listed on the 80,000 Hours job board and seems to have some connection to social impact, it must be a great way to make an impact. It’s already tempting enough for people to work on AGI capabilities as long as it’s “safe”. And when the job description says “OpenAI […] is often perceived as one of the leading organisations working on the development of beneficial AGI,” the takeaway for readers is likely that any role there is a great way to positively shape the development of AI.
Please don’t work in AI capabilities research, and in particular don’t work in labs directly trying to build AGI (e.g. OpenAI or DeepMind). There are few jobs that cause as much harm, and historically the EA community has already caused great harm here. (There are some arguments that people can make the processes at those organizations safer, but I’ve only heard negative things about people in non-safety roles who tried to do this, and I don’t currently think you will have much success changing organizations like that from a ground-level engineering role.)
What are your thoughts on Habryka’s comment here?
The career review “China-related AI safety and governance paths” (80000hours.org) recommends working in regular AI labs and trying to build up the field of AI safety there. But how would one actually go about pivoting a given company in a more safety-oriented direction?