Just want to say here (since I work at 80k & commented about our impact metrics & other concerns below) that I think it's totally reasonable to:
Disagree with 80,000 Hours’s views on AI safety being so high priority, in which case you’ll disagree with a big chunk of the organisation’s strategy.
Disagree with 80k's views on working in AI companies (which, tl;dr, is that it's complicated and depends on the role and your own situation, but is sometimes a good idea). I personally worry about this one a lot and think it really is possible we could be wrong here. It's not obvious what the best thing to do is, and we discuss this a bunch internally. But we think there's risk in any approach to the issue, and are going with our best guess based on talking to people in the field. (We reported on some of their views, some of which were basically 'no, don't do it!', here.)
Think that people should prioritise personal fit more than 80k leads them to. To be clear, we think (& 80k's content emphasises) that personal fit matters a lot. But it's possible we don't push this hard enough. Also, because we think it's not the only thing that matters for impact (& so also talk a lot about cause and intervention choice), we tend to present this as a set of considerations to navigate that involves some trade-offs. So it's reasonable to think that 80k encourages too much trading off of personal fit, at least for some people.
Thanks, Arden. I suspect you don't disagree with the people interviewed for this report all that much then, though ultimately I can only speak for myself.
One possible disagreement that you and other commenters brought up, which I meant to respond to in my first comment but forgot: I would not describe 80,000 Hours as cause-neutral, as you try to do here and here. This seems to be an empirical disagreement; quoting from the second link:
We are cause neutral[1] – we prioritise x-risk reduction because we think it’s most pressing, but it’s possible we could learn more that would make us change our priorities.
I don't think that's how it would go. If an individual 80,000 Hours staff member learned things that caused them to downshift their x-risk or AI safety priority, I'd expect them to leave the org, not the org to change. Similar observations apply to hiring. So while all the individuals involved may be cause neutral and open to change in the sense you describe, 80,000 Hours itself is not, practically speaking. It's very common for orgs to be more 'sticky' than their constituent employees in this way.
I appreciate it’s a weekend, and you should feel free to take your time to respond to this if indeed you respond at all. Sorry for missing it in the first round.
Speaking in a personal capacity here --
We do try to be open to changing our minds so that we can be cause neutral in the relevant sense, and we do change our cause rankings periodically and spend time and resources thinking about them (in fact, we're in the middle of thinking through some changes now). But how well set up are we, institutionally, to make changes as big as deprioritising risks from AI in practice, if we get good reasons to? I think this is a good question, and I want to think about it more. So thanks!