Thanks Arden. I suspect you don’t disagree with the people interviewed for this report all that much then, though ultimately I can only speak for myself.
One possible disagreement that you and other commenters brought up, which I meant to respond to in my first comment but forgot: I would not describe 80,000 Hours as cause-neutral, as you do here and here. This seems to be an empirical disagreement; quoting from the second link:
We are cause neutral[1] – we prioritise x-risk reduction because we think it’s most pressing, but it’s possible we could learn more that would make us change our priorities.
I don’t think that’s how it would go. If an individual 80,000 Hours member learned things that caused them to downshift their x-risk or AI safety priority, I’d expect them to leave the org rather than for the org to change. Similar observations apply to hiring. So while all the individuals involved may be cause neutral and open to change in the sense you describe, 80,000 Hours itself is not, practically speaking. It’s very common for orgs to be more ‘sticky’ than their constituent employees in this way.
I appreciate it’s a weekend, and you should feel free to take your time to respond to this if indeed you respond at all. Sorry for missing it in the first round.
Speaking in a personal capacity here --
We do try to be open to changing our minds so that we can be cause neutral in the relevant sense, and we do change our cause rankings periodically and spend time and resources thinking about them (in fact we’re in the middle of thinking through some changes now). But how well set up are we, institutionally, to actually make changes as big as deprioritising risks from AI if we get good reasons to? I think this is a good question, and I want to think about it more. So thanks!