Yeah, I see your point. I think I personally have a stronger aversion to illegal requests from employers as a matter of principle, even if the employee would do that sort of thing anyway. But I can see how other people might view that differently.
That said, in this particular case, it doesn’t seem like Chloe would otherwise be illegally buying weed?
For context, I’m an AI safety researcher and I think the stance that AGI is by far the #1 issue is defensible, although not my personal view.
I would like to applaud 80k hours for several things here.
1. Taking decisive action based on their convictions, even if it might be unpopular.
2. Announcing that action publicly and transparently.
3. Responding to comments on this post and engaging with people’s concerns.
However, several aspects of this move leave me feeling disappointed.
1. This feels like a step away from "Effective Altruism is a Question (not an ideology)", which I think is something that makes EA special. If you’ll pardon the oversimplification, to me this decision has the vibe of “Good news everyone, we figured out how to do the most good and it’s working on AGI!” I’m not sure to what extent that is the actual belief of 80k hours staff, but that’s the vibe I get from this post.
2. For better or for worse, I think 80k hours wields tremendous influence in the EA community and it seems likely to me that this decision will shift the overall tenor and composition of EA as a movement. Given that, it seems a bit weird to me that this decision was made based on the beliefs of a small subset of the community (80k hours staff). Especially since my impression is that “AGI is by far the #1 issue” is not the median EA’s view (I could be wrong here though). 80k is a private organization, and I’m not saying there should have been a public vote or something, but I think the views of 80k hours staff are not the only relevant views for this type of decision.
Overall, there’s a crucial difference between (A) helping people do the most good according to *their* definition and views, and (B) helping people do the most good according to *your* definition and views. One could argue that (B) is always better, since after all, those are your views. But I think that neglects important second-order effects, such as the value of a community.
It may be true that (B) is better in this specific case, if the benefits outweigh those costs. It’s also not clear to me whether 80k hours fully subscribes to (B) or is just shifting in that direction. More broadly, I’m not claiming that 80k hours made the wrong decision: I think it’s totally plausible that 80k hours is 100% correct and AGI is so pressing that, even given the above drawbacks, the shift is completely worth it. But I wanted to make sure these drawbacks were raised.
Questions for 80k hours staff members (if you’re still reading the comments):
1. Going forward, do you view your primary goal more as (A) helping people do the most good according to their own definition and views, or (B) helping people do the most good according to your definition and views? (Of course it can be some combination)
2. If you agree that your object-level stance on AGI differs from the median EA’s, do you have any hypotheses for why? Example reasons could be (A) you have access to information that other people don’t, (B) you believe people are in semi-denial about the urgency of AGI, (C) you believe that your definition of positive impact differs significantly from the median EA’s.