I’m not affiliated with 80k, but I would be surprised if the average reader who encounters their work comes away with a higher regard for AI labs than they came in with, and I’d be even more surprised if they came away with the impression that something like a brand partnership is going on. Most of the content I’ve seen from them has (in my reading) dealt pretty frankly with the massive negative externalities AI labs could be generating. In fact, my reading of their article “Should you work at a leading AI lab?” is that they don’t broadly recommend it at all. Here’s their summary verbatim:
Recommendation: it’s complicated
We think there are people in our audience for whom this is their highest impact option — but some of these roles might also be very harmful for some people. This means it’s important to take real care figuring out whether you’re in a harmful role, and, if not, whether the role is a good fit for you.
Hopefully this is helpful. It also sounds like these questions could be rhetorical / you have suspicions about their recommendation, so it could be worth writing up the affirmative case against working at labs if you have ideas about that. I know there was a post last week about this, so that thread could be a good place for this.
Hey Tyler, I agree that this addresses the case where somebody engages deeply with 80k’s content (System 2 thinking). But unfortunately that is not how most people make decisions and form opinions (System 1 thinking).
The bias here is something like: “I am an effective altruist who thinks long and hard about all the opinions I form and decisions I make, and therefore that is what everyone else does.”
I think 80k needs to either deny or acknowledge that this is the reality of the situation.
Hey yanni,
I just wanted to return to this and say that I think you were directionally correct here and that, in light of recent news, recommending jobs at OpenAI in particular was probably a worse mistake than I realized when I wrote my original comment.
Reading the recent discussion about this reminded me of your post, and it’s good to see that 80k has updated somewhat. I still don’t know quite how to feel about the recommendations they’ve left up in infosec and safety, but I think I’m coming around to your POV here.
Hey mate! Lovely to hear from you :)
Yeah, I just think that most EAs assume the message does most of the work in marketing, when it is actually the medium: https://en.wikipedia.org/wiki/The_medium_is_the_message
I think this is a fair assumption to make if you believe people make decisions extremely rationally.
I basically don’t (i.e. the 80k brand is powering the OpenAI brand through a halo effect).
Unfortunately this is really hard to avoid!