I hear this; I don’t know if this is too convenient a take, but given that you were already concerned about the prioritization 80K was putting on AI (and I don’t at all think you’re alone there), I hope there’s something more straightforward and clear about the situation as it stands now, where people can opt in or out of this particular prioritization, or of hearing the case for it.
Appreciate your work as a university organizer—thanks for the time and effort you dedicate to this (and also hello from a fellow UChicagoan, though many years ago).
Sorry I don’t have much in the way of other recommendations; I hope others will post them.
Even though we might have been concerned about the prioritisation, it still made sense to refer people to 80k, because it at least gave the impression of openness to a range of causes.
Now, even if the good initial advice remains, all roads lead to AI, so it feels like a bit of a bait and switch to send someone there when, from 80k’s perspective, the advice can only lead one way.
Yes, it’s more “straightforward” and clear, but it also leaves a big gap on the trusted, well-known non-AI career advice front. Uni groups will struggle a bit, but hopefully the career advice marketplace continues to improve.
Huh, I think this is a substantial improvement: if 80K has strong views about where their advice leads, it’s far better to be honest about this and let people make informed decisions than to give the mere appearance of openness.
I think there’s a range of approaches one could take on career advice, ranging (for lack of better terms) from client-centered counseling to advocacy-focused recruiting. Once an advisor has decided where on the continuum they want to be, I think your view that it is “far better to be honest about this and let people make informed decisions” follows. But I think the decision about transparency only comes after the decision about how much to listen to the client-advisees vs. attempt to influence them has been made.
It is not inconsistent for an advisor to personally believe X but be open to a range of V . . . Z when conducting advising. For example, most types of therapists are supposed to be pretty non-directive; not allowing one’s views to shine too brightly in front of one’s therapy client is not an epistemic defect.
To be sure, 80K has never been strongly committed to a client-centered counseling model, nor should it have been. The end goal isn’t to benefit the client, and opera and many other things have never been on the table! But the recent announcement seems to be a move away from what physicians might analogize to a shared decision-making model, toward a narrower focus on the roles that are maximum-impact in the organization’s best judgment. There are upsides and downsides to that shift.
From the update, it seems that:

- 80K’s career guide will remain unchanged
  - I especially feel good about this, because the guide does a really good job of emphasizing the many approaches to pursuing an impactful career
  - n = 1 anecdotal point: during tabling early this semester, a passerby mentioned that they knew about 80K because a professor had assigned one of the readings from the career guide in their course. The professor in question and the class they were teaching had no connection to EA, AI safety, or our local EA group.
  - If non-EAs also find 80K’s career guide useful, that is a strong signal that it is well-written, practical, and not biased toward any particular cause
  - I expect and hope that this remains unchanged, because we assign most of the career readings from that guide in our introductory program
- Existing write-ups on non-AI problem profiles will also remain unchanged
- There will be a separate AGI career guide
- But the job board will be more AI-focused
Overall, this tells me that groups should still feel comfortable sharing readings from the career guide and from the other problem profiles, but should recommend the job board selectively, primarily to those interested in “making AI go well” or to mid/senior-level people outside AI. Probably Good has compiled a list of impact-focused job boards here, so that resource could be highlighted more often.
That’s interesting, and it would be nice if it were the case. That wasn’t the vibe I got from the announcement, but we will see.