What are the relevant disclaimers here? Conor is saying 80k does think that alignment roles at OpenAI are impactful. Your article mentions the career development tag, but the roles under discussion don’t have that tag, right?
Rebecca
Echoing Raemon, it’s still a value judgement about an organisation to say that 80k believes a given role is one where, as you say, “they can contribute to solving our top problems or build career capital to do so”. You are saying you have sufficient confidence that the organisation is run well enough that someone with little context on the internal politics and pressures—which can’t be communicated via a job board—can come in and do that job impactfully.
But such a person would be very surprised to learn that previous people in their role, or similar ones at the company, have not been able to do their jobs due to internal politics, lies, obfuscation etc., and that they may not be able to do even the basics of their job (see the broken promise of dedicated compute).
It’s difficult to even build career capital as a technical researcher when you’re not given the resources to do your job and instead find yourself having to upskill in alliance building and interpersonal psychology.
I would agree with this if 80k didn’t make it so easy for the podcast episodes to become PR vehicles for the companies. Some time back, 80k changed their policy: they now send all questions to interviewees in advance, and let them remove any answers they didn’t like upon reflection. Both of these make it very straightforward for a company’s PR team to influence what gets said in an 80k podcast episode, and they remove any confidence that you’re getting an accurate representation of the researcher’s views, rather than what the PR team has approved them to say.
I don’t think Sydney has ever been a major centre for the EA movement, and it’s not a very good proxy for the culture(s) of the major EA hubs.
Can you say more about how you think the solving things part pulls towards x-risk?
I presume the person doesn’t realise those events are hosted at your venue
I don’t think being truth-seeking means you need to over-analyse your hobbies; most hobbies don’t really involve truth claims.
It could be partially crowdsourced: people could add links to interviews to a central location as they come across them, quotes could be taken from news articles, and others could do AI transcription of further interviews. I think subtitles from YouTube videos can also be downloaded?
‘New’ is probably a lot of the reason
One quote from Sam I came across recently that might be of interest to you: “What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT. That maybe there was something hard and complicated in there (the system) that we didn’t understand and have now already kicked it off.”
What about for people who’ve already resigned?
Note that not all the workshops are one-off, eg Future Impact Group was every trimester I believe
My guess is Ben’s referring to people like Holden Karnofsky, who went from working in finance, to co-founding and running GiveWell and then Open Phil, to now doing research at a think tank.
Ilya is no longer on the Superalignment team?
+1 for 80k off the clock
I’m inferring from other comments that AGB, as an individual earning-to-give donor, is the “expert [who] funded [80k] at an early stage of their existence, but has not funded them since” mentioned in the report. If that’s the case, how do individual EtG donors relate to the criteria you mention here?
I’m confused by the use of the term “expert” throughout this report. What exactly is the expertise that these individual donors and staff members are meant to be contributing? Something more neutral, like “stakeholder”, seems more accurate.
I think a 2x2 rather than a 1x3 seating arrangement would be more natural; currently it feels like you and Arden are too far apart for a cosy chat vibe. I agree with Jamie that the topics should be impact-relevant, rather than just friends chatting about random things.
The work tests that don’t require a single sitting still have a maximum number of hours.
I’m still confused about what the misunderstanding is