This was great, thanks. Feels unusually well-executed and well-written to me—thanks for doing the work and sharing the info!
Vael Gates
(Minor typo: maybe you didn’t mean to link to someone’s Google Scholar research profile for “software developer”?)
Arkose is seeking an AI Safety Call Specialist who will be speaking with and supporting professors, PhD students, and industry professionals who are interested in AI safety research or engineering.
Salary: $75,000 - $95,000, depending on prior experience and location. This is currently a 9-month fixed-term contract.
Location: Remote (but we strongly prefer candidates who can work roughly US-time-zone hours).
Deadline: 30 March 2024, with rolling admission (early applications encouraged).
Learn more on our website, and apply here if you’re interested!
FAQ
This is cool! Why haven’t I heard of this?
Arkose has been in soft-launch for a while, and we’ve been focused on email outreach more than public comms. But we’re increasingly public, and are in communication with other AI safety fieldbuilding organizations!
How big is the team?
3 people: Zach Thomas and Audra Zook are doing an excellent job in operations, and I’m the founder.
How do you pronounce “Arkose”? Where did the name come from?
I think whatever pronunciation is fine, and it’s the name of a rock. We have an SEO goal for arkose.org to surpass the rock’s Wikipedia page.
Where does your funding come from?
The Survival and Flourishing Fund.
Are you kind of like the 80,000 Hours 1-1 team?
Yes, in that we also do 1-1 support calls, and that there are many people for whom it’d make sense to do a call with both 80,000 Hours and Arkose! One key difference is that Arkose is aiming to specifically support mid-career people interested in getting more involved in technical AI safety.
I’m not a mid-career person, but I’d still be interested in a call with you. Should I request a call?
Regretfully no, since we’re currently focusing on professors, PhD students, and industry researchers and engineers who have AI / ML experience. This may expand in the future, but we’ll probably still be pretty focused on mid-career folks.
Is Arkose’s Resource page special in any way?
Generally, our resources are selected to be most helpful to professors, PhD students, and industry professionals, which is a different focus than most other resource lists. We also think arkose.org/papers is pretty cool: it’s a list of AI safety papers that you can filter by topic area. It’s still in development and we’ll be updating it over time (and if you’d like to help, please contact Vael!)
How can I help?
• If you know someone who might be a good fit for a call with Arkose, please pass along arkose.org to them! Or fill out our referral form.
• If you have machine learning expertise and would like to help us review our resources (for free or for pay), please contact vael@arkose.org.
Thanks everyone!
Thanks!
Neat! As someone who’s not on the ground and doesn’t know much about either initiative, I’m curious what Arcadia’s relationship is to the London Initiative for Safe AI (LISA). Mostly in the spirit of “if I know someone in AI safety in London, in what cases should I recommend them to each?”
This is tangential to the main point of the post, but I’m interested in a ticket type that’s just “Swapcard / unsupported virtual attendee”, where accepted people just get access to Swapcard, which lets them schedule 1-1 online videoconferencing, and that’s it.
I find a lot of the value of EAG is in 1-1s, and I’d hope that this would be an option where virtual attendees can get potentially lots of networking value for very little cost.
(Asking because I don’t want to pay a lot of money to attend an EAG where I’d mostly be taking on a mentor role, but I would potentially be happy to do some online 1-1s with people during a Schelling time.)
Update: Just learned about EAGxVirtual, which seems very relevant!
“For those applying for grants, asking for less money might make you more likely to be funded”
My guess is that it’s still good to apply for lots of money, and then you just may not be funded the full amount? You can also say what you’d do with more or less money, so that the grantmakers can take that into account in their decision.
I didn’t give a disagreement vote, but I do disagree on aisafety.training being the “single most useful link to give anyone who wants to join the effort of AI Safety research”, just because there’s a lot of different resources out there and I think “most useful” depends on the audience. I do think it’s a useful link, but most useful is a hard bar to meet!
Not directly relevant to the OP, but another post covering research taste: An Opinionated Guide to ML Research. (Also see Rohin Shah’s advice about PhD programs (search “Q. What skills will I learn from a PhD?”) for some commentary.)
Small update: Two authors gave me permission to publish their transcripts non-anonymously!
Whoops, forgot I was the owner. I tried moving those files to the drive folder, but also had trouble with it? So I’m happy to have them copied instead.
Thanks plex, this sounds great!
You can read more here!
Thanks Peter!
Update: Michael Keenan reports it is now fixed!
Thanks for the bug report, checking into it now.
No, the same set of ~28 authors read all of the readings.
The order of the readings was indeed specified:
1. Concise overview (Stuart Russell, Sam Bowman; 30 minutes)
2. Different styles of thinking about future AI systems (Jacob Steinhardt; 30 minutes)
3. A more in-depth argument for highly advanced AI being a serious risk (Joe Carlsmith; 30 minutes)
4. A more detailed description of how deep learning models could become dangerously “misaligned” and why this might be difficult to solve with current ML techniques (Ajeya Cotra; 30 minutes)
5. An overview of different research directions (Paul Christiano; 30 minutes)
6. A study of what ML researchers think about these issues (Vael Gates; 45 minutes)
7. Some common misconceptions (John Schulman; 15 minutes)
Researchers had the option to read the transcripts where transcripts were available; we said that consuming the content in either form (video or transcript) was fine.
I would love a way to interface with EAGs where (I pay no money, but) I have access to the Swapcard interface and talk only with people who request meetings with me. I often want to “attend” EAGs in this way, where I don’t interface with the conference itself but am available as a resource if people want to talk to me, scheduling remote 1:1s over Zoom. It’d be nice to be helpful to people at a time when they’re available and can see I’m available on Swapcard. Is there any kind of “virtual, restricted” option like this?
Victoria has been doing a great job taking over Arkose so far, and I’m excited to see where she takes the organization! It was hard to find someone as skilled as Victoria to lead ML researcher outreach efforts at Arkose, and I feel grateful and happy to have her at the helm.