Conor Barnes
I think this is a joke, but for those who have less-explicit feelings in this direction:
I strongly encourage you to not join a totalizing community. Totalizing communities are often quite harmful to members, and being in one makes it hard to reason well. Insofar as an EA org is a hardcore totalizing community, it is doing something wrong.
I really appreciated reading this, thank you.
Rereading your post, I’d also strongly recommend making it a priority to find ways not to spend all your free time on this. Not only is that level of fixation one of the worst things people can do to make themselves suffer, it also makes it very hard to think straight and figure things out!
One thing I’ve seen suggested is setting aside dedicated time each day to research your questions. This is a compromise that frees up the rest of your time for things that don’t hurt your head. And hang out with friends who are good at distracting you!
I’m really sorry you’re experiencing this. I think it’s something more and more people are contending with, so you aren’t alone, and I’m glad you wrote this. As somebody who’s had bouts of existential dread myself, there are a few things I’d like to suggest:
With AI, we fundamentally do not know what is to come. We’re all making our best guesses—as you can tell by finding 30 different diagnoses! This is probably a hint that we are deeply confused, and that we should not be too confident that we are doomed (or, to be fair, too confident that we are safe).
For this reason, it can be useful to practice thinking through the models on your own. Start making your own guesses! I also often find the technical and philosophical details beyond me, but that doesn’t mean we can’t think through the broad strokes. “How confident am I that instrumental convergence is real?” “Do I think evals for new models will become legally mandated?” “Do I think they will be effective at detecting deception?” At the least, this might help focus your content consumption instead of leaving it an amorphous blob of dread. I describe it that way because the invasion of Ukraine similarly sent me reading as much as I could, and developing a model by focusing on specific, concrete questions (e.g. what events would presage a nuclear strike?) helped me transform my anxiety from “Everything about this worries me” into something closer to “Events X and Y are probably bad, but event Z is probably good”.
I find it very empowering to work on the problems that worry me, even though my work is quite indirect. AI safety labs have content writing positions on occasion. I work on the 80,000 Hours job board and we list roles in AI safety. Though these are often research and engineering jobs, it’s worth keeping an eye out. It’s possible that proximity to the problem would accentuate your stress, to be fair, but I do think it trades against the feeling of helplessness!
C. S. Lewis has a take on dealing with the dread of nuclear extinction that I’m very fond of and think is applicable: ‘How are we to live in an atomic age?’ I am tempted to reply: ‘Why, as you would have lived in the sixteenth century when the plague visited London almost every year...’
I hope this helps!
I hadn’t seen the previous dashboard, but I think the new one is excellent!
Thanks for the Possible Worlds Tree shout-out!
I haven’t had capacity to improve it (and won’t for a long time), but I agree that a dashboard would be excellent. I think it could be quite valuable even if the number choice isn’t perfect.
“Give a man money for a boat, he already knows how to fish” would play off of the original formulation!
Quite happy to see this on the forum!
It’s pretty common in values-driven organisations to ask for a degree of value-alignment. The other day I helped out a friend with a resume for an organisation which asked that applicants care about its feminist mission.
In my opinion this is a reasonable thing to ask for and expect. Sharing (overarching) values improves decision-making, and requiring it can help prevent value drift in an org.
Love to see this, thank you Nuño!
I’m really glad to hear it! Polishing is ongoing. Replied on GH too!
The probability of any one story being “successful” is very low, and basically up to luck, though connections to people with the power to move stories (e.g. publishers, directors) would significantly help.
Most x-risk scenarios are perfect material for compelling and entertaining stories. They tap into common tropes (the hubris of humans and scientists), are near-future disaster scenarios, and can have opposed hawk and dove characters. I imagine that a successful x-risk movie could have a narrative shaped like Jurassic Park or The Day After Tomorrow.
My actionable advice is that EA writers and potential EA writers should write EA fiction alongside their other fiction and we should explore connections with publishers.
As a side-note, I wrote an AI-escapes-the-box story the other week, and have since used Midjourney to illustrate it, as is fitting: https://twitter.com/Ideopunk/status/1553003805091979265. If anybody would like to read the first draft, message me!
I love Possible Worlds Tree! It’s aligned with the optimistic outlook, conveys the content better, and has a mythology pun. I couldn’t be happier. Messaging re: bounty!
Thanks for all the feedback! I think the buffs to interactivity are all great ideas. They should mostly be implemented this week.
A positive title would definitely help! I’ll think on this.
Agreed. I think it needs a ‘name’ as a symbol, but the current one is a little fudged. My placeholder for a while was ‘the tree of forking paths’ as a Borges reference, but that was a bit too general...
This isn’t exactly what I’m looking for (though I do think that concept needs a word).
The way I’m conceptualizing it right now is that there are three non-existential outcomes:
1. Catastrophe
2. Sustenance / Survival
3. Flourishing
If you look at Toby Ord’s prediction, he includes a number for flourishing, which is great. There isn’t a matching prediction in the Ragnarok series, so I’ve squeezed 2 and 3 together as a “non-catastrophe” category.
Thanks! I hadn’t thought of user interviews, that’s a great idea!
Thank you! And yeah, this is an artifact of the green nodes being filled in from the implicit inverse percentage of the Ragnarok prediction rather than having their own predictions. I could link to somewhere else, but it would need to be worth breaking the consistency of the links (all Metaculus Ragnarok links).
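To make the “implicit inverse percentage” concrete, here’s a minimal sketch of how a green node can be derived as the complement of the corresponding Ragnarok prediction; the numbers and variable names are hypothetical, not pulled from the dashboard or Metaculus:

```python
# Hypothetical Metaculus Ragnarok community prediction for a catastrophe question
ragnarok_catastrophe_prob = 0.03

# The green "non-catastrophe" node has no prediction of its own;
# it is just the implicit inverse of the catastrophe probability.
non_catastrophe_prob = 1 - ragnarok_catastrophe_prob

print(f"Non-catastrophe (green node): {non_catastrophe_prob:.0%}")  # -> 97%
```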
Hi Remmelt,
Just following up on this — I agree with Benjamin’s message above, but I want to add that we actually did add links to the “working at an AI lab” article in the org descriptions for leading AI companies after we published that article last June.
It turns out that the links got accidentally removed a few weeks ago while we were making some related changes in Airtable, and we didn’t notice they were missing, so thanks for bringing this to our attention. We’ve added them back in and think they give good context for job board users, and we’re certainly happy for more people to read our articles.
We also decided to remove the prompt engineer / librarian role from the job board, since we concluded it’s not above our current bar for inclusion. I don’t expect everyone to always agree with the judgement calls we make, but we take these decisions seriously, and we think it’s important for people to think critically about their career choices.