Substack shill @ parhelia.substack.com
Conor Barnes
Forgot to say thanks, just used it!
Nice!! This is pretty similar to a project Nuño Sempere and I are working on, inspired by this proposal:
https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=vi7zALLALF39R6exF
I’m currently building the website for it while Nuño works on the data. I suspect these are compatible projects and there’s an effective way to link up!
There’s good discussion happening in the Discord if you want to hop in there!
Thank you! And yeah, this is an artifact of the green nodes being filled in from the implicit inverse percentage of the Ragnarok prediction rather than having their own predictions. I could link to somewhere else, but it would need to be worth breaking the consistency of the links (all Metaculus Ragnarok links).
Thanks! I hadn’t thought of user interviews, that’s a great idea!
This isn’t exactly what I’m looking for (though I do think that concept needs a word).
The way I’m conceptualizing it right now is that there are three non-existential outcomes:
1. Catastrophe
2. Sustenance / Survival
3. Flourishing
If you look at Toby Ord’s prediction, he includes a number for flourishing, which is great. There isn’t a matching prediction in the Ragnarok series, so I’ve squeezed 2 and 3 together as a “non-catastrophe” category.
Agreed. I think it needs a ‘name’ as a symbol, but the current one is a little fudged. My placeholder for a while was ‘the tree of forking paths’ as a Borges reference, but that was a bit too general...
A positive title would definitely help! I’ll think on this.
Thanks for all the feedback! I think the buffs to interactivity are all great ideas. They should mostly be implemented this week.
I love Possible Worlds Tree! It’s aligned with the optimistic outlook, conveys the content better, and has a mythology pun. I couldn’t be happier. Messaging re: bounty!
The probability of any one story being “successful” is very low, and basically up to luck, though connections to people with the power to move stories (e.g. publishers, directors) would significantly help.
Most x-risk scenarios are perfect material for compelling and entertaining stories. They tap into common tropes (the hubris of humans and scientists), are near-future disaster scenarios, and can have opposed hawk and dove characters. I imagine that a successful x-risk movie could have a narrative shaped like Jurassic Park or The Day After Tomorrow.
My actionable advice is that EA writers and potential EA writers should write EA fiction alongside their other fiction, and that we should explore connections with publishers.
As a side-note, I wrote an AI-escapes-the-box story the other week, and have since used Midjourney to illustrate it, as is fitting: https://twitter.com/Ideopunk/status/1553003805091979265. If anybody would like to read the first draft, message me!
I’m really glad to hear it! Polishing is ongoing. Replied on GH too!
Love to see this, thank you Nuño!
It’s pretty common in values-driven organisations to ask for a degree of value-alignment. The other day I helped a friend with a resume for an organisation which asked applicants to care about its feminist mission.
In my opinion this is a reasonable thing to ask for and expect. Sharing (overarching) values improves decision-making, and requiring it can help prevent value drift in an org.
Quite happy to see this on the forum!
“Give a man money for a boat, he already knows how to fish” would play off of the original formulation!
Thanks for the Possible Worlds Tree shout-out!
I haven’t had the capacity to improve it (and won’t for a long time), but I agree that a dashboard would be excellent. I think it could be quite valuable even if the choice of numbers isn’t perfect.
I hadn’t seen the previous dashboard, but I think the new one is excellent!
I’m really sorry you’re experiencing this. I think it’s something more and more people are contending with, so you aren’t alone, and I’m glad you wrote this. As somebody who’s had bouts of existential dread myself, there are a few things I’d like to suggest:
With AI, we fundamentally do not know what is to come. We’re all making our best guesses—as you can tell by finding 30 different diagnoses! This is probably a hint that we are deeply confused, and that we should not be too confident that we are doomed (or, to be fair, too confident that we are safe).
For this reason, it can be useful to practice thinking through the models on your own. Start making your own guesses! I also often find the technical and philosophical details beyond me, but that doesn’t mean we can’t think through the broad strokes. “How confident am I that instrumental convergence is real?” “Do I think evals for new models will become legally mandated?” “Do I think they will be effective at detecting deception?” At the least, this might help focus your content consumption so it isn’t an amorphous blob of dread. I describe it that way because the invasion of Ukraine similarly sent me reading as much as I could. Developing a model by focusing on specific, concrete questions (e.g. “What events would presage a nuclear strike?”) helped me transform my anxiety from “Everything about this worries me” into something closer to “Events X and Y are probably bad, but event Z is probably good”.
I find it very empowering to work on the problems that worry me, even though my work is quite indirect. AI safety labs have content writing positions on occasion. I work on the 80,000 Hours job board and we list roles in AI safety. Though these are often research and engineering jobs, it’s worth keeping an eye out. It’s possible that proximity to the problem would accentuate your stress, to be fair, but I do think it trades against the feeling of helplessness!
C. S. Lewis has a take on dealing with the dread of nuclear extinction that I’m very fond of and think is applicable: ‘How are we to live in an atomic age?’ I am tempted to reply: ‘Why, as you would have lived in the sixteenth century when the plague visited London almost every year...’
I hope this helps!
I second interest in a private submission / private forum option! I intend to submit my entry to a few places soon, but that won’t be possible if it’s “published” by submitting it here. If there isn’t a private option I probably won’t submit here.