Stampy’s AI Safety Info is a project, started by Rob Miles, to create an interactive FAQ about existential risk from AI. Our goal is to build a single resource aimed at informing all audiences, whether that means giving them a basic introduction to the concepts, addressing their objections, or onboarding them into research or other useful projects. We currently have 280 answers live on the site, and hundreds more as drafts.
After running two ‘Distillation Fellowships’, in which a small team of paid editors spent three months working to improve and expand the material, we think the site is ready for a soft launch. We’re making this post to invite the collective attention of LessWrong and the EA Forum, hoping that your feedback will help us prepare for a full launch that will use Rob’s YouTube channel to reach a large audience.
What we’d like to know
In roughly descending order of priority:
Where are our answers factually or logically wrong, especially in non-obvious ways?
Where are we leaving out key information from the answers?
What parts are hard to understand?
Where can we make the content more engaging?
What else have we overlooked?
What questions should we add?
We’re particularly interested in suggestions from experts on questions and answers related to their area of specialization – please let us know[1] if you’d be interested in having a call where you advise us on our coverage of your domain.
How to leave feedback
Click the edit button in the corner of any answer on aisafety.info to go to the corresponding Google doc:
Leave comments and suggestions on the doc.[2] We’ll process these to improve the answers.
To leave general feedback about the site as a whole, you can use this form, or comment on this post.
To discuss answers in more depth, or get involved with further volunteer writing and editing, you can join Rob Miles’s Discord or look at the ‘Get Involved’ guide on Coda.
Front end
When exploring the site, you may notice that the front end has room for improvement. We welcome feedback on our planned redesign. The site is built by volunteer developers – we’re hoping to get a prototype of this redesign working soon, but if someone reading this is willing to step up and take the lead on that project, we’ll get there faster. There’s also a more in-depth user experience overhaul coming, with a more prominent place for a chatbot that specializes in AI alignment.
Our plans
Our future plans, depending on available funding and volunteer time, are:
Use your feedback to further improve our answers, then make a full launch to the wider public when we’re confident it’s ready.
Run future distillation fellowships (watch for an announcement about the third fellowship soon).
Run more write-a-thon events – the third runs from October 6th through 9th – so participants can add to the content and potentially join as Distillation Fellows.
Improve the front end, as detailed above.
Get the chatbot (which is currently in prototype) ready to be integrated into the main interface.
Thanks for helping us turn aisafety.info into the go-to reference for clear, reliable information about AI safety!
I like this idea and it looks great!
I had a similar concept in mind that I wanted to build, but with more of a questionnaire/survey design rather than solely text articles or an open-ended chatbot – more of a hand-holding, guided experience through the concerns/debate points.
How’s it going so far? How many daily active users do you have?
Btw, two small suggestions for the chatbot:
Use a smaller max-width on the container div, a 16px font size, and 150% line height (a rough sketch is at the end of this comment):
before: https://i.imgur.com/AHjaJHD.png
after: https://i.imgur.com/jAO4ozG.png
Ask the LLM to use standard markdown in its output. This will automatically create headings and bolded elements that make the responses much easier to skim and read.
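For the first suggestion, here’s a minimal sketch in TypeScript of the styles I have in mind. The `.chat-container` selector and the 600px max-width are placeholder assumptions, since I don’t know the actual markup – the 16px font size and 150% line height are the concrete values I’m suggesting:

```ts
// Rough sketch of the suggested chat container styles.
// ".chat-container" and the 600px max-width are placeholders;
// adapt them to the site's actual markup and layout.
const container = document.querySelector<HTMLDivElement>(".chat-container");
if (container) {
  container.style.maxWidth = "600px";   // narrower column reads better
  container.style.fontSize = "16px";    // comfortable body-text size
  container.style.lineHeight = "150%";  // more breathing room between lines
}
```

(In practice these would probably live in a stylesheet rather than being set from script; the snippet just makes the values concrete.)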