There’s an aggregated list of AI safety research projects available on AI Safety Ideas (forum post), and though it’s a bit messy in there at the moment, it should offer quite high-quality leads for a hackathon as well! E.g. Neel Nanda and I will add a bunch of project ideas to the Interpretability Hackathon list over the next couple of days.
Watching.