I work for Apart Research as co-director and research lead. Our aim is to accelerate progress in AI safety by helping talented people try out their fit for AI safety research in short, engaging research sprints. We then help the most promising participants develop their research further during the Apart Lab fellowship and get it published. This can be a very significant shortcut in the personal development of our fellows. We keep our events low-barrier and globally accessible by running our programs in a remote-first fashion.
Jason Hoelscher-Obermaier
Thoughtful comment! Just want to throw in that suicide should not be considered in isolation, imo. While every avoidable death is horrible, ofc, I do think that suicide has particularly bad knock-on effects.
I’m interested in learning more!
How many active users do your tools currently have? Any examples of standout successes from these tools?
Do you have some metrics on impact/adoption within EA/AI safety orgs?
What’s the competitive landscape here? I’m slightly worried that this kind of initiative might be better run as a for-profit, independent of EA.
If you’re not a book person, here are the best articles to read before launching a startup.
The link seems to be broken. Anyone happen to know where this should be pointing?
Demonstrate and evaluate risks from AI to society at the AI x Democracy research hackathon
I’d be really excited for ML researchers to register their forecasts about what AI systems built on language models will be able to do in the next couple of years.
Great call to action. What’s the best place/way to do it?
I made a shortlist of around 15 from a quick scan and then read those in more detail (and discussed my biggest concerns with Claude for any that seemed interesting). I want to say that the process of reading many funding requests next to each other was interesting and, dare I say, almost fun!