Ah I’m sorry, I only scanned the post and missed your sentence at the top. Thanks!
Alexandra Bos
Introducing 11 New AI Safety Organizations—Catalyze’s Winter 24/25 London Incubation Program Cohort
AI Safety Seed Funding Network—Join as a Donor or Investor
Deciding What Project/Org to Start: A Guide to Prioritization Research
Re: Rafael’s conclusion/the takeaways: “Much of Kat and Emerson’s effective altruist work focuses on mentoring young effective altruists and incubating organizations. I believe the balance of the evidence—much of it from their own defense of their actions—shows that they are hilariously ill-suited for that job.”
I don’t think that the recent posts give a balanced view of their work (e.g. I haven’t seen any section anywhere titled ‘Positive things Nonlinear did/achieved’). I don’t think there is enough information to judge to what extent they are suitable for their mentoring/incubating job.
In an attempt to bring in some balancing anecdote, I personally have had positive experiences doing regular coaching calls with Kat over the past year and feel that her input has been very helpful.
Thanks for making this series! I added it to this overview of project ideas: Impactful Projects and Organizations to Start—List of Lists.
Commenting here because I thought it might be useful for those drawn to this post to know this list of lists exists.
Congrats, Brad! I’ll watch this tonight.
I wanted to link this post here in case someone who might benefit from it sees it: How to get EA ideas onto the TEDx stage
AI Safety Research Organization Incubation Program—Expression of Interest
To share some anecdotal data: I personally have had positive experiences doing regular coaching calls with Kat this year and feel that her input has been very helpful.
I would encourage us all to put off updating until we also get the other side of the story; that generally seems like good practice to me whenever it is possible.
Thanks for the post!
A related question: Is LTFF more likely to fund a small AI Safety research group than to fund individual independent AI Safety researchers?
So could we see a scenario where, if persons A, B, or C applied individually for an independent research grant, they might not meet your funding bar, but where similarly impressive people with a similarly good research agenda, applying together as a research group, would be a more attractive funding opportunity for you?
Thanks for publishing this! I added it to this list of impactful org/project ideas.
Hi Rime, I’m not aware of any designated online space for independent alignment researchers either. Peer support networks are a central part of the plan for Catalyze, so hopefully we’ll be able to help you out with that soon! For now, I just created a channel called ‘independent-research’ on the AI Alignment Slack (as Roman suggested).
As for the fiscal sponsorship, it should not place any constraints on the independence of the research. The benefits would be easier fundraising, administrative support, tax-exempt status, and increased credibility from being affiliated with an organization (which probably sounds better than being independent, especially outside of EA circles).
I currently don’t see risks there that would restrict independent researchers’ independence.
Fair point, I understand what you meant now. I think these would also be great resources for us to potentially connect the independent researchers we incubate with.
The current plan is to run a pilot starting in July.
Great point! They are currently compiling their results on what people have been doing post-MATS; I’m also curious what the results will be.
I understand it may look quite similar to other initiatives because I am only giving a very broad description in this post. Let me clarify a few things that will highlight the differences from the other orgs/projects you mention:
- Catalyze’s focus is on the post-SERI MATS part of the pipeline, targeting people who have already done a lot of upskilling (e.g. have already completed AI Safety Camp or SERI MATS).
- The current plan is not to fund the researchers but to support already-funded researchers (‘hiring’ them is just another way of saying that their funding would not be paid out to them directly but would first go through an org with tax-deductibility benefits, e.g. a 501(c)(3), and then to them), so there is no overlap with LTFF there. One exception to supporting already-funded researchers is helping not-yet-funded researchers with the fundraising process.
I don’t really see similarities with Nonlinear apart from us both calling ourselves ‘incubators’. The same goes for ENAIS, apart from them also connecting people with each other.
In short, I agree these interventions are not new. I think packaging them up together, making a few additions, and thereby making them easily accessible to this specific target group is where most of the added value lies.
Thanks for sharing! I skimmed through the things you linked but will read them in more detail soon.
Amazing, thanks!
I understand the tension you are describing. A question for clarification: I personally am not familiar with the claim that “EA is already known for shoving women into community building/operations roles”; where does this sense come from?
And I think that’s another tangible proposal you’re making here which I’d like to draw attention to and make explicit, to see what others think: creating quotas for how many spots have to go to women at conferences, organizations, fellowships, etc.
Updated it, thanks Jai! Feel free to PM me if there’s anything else to change.