Ah I’m sorry, I only scanned the post and missed your sentence at the top. Thanks!
Alexandra Bos
Re Rafael's conclusion/takeaways: "Much of Kat and Emerson's effective altruist work focuses on mentoring young effective altruists and incubating organizations. I believe the balance of the evidence—much of it from their own defense of their actions—shows that they are hilariously ill-suited for that job."
I don’t think that the recent posts give a balanced view of their work (e.g. I haven’t seen any section anywhere titled ‘Positive things Nonlinear did/achieved’). I don’t think there is enough information to judge to what extent they are suitable for their mentoring/incubating job.
In an attempt to bring in some balancing anecdote, I personally have had positive experiences doing regular coaching calls with Kat over the past year and feel that her input has been very helpful.
Thanks for making this series! I added it to this overview of project ideas: Impactful Projects and Organizations to Start—List of Lists.
Commenting here because I thought it might be useful to know this list of lists exists for those who are drawn to this post.
Congrats Brad! I’ll watch this tonight
I wanted to link this post here in case someone who might benefit from it sees it here: How to get EA ideas onto the TEDx stage
To share some anecdotal data: I personally have had positive experiences doing regular coaching calls with Kat this year and feel that her input has been very helpful.
I would encourage us all to put off updating until we also get the second side of the story—that generally seems like good practice to me whenever it is possible.
Thanks for the post!
A related question: Is LTFF more likely to fund a small AI safety research group than to fund individual independent AI Safety researchers?
So could we see a scenario where, if person A, B or C apply individually for an independent research grant, they might not meet your funding bar. But where, if similarly impressive people with a similarly good research agenda applied as a research group, they would be a more attractive funding opportunity for you?
Thanks for publishing this! I added it to this list of impactful org/project ideas.
Hi Rime, I’m not aware of any designated online space for independent alignment researchers either. Peer support networks are a central part of the plan for Catalyze so hopefully we’ll be able to help you out with that soon! I just created a channel on the AI Alignment slack called ‘independent-research’ for now (as Roman suggested).
As for the fiscal sponsorship, it should not place any constraints on the independence of the research. The benefits would be that fundraising can be easier, you can get administrative support, tax-exempt status, and increased credibility because you are affiliated with an organization (which probably sounds better than being independent, especially outside of EA circles).
I currently don’t see risks there that would restrict independent researchers’ independence.
Fair point, I understand what you meant now. I think these would also be great resources to potentially connect the independent researchers we would incubate with.
The current plan is to run a pilot starting in July.
Great point! They are currently compiling their results on what people have been doing post-MATS; I'm also curious what the results are.
I understand it may look quite similar to different initiatives because I am only giving a very broad description in this post. Let me clarify a few things which will highlight differences with the other orgs/projects you mention:
-Catalyze's focus is on the post-SERI MATS part of the pipeline (so targeting people who have already done a lot of upskilling, e.g. already done AI Safety Camp or SERI MATS).
-The current plan is not to fund the researchers but to support already-funded researchers ('hiring' them is just another way of saying their funding would not be paid out directly to them but would first go through an org with tax-deductibility benefits, e.g. a 501(c)(3), and then go to them), so there is no overlap with LTFF there. One exception to supporting already-funded researchers is helping not-yet-funded researchers in the fundraising process.
I don't really see similarities with Nonlinear apart from both of us naming ourselves 'incubators'. The same goes for ENAIS, apart from them also connecting people together.
In short, I agree these interventions are not new. I think packaging them up together, making a few additions, and thereby making them easily accessible to this specific target group is most of the added value here.
Thanks for sharing! I skimmed through the things you linked but will read them in more detail soon.
Amazing, thanks!
I understand the tension you are describing. A question for clarification: I personally am not familiar with your description that "EA is already known for shoving women into community building/operations roles". Where does this sense come from?
And I think that's another tangible proposal you're making here which I'd like to draw attention to and make explicit to see what others think: creating quotas for how many spots have to go to women at conferences, organizations, fellowships, etc.
Hi Larks, thank you for taking the time to articulate your concerns! I will respond to a few below:
Concern 1: passing off evidential burden
• I agree it would have been preferable if we had made a solid case for why gender diversity is important in this post.
-> To explain this choice: we did not feel we could do this topic justice in the limited time we had available, so we decided to prioritize sharing the information in this post instead. Another reason for focusing on the content of the post is that we had a somewhat rare opportunity to get this many people's input on the topic all at once, which I would say gave us some comparative advantage for writing about this rather than writing about why/whether gender diversity is important.
• As you specifically mention that you think “relying on posts that received a lot of justified criticism” is a bad idea, do you have suggestions for different posts that you found better?
Concern 2: “Some of your proposals, like adopting “the patriarchy” as a cause area, or rejecting impartiality in favour of an “ethics of care”, are major and controversial changes”
• Something I'd like to point out here: these are not our proposals. As we mention in the post, 'The views we describe in this post don't necessarily correspond with our (Veerle Bakker's & Alexandra Bos') own but rather we are describing others' input.' For more details on this process, I'd recommend taking a look at the Methodology & Limitations if you haven't already.
-> Overall, I think the reasons you mention for not taking on the proposals under ‘Adjusting attributes of EA thought’ are very fair and I probably agree with you on them.
• A second point regarding your concern: I think you are conflating the underlying reasons participants suspected are behind the gender gap with the solutions they propose.
However, saying 'X might be the cause of problem Y' is not the same as saying 'we should do the opposite of X so that problem Y is solved'.
Therefore, I don’t feel that, for instance, your claim that a proposal in this post was to adopt “the patriarchy” as cause area fairly represents the written content. What we wrote is that “One of these topics is how EA does not focus specifically on gender inequality issues in its thinking (e.g. ‘the patriarchy’ is not a problem recommended to work on by the EA community).” This is a description of a concern some of the participants described, not a solution they proposed. The same goes for your interpretation that the proposal is “rejecting impartiality in favour of an ethics of care”.
Thank you for the addition! I added it.
Good point, it also reminded me to add the tags to the main text of this post as another tip for where to look. Thank you!
Awesome, sounds like you have cracked the code :)
This makes me wonder: how did you make your way into 6 TEDx line-ups? Did you reach out to organizers as I described in the post, or did you take a different approach?
Updated it, thanks Jai! Feel free to PM if there’s anything else to change