However, talented individuals who have invested in upskilling themselves to go do AIS research (e.g. SERI MATS graduates) are largely unable to secure research positions.
It would be interesting to see the actual numbers; I think Ryan Kidd should have them.
Great point! They are currently compiling results on what people have been doing post-MATS; I’m also curious what the results are.
The things that the proposed startup is going to do seem to overlap in various ways with MATS, AI Safety Camp, Orthogonal (https://www.lesswrong.com/posts/b2xTk6BLJqJHd3ExE/orthogonal-a-new-agent-foundations-alignment-organization), the European Network for AI Safety (ENAIS, https://forum.effectivealtruism.org/posts/92TAmcppCL7t54Ajn/announcing-the-european-network-for-ai-safety-enais), Nonlinear.org, and the LTFF (if you plan to ‘hire’ researchers and pay them a salary, i.e., effectively fund them, you essentially plan to increase the total fundraising for AI safety, which is currently the LTFF’s role).
Detailing the similarities, differences, and potential partnerships with these projects and orgs would be useful.
I understand it may look quite similar to other initiatives because I am only giving a very broad description in this post. Let me clarify a few things that highlight the differences from the orgs/projects you mention:
-Catalyze’s focus is on the post-SERI MATS part of the pipeline, i.e. targeting people who have already done a lot of upskilling (e.g. completed AI Safety Camp or SERI MATS).
-The current plan is not to fund researchers ourselves but to support already-funded researchers (‘hiring’ them just means their funding would not be paid out to them directly, but would first go through an org with tax-deductibility benefits, e.g. a 501(c)(3), and then to them), so there is no overlap with the LTFF. The one exception is that we would help not-yet-funded researchers with the fundraising process.
-I don’t really see similarities with Nonlinear apart from both of us calling ourselves ‘incubators’. The same goes for ENAIS, apart from them also connecting people.
In short, I agree these interventions are not new. I think packaging them up together, making a few additions, and thereby making them easily accessible to this specific target group is where most of the added value lies.
Re: Nonlinear, they directly offer services that you plan to offer as well:
The Nonlinear Network: funders get access to AI safety deal flow similar to that of large EA funders; people working in AI safety can apply to >45 AI safety funders in one application.
The Nonlinear Support Fund: you automatically qualify for mental health or productivity grants if you work full-time in AI safety.
(Note that both are targeted not only at AI safety founders, as it may seem from the website, but at independent researchers as well.)
Fair point, I understand what you meant now. I think these would be great resources to potentially connect the independent researchers we incubate with as well.
Interesting, have you had a chance to pilot or trial this with any researchers so far?
The current plan is to run a pilot starting in July.
This seems like a great opportunity. It is now live on the EA Opportunity Board!
Amazing, thanks!
Cool! I do alignment research independently and it would be nice to find an online hub where other people do this. The commonality I’m looking for is something like “nobody is telling you what to do, you’ve got to figure it out for yourself.”
Alas, I notice you don’t have a Discord, Slack, or any such thing yet. Are there plans for a peer support network?
Also, what obligations come with being hired as an ‘employee’? What will be the constraints on the independence of the independent research?
Fiscal sponsorship: hiring funded independent researchers as ‘employees’
→ take away operational tasks which distract from research & help them build better career capital through institutional affiliation
Hi Rime, I’m not aware of any designated online space for independent alignment researchers either. Peer support networks are a central part of the plan for Catalyze, so hopefully we’ll be able to help you out with that soon! I just created a channel on the AI Alignment Slack called ‘independent-research’ for now (as Roman suggested).
As for the fiscal sponsorship, it should not place any constraints on the independence of the research. The benefits would be easier fundraising, administrative support, tax-exempt status, and increased credibility from being affiliated with an organization (which probably sounds better than being independent, especially outside of EA circles).
I currently don’t see risks there that would restrict independent researchers’ independence.
That’s very kind of you, thanks so much.
I think it’s better not to increase the number of distinct Slack spaces without necessity. We can create a channel for independent researchers in the AI Alignment Slack (see https://coda.io/@alignmentdev/alignmentecosystemdevelopment).
Thanks!
Although, I think having many distinct spaces for small groups leads to better research outcomes, for network-epistemology reasons, as long as the links between peripheral groups & central hubs are clear. It’s the memetic equivalent of peripatric vs parapatric speciation: if there’s nearly panmictic “meme flow” between all groups, then individual groups will have a hard time specialising toward the research niche they’re ostensibly trying to fill.
In bio, there’s modelling (& some observation) suggesting that the range of a species can be limited by the rate at which peripheral populations mix with the centre.[1] Assuming that the territory changes the further out you go, the fitness of pioneering subpopulations will depend on how fast they can adapt to those changes. But if they’re constantly mixing with the centroid, adaptive mutations are diluted and expansion slows down.
As you can imagine, this homogenisation gets stronger if the fitness of individual selection units depends on network effects. Genes have this problem to a lesser degree, but memes are special because they nearly always show something like a strong Allee effect[2]: proliferation rate is proportional to prevalence, but is often negative below a prevalence threshold.
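To make that concrete, here is a minimal sketch of the strong-Allee dynamics I have in mind (the functional form and parameter values are illustrative assumptions on my part, not taken from any particular source):

```python
def meme_growth_rate(p: float, r: float = 1.0, a: float = 0.2, K: float = 1.0) -> float:
    """Strong Allee effect on meme prevalence p in [0, 1].

    Growth is negative below the threshold prevalence `a` (too few
    peers endorse the idea, so it dies out) and positive between `a`
    and the saturation level `K`. Illustrative toy model only.
    """
    return r * p * (p / a - 1.0) * (1.0 - p / K)
```

For p < a the rate is negative, so an idea seeded below the threshold fades out even though it would thrive once established.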
Most people are reluctant to share or adopt new ideas (memes) unless they feel safe knowing their peers approve of them. Innovators who “oversell themselves” by being too novel too quickly, before they have the requisite “social status license”, are labelled outcasts, and associating with them is reputationally risky. So the conversation topics that end up spreading are usually marginal contributions that people know how to cheaply evaluate.
By segmenting the market for ideas into a small-world network of tight-knit groups loosely connected by central hubs, you let research groups specialise to their niches while feeling less pressure to keep up with the global conversation. We don’t need everybody to be correct; we want the community to explore broadly so that at least one group finds the next universally-verifiable great solution. If everybody else gets stuck in a variety of delusional echo chambers, their impact is usually limited to themselves, so the potential upside seems greater. Imo. Maybe.
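As a toy illustration of this (entirely my own construction, assuming a 1-D “idea space” with one niche optimum per group; all names and parameters here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def niche_misfit(mix: float, n_groups: int = 10, steps: int = 500,
                 lr: float = 0.05) -> float:
    """Each group adapts its position in idea-space toward its own
    niche optimum, but is also pulled toward the community mean with
    strength `mix` (the panmictic 'meme flow' rate). Returns the mean
    squared distance from the niche optima after `steps` rounds."""
    optima = np.linspace(-1.0, 1.0, n_groups)  # distinct research niches
    ideas = np.zeros(n_groups)                 # everyone starts at the centre
    for _ in range(steps):
        ideas += lr * (optima - ideas)            # local adaptation
        ideas += mix * (ideas.mean() - ideas)     # homogenising meme flow
        ideas += rng.normal(0.0, 0.01, n_groups)  # idiosyncratic drift
    return float(np.mean((ideas - optima) ** 2))

for mix in (0.0, 0.02, 0.1, 0.5):
    print(f"mix={mix:4.2f}  misfit={niche_misfit(mix):.3f}")
```

With mix near zero each group settles into its own niche; as mix grows, every group’s steady state is pulled toward the centroid (roughly lr·optimum/(lr + mix)), so the peripheral niches never get filled: the memetic analogue of the diluted pioneer populations above.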
H/T Holly. Also discussed with ChatGPT here.
• Increasing the number of research bets: additional independent research might increase the number of research directions being pursued. After all, as independent researchers, individuals have more agency over deciding which research agendas to pursue. Pursuing more research bets could be very beneficial in this pre-paradigmatic field.
I somewhat disagree that this is a good way to increase the number of “bets”, where a “bet” is taken to be an idiosyncratic framework or theory. I explained this position here: https://www.alignmentforum.org/posts/FnwqLB7A9PenRdg4Z/for-alignment-we-should-simultaneously-use-multiple-theories#Creating_as_many_new_conceptual_approaches_to_alignment_as_possible__No and also touched on it in discussion with Ryan Kidd in the comments on this post: https://www.lesswrong.com/posts/bRtP7Mub3hXAoo4vQ/an-open-letter-to-seri-mats-program-organisers.
But independent researchers are not obliged to craft their own theories, of course; they could work within existing, established frameworks (and collaborate with other researchers who work in those frameworks) while remaining organisationally independent.
Thanks for sharing! I skimmed through the things you linked but will read them in more detail soon.