Cool! I do alignment research independently and it would be nice to find an online hub where other people do this. The commonality I’m looking for is something like “nobody is telling you what to do, you’ve got to figure it out for yourself.”
Alas, I notice you don’t have a Discord, Slack, or any such thing yet. Are there plans for a peer support network? Also, what obligations come with being hired as an ‘employee’? What will be the constraints on the independence of the independent research?
Fiscal sponsorship: hiring funded independent researchers as ‘employees’ → take away operational tasks which distract from research & help them build better career capital through institutional affiliation
Hi Rime, I’m not aware of any designated online space for independent alignment researchers either. Peer support networks are a central part of the plan for Catalyze so hopefully we’ll be able to help you out with that soon! I just created a channel on the AI Alignment slack called ‘independent-research’ for now (as Roman suggested).
As for the fiscal sponsorship, it should not place any constraints on the independence of the research. The benefits would be easier fundraising, administrative support, tax-exempt status, and increased credibility from being affiliated with an organization (which probably sounds better than being independent, especially outside of EA circles).
I currently don’t see risks there that would restrict independent researchers’ independence.
That’s very kind of you, thanks much.
I think it’s better not to increase the number of distinct Slack spaces without necessity. We can create a channel for independent researchers in the AI Alignment slack (see https://coda.io/@alignmentdev/alignmentecosystemdevelopment).
Although, I think many distinct spaces for small groups lead to better research outcomes for network-epistemology reasons, as long as links between peripheral groups & central hubs are clear. It’s the memetic equivalent of peripatric vs parapatric speciation. If there’s nearly panmictic “meme flow” between all groups, then individual groups will have a hard time specialising toward the niche they’re ostensibly trying to study.
In bio, there’s modelling (& some observation) suggesting that the range of a species can be limited by the rate at which peripheral populations mix with the centre. Assuming that the territory changes the further out you go, the fitness of pioneering subpopulations will depend on how fast they can adapt to those changes. But if they’re constantly mixing with the centroid, adaptive mutations are diluted and expansion slows down.
As you can imagine, this homogenisation gets stronger if the fitness of individual selection units depends on network effects. Genes have this problem to a lesser degree, but memes are special because they nearly always show something like a strong Allee effect: proliferation rate is proportional to prevalence, but is often negative below a prevalence threshold.
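To make the strong-Allee-effect claim concrete, here is a minimal sketch (parameter values are illustrative assumptions, not from the original): a meme whose per-capita growth rate is negative below a critical prevalence A and positive between A and the carrying capacity K dies out when seeded below the threshold, and spreads to saturation when seeded above it.

```python
def allee_step(n, r=0.5, A=0.2, K=1.0, dt=0.1):
    """One Euler step of the strong Allee model dn/dt = r*n*(n/A - 1)*(1 - n/K).

    n is prevalence (fraction of the population holding the meme);
    growth is negative for 0 < n < A and positive for A < n < K.
    """
    return n + dt * r * n * (n / A - 1) * (1 - n / K)

def simulate(n0, steps=500):
    n = n0
    for _ in range(steps):
        n = allee_step(n)
    return n

# Seeded below the threshold A=0.2, the meme dies out;
# seeded above it, prevalence converges to the carrying capacity K=1.
low = simulate(0.1)
high = simulate(0.3)
```

The threshold is why a small, tight-knit group matters: it can push a new idea's local prevalence above A inside the group, whereas the same idea diluted across the global conversation stays below the threshold and goes extinct.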
People are usually reluctant to share or adopt new ideas (memes) unless they feel safe knowing their peers approve. Innovators who “oversell themselves” by being too novel too quickly, before they have the requisite “social status license”, are labelled outcasts, and associating with them becomes reputationally risky. The conversation topics that end up spreading are usually marginal contributions that people know how to cheaply evaluate.
By segmenting the market for ideas into a small-world network of tight-knit groups loosely connected by central hubs, you enable research groups to specialise to their niche while feeling less pressure to keep up with the global conversation. We don’t need everybody to be correct; we want the community to explore broadly so that at least one group finds the next universally-verifiable great solution. If everybody else gets stuck in a variety of delusional echo chambers, their impact is usually limited to themselves, so the potential upside seems greater. Imo. Maybe.
H/T Holly. Also discussed with ChatGPT here.