In that case, your strategy just feeds the labs talent while undermining their social circles' ability to oppose them.
It seems like your model has influence flowing only one way: the lab worker influences their friends, but not the other way around. I think two-way influence is a more accurate model.
Another option is to ask your friends to monitor you so you don’t get ideologically captured, and hold an intervention if it seems appropriate.
The track record of anticapitalist advocacy seems quite poor. See this free book: Socialism: The Failed Idea That Never Dies.
If you’re doing anticapitalist advocacy for EA reasons, I think you need a really clear understanding of why such advocacy has caused so much misery in the past, and how your advocacy will avoid those traps.
I’d say what’s needed is not anticapitalist advocacy so much as small-scale prototyping of alternative economic systems, ones backed by strong theoretical arguments for how they will align incentives better and scale well past Dunbar’s number.
You don’t need a full replacement for capitalism to test ideas and see results. For example, central planning often fails due to corruption, so a well-designed alternative system will probably need an anti-corruption mechanism, and such a mechanism could also be usefully applied within an ordinary capitalist democracy.
I concede that AI companies are behaving in a harmful way, but I doubt that anticapitalist advocacy is a particularly tractable way to address that, at least in the short term.