What about corporations or nation states during times of conflict: do you think it’s accurate to model them as roughly as ruthless as future AI agents in pursuing their own goals?
They don’t have the same psychological makeup as individual people, they have a strong tradition and culture of maximizing self-interest, and they face strong incentives and selection pressures to maximize fitness (for companies, to profit; for nation states, to ensure their own survival) lest they be outcompeted by more ruthless competitors. While I’d expect these entities to show some care for goals beyond self-interest, I think the most reliable predictor of their behavior, on average, is self-interest maximization.
If they’re roughly as ruthless as future AI agents, and we’ve developed institutions that somewhat robustly align their ambitions with pro-social action, then we should have some optimism that we can find similarly productive systems for working with misaligned AIs.
One other potential suggestion: Organizers should consider focusing on their own career development rather than field-building if their timelines are shortening and they think they can have a direct impact sooner than an indirect one through field-building. Personally, I regret much of the time I spent starting an AI safety club in college because it traded off against building skills and experience in direct work. I think my impact through direct work has been significantly greater than my impact through field-building, and I should’ve spent more time on it in college.