Sorry to hear about your experience!
Which countries are at the top/bottom of the priority list to be funded? [And why?]
I think this is a great question, and I suspect it’s somewhat under-considered. I looked into this a couple years ago as a short research project, and I’ve heard there hasn’t been a ton more work on it since then. So my guess is that the reasoning might be somewhat ad-hoc or intuitive, but tries to take into account important factors like “size / important-seemingness of country for EA causes”, talent pool for EA, and ease of movement-building (e.g. do we already have high-quality content in the relevant language).
My guess is that:
There are some valuable nuances that could be included in assessments of countries, but they are either left out or applied inconsistently.
For example, for a small-medium country like Romania it might be more useful to think of a national group as similar to a city group for the country’s largest city, and Bucharest looks pretty promising to me based on a quick glance at its Wiki page—but I wouldn’t have guessed that if I hadn’t thought to look it up. Whereas e.g. Singapore benefits from being a well-known world-class city.
Similarly, it looks like Romania has a decent share of English speakers (~30%, or ~6 million) who tend to be pretty fluent, but again I wouldn’t have known that beforehand. Someone making an ad-hoc assessment may not have thought to check those data sources, and might not have the context to compare across countries (is 30% high? low?).
The skills / personality of group members and leaders probably make up a large part of funders’ assessments, but are kinda hard to assess if the group doesn’t have a long track record. But groups probably need funding to build a track record in the first place!
And intuitive assessments of leaders are probably somewhat biased against people who don’t come from the assessor’s context (e.g. who have a different accent), though I hope and assume people at least try to notice and counteract that.
Like Akash, I agree with a lot of the object-level points here and disagree with some of the framing / vibes. I’m not sure I can articulate the framing concerns I have, but I do want to say I appreciate you articulating the following points:
Society is waking up to AI risks, and will likely push for a bunch of restrictions on AI progress
Sydney and the ARC CAPTCHA example have made AI safety concerns more salient.
There’s an opportunity for substantially more worry about AI risk to emerge after even mild warning events (e.g. AI-powered cyber incidents, or crazier behavior surfacing during evals)
Society’s response will be dumb and inefficient in a lot of ways, but could also end up getting pointed in some good directions
The more an org’s AI development / deployment abilities are constrained by safety considerations (whether its own concerns or other stakeholders’), the more safety looks like just another thing you need in order to deploy your powerful AI systems — so safety work becomes a complement to capabilities work.