Feels like claims such as "Trump's tariffs have slowed down AGI development" need some evidence to back them up. The larger companies working on AGI have already raised funds, assembled teams and bought hardware (which can be globally distributed if necessary), and believe they're going to get extraordinary returns on that effort. Unlike retail and low-margin businesses, it doesn't seem like a 10% levy on manufactured goods, or even being unable to import Chinese chips, is going to stop them from making progress.
I think the most likely explanation, particularly for people working at Anthropic, is that EA has a lot of "takes" on AI, many of which they (for good or bad reasons) very strongly disagree with. This might fall into "brand confusion", but I think some of it's simply a point of disagreement. It's probably accurate to characterise the AI safety wing of EA as generally regarding it as very important to debate whether AGI is safe to attempt to develop. Anthropic and their backers have obviously picked a side on that already.
I think that’s probably more important for them to disassociate from than FTX or individuals being problematic in other ways.
If we say "because targeting you is the most effective thing we can do", we incentivise them not to budge, because they will know that willingness to compromise invites more aggression.
That presumably depends on whether "targeting you is the most effective thing we can do" translates into "because you're most vulnerable to enforcement action", "because you're a major supplier of this company that's listening very carefully to your arguments", "because you claim to be market-leading in ethics", or even just "because you're the current market leader". Under those framings, it still absolutely makes sense for companies to consider compromising.
Agree with the broader argument that if you resolve never to bother with small entities, or entities that tell you to get lost, then that will deter even more receptive ears from listening to you, though.
I guess this also applies to junior positions within the system, whose freedom would be determined to a significant extent by people in senior positions.
The obvious difference is that an alternative candidate for a junior position in a shrimp welfare organization is likely to be equally concerned about shrimp welfare. An alternative candidate for a junior position in an MEP's office or DG Mare is not, hence the difference at the margin is (if non-zero) likely much greater. And a junior person progressing in their career may end up with direct policy responsibility for their areas of interest, whereas a person who remains a lobbyist never will. It even seems non-obvious that a senior lobbyist will have more impact on policymakers than their more junior adviser or research assistant, though as you say it does depend on whether the junior adviser has the freedom to highlight issues of concern.
"Small" is relative. AMF manages significantly more donations than most local NGOs, but it does one thing and has <20 staff. That's very different from Save the Children or the Red Cross, or indeed the Global Fund type organizations I was comparing it with, which have more campaigns and programmes to address local needs but also more difficulty in evaluating how effective they are overall. I understand that below the big headline "recommended" charities GiveWell does actually make smaller grants to some smaller NGOs too, but these will still be difficult to access for many.
Are EA cause priorities too detached from local realities? Shouldn’t people closest to a problem have more say in solving it?
I think this is the most interesting question, and I would be interested in your thoughts about how to make that easier.[1]
I think part of the reason EA doesn't do this is simply because it doesn't have those answers, being predominantly young Western people centred around certain universities and tech communities.[2] And also because EA (and especially the part of EA that is interested in global health) is very numbers-oriented.
This is also somewhat related to a second point you raise regarding political and social realities, including corruption: it is quite easy for GiveWell or Open Philanthropy to identify that the infectious disease burden is likely to be real, that a small international NGO is providing evidence that it is actually buying and shipping the nets or pills that deal with it, and that on average, given infectious disease prevalence, it will save a certain number of lives. Some other programmes that may deliver results highly attuned to local needs are more difficult to evaluate (and local NGOs are not always good at dealing with complex requests for evidence from foreign evaluators, even if they are very effective at their work). The same is true of large multinational organizations that have both local capacity building programs and the ability to deal with complex requests from foreign evaluators, but are also so big that Global Fund type issues can happen...
[1] I would note that there is a regular contributor to this forum, @NickLaing, who is based in Uganda and focused on trying to solve local problems, although I don't believe he receives very much funding compared with other EA causes, and also @Anthony Kalulu, a rural farmer in eastern Uganda who has an ambitious plan for a grain facility to solve problems in Busoga, but seems to be getting advice from the wrong people on how to fund it…
[2] This is also, I suspect, part of the reason many but not all EAs think AI is so important...
Just realise that betting on crypto is like betting in a casino. Probably worse, if it's a memecoin which has apparently lost nearly all of its value in the last two months. Then decide whether something like a casino, but probably worse, is how you would want to invest the last $10k with which you could still help your fellow farmers.
FWIW I remember liking your original post and your ambition. I might have some ability to assist with grant application writing. But only if you spend any funds you can get on helping fellow Ugandans, not crypto!
What section do you put Marco Rubio in?
The side that defied a court order this week by eliminating 90% of USAID programs, including all the lifesaving programs described above, with Marco Rubio named as the decision-making authority in the termination letters.
I'm not sure the number of statements he made acknowledging some of these programs as lifesaving, before termination letters were sent out in his name, is a mitigating factor. And if he's not actually making the decisions, it's a moot point: appealing to Rubio's better nature doesn't seem to be a way forward.
Where was USAID mentioned in the PDF you linked?
My bad, I should have linked to this one
FWIW I agree with your point that people who are broadly neutral/sympathetic are more likely to be sympathetic to a broad explainer than a “denunciation”.
But I worded my post quite carefully: it's "people who like Musk's cuts to US Aid and AI Safety" who I don't think overlap with EA. I don't imagine either of the EA-affiliated people you linked to would object to EAs pointing out that Musk shutting down AI safety institutes might be the opposite of what he says he cares about. And I don't think people who think foreign aid is a big scam and AI should be unregulated are putative EAs (whether they trust Musk or not!)
I don’t think a “denunciation” is needed, but I don’t think avoiding criticising political figures because they’re sensitive, powerful and have some public support is a way forward either.
I’m pushing back more at 80k ranking it as a priority above the likes of global health or mental health rather than concluding it doesn’t have any value and nobody should be studying it!
I mean, something like CleanSeaNet probably is cost-effective using standard EA [animal welfare] metrics; it's certainly very effective at stopping oil dumping in the Med. But I wouldn't treat that sort of program as a higher priority than any other area of environmental law enforcement (and it's one which is already relatively easy to get space agency funding for...).
Well I did say I went further than you!
Agree there are valid space policy considerations (and I could add to that list)[1], but I think lack of tractability is a bigger problem than neglect.[2] Everyone involved in space already knows ASAT weapons are a terrible idea; they have technically been banned since 1966, but yes, tests have happened despite that because superpowers gotta superpower. As with many other international relations problems (space is more important than some of those and less important than others), the problem is lack of coordination and enforceability rather than lack of awareness that problems might exist. Similarly, Elon's obligation to deorbit Starlink satellites at end of life is linked to SpaceX's FCC licence, and parallel ESA regulation exists.[3] If he decides to gut the FCC and disregard it, it won't be from lack of study into congested orbital space or lack of awareness that the problem exists.
[1] "Examine environmental effects of deorbiting masses of satellites into the mesosphere and potential implications for future LEO deorbiting policy" would be at the top of my personal list for timeliness and terrestrial impact...
[2] And above all, I am struggling to see the marginal impact being bigger than health, as 80k suggested.
[3] It's also not in SpaceX's interests to jeopardise LEO, because they extract more economic value from that space than anyone else...
Charities removing false claims from their website is usually a good thing that should happen as soon as possible.
The exception to this would be if they are removing them in order to deny the claim was ever made and attack your credibility, but a mixture of screenshots, archive links, and sharing reviews in advance with other trusted third parties who don't have any stake in those charities should be enough to make that approach very unlikely to work.
Frankly, it's much lower risk for charities to respond with "we have corrected this, these are our excuses, but thanks anyway", even if it's a really bad excuse, than to try to claim they never said anything.
That's more than I thought, but it was also a decade ago, when Elon had very different priorities, and I'm not sure that EA has any image problem associated with people thinking EAs basically want what Elon wants. (I don't think the Transgender Law Center needs to worry that their name might be sullied by his donation to them in 2011 either!)
I largely agree with this, and would go further: in most cases I don't think space governance is even a solution to the problems humanity wants to solve, so much as a background consideration that will need to be taken into account if deploying some potential solutions, and one where you probably need to speak with the specialists if you are deploying those solutions.
"Space governance" can easily be compared to international policy because much of it is a niche specialism within that category (especially the "what about the future of the solar system" questions that seem to animate longtermists). For more practical near-term considerations like monitoring the environment or crop health or human rights or threats from passing asteroids, space assets are just tools, albeit tools that are much more useful with someone who understands how to interpret them in legal contexts and how to communicate with policymakers. Other aspects are just about how governments regulate companies' activity, with a safety aspect that's closer to "should we consider this 1 in 10,000 possibility of hitting a person" than to preventing nuclear armageddon.[1]
Even as one of the few people actually likely to apportion [commercial R&D] grant funding towards research that could be construed as "space governance" in the next couple of years, I'd really struggle to rate it as being as important for maximising global impact as 80,000 Hours does.[2] A potentially interesting and rewarding career which can have positive outcomes if people actually listen to you, yes. Amongst the top ten things a talented individual could do to positively impact human lives, nope.
P.S. thanks for linking your paper, I’ll add it to my reading list.
[1] I mean, Kessler syndrome would have a huge impact on some critical technology in the short term, but that's a risk addressed by developing technical risk mitigation and debris-clearing solutions, not by policy papers for regulators who are already very aware of its threat.
[2] More impactful at the margin than global health!
I don't think it's necessary for EA to denounce Musk, on the basis that, apart from a vague endorsement of a book a few years back and some general comments on AI safety which run in the opposite direction to his actual actions, he doesn't seem to be associated with EA at all (cf. people like SBF needing "denouncements" because they were poster boys for it).
But I don't think the popularity stat you've put up there is particularly representative of his present popularity or the direction it's likely to trend in. More recent polls suggest he's incredibly unpopular in Europe, whilst in the US's more partisan environment his popularity clearly depends on party allegiance, but he is still well underwater, less popular than USAID etc., and trending downwards.
Yes, people working in policy have to work with the polity they've got, not the one they want, but I suspect if you drew a Venn diagram of "people who like Musk's cuts to US Aid, AI safety initiatives etc" and "people who are likely to be remotely supportive of EA" there wouldn't be much overlap. I suspect many of the conservatives sympathetic to some of the things EA wants to do are the ones that think he has too much power and is taking the wrong approach...
Using colloquial, simple language is often appropriate, even if it’s not maximally precise. In fact, maximally precise doesn’t even exist—we always have to decide how detailed and complete a picture to paint.
I tend to agree, but historically EA (especially GiveWell) has been critical of the "donor illusion" involved in things like "sponsorship" of children by mainstream charities in areas the NGO has already decided to fund, on a similar basis. More explicit statistical claims about future marginal outcomes based on estimates of the outcomes of historic campaign spend, or claims about liberating animals from confinement and mutilation when they're only freed from one or the other, seem harder to justify than some of the other stuff condemned as "donor illusion".
Even leaning towards the view that it's much better for charities to have effective marketing than statistical and semantic exactness, that debate is moot if estimates are based mainly on taking credit for decisions other parties had already made, as claimed by the VettedCauses review. If it's true[1] that some of their figures come from commitments they should have known do not exist and laws they should have known had already been changed, it would be absolutely fair to characterise those claims as "false", even if it comes from honest confusion (perhaps ACE, apparently the source of the figures, not understanding the local context of Sinergia's campaigns?)
[1] I would like to hear Sinergia's response, and am happy for them to take their time if they need to do more research to clarify.
I think the problem of entities lying about what they’re doing (especially in low trust regions) is wider than just corporate campaigns. Ultimately charities have to make some sort of decision on how and if to audit whether the outcomes they’re expecting are the ones they’re getting.
Asking whether Sinergia had any way to evaluate whether companies were complying (before or after their intervention) is, I think, the main reason it would have been good for VettedCauses to share their initial findings before publication. Sinergia appear to have Brazilian staff focused on this specific issue, so they shouldn't have been ignorant of the relevant law, but it's possible they intentionally targeted companies they suspected were noncompliant (this is the whole theory of change behind Legal Impact for Chickens) and had some success. It is also possible they targeted companies they suspected were noncompliant and simply believed what the companies said in response. It is also possible there are loopholes and exemptions in the law. But I'd still have to agree that taking 70% of the credit for campaigning against something already made illegal is a bold claim, and some of the other claims Sinergia made don't seem justifiable either.
I think both are trying to create value at scale. YC cares about what percentage of that value they’re able to capture. AIM doesn’t. I suspect one ought, by default, assume a large overlap between the two.
Not really. YC doesn't just care about the percentage of value it captures; it also cares about the total amount of value available to capture. This tends towards its target market being deep-pocketed corporations and consumers with disposable income to spend on AI app platforms or subscriptions, or AI tools for writing better software, while completely ignoring the Global South and people who don't use the internet much.
AIM cares about the opposite: people who don't have access to the basics in life, and its cost-effectiveness is measured in non-financial returns.
I think access to generative AI is better placed to help poorer people than it is to help richer people: it produces lower quality outputs than are otherwise available to rich people, but dramatically better ones than those accessible to poor people. For example, the poorest can't afford medical advice, while the rich get doctor's appointments the same week.
But if the advice is bad it might actually be net negative (and AI trained on an internet dominated by the developed world is likely to be suboptimal at generating responses for people with limited literacy, on medical conditions specific to their region and poverty level, in a language that features relatively little in OpenAI's corpus). And training generative AI to be good at specialised tasks to life-or-death levels of reliability is definitely not cheap (nor is getting that chatbot in front of people who tend not to be prolific users of the internet).
I think the type of agent matters. It's unclear how a ChatGPT wrapper aimed at giving good advice to subsistence farmers, for example, would pose an existential threat to humanity.
Unlike many EAs, I agree that the threat to humanity posed by ChatGPT is negligible, but there’s a difference between that and trusting OpenAI enough to think building products piggybacking on their infrastructure is potentially one of the most effective uses of donor funds. Even if I did trust them, which I don’t for reasons EAs are generally aware of, I’m also not at all optimistic that their chatbot would be remotely useful at advising subsistence farmers on market and soil conditions in their locality.
And I'm especially not remotely confident it'd be better than an information website, which might not be VC-fundable, but would be a whole lot cheaper to create and keep bullshit-free.
The more I think about it, the more I suspect the gap is actually more to do with the type of person running / applying to each organisation.
I agree this is also a significant factor
YC aims at making VCs money; the Charity Entrepreneurship programme focuses on helping poor people and animals. I don’t think the best ideas for helping poor people and animals are as likely to involve generative content creation as the best ideas for developed world B2B services and consumer products. The EA ecosystem isn’t exactly as optimistic about the impact of developing LLM agents as VCs either...
I think Godwinning the debate actually strengthens the case for "I don't do labels" as a position. True, most people won't hesitate to say that the label "Nazi" doesn't apply to them, whether they say they don't do labels or have social media profiles which read like a menu of ideologies.[1] On the other hand, many who wouldn't hesitate to say they think Nazis and fascists are horrible, agree they should be voted against, and maybe even fought against, would hesitate to label themselves "antifascist", with its connotations of ongoing participation in activism and/or membership of self-styled antifascist groups whose other positions they may not agree with.
[1] And from this we can perhaps infer that figures at Anthropic don't think EA is as bad as Nazism, if that was ever in doubt ;-)