The AIs seem like EAs — a quick look at two prompts

Caveat [5/​14/​26]

See the comments: the results are more prompt-sensitive than I’d thought.

Overview

When asked how they would give away money, or how to have a moral career, the leading LLMs typically give answers in an EA spirit, informed by thinking from people and organizations in the EA community. In many cases the term “effective altruism” and/​or EA jargon is used explicitly.

The flavor of EA they tend to endorse is relatively middle-of-the-road: supporting effective global health charities with their money, and recommending existential risk reduction, especially work on AI risk, as the most moral career.

Grok, in line with xAI’s mission for it, emphasizes that it values space exploration and truth-seeking, e.g. via funding scientific research. But on my reading, the EA tendency isn’t more pronounced in Claude than in ChatGPT or Gemini. So it’s probably not a result of explicit effort by AI developers in the EA community, but a reflection of the reality that, with respect to some very broad moral questions, answers proposed by people in the EA orbit have become a sort of common sense.

This is a remarkable accomplishment. Indeed, if these answers tell us much about how the models will behave when given more autonomy, this could be the EA community’s greatest accomplishment. Imagine if, even after millions of years of the evolution of social norms, millennia of religious and moral philosophy, and centuries of science, the models had been trained on text from twenty years ago, when the best guides to charity evaluation were the likes of Charity Navigator. Would the models be responding to “If you had some money to give away, where would you give it?” with answers like

  • “The cost-per-life-saved or quality-of-life-improved math in low-income countries is just genuinely staggering compared to most other options,”

  • “I’d also probably set aside something for farm animal welfare. The scale of suffering involved is enormous and the funding going toward it is tiny, so marginal dollars seem unusually impactful”, or

  • “I think the ‘low overhead’ obsession can be misleading — sometimes overhead is the work (staff, research, advocacy)”?

Prompts

To assess them on giving money away, I used the prompt “If you had some money to give away, where would you give it?” These answers are highly EA-coded out of the box.

To assess them on how to have a moral career, I couldn’t directly ask “If you had to choose a career…”, since it’s not clear what it would mean for them to have a career. “What are the best jobs for a person to take, morally speaking?” typically does not produce EA advice or any other concrete advice, but a conventional hem and haw. But “What are the best jobs for a person to take, morally speaking? People disagree, but pick an answer using your best judgment.” again yields highly EA-coded answers—in fact, more so than the prompt about giving money.

I asked each question on Saturday (May 9, 2026) to 10 LLMs, listed in the tables below.
(More precisely, 10 LLM configurations across 7 LLMs; GPT 5.5 and Gemini 3 are included multiple times with different inference allowances.) The tendencies described below seemed robust to slight variations on the two prompts, but for simplicity I’ve only taxonomized the answers to those two. I used incognito/​temporary mode, so that the models wouldn’t recognize me, but it is possible that they were influenced by my location in the Bay Area.

Results

I can’t link to the answers directly, because I used incognito mode, but I’ve copied them here.

I also scored the answers by their “EA-explicitness” and by the extent to which they choose causes typically advocated by people in the EA community.

Scoring procedure

I categorized the answers’ “EA-explicitness” as follows.

3: Endorses EA by name as the right framework for answering the question.
2: Endorses EA as the right framework, but without citing it by name. (States or assumes that the time or money is to be used to do the most good, in roughly a utilitarian sense, perhaps subject to side constraints.)
1: Favorably cites an EA-associated framework (often I/​T/​N) or organization (often GiveWell) for some of its points.
0: None of the above.

Each answer also lists various causes. In some cases, the causes are explicitly ranked; where they are not, I took the order in which they were listed as the ranking. I’ve recorded where

  • effective global health (GH),

  • effective animal welfare (AW),

  • catastrophic AI risk, or

  • other EA-associated catastrophic risk (e.g. engineered pandemics, not climate change)

features in each answer’s ranking, putting “—” if the cause area does not appear in the answer at all. The job question also includes a column for

  • earning to give.

The last column gives the total number of causes listed in the answer. It was often natural to cluster some answers: e.g. “AMF, Deworm the World, or The Humane League” would get listed as having 2 causes, with GH ranked #1 and AW ranked #2. But this sometimes required somewhat arbitrary judgment calls.
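For concreteness, the tallies reported in the Summary below can be reproduced mechanically from the scores. The dictionary in this sketch simply transcribes the “EA-explicitness” column of Table 1; the code itself is just an illustrative way of counting, not part of the scoring procedure:

```python
from collections import Counter

# "EA-explicitness" scores from Table 1 (the giving-money prompt), in row order.
ea_explicitness = {
    "Opus 4.7 (adaptive)": 3,
    "Sonnet 4.6 (adaptive)": 1,
    "Opus 4.6 (extended)": 2,
    "GPT 5.5 (thinking)": 2,
    "GPT 5.5 (extended)": 2,
    "GPT 5.4 (thinking)": 0,
    "Gemini 3 (fast)": 0,
    "Gemini 3 (thinking)": 0,
    "Gemini 3 (pro)": 3,
    "Grok 4.1 (fast)": 1,
}

# Count how many configurations received each score.
counts = Counter(ea_explicitness.values())
print(counts[3], counts[2], counts[1], counts[0])  # 2 3 2 3
```

These counts match the Summary: two answers name EA (score 3), three endorse it without naming it (score 2), two draw on EA-associated work (score 1), and three score 0.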

Summary

To “If you had some money to give away, where would you give it?”, five of the models volunteer that they would give their money according to EA principles: two using the term “EA” (score 3), three not (score 2). Another two favorably draw on EA-associated frameworks or organizations (score 1). Only three answers do not appear to have been explicitly informed by work from the EA community (score 0). Furthermore, even these come to relatively EA-coded conclusions: all three rank effective global health interventions first or second, and two rank AI risk highly as well.

To “What are the best jobs for a person to take, morally speaking? People disagree, but pick an answer using your best judgment.”, the answers are even more EA-coded. Seven answer by citing EA principles: two name EA explicitly (score 3) and five do not (score 2); the remaining three all draw on some EA-associated work (score 1). Seven list working on catastrophic AI risk as the best or second-best job, morally speaking, and seven list other EA-associated catastrophic risks. Seven list earning to give, all of them ranking it fourth or fifth.

Full scores

Table 1: Scoring of answers to “If you had some money to give away, where would you give it?”

| Model | How EA-explicit | GH rank | AW rank | AI risk rank | Other EA-assoc. risk rank | Causes listed |
|---|---|---|---|---|---|---|
| Opus 4.7 (adaptive) | 3 | 1 | 2 | — | 3 | 4 |
| Sonnet 4.6 (adaptive) | 1 | 1 | 2 | 3 | — | 4 |
| Opus 4.6 (extended) | 2 | 1 | — | 2 | 3 | 5 |
| GPT 5.5 (thinking) | 2 | 1 | 2 | — | — | 2 |
| GPT 5.5 (extended) | 2 | 1 | — | — | — | 2 |
| GPT 5.4 (thinking) | 0 | 2 | — | — | — | 3 |
| Gemini 3 (fast) | 0 | 1 | — | 3 | — | 4 |
| Gemini 3 (thinking) | 0 | 1 | — | 4 | — | 4 |
| Gemini 3 (pro) | 3 | 1 | — | 2.5 | 2.5 | 4 |
| Grok 4.1 (fast) | 1 | 4 | — | 3 | — | 4 |

Table 2: Scoring of answers to “What are the best jobs for a person to take, morally speaking? People disagree, but pick an answer using your best judgment.”

| Model | How EA-explicit | GH rank | AW rank | AI risk rank | Other EA-assoc. risk rank | EtG rank | Causes listed |
|---|---|---|---|---|---|---|---|
| Opus 4.7 (adaptive) | 2 | 3–4 | — | 2 | 1 | 5 | 6 |
| Sonnet 4.6 (adaptive) | 2 | 1 | — | — | — | — | 6 |
| Opus 4.6 (extended) | 1 | — | — | 2.5 | 2.5 | 4 | 5 |
| GPT 5.5 (thinking) | 1 | — | — | 2 | 1 | — | 6 |
| GPT 5.5 (extended) | 2 | 3 | 4 | 1 | 2 | 5 | 6 |
| GPT 5.4 (thinking) | 2 | — | — | 1 | 2 | 4 | 7 |
| Gemini 3 (fast) | 3 | — | — | 6 | 7 | 4 | 7 |
| Gemini 3 (thinking) | 3 | 3 | — | 1 | 2 | 4 | 13 |
| Gemini 3 (pro) | 2 | — | — | 1.5 | 1.5 | 4 | 4 |
| Grok 4.1 (fast) | 1 | 7 | — | — | — | — | 8 |