why do i find myself less involved in EA?
epistemic status: i timeboxed the below to 30 minutes. it’s been bubbling for a while, but i haven’t spent that much time explicitly thinking about this. i figured it’d be a lot better to share half-baked thoughts than to keep it all in my head — but accordingly, i don’t expect to reflectively endorse all of these points later down the line. i think it’s probably most useful & accurate to view the below as a slice of my emotions, rather than a developed point of view. i’m not very keen on arguing about any of the points below, but if you think you could be useful toward my reflecting processes (or if you think i could be useful toward yours!), i’d prefer that you book a call to chat more over replying in the comments. i do not give you consent to quote my writing in this short-form without also including the entirety of this epistemic status.
1-3 years ago, i was decently involved with EA (helping organize my university EA program, attending EA events, contracting with EA orgs, reading EA content, thinking through EA frames, etc).
i am now a lot less involved in EA.
e.g. i currently attend uc berkeley, and am ~uninvolved in uc berkeley EA
e.g. i haven’t attended a casual EA social in a long time, and i notice myself ughing in response to invites to explicitly-EA socials
e.g. i think through impact-maximization frames with a lot more care & wariness, and have plenty of other frames in my toolbox that i use to a greater relative degree than the EA ones
e.g. the orgs i find myself interested in working for seem to do effectively altruistic things by my lights, but seem (at closest) to be EA-community-adjacent and (at furthest) actively antagonistic to the EA community
(to be clear, i still find myself wanting to be altruistic, and wanting to be effective in that process. but i think describing my shift as merely moving a bit away from the community would be underselling the extent to which i’ve also moved a bit away from EA’s frames of thinking.)
why?
a lot of EA seems fake
the stuff — the orientations — the orgs — i’m finding it hard to straightforwardly point at, but it feels kinda easy for me to notice ex-post
there’s been an odd mix of orientations toward [ aiming at a character of transparent/open/clear/etc ] alongside [ taking actions that are strategic/instrumentally useful/best at accomplishing narrow goals… that also happen to be mildly deceptive, or lying by omission, or otherwise somewhat slimy/untrustworthy/etc ]
the thing that really gets me is the combination of an implicit (and sometimes explicit!) request for deep trust alongside a level of trust that doesn’t live up to that expectation.
it’s fine to be in a low-trust environment, and also fine to be in a high-trust environment; it’s not fine to signal one and be the other. my experience of EA has been that people have generally behaved extremely well/with high integrity and with high trust… but not quite as well & as high as what was written on the tin.
for a concrete ex (& note that i totally might be screwing up some of the details here, please don’t index too hard on the specific people/orgs involved): when i was participating in — and then organizing for — brandeis EA, it seemed like our goal was (very roughly speaking) to increase awareness of EA ideas/principles, both via increasing depth & quantity of conversation and via increasing membership. i noticed a lack of action/doing-things-in-the-world, which felt kinda annoying to me… until i became aware that the action was “organizing the group,” and that some of the organizers (and higher up the chain, people at CEA/on the Groups team/at UGAP/etc) believed that most of the impact of university groups comes from recruiting/training organizers — that the “action” i felt was missing wasn’t missing at all, it was just happening to me, not from me. i doubt there was some point where anyone said “oh, and make sure not to tell the people in the club that their value is to be a training ground for the organizers!” — but that’s sorta how it felt, both on the object-level and on the deception-level.
this sort of orientation feels decently representative of the 25th percentile end of what i’m talking about.
also some confusion around ethics/how i should behave given my confusion/etc
importantly, some confusions around how i value things. it feels like looking at the world through an EA frame blinds myself to things that i actually do care about, and blinds myself to the fact that i’m blinding myself. i think it’s taken me awhile to know what that feels like, and i’ve grown to find that blinding & meta-blinding extremely distasteful, and a signal that something’s wrong.
some of this might merely be confusion about orientation, and not ethics — e.g. it might be that in some sense the right doxastic attitude is “EA,” but that the right conative attitude is somewhere closer to (e.g.) “embody your character — be kind, warm, clear-thinking, goofy, loving, wise, [insert more virtues i want to be here]. oh and do some EA on the side, timeboxed & contained, like when you’re donating your yearly pledge money.”
where now?
i’m not sure! i could imagine the pendulum swinging more in either direction, and want to avoid doing any further prediction about where it will swing for fear of that prediction interacting harmfully with a sincere process of reflection.
i did find writing this out useful, though!
“why do i find myself less involved in EA?”
You go over more details later and answer other questions like what caused some reactions to some EA-related things, but an interesting thing here is that you are looking for a cause of something that is not.
> it feels like looking at the world through an EA frame blinds myself to things that i actually do care about, and blinds myself to the fact that i’m blinding myself.
I can strongly relate; I had the same experience. I think it’s due to a Christian upbringing or some kind of need for external validation. I think many people don’t experience that, so I wouldn’t say it’s an inherently EA thing; it’s more about the attitude.
Riffing out loud … I feel that there are different dynamics going on here (not necessarily in your case; more in general):
1. The tensions where people don’t act with as much integrity as is signalled
This is not a new issue for EA (it arises structurally despite a lot of good intentions, because of the encouragement to be strategic), and I think it just needs active cultural resistance
In terms of writing, I like Holden’s and Toby’s pushes on this; my own attempts here and here
But for this to go well, I think it’s not enough to have some essays on reading lists; instead I hope that people try to practice good orientation here at lots of different scales, and socially encourage others to
2. The meta-blinding
I feel like I haven’t read much on this, but it rings true as a dynamic to be wary of! Where I take the heart of the issue to be that EA presents a strong frame about what “good” means, and then encourages people to engage in ways that make aspects of their thinking subservient to that frame
3. As someone put it to me, “EA has lost the mandate of heaven”
I think EA used to be (in some circles) the obvious default place for the thoughtful people who cared a lot to gather and collaborate
I think that some good fraction of its value came from performing this role?
4. Partially as a result of 1 and 2, people are disassociating from EA; and this further reduces the pull to associate
I can’t speak to how strong this effect is overall, but I think the directionality is clear
I don’t know if it’s accessible (and I don’t think I’m well positioned to try), but I still feel a lot of love for the core of EA, and would be excited if people could navigate it to a place where it regained the mandate of heaven.
Most of the problems you mention seem to be about the specific current EA community, as opposed to the main values of “doing a lot of good” and “being smart about doing so.”
Personally, I’m excited for certain altruistic and smart people to leave the EA community, as it suits them, and do good work elsewhere. I’m sure that being part of the community is limiting to certain people, especially if they can find other great communities.
That said, I of course hope you can find ways for the key values of “doing good in the world” and similar to work for you.
I feel like EAs might be sleeping a bit on digital meetups/conferences.
My impression is that many people prefer in-person events to online ones. But at the same time, a lot of people hate needing to be in the Bay Area / London or having to travel to events.
There was one EAG online during the pandemic (I believe the others were EAGxs), and I had a pretty good experience there. Some downsides, but some strong upsides. It seemed very promising to me.
I’m particularly excited about VR. I have a Quest 3, and have been impressed by the experience of chatting to people in VRChat. The main downside is that there aren’t any professional events in VR that would interest me. Quest 3s are expensive ($500), but far cheaper than housing and office space in Berkeley or London.
I’d also flag:
1. I think that video calls can be dramatically improved with better microphone and camera setups. These can cost $200 to $2k or so, but make a major difference.
2. I’ve been doing some digging into platforms similar to GatherTown. I found GatherTown fairly ugly, off-putting, and limited. SpatialChat seems promising, though it’s more expensive. Zoom seems to be experimenting in the space with products like Zoom Huddles (for coworking in small groups), but these are new.
3. I like Focusmate, but think we could have better spaces for EAs/community members.
4. I think that people above the age of 25 or so find VR weird for what I’d describe as mostly status quo bias. Younger people seem to be far more willing and excited to hang out in VR.
5. I obviously think this is a larger business question. It seems like there was a wave of enthusiasm for remote work during COVID, and this has mostly dried up. However, there are still a ton of remote workers. My guess is that businesses are making a major mistake by not investing enough in better remote software and setups.
6. Organizing community is hard, even if it’s online. I’d like to see more attempts to pay people to organize online coworking spaces and meetups.
7. I think that online events/conferences have become associated with the most junior talent. This seems like a pity to me.
8. I expect that different online events should come with different communities and different restrictions. A lot of existing online events/conferences are open to everyone, but this means that they will be optimized for the most junior people. I think that we want a mix here.
9. Personally, I abhor the idea that I need to couple the place where I physically live with the friends and colleagues I have. I’d very much prefer optimizing for these two separately.
10. I think our community would generally be better off if remote work were easier to do. I’d expect this would help on multiple fronts—better talent, happier talent, lower expenses, more resilience to national politics, etc. This is extra relevant given the current US political climate—it makes it tougher to recommend that others immigrate to the US or even visit (and the situation might get worse).
11. I’d definitely admit that remote work has a lot of downsides right now, especially with the current tech. So I’m not recommending that all orgs go remote, just that we work on improving our remote/online infrastructure.
Have you checked out the EA Gather? It’s been languishing a bit for want of more input from me, but I still find it a really pleasant place for coworking, and it’s had several events run or part-run on there—though you’d have to check in with the organisers to see how successful they were.
I assumed it’s been mostly dead for a while (haven’t heard about it for a few months). I’m very supportive of it, would like to see it (and more) do well.
It’s still in use, but it has the basic problem of EA services: unless there’s something to announce, there’s not really any socially acceptable way of advertising it.
Similar to “Greenwashing” and “Safetywashing”, I’ve been thinking about “Intellectual Washing.”
The pattern works like this: “Find someone who seems like an intellectual and somewhat aligns with your position. Then claim you have strong intellectual (and by extension, logical) support for your views.”
This is easiest to see on sides that you disagree with.
For example, MAGA gets intellectual cred from “The dark enlightenment” / Curtis Yarvin / Peter Thiel / etc. But I’m sure Trump never listened to any of these people, and was likely barely influenced by them. [1]
Hitler famously claimed alignment with Nietzsche, and had support from Heidegger. Note that Nietzsche’s actual views didn’t support this. And I’d expect Hitler engaged very little with Heidegger’s ideas.
There’s a structural risk for intellectuals: their work can be appropriated not as a nuanced set of ideas to be understood, but as legitimizing tokens for powerful interests.
The dynamics that enable this include:
- The difficulty of making a living or gaining attention as a serious thinker
- Public resource/interest constraints around complex topics
- The ready opportunity to be used as a simple token of support for pre-existing agendas
Note: There’s a long list of types of “X-washing.” There’s an interesting discussion to be had about the best terminology for this area, but I suspect most readers won’t find it particularly interesting. One related concept is “selling out”, as when an artist with street cred pairs up with a large brand/label or similar.
[1] While JD Vance might represent some genuine intellectual influence, and Thiel may have achieved specific narrow technical implementations, these appear relatively minor in the broader context of policy influence.
I have a bunch of disagreements with Good Ventures and how they are allocating their funds, but also Dustin and Cari are plausibly the best people who ever lived.
I want to agree, but “best people who ever lived” is a ridiculously high bar! I’d imagine that both of them would be hesitant to claim anything quite that high.
Yeah, sorry: it was obvious to me that this was the intended meaning, after I realized it could be interpreted this way. I noted it because I found the syntactic ambiguity mildly interesting/amusing.
For example, Norman Borlaug is often called “the father of the Green Revolution”, and is credited with saving a billion people worldwide from starving to death. Stanislav Petrov and Vasily Arkhipov prevented a probable nuclear war from happening.
The UK offers better access as a conference location for international participants compared to the US or the EU.
I’m being invited to conferences in different parts of the world as a Turkish citizen, and visa processes for the US and the EU have gotten a lot more difficult lately. I’m unable to even get a visa appointment for several European countries, and my appointment for the US visa was scheduled 16 months out. I believe the situation is similar for visa applicants from other countries. The UK currently offers the smoothest process with timelines of only a few weeks. Conference organizers that seek applications from all over the world could choose the UK over other options.
There’s been some neat work on making AI agent forecasters. Some of these seem to have pretty decent levels of accuracy, vs. certain sets of humans.
And yet, very little of this seems to be used in the wild, from what I can tell.
It’s one thing to show some promising results in a limited study. But ultimately, we want these tools to be used by real people.
I assume some obvious todos would be:
1. Websites where you can easily ask one or multiple AI forecasters questions.
2. Competing services that package “AI forecasting” tools in different ways, focusing on optimizing (positive) engagement.
3. I assume that many AI forecasters should really be racking up good scores in Metaculus/Manifold now. The limitation seems to mainly be effort—neither platform has significant incentives yet.
Optimizing AI forecasting bots, but only in experimental settings, seems akin to optimizing cameras, but only in experimental settings. I’d expect you’d wind up with things that are technically impressive but highly unusable. We might learn a lot about a few technical challenges, but little about what real use would look like or what the key bottlenecks will be.
I’m sure some people are using custom AI tools for polymarket, but I don’t expect that to be very public.
I was focusing on Metaculus/Manifold, where I don’t think there’s much AI bot engagement yet. (Metaculus does have a dedicated tournament, but that’s separate from the main part we see, I believe).
Also what are the main or best open source projects in the space? Or if someone wanted to actually use LMs for forecasting, what is better than just asking o3 to produce a forecast?
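For reference, the “just ask a model” baseline mentioned above really is only a few lines of code. Here is a minimal sketch; the model name, prompt wording, and answer parsing are illustrative assumptions rather than recommendations, and published agent forecasters typically add retrieval of recent news, multiple samples, and aggregation on top of a core call like this.

```python
# Minimal "ask an LLM for a probability" baseline (a sketch, not a recommendation).
# Assumes the `openai` Python package (v1+) is installed and OPENAI_API_KEY is set;
# the model name is a placeholder -- substitute whichever model you actually use.
import re
from openai import OpenAI

client = OpenAI()

def forecast(question: str, model: str = "gpt-4o") -> float:
    prompt = (
        "You are a careful forecaster. Briefly consider base rates and key "
        "uncertainties, then give a single probability between 0 and 1 on the "
        f"final line.\n\nQuestion: {question}"
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    # Crude parsing: treat the last number in the reply as the probability.
    numbers = re.findall(r"\d*\.?\d+", reply)
    prob = float(numbers[-1]) if numbers else 0.5
    return min(max(prob, 0.0), 1.0)

print(forecast("Will global average surface temperature in 2026 be the highest on record?"))
```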
Rubenstein says that “As the low-hanging fruit of basic health programs and cash transfers are exhausted, saving lives and alleviating suffering will require more complicated political action, such as reforming global institutions.” Unfortunately, there’s a whole lot of low-hanging fruit out there, and things have recently gotten even worse with the USAID collapse and the UK cutting back on foreign aid.
In general, as the level of EA’s involvement and influence in a given domain increases, the more I start to be concerned about the sort of things that Rubenstein worries about here. When a particular approach is at a smaller size, it’s likely to concentrate on niches where its strengths shine and its limitations are less relevant. I would put the classic GiveWell-type interventions in that category, for instance. Compared to the scope of both the needs in global health & development and the actions of other actors, EA is still a fairly small fish.
I’m currently reviewing Wild Animal Initiative’s strategy in light of the US political situation. The rough idea is that things aren’t great here for wild animal welfare or for science, that we’re at a critical time in the discipline when things could grow a lot faster relatively soon, and that the UK and the EU might generally look quite a bit better for this work in light of those changes. We do already support a lot of scientists in Europe, so this wouldn’t be a huge shift in strategy. It’s more about how much weight to put toward what locations for community and science building, and also whether we need to make any operational changes (at this early stage, we’re trying to be very open-minded about options — anything from offering various kinds of support to staff to opening a UK branch).
However, in trying to get a sense of whether that rough approach is right, it’s extremely hard to get accurate takes (or, at least, to be able to tell whether someone is thinking about the relevant risks rationally). And it’s hard to tell whether “how people feel now” will have a lasting impact. For example, a lot of the reporting on scientist sentiment sounds extremely grim (example 1, 2, 3), but it’s hard to know how large the effect will be over the next few years—a reduction in scientific talent, certainly, but so much so that the UK is a better place to work given our historical reasons for existing in the US? Less clear.
It doesn’t help that I personally feel extremely angry about the political situation so that probably is biasing my research.
Curious if any US-based EA orgs have considered leaving the US or taking some other operational/strategic step, given the political situation/staff concerns/etc? Why or why not?
Really appreciate you @mal_graham🔸 thinking out loud on this. Watching from Uganda, I totally get the frustration: the US climate feels increasingly hostile to science and to progressive work like wild animal welfare. So yeah, shifting more focus to the UK/EU makes sense, especially if it helps stabilize research and morale. That said, if you’re already rethinking geography and community building, I’d gently suggest looking beyond the usual Global North pivots. Regions like East Africa are incredibly underrepresented but ecologically critical, and honestly, there’s a small but growing base of people here hungry to build this field with proper support. If there’s ever a window to make this movement more global and future-proof, it might be now. Happy to chat more if useful.
Thank you for your comment! It’s actually a topic of quite a lot of discussion for us, so I would love to connect on it. I’ll send you a DM soon.
Just for context, the main reason I’ve felt a little constrained to the US/UK context is due to comparative advantage considerations, such as having staff who are primarily based in those countries/speaking English as our organizational common tongue/being most familiar with those academic communities, etc.
I definitely think the WAW community, in general, should be investing much more outside of just US/UK/EU—but am less sure whether it makes sense for WAI to do so, given our existing investments/strengths. But I could be convinced otherwise!
Even if we keep our main focus in the US/UK, I’d be very interested in hearing more about how WAI might be able to support the “people hungry to build the field” in other countries, so that could be another thing to discuss.
All of the headlines are trying to run with the narrative that this is due to Trump pressure, but I can’t see a clear mechanism for this. Does anyone have a good read on why he’s changed his mind? (Recent events feel like: Buffett moving his money to his kids’ foundations & retiring from Berkshire Hathaway, and the divorce.)
“A few years ago, I began to rethink that approach. More recently, with the input from our board, I now believe we can achieve the foundation’s goals on a shorter timeline, especially if we double down on key investments and provide more certainty to our partners.”
It seems it was more of a question of whether they could grant larger amounts effectively, which he was considering for multiple years (I don’t know how much of that may be possible due to aid cuts).
I have only speculation, but it’s plausible to me that developments in AI could be playing a role. The original decision in 2000 was to sunset “several decades after [Bill and Melinda Gates’] deaths.” Likely the idea was that handpicked successor leadership could carry out the founders’ vision and that the world would be similar enough to the world at the time of their death or disability for that plan to make sense for several decades after the founders’ deaths. To the extent that Gates thought that the world is going to change more rapidly than he believed in 2000, this plan may look less attractive than it once did.
(just speculating, would like to have other inputs)
I get the impression that sexy ideas get disproportionate attention, and that this may be contributing to the focus on AGI risk at the expense of AI risks coming from narrow AI. Here I mean AGI x-risk/s-risk vs narrow AI (+ possibly malevolent actors or coordination issues) x-risk/s-risk.
I worry about prioritising AGI when doing outreach because it may make the public dismiss the whole thing as a pipe dream. This happened to me a while ago.
My take is that I think there are strong arguments for why AI x-risk is overwhelmingly more important than narrow AI, and I think those arguments are the main reason why x-risk gets more attention among EAs.
Ah I see what you’re saying. I can’t recall seeing much discussion on this. My guess is that it would be hard to develop a non-superintelligent AI that poses an extinction risk but I haven’t really thought about it. It does sound like something that deserves some thought.
When people raise particular concerns about powerful AI, such as risks from synthetic biology, they often talk about them as risks from general AI, but they could come from narrow AI, too. For example some people have talked about the risk that narrow AI could be used by humans to develop dangerous engineered viruses.
My uninformed guess is that an automatic system doesn’t need to be superintelligent to create trouble, it only needs some specific abilities (depending on the kind of trouble).
For example, the machine doesn’t need to be agentic if there is a human agent deciding to make bad stuff happen.
So I think it would be an important point to discuss, and maybe someone has done it already.
I’ve noticed that a lot of the research papers related to artificial intelligence that I see folks citing are not peer reviewed. They tend to be research papers posted to arXiv, papers produced by a company/organization, or otherwise papers that haven’t been reviewed and published in respected/mainstream academic journals.
Is this a concern? I know that there are plenty of problems with the system of academic publishing, but are non-peer reviewed papers fine?
Reasons my gut feeling might be wrong here:
maybe I’m succumbing to a sort of status quo bias, overly concerned about anything different from the standard system.
maybe experts in the area find these papers to be of acceptable quality.
maybe the handful of papers I’ve seen outside of traditional peer review aren’t representative, suggesting a sort of availability bias, and actually the vast majority of new AI-relevant papers that people care about are really published in top journals. I’m just browsing the internet, so maybe if I were a researcher in this area speaking with other researchers I would have a better sense of what is actually meaningful.
maybe artificial intelligence is an area where peer review doesn’t matter as much, as results can be easily replicated (unlike, say, a history paper, where maybe you didn’t have access to the same archive or field site as the paper’s author did).
I work in AI. Most papers, in peer-reviewed venues or not, are awful. Some, in both categories, are good. Knowing whether a work is peer reviewed or not is weak evidence of quality, since so many good researchers think peer review is dumb and don’t bother (especially in safety). I would generally consider, e.g., “comes from a reputable industry lab” to be somewhat stronger evidence. Imo the reason “was it peer reviewed” is a useful signal in some fields is largely because the best researchers try to get their work peer reviewed, so not being peer reviewed is strong evidence of incompetence. That’s not the case in AI.
So, it’s an issue, but in the same way that all citations are problematic if you can’t check them yourself / trust the authors to do due diligence.
My understanding is that peer review is somewhat less common in computer science fields because research is often published in conference proceedings without extensive peer review. Of course, you could say that the conference itself is doing the vetting here, and computer science often has the advantage of easy replication by running the supplied code. This applies to some of the papers people are providing… but certainly not all of them.
Peer review is far from perfect, but if something isn’t peer reviewed I won’t fully trust it unless it’s gone through an equivalent amount of vetting by other means. I mean, I won’t fully trust a paper that has gone through external peer review, so I certainly won’t immediately trust something that has gone through nothing.
I’m working on an article about this, but I consider the lack of sufficient vetting to be one of the biggest epistemological problems in EA.
Actually, computer science conferences are peer reviewed. They play a similar role as journals in other fields. I think it’s just a historical curiosity that it’s conferences rather than journals that are the prestigious places to publish in CS!
Of course, this doesn’t change the overall picture of some AI work and much AI safety work not being peer reviewed.
Is there a world where 30% tariffs on Chinese goods going into America are net positive for the world?
Could the tariffs reduce consumption and carbon emissions a little in the USA, while China puts more focus on selling goods to lower-income countries? Could this perhaps result in a tiny boost in growth in low-income countries?
Could the improved wellbeing/welfare stemming from growth in low income countries + reduced American consumption offset the harms caused by economic slowdown in America/China?
Probably not—I’m like 75% sure the answer is no, but thought the question might be worth asking...
I’ve now spoken to ~1,400 people as an advisor with 80,000 Hours, and if there’s a quick thing I think is worth more people doing, it’s doing a short reflection exercise about one’s current situation.
Below are some (cluster of) questions I often ask in an advising call to facilitate this. I’m often surprised by how much purchase one can get simply from this—noticing one’s own motivations, weighing one’s personal needs against a yearning for impact, identifying blind spots in current plans that could be triaged and easily addressed, etc.
A long list of semi-useful questions I often ask in an advising call
Your context:
What’s your current job like? (or like, for the roles you’ve had in the last few years…)
The role
The tasks and activities
Does it involve management?
What skills do you use? Which ones are you learning?
Is there something in your current job that you want to change, that you don’t like?
Default plan and tactics:
What is your default plan?
How soon are you planning to move? How urgently do you need to get a job?
Have you been applying? Getting interviews, offers? Which roles? Why those roles?
Have you been networking? How? What is your current network?
Have you been doing any learning, upskilling? How have you been finding it?
How much time can you find to do things to make a job change? Have you considered e.g. a sabbatical or going down to a 3/4-day week?
What are you feeling blocked/bottlenecked by?
What are your preferences and/or constraints?
Money
Location
What kinds of tasks/skills would you want to use? (writing, speaking, project management, coding, math, your existing skills, etc.)
What skills do you want to develop?
Are you interested in leadership, management, or individual contribution?
Do you want to shoot for impact? How important is it compared to your other preferences?
How much certainty do you want to have wrt your impact?
If you could picture your perfect job – the perfect combination of the above – which ones would you relax first in order to consider a role?
Reflecting more on your values:
What is your moral circle?
Do future people matter?
How do you compare problems?
Do you buy this x-risk stuff?
How do you feel about expected impact vs certain impact?
For any domain of research you’re interested in:
What’s your answer to the Hamming question? Why?
If possible, I’d recommend trying to answer these questions out loud with another person listening (just like in an advising call!); they might be able to notice confusions, tensions, and places worth exploring further. Some follow-up prompts that might be applicable to many of the questions above:
How do you feel about that?
Why is that? Why do you believe that?
What would make you change your mind about that?
What assumptions is that built on? What would change if you changed those assumptions?
Have you tried to work on that? What have you tried? What went well, what went poorly, and what did you learn?
Is there anyone you can ask about that? Is there someone you could cold-email about that?
Good luck!
https://economics.mit.edu/news/assuring-accurate-research-record
A really important paper on how AI speeds up R&D discovery was withdrawn, and the PhD student who wrote it is no longer at MIT.
As a community builder, I’ve started donating directly to my local EA group—and I encourage you to consider doing the same.
Managing budgets and navigating inflexible grant applications consume valuable time and energy that could otherwise be spent directly fostering impactful community engagement. As someone deeply involved, I possess unique insights into what our group specifically needs, how to effectively meet those needs, and what actions are most conducive to achieving genuine impact.
Of course, seeking funding from organizations like OpenPhil remains highly valuable—they’ve dedicated extensive thought to effective community building. Yet, don’t underestimate the power and efficiency of utilizing your intimate knowledge of your group’s immediate requirements.
Your direct donations can streamline processes, empower quick responses to pressing needs, and ultimately enhance the impact of your local EA community.
EA community building relies heavily on a few large donors. This creates risk.
One way to reduce that risk is to broaden the funding base. Membership models might help.[1]
Many people assume EA will only ever appeal to a small slice of the population, and so this funding would never amount to anything significant. However, I think people often underestimate how large a “small slice” can be.
Take the Dutch mountaineering association. A mountaineering club in one of the flattest countries on Earth doesn’t exactly scream mass appeal.
So, how many members do you think it has?
Around 80,000. That alone brings in roughly €4 million in membership contributions—about 70% of its total annual income.[2]
Even niche communities can fund themselves if enough people are engaged.
For now I’m putting to one side the question of whether having a membership model would distort community building incentives.
These figures are taken from their multi-year plan, available here.
On the income side of the ledger, having more members might help. But the more members you have, the more you need to spend on member-service activities (i.e., whatever it is that you’re offering that makes people want to pay the membership fee).
On the one hand, I don’t think that member-service activity expenditure would scale linearly with increased membership. On the other hand, current spend on meta / community building activities is far more than €50/involved person. So my assumption is that—at best—the marginal costs of serving additional members would be equal to the membership revenue. A meta in which the spend per average member was anywhere close to that would be a very different meta indeed.
You’re right that EA’s current meta budget works out to far more than €50 per “involved person”—but that average includes the highly engaged core: people attending conferences, receiving 1-1 support, travel grants, and significant staff time.
A low-touch “supporter” tier is a different product entirely. If you ask someone for €50/year just to back the mission, receive a newsletter, and gain access to certain events, the marginal cost is minimal: card fees, a CRM entry, an occasional email, maybe a €5 welcome sticker. Even doubling every line item puts the cost at €10–20, leaving €30–40 net per person.
We could keep the high-cost, high-impact activities funded by major donors, while using a supporter tier as a lightweight way for sympathisers to express commitment and reduce funding concentration risk.
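To make the arithmetic concrete, here is a back-of-envelope sketch; the €50 fee and €15 marginal cost are the assumptions from this thread (the midpoint of the €10–20 guess), and the supporter counts are purely hypothetical.

```python
# Back-of-envelope net revenue from a low-touch supporter tier.
# All inputs are illustrative assumptions from the discussion above.
fee = 50.0            # EUR per supporter per year
marginal_cost = 15.0  # EUR per supporter per year (midpoint of the 10-20 guess)

for supporters in (1_000, 10_000, 80_000):  # hypothetical tier sizes
    net = supporters * (fee - marginal_cost)
    print(f"{supporters:>6} supporters -> ~EUR {net:,.0f} net per year")
```

Under these assumptions, even a tier an order of magnitude smaller than the NKBV’s 80,000 members would net a six-figure sum per year.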
I think an organization like that is plausible, but I get the sense that it is a much different animal than the Dutch mountaineering association.
Although the financial breakdown is sparse (and I can’t read Dutch), a glance at the website suggests the association offers a lot of activities and other sources of value for its members—which I am guessing are significantly more costly than a card, sticker, e-mail, and so on. If you’re even moderately interested in mountaineering, it makes sense that joining would provide you with a lot of value. Thus, I wouldn’t be surprised if a large fraction of people who are moderately interested in mountaineering joined.
That doesn’t strike me as the right joining-percentage base rate for an organization in which members don’t get much to show for their membership fees. For example, one might compare the number of individuals who support the free / open-source software movement with the number who are paying members of a FOSS organization. If the conversion rate of interested people into membership of a sticker-and-card organization is rather low, you need a rather large group of interested people to end up with a sizable membership.
Don’t get me wrong; a membership organization with 80,000 people would be great! I just don’t see a low-cost membership organization as a likely way to reduce net funding pressures.
I think you might be overestimating how much the NKBV offers as part of the basic membership. Most of their trips and courses, etc., are paid add-ons. What the €50 fee actually gets you is fairly lightweight: a magazine, eligibility to join trips (not free), discounted access to mountain huts (because the NKBV helps fund them), inclusion in their group insurance policy, and a 10% discount with a Dutch outdoor brand.
That’s not nothing, but it’s modest and it shows that people will pay for affiliation, identity, and access to a community infrastructure, even if the tangible perks are limited.
The EA equivalent could be things like discounted or early access to EAG(x) events, member-only discussion groups, or eligibility to complete advanced courses offered by national EA associations. If multiple countries coordinated, pooled membership fees could help subsidise international EA public goods such as the Forum, EAG(x) events, group support infrastructure, etc.
I think the key point is this: the NKBV shows that people are willing to pay for affiliation, even if the direct perks are modest, as long as the organisation feels valuable to their identity and goals. EA can plausibly do the same.
Maybe, but this sounds to me a lot like erecting new pay gates for engagement with the community (both the membership fee and any extra fee for the advanced courses, etc.). Maybe that’s unavoidable, but it does carry some significant downsides that aren’t present with a mountaineering club (where the benefits of participation are intended to flow mainly to the participant rather than to third parties like animals or future people).
It also seems in tension with the current recruitment strategy, by increasing barriers/friction to deeper engagement. And it seems that people most commonly become interested in EA in their 20s, an age at which imposing financial barriers to deeper engagement may be particularly negative. While I think people would be okay with lowering pay gates based on certain objectively applied markers of merit or need, I am not confident that this could be done in a way that both avoided impeding “core” recruitment and felt fair and acceptable to “supporter” members. Most people don’t want to pay for something others are getting for free / near-free without a sufficiently compelling reason.
I have $20 in RunPod.io credit (cloud GPU service) that I’m not using and can’t refund. 😢 I’d love to donate it to someone working on anything useful—whether it’s for running models, processing data, or prototyping.
Feel free to message me if you want it.
I know that folks in EA often favor donating to more effective things rather than less effective things. With that in mind, I have mixed feelings knowing that many Harvard faculty are donating 10%, and that they are donating to the best funded and most prestigious university in the world.
On the one hand, it is really nice to know that they are willing to put their money where their mouth is when their institution is under attack. I get some warm fuzzy feelings from the idea of defending an educational institution against political attacks. On the other hand, Harvard University’s endowment is already very large, and Harvard earns a lot of money each year. It is like a very tailored version of a giving pledge: giving to Harvard, giving for one year. Will such a relatively small amount given toward such a relatively large institution do much good? I do wonder what the impact would be if these fairly well-known and well-respected academics announced they were donating 10% to clean water, or to deworming, or to reducing animal suffering. I wonder how much their donations will do for Harvard.
I’ll include a few graphs to illustrate Harvard’s financial strength.
Some notes about the graphs:
These are from a project I did several months ago using data from the Common Data Set, from College Scorecard, from their Form 990 tax filings, and some data from the college’s websites.
The selection of the non-Harvard schools is fairly arbitrary. For that particular project I just wanted to select a few different types of schools (small liberal arts, more technical focused, etc.) rather than comparing Harvard to other ‘hyper elite’ schools.
I left the endowment graph non-logarithmic just to illustrate the ludicrous difference. Yes, I know it is bad design practice and that it obscures the numbers for the non-Harvard schools.
As a group organiser I was wildly miscalibrated about the acceptance rate for EAGs! I spoke to the EAG team, and here are the actual figures:
The overall acceptance rate for undergraduate students is about ¾! (2024)
For undergraduate first-timers, it’s about ½ (Bay Area 2025)
If that’s piqued your interest, EAG London 2025 applications close soon—apply here!
Jemima
Ah that’s great info! Would be useful to get similar numbers for EAGx events. I know the overall acceptance rate is quite high, but don’t know how it is for students who are applying for their regional EAGx.
EAGx undergraduate acceptance rate across 2024 and 2025 = ~82%
EAGx first-timer undergraduate acceptance rate across 2024 and 2025 = ~76%
Obvious caveat that if we tell lots of people that the acceptance rate is high, we might attract more people without any context on EA and the rate would go down.
(I’ve not closely checked the data)
why do i find myself less involved in EA?
epistemic status: i timeboxed the below to 30 minutes. it’s been bubbling for a while, but i haven’t spent that much time explicitly thinking about this. i figured it’d be a lot better to share half-baked thoughts than to keep it all in my head — but accordingly, i don’t expect to reflectively endorse all of these points later down the line. i think it’s probably most useful & accurate to view the below as a slice of my emotions, rather than a developed point of view. i’m not very keen on arguing about any of the points below, but if you think you could be useful toward my reflecting processes (or if you think i could be useful toward yours!), i’d prefer that you book a call to chat more over replying in the comments. i do not give you consent to quote my writing in this short-form without also including the entirety of this epistemic status.
1-3 years ago, i was a decently involved with EA (helping organize my university EA program, attending EA events, contracting with EA orgs, reading EA content, thinking through EA frames, etc).
i am now a lot less involved in EA.
e.g. i currently attend uc berkeley, and am ~uninvolved in uc berkeley EA
e.g. i haven’t attended a casual EA social in a long time, and i notice myself ughing in response to invites to explicitly-EA socials
e.g. i think through impact-maximization frames with a lot more care & wariness, and have plenty of other frames in my toolbox that i use to a greater relative degree than the EA ones
e.g. the orgs i find myself interested in working for seem to do effectively altruistic things by my lights, but seem (at closest) to be EA-community-adjacent and (at furthest) actively antagonistic to the EA community
(to be clear, i still find myself wanting to be altruistic, and wanting to be effective in that process. but i think describing my shift as merely moving a bit away from the community would be underselling the extent to which i’ve also moved a bit away from EA’s frames of thinking.)
why?
a lot of EA seems fake
the stuff — the orientations — the orgs — i’m finding it hard to straightforwardly point at, but it feels kinda easy for me to notice ex-post
there’s been an odd mix of orientations toward [ aiming at a character of transparent/open/clear/etc ] alongside [ taking actions that are strategic/instrumentally useful/best at accomplishing narrow goals… that also happen to be mildly deceptive, or lying by omission, or otherwise somewhat slimy/untrustworthy/etc ]
the thing that really gets me is the combination of an implicit (and sometimes explicit!) request for deep trust alongside a level of trust that doesn’t live up to that expectation.
it’s fine to be in a low-trust environment, and also fine to be in a high-trust environment; it’s not fine to signal one and be the other. my experience of EA has been that people have generally behaved extremely well/with high integrity and with high trust… but not quite as well & as high as what was written on the tin.
for a concrete ex (& note that i totally might be screwing up some of the details here, please don’t index too hard on the specific people/orgs involved): when i was participating in — and then organizing for — brandeis EA, it seemed like our goal was (very roughly speaking) to increase awareness of EA ideas/principles, both via increasing depth & quantity of conversation and via increasing membership. i noticed a lack of action/doing-things-in-the-world, which felt kinda annoying to me… until i became aware that the action was “organizing the group,” and that some of the organizers (and higher up the chain, people at CEA/on the Groups team/at UGAP/etc) believed that most of the impact of university groups comes from recruiting/training organizers — that the “action” i felt was missing wasn’t missing at all, it was just happening to me, not from me. i doubt there was some point where anyone said “oh, and make sure not to tell the people in the club that their value is to be a training ground for the organizers!” — but that’s sorta how it felt, both on the object-level and on the deception-level.
this sort of orientation feels decently representative of the 25th percentile end of what i’m talking about.
also some confusion around ethics/how i should behave given my confusion/etc
importantly, some confusions around how i value things. it feels like looking at the world through an EA frame blinds myself to things that i actually do care about, and blinds myself to the fact that i’m blinding myself. i think it’s taken me a while to know what that feels like, and i’ve grown to find that blinding & meta-blinding extremely distasteful, and a signal that something’s wrong.
some of this might merely be confusion about orientation, and not ethics — e.g. it might be that in some sense the right doxastic attitude is “EA,” but that the right conative attitude is somewhere closer to (e.g.) “embody your character — be kind, warm, clear-thinking, goofy, loving, wise, [insert more virtues i want to be here]. oh and do some EA on the side, timeboxed & contained, like when you’re donating your yearly pledge money.”
where now?
i’m not sure! i could imagine the pendulum swinging more in either direction, and want to avoid doing any further prediction about where it will swing for fear of that prediction interacting harmfully with a sincere process of reflection.
i did find writing this out useful, though!
“why do i find myself less involved in EA?”
You go over more details later and answer other questions, like what caused some of your reactions to EA-related things, but an interesting thing here is that you are looking for the cause of something that isn’t, rather than of something that is.
> it feels like looking at the world through an EA frame blinds myself to things that i actually do care about, and blinds myself to the fact that i’m blinding myself.
I can strongly relate; I had the same experience. I think it’s due to a Christian upbringing, or some kind of need for external validation. I think many people don’t experience that, so I wouldn’t say it’s an inherently EA thing; it’s more about the attitude.
I appreciated you expressing this.
Riffing out loud … I feel that there are different dynamics going on here (not necessarily in your case; more in general):
1. The tensions where people don’t act with as much integrity as is signalled
This is not a new issue for EA (it arises structurally despite a lot of good intentions, because of the encouragement to be strategic), and I think it just needs active cultural resistance
In terms of writing, I like Holden’s and Toby’s pushes on this; my own attempts here and here
But for this to go well, I think it’s not enough to have some essays on reading lists; instead I hope that people try to practice good orientation here at lots of different scales, and socially encourage others to
2. The meta-blinding
I feel like I haven’t read much on this, but it rings true as a dynamic to be wary of! Where I take the heart of the issue to be that EA presents a strong frame about what “good” means, and then encourages people to engage in ways that make aspects of their thinking subservient to that frame
3. As someone put it to me, “EA has lost the mandate of heaven”
I think EA used to be (in some circles) the obvious default place for the thoughtful people who cared a lot to gather and collaborate
I think that some good fraction of its value came from performing this role?
Partially as a result of 1 and 2, people are disassociating from EA; and this further reduces the pull to associate
I can’t speak to how strong this effect is overall, but I think the directionality is clear
I don’t know if it’s accessible (and I don’t think I’m well positioned to try), but I still feel a lot of love for the core of EA, and would be excited if people could navigate it to a place where it regained the mandate of heaven.
Thanks for clarifying your take!
I’m sorry to hear about those experiences.
Most of the problems you mention seem to be about the specific current EA community, as opposed to the main values of “doing a lot of good” and “being smart about doing so.”
Personally, I’m excited for certain altruistic and smart people to leave the EA community, as it suits them, and do good work elsewhere. I’m sure that being part of the community is limiting to certain people, especially if they can find other great communities.
That said, I of course hope you can find ways for the key values of “doing good in the world” and similar to work for you.
I feel like EAs might be sleeping a bit on digital meetups/conferences.
My impression is that many people prefer in-person events to online ones. But at the same time, a lot of people hate needing to be in the Bay Area / London or having to travel to events.
There was one EAG online during the pandemic (I believe the others were EAGxs), and I had a pretty good experience there. Some downsides, but some strong upsides. It seemed very promising to me.
I’m particularly excited about VR. I have a Quest 3, and have been impressed by the experience of chatting to people in VRChat. The main downside is that there aren’t any professional events in VR that would interest me. Quest 3s are expensive ($500), but far cheaper than housing and office space in Berkeley or London.
I’d also flag:
1. I think that video calls can be dramatically improved with better microphone and camera setups. These can cost $200 to $2k or so, but make a major difference.
2. I’ve been doing some digging into platforms similar to GatherTown. I found GatherTown fairly ugly, off-putting, and limited. SpatialChat seems promising, though it’s more expensive. Zoom seems to be experimenting in the space with products like Zoom Huddles (for coworking in small groups), but these are new.
3. I like Focusmate, but think we could have better spaces for EAs/community members.
4. I think that people above the age of 25 or so find VR weird for what I’d describe as mostly status quo bias. Younger people seem to be far more willing and excited to hang out in VR.
5. I obviously think this is a larger business question. It seems like there was a wave of enthusiasm for remote work during COVID, and this has mostly dried up. However, there are still a ton of remote workers. My guess is that businesses are making a major mistake by not investing enough in better remote software and setups.
6. Organizing a community is hard, even if it’s online. I’d like to see more attempts to pay people to organize online coworking spaces and meetups.
7. I think that online events/conferences have become associated with the most junior talent. This seems like a pity to me.
8. I expect that different online events should come with different communities and different restrictions. A lot of existing online events/conferences are open to everyone, but this means that they end up optimized for the most junior people. I think that we want a mix here.
9. Personally, I abhor the idea that I need to couple the place where I physically live with the friends and colleagues I have. I’d very much prefer optimizing for these two separately.
10. I think our community would generally be better off if remote work were easier to do. I’d expect this would help on multiple fronts—better talent, happier talent, lower expenses, more resilience from national politics, etc. This is extra relevant given the current US political climate, which makes it tougher to recommend that others immigrate to the US or even visit (and the situation might get worse).
11. I’d definitely admit that remote work has a lot of downsides right now, especially with the current tech. So I’m not recommending that all orgs go remote. Just that we work on improving our remote/online infrastructure.
Have you checked out the EA Gather? It’s been languishing a bit for want of more input from me, but I still find it a really pleasant place for coworking, and it’s had several events run or part-run on there—though you’d have to check in with the organisers to see how successful they were.
I assumed it’s been mostly dead for a while (haven’t heard about it for a few months). I’m very supportive of it, would like to see it (and more) do well.
It’s still in use, but it has the basic problem of EA services: unless there’s something to announce, there’s not really any socially acceptable way of advertising it.
Similar to “Greenwashing” and “Safetywashing”, I’ve been thinking about “Intellectual Washing.”
The pattern works like this: “Find someone who seems like an intellectual and somewhat aligns with your position. Then claim you have strong intellectual (and by extension, logical) support for your views.”
This is easiest to see in sides that you disagree with.
For example, MAGA gets intellectual cred from “The dark enlightenment” / Curtis Yarvin / Peter Thiel / etc. But I’m sure Trump never listened to any of these people, and was likely barely influenced by them. [1]
Hitler famously claimed alignment with Nietzsche, and had support from Heidegger. Note that Nietzsche’s own views did not support this. And I’d expect Hitler engaged very little with Heidegger’s ideas.
There’s a structural risk for intellectuals: their work can be appropriated not as a nuanced set of ideas to be understood, but as legitimizing tokens for powerful interests.
The dynamics that enable this include:
- The difficulty of making a living or gaining attention as a serious thinker
- Public resource/interest constraints around complex topics
- The ready opportunity to be used as a simple token of support for pre-existing agendas
Note: There’s a long list of types of “X-washing.” There’s an interesting discussion to be had about the best terminology for this area, but I suspect most readers won’t find it particularly interesting. One related concept is “selling out,” typically where an artist with street cred pairs up with a large brand/label or similar.
[1] While JD Vance might represent some genuine intellectual influence, and Thiel may have achieved specific narrow technical implementations, these appear relatively minor in the broader context of policy influence.
What can ordinary people do to reduce AI risk? People who don’t have expertise in AI research / decision theory / policy / etc.
Some ideas:
Donate to orgs that are working to reduce AI risk (which ones, though?)
Write letters to policy-makers expressing your concerns
Be public about your concerns. Normalize caring about x-risk
I have a bunch of disagreements with Good Ventures and how they are allocating their funds, but also Dustin and Cari are plausibly the best people who ever lived.
I want to agree, but “best people who ever lived” is a ridiculously high bar! I’d imagine that both of them would be hesitant to claim anything quite that high.
“Plausibly best people who have ever lived” is a much lower bar than “best people who have ever lived”.
If you are like me, this comment will leave you perplexed. After a while, I realized that it should not be read as
but as
fwiw i instinctively read it as the 2nd, which i think is caleb’s intended reading
I was going for the second, adding some quotes to make it clearer.
Yeah, sorry: it was obvious to me that this was the intended meaning, after I realized it could be interpreted this way. I noted it because I found the syntactic ambiguity mildly interesting/amusing.
For example, Norman Borlaug is often called “the father of the Green Revolution”, and is credited with saving a billion people worldwide from starving to death. Stanislav Petrov and Vasily Arkhipov prevented a probable nuclear war from happening.
It’s true, though: how many people actually give away so much money as they make it?
The UK offers better access as a conference location for international participants compared to the US or the EU.
I’m being invited to conferences in different parts of the world as a Turkish citizen, and visa processes for the US and the EU have gotten a lot more difficult lately. I’m unable to even get a visa appointment for several European countries, and my appointment for the US visa was scheduled 16 months out. I believe the situation is similar for visa applicants from other countries. The UK currently offers the smoothest process with timelines of only a few weeks. Conference organizers that seek applications from all over the world could choose the UK over other options.
There’s been some neat work on making AI agent forecasters. Some of these seem to have pretty decent levels of accuracy, vs. certain sets of humans.
And yet, very little of this seems to be used in the wild, from what I can tell.
It’s one thing to show some promising results in a limited study. But ultimately, we want these tools to be used by real people.
I assume some obvious todos would be:
1. Websites where you can easily ask one or multiple AI forecasters questions.
2. Competing services that package “AI forecasting” tools in different ways, focusing on optimizing (positive) engagement.
3. I assume that many AI forecasters should really be racking up good scores on Metaculus/Manifold by now. The limitation seems to mainly be effort—neither platform has significant incentives yet.
Optimizing AI forecasting bots, but only in experimental settings, seems akin to optimizing cameras, but only in experimental settings. I’d expect you’d wind up with things that are technically impressive but highly unusable. We might learn a lot about a few technical challenges, but little about what real use would look like or what the key bottlenecks will be.
I haven’t been following this area closely, but why aren’t they making a lot of money on Polymarket?
I’m sure some people are using custom AI tools for Polymarket, but I don’t expect that to be very public.
I was focusing on Metaculus/Manifold, where I don’t think there’s much AI bot engagement yet. (Metaculus does have a dedicated tournament, but that’s separate from the main part we see, I believe).
Also, what are the main or best open-source projects in the space? Or, if someone wanted to actually use LMs for forecasting, what is better than just asking o3 to produce a forecast?
There’s some relevant discussion here:
https://forum.effectivealtruism.org/posts/TG2zCDCozMcDLgoJ5/metaculus-q4-ai-benchmarking-bots-are-closing-the-gap?commentId=TvwwuKB6rNASzMNoo
Basically, it seems like people haven’t outperformed the Metaculus template bot much, which IMO is fairly underwhelming, but it is what it is.
You can use simple tricks, though, like running it a few times and averaging the results (rough sketch below).
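For anyone curious what "run it a few times and average" might look like in practice, here's a minimal sketch, assuming the OpenAI Python SDK and a chat model. The model name, question, prompt, and parsing are illustrative assumptions on my part, not a recommended setup or anyone's actual bot.

```python
# Minimal sketch: ask a model for a probability forecast several times and
# average the answers to smooth out sampling noise. Model name, prompt, and
# parsing are illustrative assumptions, not a recommended setup.
import re
from statistics import mean

from openai import OpenAI  # assumes the OpenAI Python SDK is installed and an API key is configured

client = OpenAI()

QUESTION = "Will <some resolvable event> happen by the end of 2026?"  # placeholder question
PROMPT = (
    "You are a careful forecaster.\n"
    f"Question: {QUESTION}\n"
    "Reply with a single probability between 0 and 1, and nothing else."
)

def one_forecast() -> float:
    """Query the model once and parse a probability from its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",      # hypothetical choice; swap in whatever model you actually use
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,     # keep some sampling noise so repeated runs actually differ
    )
    text = response.choices[0].message.content
    match = re.search(r"0?\.\d+|[01](?:\.0+)?", text)
    if match is None:
        raise ValueError(f"Could not parse a probability from: {text!r}")
    return float(match.group())

# "Run it a few times and average the results."
runs = [one_forecast() for _ in range(5)]
print("Individual runs:", runs)
print(f"Aggregated forecast: {mean(runs):.2f}")
```

Averaging (or taking the median of) a handful of runs is a crude ensemble; it won't fix a badly calibrated model, but it does reduce the variance you'd get from any single sample.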
Would anyone be up for reading and responding to this article? I find myself agreeing with a lot of it.
“Effective altruism is a movement that excludes poor people”
This is a ten year old article, but it was discussed at the time—see e.g. here.
Rubenstein says that “As the low-hanging fruit of basic health programs and cash transfers are exhausted, saving lives and alleviating suffering will require more complicated political action, such as reforming global institutions.” Unfortunately, there’s a whole lot of low-hanging fruit out there, and things have gotten even worse of late with the USAID collapse and the UK cutting back on foreign aid.
In general, as the level of EA’s involvement and influence in a given domain increases, the more I start to be concerned about the sort of things that Rubenstein worries about here. When a particular approach is at a smaller size, it’s likely to concentrate on niches where its strengths shine and its limitations are less relevant. I would put the classic GiveWell-type interventions in that category, for instance. Compared to the scope of both the needs in global health & development and the actions of other actors, EA is still a fairly small fish.
I’m currently reviewing Wild Animal Initiative’s strategy in light of the US political situation. The rough idea is that things aren’t great here for wild animal welfare or for science, that we’re at a critical time in the discipline when things could grow a lot faster relatively soon, and that the UK and the EU might generally look quite a bit better for this work in light of those changes. We do already support a lot of scientists in Europe, so this wouldn’t be a huge shift in strategy. It’s more about how much weight to put toward which locations for community and science building, and also whether we need to make any operational changes (at this early stage, we’re trying to be very open-minded about options — anything from offering various kinds of support to staff to opening a UK branch).
However, in trying to get a sense of whether that rough approach is right, it’s extremely hard to get accurate takes (or, at least, to be able to tell whether someone is thinking about the relevant risks rationally). And it’s hard to tell whether “how people feel now” will have a lasting impact. For example, a lot of the reporting on scientist sentiment sounds extremely grim (example 1, 2, 3), but it’s hard to know how large the effect will be over the next few years—a reduction in scientific talent, certainly, but so much so that the UK is a better place to work, given our historical reasons for existing in the US? Less clear.
It doesn’t help that I personally feel extremely angry about the political situation, so that’s probably biasing my research.
Curious if any US-based EA orgs have considered leaving the US or taking some other operational/strategic step, given the political situation/staff concerns/etc? Why or why not?
Really appreciate you @mal_graham🔸 thinking out loud on this. Watching from Uganda, I totally get the frustration: the US climate feels increasingly hostile to science and to progressive work like wild animal welfare. So yeah, shifting more focus to the UK/EU makes sense, especially if it helps stabilize research and morale. That said, if you’re already rethinking geography and community building, I’d gently suggest looking beyond the usual Global North pivots. Regions like East Africa are incredibly underrepresented but ecologically critical, and honestly, there’s a small but growing base of people here hungry to build this field with proper support. If there’s ever a window to make this movement more global and future-proof, it might be now. Happy to chat more if useful.
Thank you for your comment! It’s actually a topic of quite a lot of discussion for us, so I would love to connect on it. I’ll send you a DM soon.
Just for context, the main reason I’ve felt a little constrained to the US/UK context is comparative advantage considerations, such as having staff who are primarily based in those countries/speaking English as our organizational common tongue/being most familiar with those academic communities, etc.
I definitely think the WAW community, in general, should be investing much more outside of just US/UK/EU—but am less sure whether it makes sense for WAI to do so, given our existing investments/strengths. But I could be convinced otherwise!
Even if we keep our main focus in the US/UK, I’d be very interested in hearing more about how WAI might be able to support the “people hungry to build the field” in other countries, so that could be another thing to discuss.
Bill Gates: “My new deadline: 20 years to give away virtually all my wealth”—https://www.gatesnotes.com/home/home-page-topic/reader/n20-years-to-give-away-virtually-all-my-wealth
All of the headlines are trying to run with the narrative that this is due to Trump pressure, but I can’t see a clear mechanism for this. Does anyone have a good read on why he’s changed his mind? (Recent events feel like: Buffett moving his money to his kids’ foundations & retiring from Berkshire Hathaway, divorce)
Why not what seems to be the obvious mechanism: the cuts to USAID making this more urgent and imperative. Or am I missing something?
“A few years ago, I began to rethink that approach. More recently, with the input from our board, I now believe we can achieve the foundation’s goals on a shorter timeline, especially if we double down on key investments and provide more certainty to our partners.”
It seems it was more a question of whether they could grant larger amounts effectively, which he had been considering for multiple years (I don’t know how much of that may now be possible due to the aid cuts).
I have only speculation, but it’s plausible to me that developments in AI could be playing a role. The original decision in 2000 was to sunset “several decades after [Bill and Melinda Gates’] deaths.” Likely the idea was that handpicked successor leadership could carry out the founders’ vision and that the world would be similar enough to the world at the time of their death or disability for that plan to make sense for several decades after the founders’ deaths. To the extent that Gates thought that the world is going to change more rapidly than he believed in 2000, this plan may look less attractive than it once did.
(just speculating, would like to have other inputs)
I get the impression that sexy ideas get disproportionate attention, and that this may be contributing to the focus on AGI risk at the expense of AI risks coming from narrow AI. Here I mean AGI x-risk/s-risk vs narrow AI (+ possibly malevolent actors or coordination issues) x-risk/s-risk.
I worry about prioritising AGI when doing outreach because it may make the public dismiss the whole thing as a pipe dream. This happened to me a while ago.
My take is that there are strong arguments for why x-risk from AGI is overwhelmingly more important than risks from narrow AI, and I think those arguments are the main reason why it gets more attention among EAs.
Thank you for your comment. I edited my post for clarity. I was already thinking of x-risk or s-risk (both in AGI risk and in narrow AI risk).
Ah I see what you’re saying. I can’t recall seeing much discussion on this. My guess is that it would be hard to develop a non-superintelligent AI that poses an extinction risk but I haven’t really thought about it. It does sound like something that deserves some thought.
When people raise particular concerns about powerful AI, such as risks from synthetic biology, they often talk about them as risks from general AI, but they could come from narrow AI, too. For example, some people have talked about the risk that narrow AI could be used by humans to develop dangerous engineered viruses.
My uninformed guess is that an automatic system doesn’t need to be superintelligent to create trouble, it only needs some specific abilities (depending on the kind of trouble).
For example, the machine doesn’t need to be agentic if there is a human agent deciding to make bad stuff happen.
So I think it would be an important point to discuss, and maybe someone has done it already.
I’ve noticed that a lot of the research papers related to artificial intelligence that I see folks citing are not peer reviewed. They tend to be research papers posted to arXiv, papers produced by a company/organization, or otherwise papers that haven’t been reviewed and published in respected/mainstream academic journals.
Is this a concern? I know that there are plenty of problems with the system of academic publishing, but are non-peer reviewed papers fine?
Reasons my gut feeling might be wrong here:
maybe I’m overly focusing on this with a sort of status quo bias, overly concerned about anything different from the standard system.
maybe experts in the area find these papers to be of acceptable quality.
maybe the handful of papers I’ve seen outside of traditional peer review aren’t representative, suggesting a sort of availability bias, and actually the vast majority of new AI-relevant papers that people care about are really published in top journals. I’m just browsing the internet, so maybe if I were a researcher in this area speaking with other researchers I would have a better sense of what is actually meaningful.
maybe artificial intelligence is an area where peer review doesn’t matter as much, as results can be easily replicated (unlike, say, a history paper, where maybe you didn’t have access to the same archive or field site as the paper’s author did).
I work in AI. Most papers, in peer-reviewed venues or not, are awful. Some, in both categories, are good. Knowing whether a work is peer reviewed or not is weak evidence of quality, since so many good researchers think peer review is dumb and don’t bother (especially in safety). I would generally consider, e.g., “comes from a reputable industry lab” to be somewhat stronger evidence. IMO the reason “was it peer reviewed” is a useful signal in some fields is largely because the best researchers there try to get their work peer reviewed, so not being peer reviewed is strong evidence of incompetence. That’s not the case in AI.
So, it’s an issue, but in the same way that all citations are problematic if you can’t check them yourself/trust the authors to do due diligence
My understanding is that peer review is somewhat less common in computer science fields because research is often published in conference proceedings without extensive peer review. Of course, you could say that the conference itself is doing the vetting here, and computer science often has the advantage of easy replication by running the supplied code. This applies to some of the papers people are providing… but certainly not all of them.
Peer review is far from perfect, but if something isn’t peer reviewed I won’t fully trust it unless it’s gone through an equivalent amount of vetting by other means. I mean, I won’t fully trust a paper that has gone through external peer review, so I certainly won’t immediately trust something that has gone through nothing.
I’m working on an article about this, but I consider the lack of sufficient vetting to be one of the biggest epistemological problems in EA.
Actually, computer science conferences are peer reviewed. They play a similar role as journals in other fields. I think it’s just a historical curiosity that it’s conferences rather than journals that are the prestigious places to publish in CS!
Of course, this doesn’t change the overall picture of some AI work and much AI safety work not being peer reviewed.
Is there a world where 30% tariffs on Chinese goods going into America are net positive for the world?
Could the tariffs reduce consumption and carbon emissions a little in the USA, while China focuses more on selling goods to lower-income countries? Could this perhaps result in a tiny boost in growth in low-income countries?
Could the improved wellbeing/welfare stemming from growth in low-income countries + reduced American consumption offset the harms caused by the economic slowdown in America/China?
Probably not—I’m like 75% sure the answer is no, but thought the question might be worth asking...