I created this account because I wanted to have a much lower bar for participating in the Forum, and if I don't do so pseudonymously, I am afraid of looking dumb.
I also feel like my job places some constraints on the things I can say in public.
I agree that Amsterdam is a good option, but if you are interested in AI policy, Brussels now has a very active ecosystem, most of which is quite EA-oriented. Oddly enough, this is completely separate from the Belgian EA ecosystem.
Can you share how you have set this up?
Thank you!
Hi @Peter Wildeford, it has been a few years, but I hold your opinion in pretty high regard and I was just wondering: to what degree do you stand by the lessons of this book (with or without skimming your own post again)?
Thank you for your post! I think it is essential to consider how charity interacts with power dynamics and the risks of neocolonial approaches, but this did make me think of a point in this kind of critique that consistently puzzles me.
You write:
The final, and perhaps most foundational critique, is that EA interventions are too often designed from the outside, based on what donors or researchers believe is effective, without meaningful consultation with the people these interventions are supposed to help. The result, critics say, is not just technocratic or impersonal, but paternalistic: a form of help that is imposed, rather than co-created. If poor people are experts in their own needs, who is in a better position to know what would help them? And if effective altruists genuinely want to improve lives, why not begin by amplifying the voices of those they aim to support?
I live in a wealthy country, yet I often appreciate when problems are solved for me without needing my direct involvement. The underlying critique here is valid: local knowledge is invaluable, and listening to those affected will usually improve interventions. As a general heuristic, amplifying local voices makes sense.
However, when I consider how progress works in my own life and in developed countries more broadly, the picture looks different. Many improvements I rely on exist because experts, not locals, prioritized technical knowledge and scale. The water from my tap meets WHO standards rather than being based on local opinion. My appliances result from global competition that identifies expertise wherever it exists. Healthcare is guided not just by local doctors, but also by national and international bodies that assess quality and effectiveness. Medical breakthroughs often arise from global expertise rather than local insight. These examples illustrate that relying solely on local perspectives would leave wealthy countries significantly worse off.
This raises a question: why do we hold global development to a standard of strict localism, when wealthier societies themselves achieved prosperity by leveraging expertise wherever it was found? The principles that drive progress in developed nations (identifying and scaling the best solutions) are directly relevant to tackling global poverty.
Of course, this approach should not override fairness or safety. If interventions risk active harm or require sensitive political trade-offs, local input is crucial. But many EA-funded interventions, like reducing worm infestations, malaria, or lead exposure, are a lot like the water from my tap: technocratic precisely because they avoid contentious political questions. These are large-scale problems that individuals cannot realistically address alone, yet they face minimal local opposition when implemented effectively.
In this sense, I think EA is often more systemic than critics acknowledge. It focuses on solving deep-rooted problems through scalable solutions, much like the systems that underpin prosperity in wealthier countries.
Yes! Some of us are working on exactly this!
That said, there are also a few caveats:
- At the moment it is actually the Dutch government, not the EU, that is responsible for export controls: which destinations ASML is allowed to export to, and what kinds of restrictions need to be placed on those exports.
- Increasingly, export controls are also bypassing European jurisdictions, because the U.S. is applying ever more control over the entire supply chain through new legislation that gives it the right to impose export controls on products that contain only a relatively small amount of U.S.-made goods. That control is not total, though, and the fact that the Atlantic alliance is not in the strongest shape right now might indicate that the EU will be willing to move more independently on this in the future.
- Unfortunately, there is much less awareness of the long-term geopolitical importance of AI in EU capitals. So the theory of change most people working on this now follow is something like: first raise awareness of this being an important issue, simultaneously raise awareness of this being a leverage point in the chip supply chain, and then start advocating for these regulations.
- ASML is also a company that might threaten to relocate to a more permissive jurisdiction. I don't see that happening very quickly, but it is worth keeping in mind.
I'm still figuring out how I feel about all of this, and I'm not yet sure how it should change my behaviour. But I wanted to say that this post has stayed with me since I read it. Thank you for writing it.
I also find it quietly reassuring that Mjreard, of all people, is engaging with this topic: someone I think of as unsentimental, focused on what truly matters, and a sharp critic of ideas. I'm not sure I entirely endorse the comfort I take from that, but it does make it feel more socially acceptable to be concerned about these shifts too.
Interesting idea! It got me thinking, and I find it tricky because I want to stay close to the truth, and the truth is that I'm not really a "moderate EA". I care about shrimp welfare, think existential risk is hugely underrated, and believe putting numbers on things is one of our most powerful tools.
It's less catchy, but I've been leaning toward something like: "I'm in the EA movement. To me, that means I try to ask what would do the most good, and I appreciate the community of people doing the same. That doesn't mean I endorse everything done under the EA banner, or how it's sometimes portrayed."
Should I Be Public About Effective Altruism?
TL;DR: I've kept my EA ties low-profile due to career and reputational concerns, especially in policy. But I'm now choosing to be more openly supportive of effective giving, despite some risks.
For most of my career, I've worked in policy roles: first as a civil servant, now in an EA-aligned organization. Early on, both EA and policy work seemed wary of each other. EA had a mixed reputation in government, and I chose to stay quiet about my involvement, sharing only in trusted settings.
This caution gave me flexibility. My public profile isn't linked to EA, and I avoided permanent records of affiliation. At times, I've even distanced myself deliberately. But I'm now wondering if this is limiting both my own impact and the spread of ideas I care about.
Ideas spread through visibility. I believe in EA and effective giving, and I want effective giving to become a social norm, but norms need visible examples. If no one speaks up, can we expect others to follow?
I've been cautious about reputational risks, especially the potential downsides of being tied to EA in future influential roles, like running for office. EA still carries baggage: concerns about longtermism, elitism, the FTX/SBF scandal, and public misunderstandings of our priorities. But these risks seem more manageable now. Most people I meet either don't know EA or have a neutral-to-positive view when I explain it. Also, my current role is somewhat publicly associated with EA, and that won't change. Hiding my views on effective giving feels less justifiable.
So, I'm shifting toward increased openness: I'll share more, be more honest about the sources of my thinking and my intellectual ecosystem, and more actively push ideas around effective giving when relevant. I'll still be thoughtful about context, but near-total caution no longer serves me or the causes I care about.
This seems likely to be a shared challenge. I'm curious to hear how others are navigating it and whether your thinking has changed lately.
Voted! I also donated to get the digital download bundle, because, secretly, I'm still a Nerdfighter!
EA ❤️ Nerdfighteria
The EA Focusmate group has been a massive productivity boost, and to my own surprise, I even made some friends through it!
I just wish the group element on Focusmate were actually a little bit stronger (e.g., more means of interaction, other shared accountability), but this is a limitation of the platform, not the group.
Yes!
TLDR: This forum post is well-written but overlooks two key points: the high-status perception of OP funding within EA, which skews community inclusion and metrics, and the reputational risks for non-EA-branded donors and organisations in aligning with EA, leading to a disconnect in recognising their contributions.
This forum post effectively outlines the options and includes a (partial) call to action to stop complaining, acknowledge issues, and take steps forward. However, it overlooks two important aspects that I believe warrant discussion:
First, the post does not account for how funding from OP is perceived as a marker of being an EA organisation. This perception creates a feedback loop where organisations funded by Open Philanthropy are seen as high-status and placed centrally within the EA community. In contrast, organisations or sub-teams not viewed as fully aligned are often excluded from community metrics. This dynamic significantly influences which organisations are recognised as part of the EA community.
Additionally, in these kinds of statistics, there is little recognition of sub-teams within organisations or EA-adjacent groups that contribute to EA goals without formally associating with the movement. For example, many civil service roles are funded through diverse portfolios and remain underrepresented in these discussions. Similarly, some organisations prefer not to publicly align with EA for strategic reasons, despite their close alignment in practice.
Second, for figures like Dustin and Cari, donating to EA-branded organisations may make sense given their personal brands are closely tied to the community. However, for other donors, explicitly associating with EA poses reputational risks and diminishes the credit they receive for their philanthropic efforts. Schmidt Futures exemplifies this: it is a family-funded organisation doing work aligned with EA interests, but it avoids formally associating with the movement, as I think there's little incentive to do so from their perspective.
Similarly, I work for an organisation largely funded by a donor not widely considered an "EA billionaire," yet the organisation is almost entirely staffed by EAs doing aligned work. Despite this, the donor is unlikely to appear in lists of EA funding sources, highlighting a gap in how contributions are recognised.
As a bit of a lurker, let me echo all of this, particularly the appreciation of @Vasco Grilo🔸. I don't always agree with him, but adding some numbers makes every discussion better!
Thank you for fixing this!
I also don't think it's a good use of time, which is why I'm asking the question.
However, I believe attending is worth significantly more than three hours. That's why I've invested a lot of time in this previously, though I'd still prefer to allocate that time elsewhere if possible.
Edit: It's very helpful to know that the acceptance rate is much higher than I had thought. It already makes me feel like I can spend less time on this task this year.
Hi, I hope this is a good time to ask a question regarding the application process. Is it correct that it is possible to apply a second time after an initial application has been rejected?
I understand that the bar for acceptance might be higher on a second attempt. However, I feel this would allow me to save considerable time on the application process. Since I was accepted last year and a few times before, I might be able to reuse an old application with minimal editing. This could help me, and potentially many others, avoid spending three or more hours crafting an entirely new application from scratch.
Looking forward to your response!
Does anyone have thoughts on whether itâs still worthwhile to attend EAGxVirtual in this case?
I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't:
- I would only be able to attend on Sunday afternoon CET, and it seems like it might be a waste to apply if I'm only available for that time slot, as this is something I would never do for an in-person conference.
- I can't find the schedule anywhere. You probably only have access to it if you are on Swapcard, but this makes it difficult to decide ahead of time whether it is worth attending, especially if I can only attend a small portion of the conference.
One of my very minor complaints about EAG is that they give out T-shirts that I would not normally want to wear in daily life. I now have three (admittedly very nice) pyjama T-shirts from conferences, which is great! But I would love a simple shirt with a small logo that I can wear in everyday life, not just at home. It would actually get more exposure than the current T-shirts do!
For inspiration, here is a subtle range of T-shirts from Cortex. Just imagine the small heart-lightbulb there!