I created this account because I wanted a much lower bar for participating in the Forum; if I don't participate pseudonymously, I am afraid of looking dumb.
I also feel like my job places some constraints on the things I can say in public.
Should I Be Public About Effective Altruism?
TL;DR: I've kept my EA ties low-profile due to career and reputational concerns, especially in policy. But I'm now choosing to be more openly supportive of effective giving, despite some risks.
For most of my career, I've worked in policy roles: first as a civil servant, now in an EA-aligned organization. Early on, both EA and policy work seemed wary of each other. EA had a mixed reputation in government, and I chose to stay quiet about my involvement, sharing only in trusted settings.
This caution gave me flexibility. My public profile isn't linked to EA, and I avoided permanent records of affiliation. At times, I've even distanced myself deliberately. But I'm now wondering if this is limiting both my own impact and the spread of ideas I care about.
Ideas spread through visibility. I believe in EA and effective giving and want effective giving to become a social norm, but norms need visible examples. If no one speaks up, can we expect others to follow?
I've been cautious about reputational risks, especially the potential downsides of being tied to EA in future influential roles, like running for office. EA still carries baggage: concerns about longtermism, elitism, the FTX/SBF scandal, and public misunderstandings of our priorities. But these risks seem more manageable now. Most people I meet either don't know about EA or have a neutral-to-positive view when I explain it. Also, my current role is somewhat publicly associated with EA, and that won't change. Hiding my views on effective giving feels less justifiable.
So I'm shifting toward greater openness: I'll share more, be more honest about the sources of my thinking and my intellectual ecosystem, and more actively push ideas around effective giving when relevant. I'll still be thoughtful about context, but near-total caution no longer serves me, or the causes I care about.
This seems likely to be a shared challenge. I'm curious to hear how others are navigating it and whether your thinking has changed lately.
Voted! I also donated to get the digital download bundle, because, secretly, I'm still a Nerdfighter!
EA ❤️ Nerdfighteria
The EA Focusmate group has been a massive productivity boost, and to my own surprise, I even made some friends through it!
I just wish the group element on Focusmate were actually a little bit stronger (e.g., more means of interaction, other shared accountability), but this is a limitation of the platform, not the group.
Yes!
TL;DR: This forum post is well-written but overlooks two key points: the high-status perception of OP funding within EA, which skews community inclusion and metrics, and the reputational risks non-EA-branded donors and organisations face in aligning with EA, which leads to a disconnect in recognising their contributions.
This forum post effectively outlines the options and includes a (partial) call to action to stop complaining, acknowledge issues, and take steps forward. However, it overlooks two important aspects that I believe warrant discussion:
First, the post does not account for how funding from OP is perceived as a marker of being an EA organisation. This perception creates a feedback loop where organisations funded by Open Philanthropy are seen as high-status and placed centrally within the EA community. In contrast, organisations or sub-teams not viewed as fully aligned are often excluded from community metrics. This dynamic significantly influences which organisations are recognised as part of the EA community.
Additionally, in these kinds of statistics, there is little recognition of sub-teams within organisations or EA-adjacent groups that contribute to EA goals without formally associating with the movement. For example, many civil service roles are funded through diverse portfolios and remain underrepresented in these discussions. Similarly, some organisations prefer not to publicly align with EA for strategic reasons, despite their close alignment in practice.
Second, for figures like Dustin and Cari, donating to EA-branded organisations may make sense given their personal brands are closely tied to the community. However, for other donors, explicitly associating with EA poses reputational risks and diminishes the credit they receive for their philanthropic efforts. Schmidt Futures exemplifies this: it is a family-funded organisation doing work aligned with EA interests but avoids formally associating with the movement, as I think there's little incentive to do so from their perspective.
Similarly, I work for an organisation largely funded by a donor not widely considered an "EA billionaire," yet the organisation is almost entirely staffed by EAs doing aligned work. Despite this, the donor is unlikely to appear in lists of EA funding sources, highlighting a gap in how contributions are recognised.
As a bit of a lurker, let me echo all of this, particularly the appreciation of @Vasco Grilo🔸. I don't always agree with him, but adding some numbers makes every discussion better!
Thank you for fixing this!
I also don't think it's a good use of time, which is why I'm asking the question.
However, I believe attending is worth significantly more than three hours. That's why I've invested a lot of time in this previously, though I'd still prefer to allocate that time elsewhere if possible.
Edit: It's very helpful to know that the acceptance rate is much higher than I had thought. It already makes me feel like I can spend less time on this task this year.
Hi, I hope this is a good time to ask a question regarding the application process. Is it correct that it is possible to apply a second time after an initial application has been rejected?
I understand that the bar for acceptance might be higher on a second attempt. However, I feel this would allow me to save considerable time on the application process. Since I was accepted last year and a few times before, I might be able to reuse an old application with minimal editing. This could help me, and potentially many others, avoid spending three or more hours crafting an entirely new application from scratch.
Looking forward to your response!
Does anyone have thoughts on whether it's still worthwhile to attend EAGxVirtual in this case?
I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't:
I would only be able to attend on Sunday afternoon CET, and it seems like it might be a waste to apply if I'm only available for that time slot, as this is something I would never do for an in-person conference.
I can't find the schedule anywhere. You probably only have access to it if you are on Swapcard, but this makes it difficult to decide ahead of time whether it is worth attending, especially if I can only attend a small portion of the conference.
Hi Lauren!
Thank you for another excellent post! I'm becoming a big fan of the Substack and have been recommending it.
A quick question that you may have come across in the literature, though I didn't see it in your article: not all peacekeeping missions are UN missions; there are also missions from ECOWAS, the AU, the EU, and NATO.
Is the data you presented exclusively true for UN missions, or does it apply to other peacekeeping operations as well?
I'd be curious to know, since those institutions seem more flexible and less entangled in geopolitical conflicts than the UN. However, I can imagine they may be seen as less neutral than the UN and may therefore be less effective.
Could you say a bit more about your uncertainty regarding this?
After reading this, it sounds to me like shifting some government spending to peacekeeping would be money much better spent than on other themes.
Or do you mean it more from an outsider/activist perspective: that the work of running an organization focused on convincing policymakers to do this would be very costly and might make it much less effective than other interventions?
Thank you for the response! I should have been a bit clearer: this is what inspired me to write this, but I still need 3-5 sentences to explain to a policymaker what they are looking at when shown this kind of calibration graph. I am looking for something even shorter than that.
Simple Forecasting Metrics?
I've been thinking about the simplicity of explaining certain forecasting concepts versus the complexity of others. Take calibration, for instance: it's simple to explain. If someone says something is 80% likely, it should happen about 80% of the time. But other metrics, like the Brier score, are harder to convey: What exactly does it measure? How well does it reflect a forecaster's accuracy? How do you interpret it? All of this requires a lot of explanation for anyone not interested in the science of forecasting.
What if we had an easily interpretable metric that could tell you, at a glance, whether a forecaster is accurate? A metric so simple that it could fit within a tweet or catch the attention of someone skimming a report, someone who might be interested in platforms like Metaculus. Imagine if we could say, "When Metaculus predicts something with 80% certainty, it happens between X and Y% of the time," or "On average, Metaculus forecasts are off by X%." This kind of clarity could make comparing forecasting sources and platforms far easier.
I'm curious whether anyone has explored creating such a concise metric, one that simplifies these ideas for newcomers while still being informative. It could be a valuable way to persuade others to trust and use forecasting platforms or prediction markets as reliable sources. I'm interested in hearing any thoughts or seeing any work that has been done in this direction.
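To make the two metrics concrete, here is a minimal sketch with made-up forecast data: a Brier score (mean squared error between probability and outcome) and a one-line calibration check of the "when I say 80%, how often does it happen?" kind.

```python
# Toy data (made up): probabilistic forecasts and binary outcomes.
forecasts = [0.8, 0.8, 0.8, 0.8, 0.8, 0.2, 0.2, 0.2, 0.6, 0.6]
outcomes  = [1,   1,   1,   0,   1,   0,   0,   1,   1,   0]

# Brier score: mean squared error between forecast and outcome (lower is better).
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # harder to interpret at a glance

# Calibration at the 80% level: of the events given 80%, how many happened?
hits = [o for f, o in zip(forecasts, outcomes) if f == 0.8]
print(f"Said 80%: happened {100 * sum(hits) / len(hits):.0f}% of the time")
```

The contrast in the two printed lines is the point: the second reads like the tweet-sized claim above, while the first needs a paragraph of explanation.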
Hi there!
I really enjoy the curated EA Forum podcast and appreciate all the effort that goes into it. However, I wanted to flag a small issue: in my podcast app, emojis cannot be included in filenames. With the increasing use of the "🔸" in forum usernames, this has been causing some problems.
Would it be possible to remove emojis from the filenames?
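In case it helps, here is a rough sketch (in Python, with a hypothetical filename) of one way the generation side could do this: dropping characters in the Unicode "Symbol, other" category, which covers emoji like the orange diamond, and tidying any leftover whitespace.

```python
import unicodedata

def strip_emoji(filename: str) -> str:
    # Drop characters in the Unicode "Symbol, other" (So) category,
    # which includes emoji such as the orange diamond in usernames.
    kept = "".join(ch for ch in filename if unicodedata.category(ch) != "So")
    # Collapse any doubled spaces left behind by the removal.
    return " ".join(kept.split())

print(strip_emoji("Vasco Grilo\U0001F538 - Some post title.mp3"))
# -> Vasco Grilo - Some post title.mp3
```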
Thanks for considering this!
This is a very non-EA opinion, but personally I quite like this on, for lack of a better word, aesthetic grounds: charities should be accountable to someone, in the same way that companies are to shareholders and politicians are to electorates. Membership models are a good way of achieving that. I am a little sad that my local EA group is not organized in the same way.
Just to clarify, I assume that our distributions will not be made public or associated with our names?
What surprises me about this work is that it does not seem to include the more aggressive (for lack of a better word) alternatives I have heard being thrown around, like "suffering-free," "clean," or "cruelty-free."
I am aware of Metaforecast, but from what I understood, it is no longer maintained. Last time I checked, it did not work with Metaculus anymore. It is also not very easy to use, to be honest.
Interesting idea! It got me thinking, and I find it tricky because I want to stay close to the truth, and the truth is, I'm not really a "moderate EA". I care about shrimp welfare, think existential risk is hugely underrated, and believe putting numbers on things is one of our most powerful tools.
It's less catchy, but I've been leaning toward something like: "I'm in the EA movement. To me, that means I try to ask what would do the most good, and I appreciate the community of people doing the same. That doesn't mean I endorse everything done under the EA banner, or how it's sometimes portrayed."