London GWWC group co-lead: https://www.givingwhatwecan.org/london
Organiser of the EY Effective Altruism workplace group and EA London Quarterly Review coworking sessions.
In my day job, I’m an accountant turned product person in tax technology.
I think MoreGood would be a great rebrand for the forum!
Magnify Mentoring is so great and I’m really glad they exist! I got a lot out of the conversations with my mentor. Hope they continue to get funding!!
Very excited about the “EA as a university” concept and am looking forward to hearing more!
Where do you see GWWC and commitments to effective giving fitting into this? Do you expect to promote this as a norm?
Really appreciate those diagrams—thanks for making them! I agree and think there are serious risks from EA being taken over as a field by AI safety.
The core ideas behind EA are too young and too unknown by most of the world for them to be strangled by AI safety—even if it is the most pressing problem.
Pulling out a quote from MacAskill’s comment (since a lot of people won’t click):
I’ve also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it’s something I also don’t like about the movement. My biggest worries with my own beliefs stem around the worry that I’d have very different views if I’d found myself in a different social environment. It’s just simply very hard to successfully have a group of people who are trying to both figure out what’s correct and trying to change the world: from the perspective of someone who thinks the end of the world is imminent, someone who doesn’t agree is at best useless and at worst harmful (because they are promoting misinformation).
In local groups in particular, I can see how this issue can get aggravated: people want their local group to be successful, and it’s much easier to track success with a metric like “number of new AI safety researchers” than “number of people who have thought really deeply about the most pressing issues and have come to their own well-considered conclusions”.
One thing I’ll say is that core researchers are often (but not always) much more uncertain and pluralist than it seems from “the vibe”.
...
What should be done? I have a few thoughts, but my most major best guess is that, now that AI safety is big enough and getting so much attention, it should have its own movement, separate from EA. Currently, AI has an odd relationship to EA. Global health and development and farm animal welfare, and to some extent pandemic preparedness, had movements working on them independently of EA. In contrast, AI safety work currently overlaps much more heavily with the EA/rationalist community, because it’s more homegrown.
If AI had its own movement infrastructure, that would give EA more space to be its own thing. It could more easily be about the question “how can we do the most good?” and a portfolio of possible answers to that question, rather than one increasingly common answer — “AI”.
At the moment, I’m pretty worried that, on the current trajectory, AI safety will end up eating EA. Though I’m very worried about what the next 5-10 years will look like in AI, and though I think we should put significantly more resources into AI safety even than we have done, I still think that AI safety eating EA would be a major loss. EA qua EA, which can live and breathe on its own terms, still has huge amounts of value: if AI progress slows; if it gets so much attention that it’s no longer neglected; if it turns out the case for AI safety was wrong in important ways; and because there are other ways of adding value to the world, too. I think most people in EA, even people like Holden who are currently obsessed with near-term AI risk, would agree.
The OECD are currently hiring for a few potentially high-impact roles in the tax policy space:
The Centre for Tax Policy and Administration (CTPA)
Executive Assistant to the Director and Office Manager (closes 6th October)
Senior Programme Officer (closes 28th September)
Head of Division—Tax Administration and VAT (closes 5th October)
Head of Division—Tax Policy and Statistics (closes 5th October)
Head of Division—Cross-Border and International Tax (closes 5th October)
Team Leader—Tax Inspectors Without Borders (closes 28th September)
I know less about the impact of these other areas but these look good:
Trade and Agriculture Directorate (TAD)
Head of Section, Codes and Schemes—Trade and Agriculture Directorate (closes 25th September)
Programme Co-ordinator (closes 25th September)
International Energy Agency (IEA)
Clean Energy Technology Analysts (closes 24th September)
Modeller and Analyst – Clean Shipping & Aviation (closes 24th September)
Analyst & Modeller – Clean Energy Technology Trade (closes 24th September)
Data Analyst, Temporary (closes 28th September)
Financial Action Task Force
I agree, and this is why I’m in favour of a Big Tent approach to EA. This risk stems from a lack of understanding that EA contains real diversity of thought and that it isn’t claiming to have all the answers. There is a danger that poor behaviour from one part of the movement can harm other parts.
Broadly EA is about taking a Scout Mindset approach to doing good with your donations, career and time. Individual EAs and organisations can have opinions on what cause areas need more resources at the margin but “EA” can’t—it isn’t a person, it’s a network.
I really liked this post from @Shakeel Hashim: How CEA’s communications team is thinking about EA communications at the moment — EA Forum (effectivealtruism.org). I hope that, whatever happens in terms of shake-ups at CEA, communications and clarity around the EA brand are prioritised.
Great comment! I’d add that GWWC pledges in the UK are usually based on pre-tax income, so it wouldn’t actually cost the full £5k. Donations reduce your income for income tax purposes (but not NI): Payroll Giving (UK) or GAYE — EA Forum (effectivealtruism.org)
i.e.
£50k salary
A £4k donation is grossed up by 25% under Gift Aid to £5k.
If you actually donated £5k out of pocket, that would be a £6.25k total donation once grossed up with Gift Aid.
However, the higher-rate (40%) tax band starts at ~£50k a year, so every £1 donated above that costs you 60p.
(I’m working on a longer explainer which updates this piece, UK Income Tax & Donations — EA Forum (effectivealtruism.org), but you can check out the underlying spreadsheet that creates these graphs here: UK Income tax (including NI) - Google Sheets)
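The arithmetic above can be sketched in a few lines. This is a rough model only, assuming 2023/24 rates (basic rate 20%, higher rate 40%) and ignoring NI, the personal-allowance taper, and Scottish bands:

```python
# Rough sketch of UK Gift Aid arithmetic (2023/24 rates assumed;
# ignores NI, the personal-allowance taper, and Scottish bands).
BASIC_RATE = 0.20
HIGHER_RATE = 0.40

def gift_aid_gross(net_donation: float) -> float:
    """The charity reclaims basic-rate tax, so gross = net / (1 - 20%)."""
    return net_donation / (1 - BASIC_RATE)

def net_cost(net_donation: float, marginal_rate: float) -> float:
    """Higher-rate taxpayers reclaim the difference between their
    marginal rate and the basic rate on the *gross* donation."""
    gross = gift_aid_gross(net_donation)
    relief = gross * max(marginal_rate - BASIC_RATE, 0)
    return net_donation - relief

# A £4k out-of-pocket donation becomes a £5k gross donation...
print(gift_aid_gross(4_000))          # 5000.0
# ...and costs a higher-rate taxpayer £3k, i.e. 60p per £1 of gross donation.
print(net_cost(4_000, HIGHER_RATE))   # 3000.0
```

(Payroll Giving works differently: the donation simply comes out of pre-tax pay, so no gross-up step is needed.)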
Potentially. However, Danaher’s current market share in LMICs can be traced to public funding and a buy-down agreement with WHO and Unitaid in 2012, made on the basis of projected annual sales of 4.7 million tests (a number that was quickly eclipsed). There are potential competitors, but they won’t enter the LMIC market before 2024, and gaining market share will take years. Cepheid will therefore remain the dominant supplier of critical rapid molecular tests for LMICs for the next few years.
The campaign is advocating that they reduce their profit margin, which, given the potential sales volumes, would still likely leave this a profitable product. I am pretty unconvinced that it would be a serious decision-relevant factor.
If you have evidence or case studies beyond this post showing that this has seriously influenced a for-profit’s decision to commercialise in LMICs, then I’d love to read it.
Thanks for writing this! A friend and I were just discussing this the other day.
Should we be making it so difficult for users with an EA forum account to make updates to the forum wikis?
I imagine the platform vision for the EA forum is to be the “Wikipedia for do-gooders” and make it useful as a resource for people working out the best ways to do good. For example, when you google “Effective Altruism AI Safety” on incognito mode—the first result is the forum topic on AI safety: AI safety—EA Forum (effectivealtruism.org)
I was chatting to @Rusheb about this, who has spent the last year upskilling to transition into AI Safety from software development. He had some great ideas for links (i.e. the new 80k guides, a site with links for newbies or for people making the transition from software engineering).
Ideally, someone with that experience and opinions on what would be useful on a landing page for AI Safety should be able to suggest it on the wiki page (as you can on Wikipedia, with the caveat that you can be overruled). However, he doesn’t have the forum karma to do that, and the tooltip explaining this was unclear about how to get the karma to do it.
I have the forum karma to do it but I don’t think I should get the credit—I didn’t have the AI safety knowledge—he did. In this scenario, the forum has lost out on some free improvements to its wiki plus an engaged user who would feel “bought in”. Is there a way to “lend him” my karma?
I got it from posting about EA Taskmaster which shouldn’t make me an authority on AI Safety.
Fantastic post, and thank you for articulating this! I feel really similarly doing workplace organising: a lot of the value seems to come from connecting people to other people who take doing good seriously.
Some people struggle to work out what the EA community is supposed to do for them, or what the point of it all is. For what it’s worth, my experience has been that this confusion extends to all levels of seniority within the community. But for me, participating in the community was the obvious way to counter the attrition Brooks warned of. I tend to agree that you will tend to become more like those around you, but that applies to people other than your colleagues, and you can choose who those people are! Maybe those ‘EAs’ even find what you are doing praiseworthy, but a lot of the power is just in feeling less weird for trying.
I often feel like people working at core EA orgs forget how valuable this is for the vast majority of EAs, who do not work with other EAs. Almost everyone I know outside EA, from my parents to my colleagues to my neighbours, is not seeking to improve the wider world with any significant fraction of their resources. They’re just getting on with their lives and trying to do right by the people they meet. To the extent they are aware of my giving, their attitude is one of curious fascination.
Do you have thoughts on what you’d like to see more of in community building to support E2Gers? I’d be particularly curious about what you think made a difference when you were younger vs now
Reminds me of this scene from Glass Onion: https://twitter.com/KnivesOut/status/1611769636973854723?s=20
“It’s a dangerous thing to mistake speaking without thought for speaking the truth.”
Yeah, aligning incentives here seems hard, and tbh I don’t think running advocacy campaigns targeting pharma companies for every global health issue is a sustainable option.
It was interesting to read about Advance Market Commitments from this piece (https://worksinprogress.co/issue/why-we-didnt-get-a-malaria-vaccine-sooner/#advance-market-commitments)
Quoting (on mobile so can’t seem to do the formatting):
A standard Advance Market Commitment (AMC) is a promise to subsidize the future purchase of a new vaccine in large quantities – if it’s invented – in return for the firm charging customers close to marginal cost (that is, with only a small mark-up).
Let’s break it down. The subsidy incentivizes research by compensating innovators for their fixed cost investments in R&D and manufacturing capacity. The commitments to buy a certain quantity at a certain price ensure the vaccine is affordable and widely available. The subsidy is conditional on a co-payment (this is the part that is close to marginal cost) from governments in low and middle income countries – without it, the developer receives nothing. This incentivizes firms to develop vaccines countries will actually use, not just those that meet technical specifications.
So while patents trade-off innovation incentives with affordable access, AMCs help us achieve both. And the price strategy means that AMCs encourage deployment at scale in a way that most prizes do not. AMCs are a kind of inversion to typical ‘push funding’ – they instead ‘pull’ innovation towards a goal by paying for outputs and outcomes. They don’t require funders to choose which research efforts to back in advance – they can just commit to rewarding the innovations that succeed. And they’ve been successful at doing so in the past.
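The payout structure described in that quote can be sketched as a toy model. All the numbers and the function below are invented for illustration; they are not the terms of any actual AMC:

```python
# Toy model of an Advance Market Commitment payout (illustrative
# numbers only; not the terms of any real AMC contract).
def amc_revenue_per_dose(copay: float, subsidy: float,
                         doses_sold: int, subsidy_cap_doses: int) -> float:
    """The firm receives the country co-payment (set close to marginal
    cost) on every dose, plus a donor subsidy on doses up to a cap.
    Crucially, the subsidy is only paid when a country actually buys,
    so revenue is pulled by real demand rather than pushed at R&D."""
    subsidised = min(doses_sold, subsidy_cap_doses)
    revenue = doses_sold * copay + subsidised * subsidy
    return revenue / doses_sold

# With a $3.50 copay plus a $7 subsidy on the first 10m doses, early
# sales earn enough per dose to reward the fixed R&D investment...
print(amc_revenue_per_dose(3.5, 7.0, 10_000_000, 10_000_000))  # 10.5
# ...while sales beyond the cap revert towards the near-marginal-cost copay.
print(amc_revenue_per_dose(3.5, 7.0, 20_000_000, 10_000_000))  # 7.0
```

The cap is what squares the two goals the quote mentions: innovators are compensated up front via the subsidised tranche, while the long-run price countries face stays close to marginal cost.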
I started defaulting to saying "people trying to do EA": less person-focused, more action-focused.
Thank you so much for articulating this in such a thoughtful and considered way! It must have taken a lot of courage to share these difficult experiences, but I’m so glad you did.
Your suggested actions are really helpful, and I would encourage anyone who cares about building a strong community based on altruism to take the time to think on this.
*CW*
As someone who has had a similar experience with a partner I trusted, this paragraph felt incredibly true:
“The realistic tradeoffs as a survivor of sexual harassment or assault often push the survivor to choose an ideal, like justice or safety for others, at the expense of their time, energy, and health. While reeling from the harm of the situation, the person experiencing the harm might engage in a process that hurts them in an effort to ensure their safety, protect other potential victims, educate the perpetrator, or signal that the perpetrator’s actions were harmful.”
I spent the weeks following the incident going over the facts in my head, considering his point of view, minimising the experience, wondering if I should have been more direct (anyone who has met me in person will know that’s not something I usually have a problem with), discussing with friends who were disgusted by the story, then finally organising a meeting with him to outline why his actions were unacceptable, the next steps he needed to take and to make clear that he was not to contact me again.
I’m lucky that I have an incredible support system, had read enough on consent to feel able to stand up for myself and that he was immediately full of regret and shame. I am lucky that I have been able to process what happened with professionals and my friends to the extent that I am in a great place now. But I am forever changed by it and would unfortunately rank that short event as one of my clearest memories.
Hopefully, readers of this comment can see that this is not a reasonable process. I would be horrified if someone I loved told me that this had happened to them and that they were planning to mediate the aftermath like I had done.
Harms can be caused by poor judgement and selfishness in the moment: actions that the individual might regret or feel shame over, and potentially learn and grow from. However, the responsibility to protect other people, educate the perpetrator, and repair the damage should be distributed.
The purpose of this comment was to give an additional piece of anecdotal evidence of the problem. I don’t have any clear answers nor am I qualified to say what should be done in an ideal world here. If you’d like to discuss anything I’ve written here, feel free to DM me here or on Twitter @glpat99
Thanks again Emma—this is such an excellent post.