Correction: the Palestinian charity I considered donating to had a good chance of exceeding AMF’s bar in my opinion. However, before I actually donated the money, their supplies and means of making more were destroyed by a bomb blast, and because of the blockade they would not have been able to recoup them any time soon.
jenn
I really like your lamp post analogy! The world is big and the more EAs practice looking under their own lamp posts and writing up what they find the better the community will be for it.
I wrote about my 2023 donations at length here.
TL;DR:
- 90% to the Against Malaria Foundation (only GiveWell-recommended charity with a Canadian entity)
- 5% to a local harm reduction charity (local charity that probably saves the most lives per dollar donated)
- 5% to a local financial aid clinic (expected ROI of something like 20-30x cash)
Extended Navel-Gazing On My 2023 Donations
Many, many people think of government workers as lazy parasites who are overpaid and hard to fire for no reason, so I’m not sure that’s a super useful comparison lol. I worked in the federal public service for a bit and experienced some of this firsthand :’)
This also loops right back to point 1; one reason they can afford to spend more time per client is that people are willing to be paid under market rate to work at Samaritans, because of its well-deserved great reputation.
One more tactic for reputation management: they have a salary policy where everyone in the entire org earns within 15% of everyone else, and the founders, who work regular 60-hour weeks, earn around $55k a year (CAD). The messaging around this is that they want more of their funding to go towards helping others, that they want to pay around the Canadian median salary, and that it increases solidarity between Samaritans workers and the people they help. Donors love this.
EAs have this take that you should pay people well, and it’s a good take. It unfortunately also burns reputation.
Thank you!
To go into the weeds a little, a lot of this goes back to funding restrictions. As an example, if you’re helping people find employment, you generally operate on state funding earmarked for that work, and that funding generally comes with stringent quotas and reporting requirements.
The expectation, and what organizations generally do, is to deal with the reporting requirements by requiring people to fill out a 3+ page intake form before they can book an appointment to talk to a person[1]. Samaritans chooses to talk to people right away, subtly probing for the answers during the meeting (since it’s all stuff they need to know to help with the job search anyways) and having their staff fill out the forms themselves afterwards. This genuinely creates a lot more work on their end; by choosing to internalize the intake forms, Samaritans spends something like 30% more time per client for the same amount of funding.
This sort of “internalization” is standard across Samaritans’ ventures. It is a giant pain, but they do it because it removes barriers to accessing much-needed services for so many people.
- ^
In a truly unfortunate number of cases too, the forms are only available in English and the clients are new immigrants/refugees with limited English skills.
Man, I really wish I had more to say about point 4! It’s so deeply counter-intuitive that people would literally rather risk eviction or starvation than go to places that treat them kind of crappily, but that’s honestly the extent of the observation.
I think point one is simultaneously really important and also a really risky thing for EA to pursue. I think a lot of the current discussion around EA’s reputation has been really defensive/reactive. Part of that is because we’re trying to put out controversy fires, but I also think that EAs are much more comfortable not interacting at all with illegible, traditional systems and undervalue what playing the game well could get us.
As a dramatic example, the UN spent $47 billion in 2021 on a hilariously imbalanced set of 17 development goals. These goals are revised every 15 years. We can try to grab a seat at the table in 2030 or 2045 and align the new goals with EA principles—that could mean billions of dollars per year diverted to EA causes, assuming current funding levels. (This is a very, very conservative assumption btw—funding appears to be increasing steadily YoY.)
But on the flipside, would an organization that can get a seat at the UN table still be recognizably EA? Or will we have destroyed the heart of it to get there?
EA has some essential weirdnesses that will mean that it’ll always be a black sheep in the nonprofit industrial complex, and I really don’t want to see us lose those weird things for the sake of increasing reputation/funding. So it’ll be a delicate balancing act.
I personally think that the ideal (but very difficult) way forward is to try to be (and also seem[1]) so staunchly ethical that it warps public opinion away from the rest of the nonprofit ecosystem and towards us, instead of trying to become part of the trad nonprofit in-group. I think the anti-slavery activist and Quaker Benjamin Lay is an inspirational figure for this path.
- ^
To be explicit, this means stopping, or setting much higher bars for, things that are effective and would be the rational things to do in a social vacuum, but that burn social capital. Buying castles, having really nice office spaces, paying above market rates for employees at EA charities, overly enthusiastic and conspicuously well-funded university groups, etc.
My sense is that a lot of EAs think that trading off reputation to do these effective things is worth it, but my claim is that they think so because they don’t realize how good the upside of having a good reputation is. I certainly didn’t think of positive reputation as having any value before I started working at Samaritans; my model was that reputation is something that you strive to keep non-negative and then you’re basically good.
I also want to state explicitly that even with the correct model of the upside of having a stellar reputation, it could still be optimal in the long run to continue to do things that are slightly off-putting. “Reputation is priceless” is not literally true; it could be more effort than it’s worth for EA to pursue, considering that we have a pretty deep stock of, like, Google softengs who feel very alienated from the trad nonprofit world, etc.
Thank you! Honestly, I think all of the advice that I could give has been said better by Scott here: https://slatestarcodex.com/2016/02/20/writing-advice/
He’s been a really big influence on the way that I think, and also my writing style :)
oh hey i ran that rationality meetup on radical empathy and AI welfare. i think it went pretty well and it was directly prompted by AI welfare debate week happening on the forums, so thanks for organizing!
i can talk a little more about the takeaways from that meetup specifically, which had around a half dozen attendees:
it was really interesting to try to model how to even plausibly give moral weight to entities that were so bizarrely different from biotic life forms (e.g. they can be shut down and rebooted/reverted, can change their own reward functions, can spin up a million copies of themselves). we kept running into assumptions around ideas like consciousness and pain that just sort of fell apart upon any sort of examination
i tried to construct a scenario/case study with an ai entity that was possibly developing sentience, and the response from basically everyone was “wow these behaviours are sus and we have to shoot the mainframe with a gun immediately”. this was genuinely illuminating to me about the difficulty of trying to grant ~rights/freedoms to something more powerful than yourself, and discussing the specifics of the case study turned the sense of danger from something abstract into something that felt real. we tried to come up with some possible ways for an AI entity to signal ~deservingness of moral weight without signalling dangerous capabilities and kind of came up blank, but this might say more about the collective intelligence of the meetup attendees than it does about anything else haha.
like, i don’t think these are amazing take-aways, in that higher quality versions of these conclusions have surely been written up in the forums long before debate week. but i think it’s helpful to get them in the water a bit more, and i came out of it with a greater appreciation for the complexity of this question (and also like, more deeply grokking the difficulties of alignment research and just how different ai entities can be from humans).
out of curiosity, do you remember how you came across the meetup posting?