I’m a total noob when it comes to animal charity. I feel like ACE and its charity picks probably don’t get the attention they deserve, but I don’t have any feel for how ACE goes about its research, what’s especially good or bad about each of AE/THL/MFA, or how they compare in units of goodness to human-focussed charities. Any pointers?
We could make one.
If everyone on the ladder is EA-aligned, then each person benefits from teaching almost as much as from learning. It makes me think of Skillshare.im, though the structure is a little different.
I think of earning to give as a strategy, rather than a place you’re at. Pursuing earning to give will nudge you in the direction of:
giving a larger % of income to charity
giving some conscious thought to your career, taking earning potential into consideration
keeping up to date with which charities seem the most cost effective
But you don’t have to do all those things, or do them especially well, or do them a certain amount to say you’re pursuing an earning to give strategy.
The idea of regularly talking to GWWC members makes me want to plug the EA Buddy System. The goals are much the same, it’s just decentralized and volunteer-based. Is it worth coordinating with GWWC on this, e.g. coming up with a set of suggestions that EA buddies can talk about with GWWC members?
If I read this recent blog post correctly, it sounds like GiveWell are concerned about bumping into the room-for-more-funding ceiling for some of their top charities. Would this be a point against trying to recruit more donors, and in favour of encouraging new projects to start up, or of promoting causes that GW doesn’t really cover, such as nonhuman animals or xrisk?
Good questions! I guess there are times when our feeling of nastiness can be exploited, and in those cases we have to bypass it. If you always give money to people at the door, they could just turn up the next day asking for more—it may or may not be a “nice feeling” strategy but it wouldn’t be a successful one.
I think that someone’s aliefs about eating meat are relevant to the cognitive dissonance concept. In the case where somebody eats meat and doesn’t alief that eating meat is nasty, I can imagine three subcases:
Person doesn’t care about nonhuman animals or is unaware of cruelty issue
Compartmentalization
Eating meat is actually the EA thing to do, and all the for/against arguments have been internalized
In the case where somebody eats meat and does alief that eating meat is nasty, I can imagine:
Cognitive dissonance
Compartmentalization
“…which have dropped by an order of magnitude every couple of years”
This seems like a really important point, and I wonder if anyone has blogged on this particular topic yet. In particular:
How should we expect this trend to continue?
Does it increase the activation energy for getting involved in EA? (My interest in EA was first sparked by GW and how cheap they reckoned it was to save a life via VillageReach.)
Does it affect the claim that a minority of charities are orders of magnitude more effective than the rest?
If we become able to put numbers to the effectiveness of a new area, such as xrisk or meta, would we expect to see the same exponential drop-off in our estimates even if we’re aware of this problem?
Do you mean that choosing to be nasty can cause us to come to prefer nastiness?
Being nasty in order to achieve some greater good requires complicated reasoning which can feel wrong. I’d argue that it’s best to limit the amount of that kind of reasoning that we subscribe to—it feels like it could be demotivating, or that we could become desensitized to the feeling of wrongness, or something.
“creating a positive feedback cycle”
I agree.
Thinking about this a bit more… If I don’t trust my future self with these certificates, I can always send them to some other entity which will look after them in a way consistent with my present self’s wishes.
This could be an account in a different name, corresponding to an entity which I believed caused my behavior, and which I believe will responsibly hoard the certificates (e.g. GiveWell).
Alternatively it could be an ethereum-style contract which allows me to either hold on to the certificate or give it away (given enough other signatures verifying that I’m giving it to a reputable party and that I’m not benefitting financially from giving it away).
This sort of lock-in could also be a mechanical part of how certificates work, e.g. they allow themselves to be traded freely for a month and after that it gets more and more difficult.
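A minimal sketch of how that kind of time-based lock-in might work, in Python (the class, the 30-day free window, and the one-extra-signature-per-month rule are all hypothetical illustrations on my part, not part of any actual certificate scheme):

    from datetime import datetime, timedelta

    FREE_WINDOW = timedelta(days=30)  # hypothetical free-trading period

    class Certificate:
        """Toy impact certificate whose transfers get harder with age."""

        def __init__(self, project_id, owner):
            self.project_id = project_id
            self.owner = owner
            self.issued_at = datetime.utcnow()

        def signatures_required(self, now=None):
            """Free transfer for a month; each further month adds one required co-signature."""
            age = (now or datetime.utcnow()) - self.issued_at
            if age <= FREE_WINDOW:
                return 0
            return 1 + (age - FREE_WINDOW).days // 30

        def transfer(self, new_owner, cosigners):
            """Transfer ownership only if enough reputable parties have co-signed."""
            needed = self.signatures_required()
            if len(cosigners) < needed:
                raise PermissionError(
                    "transfer needs %d co-signature(s), got %d" % (needed, len(cosigners)))
            self.owner = new_owner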
The situation where certificates are bought and sold at near the original donation price is somewhat peculiar. Essentially, rather than giving away your assets to charity you’d be exchanging real money for some riskier and weirder asset, but which is nonetheless still worth money. Giving away certificates then feels like a sort of “second order altruism” which then maybe deserves a certificate of its own...
Wow, this is amazing! It brings to mind the idea of a “what kind of altruist are you?” quiz, with the answer providing a link to the most relevant essay or two which might change your mind about something...
I’ve just read through the comments on meteuphoric.
I think arguments about the knock-on effects of vegetarianism/veganism are irrelevant if the charity you believe is the most effective happens to be a vegan outreach charity. The same multiplier would apply to both sides of the comparison.
I think I’m sympathetic to the case that a year of pain is greater in magnitude than 1 QALY. How bad we view pain must surely be anchored to how motivated we are to avoid it. In the ancestral environment, if you’re injured, do you experience maximal pain for substantially less time on average than the injury reduces your lifespan? If so, pain-aversion calibrated to that lifespan cost would have to weight each moment of pain very heavily, and I’d expect us to experience a normal year as good and a year of pain as very, very bad.
I agree that the important factor is reducing meat rather than eliminating it entirely. Eliminating that last percent might be quite costly and not worth the “I’m 100% meat free” signalling points.
I didn’t know about creatine. That sounds like important information.
I agree about inconvenience budgets being tricky. Avoiding meat could be a good way of building up your tolerance to inconvenience (although admittedly it’s not usually marketed that way). It’s a good Schelling point (if you gave yourself free choice over which inconvenient thing to do, you’d just pick the least actually inconvenient one), and there are social supports for it.
To me, four important effective altruism barriers are cognitive dissonance, akrasia, arrogance and value erosion. More precisely:
cognitive dissonance as deliberately choosing to be nasty so as to gain some small amount of fungible resource which can be spent on effective charity
akrasia as choosing not to give because somehow you don’t feel like it, hoarding your money because you can always spend it later
arrogance as believing that because you have access to and trust in a specific piece of knowledge (in this case charity effectiveness), you will have vastly more effect on the world than an average person
value erosion as future selves deciding they don’t care about animals after all.
I think cognitive dissonance and value erosion work similarly here, and both point in favour of veganism.
Arrogance is a complicated one because it might actually be true that you have a huge positive effect compared to an average person (it’s kind of what we’re striving for). But actually alieving it might be problematic, and it might make sense to just be a vegan and unplug your phone chargers when not in use in order to feel more normal.
Akrasia could work both ways—there’s a possibility that veganism could “use up” your charitableness, which would certainly be a bad thing. But on the other hand veganism might help you integrate socially with other vegan activists, which might be a motivating factor to give.
I think the 0.05 is a per-day figure, and humans live around 600 times as long as chickens, so it implies indifference between 1 human and 12,000 chickens in the trolley problem. But the OP can correct me here.
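Spelling out the arithmetic (taking the 0.05 per-day weight and the 600× lifespan figure as given): each chicken-day counts for 0.05 human-days, and a chicken lives about 1/600 of a human lifespan, so one human life balances 600 / 0.05 = 12,000 chicken lives.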
I’m still a little confused as to whether these certificates are intended to confer social status. If not, why should I value universes in which I own certificates more highly than universes in which I don’t?
Should I just look at the big picture and decide it’s beneficial to self-modify so as to give ownership of certificates intrinsic value in my utility function?
One possible use for certificates other than bragging rights is A/B testing—pick two EAs with similar skills and resources but different strategies, and see who ends up with more certificates.
I worry about people’s preferences changing over time, either as they get older or as a result of running into financial difficulties. I can imagine buying a bunch of certificates in my idealistic youth and then selling them off again in my cynical old age. At any point in time I’d feel like I was doing the right thing, and whatever philanthropist bought the certificates off me would think they were retrospectively funding the original project when actually they were just putting money into my pocket.
This is sort of the opposite of the problem that (I think) Owen_Cotton-Barratt was describing, with old certificates becoming valueless.
I like this idea. Thinking about the following case was helpful for me:
Suppose, for the sake of argument:
I have two career options, Charity Worker or Search Engine Optimizer.
CW generates 5 utilons in direct impact, and 0 utilons via earning-to-give
SEO generates 0 utilons in direct impact, and 3 utilons via earning-to-give
There are plenty of people who don’t identify as EAs and/or don’t take Paul_Christiano’s certificate idea seriously, but who want to work as CWs.
At first glance it looks like the system would fail here: if I’m trying to maximize my certificates, and most other people in the market don’t care, then I’d choose CW and crowd out somebody else.
But I think what would actually happen is that I’d choose the SEO option, earn a bunch of money and then say “hey, charity worker, over here on the internet there’s an apparently meaningless collection of numbers with your name at the top. I’ll give you $5 if you log in and change it to my name”. I’d end up with certificates valued at more utilons than if I’d just taken the CW option.
Even if a typical person didn’t view these certificates as valuable or meaningful initially, they’d start to once they heard about this mysterious community who was willing to pay money for them.
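To make the arithmetic in the example above explicit, here’s a toy calculation of the two strategies (the utilon figures come from the example; treating the $5 purchase price as negligible relative to the SEO earnings is my simplifying assumption):

    # Utilon values assumed in the example above.
    CW_DIRECT = 5  # direct impact of the charity-worker job
    SEO_ETG = 3    # earning-to-give impact of the SEO job

    # Strategy A: take the CW job myself; I hold the certificate for its direct impact.
    strategy_a = CW_DIRECT  # 5 utilons of certificates

    # Strategy B: take the SEO job, donate the earnings (3 utilons of certificates),
    # and buy the certificate from the non-EA charity worker, who would have done
    # the work anyway and sells it for a token $5 (assumed negligible here).
    strategy_b = SEO_ETG + CW_DIRECT  # 8 utilons of certificates

    assert strategy_b > strategy_a  # buying the certificate beats crowding out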
If I click the “New to Effective Altruism” link in the sidebar, I get a page with very little information asking for my email address. Is this what we want?