That’s great, but the less actively I’m involved in the process the more likely I am to just ignore it. That might just be me though.
Giles
This is great!! Pretty sure I’d be giving more if it felt more like a coordinated effort and less like I have to guess who needs the money this time.
I guess my only concern is: how to keep donors engaged with what’s going on? It’s not that I wouldn’t trust the fund managers, it’s more that I wouldn’t trust myself to bother researching and contributing to discussions if donating became as convenient as choosing one box out of 4.
This, by the way, is what certificates of impact are for, although it's not a practical suggestion right now because they've only been implemented at the toy level.
The idea is to create a system where your comparative advantage, in terms of knowledge and skills, is decoupled from your value system. Two people can each work for whichever org most needs their skills, even when it's the other person's org that better matches their values, and agree to swap impact with each other. (As well as the much more complex versions of that setup that would occur in real life.)
Are you counting donations from people who aren't EAs, or who are only relatively loosely so?
Yes. Looking at the survey data was an attempt to deal with this.
I was also hesitant about CFAR, although for a slightly different reason—around half its revenue is from workshops, which looks more like people purchasing a service than altruism as such.
Good point regarding GPP: policy work is another of those grey areas between meta and non-meta.
Not sure about 80K: their list of career changes mostly looks like earning to give and working at EA orgs—I don’t see big additional classes of “direct work” being influenced. It’s possible people reading the website are changing their career plans in entirely different directions, but I have my doubts.
Not sure what you mean by e.g.3.
I totally get the point regarding GWWC and future earnings, but I’m not sure how to account for it. GWWC do a plausible-looking analysis that suggests expected future donations are worth 10x total donations to date. But I’m not sure that we can “borrow from the future” in this way when doing metaness estimates, and if we do I think we’d need a much sharper future discounting function to account for exponential growth of the movement.
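To make the worry concrete, here's a toy present-value calculation (my own illustration, not GWWC's actual analysis). If this year's influenced donations are $D$, the movement grows at rate $g$, and we discount at rate $r$:

$$\mathrm{PV} = \sum_{t=0}^{\infty} D\left(\frac{1+g}{1+r}\right)^{t} = D\,\frac{1+r}{r-g} \qquad (r > g)$$

The sum only converges when $r > g$, so any fixed multiplier like 10x implicitly assumes the discount rate comfortably exceeds the movement's growth rate. While growth is exponential, that assumption is doing a lot of work.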
Good point regarding OPP: my direct charity estimate only included the top recommended charities of GW, GWWC and ACE. The OPP grants come to an additional $7.8m in 2014 ("additional" because it isn't money going to the direct charities I've already counted, and it isn't meta either).
Anyway, taking all this into consideration I get $3.2m meta, $62m non-meta for a ratio of 5%. (Plus $2.1 million in “grey area”). So we’re getting close to agreement!
Some other caveats:
It doesn’t measure non-financial contributions, such as running local chapters or volunteering for EA orgs.
Some of the money going to direct charities comes from people with no connection whatsoever to the EA movement (i.e. not influenced by GiveWell etc.)
Regarding the survey, do you feel that it’s biased specifically towards those who prefer meta, or just those who identify as EA?
I can't emphasize the exponential growth thing enough. A look at the next page on this forum shows CEA wanting to hire another 13 people. Meanwhile GiveWell were boasting of having grown to 18 full-time staff back in March; now they have 30.
But the direct charities are growing like crazy too! It all makes it very easy to be off by a factor of 2 (and maybe I am in my above reasoning) simply by using out of date figures. Anyone business-minded know about the sort of reasoning and heuristics to use under growth conditions?
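One rough heuristic (my own back-of-envelope, nothing authoritative): if a budget doubles every $T$ months, then a figure that's $\Delta t$ months out of date understates the truth by a factor of

$$2^{\Delta t / T}$$

GiveWell going from 18 to 30 staff in a matter of months suggests a doubling time on the order of a year, so even a one-year-old budget figure could be off by roughly the factor of 2 I mentioned.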
I’m helping prepare a spreadsheet listing organizations and their budgets, which at some point will be turned into a pretty visualization...
Anyway, according to this sheet, meta budgets total around $4.2m (that’s $2.1m GiveWell, $0.8m CEA and $0.8m CFAR, plus a bunch of little ones). That’s more than “a couple”, but direct charities’ budgets total $52m so we’re still shy of 10%.
(Main caveats to this data: It’s not all for exactly the same year, so anything which is taking off exponentially will skew it. Also I haven’t checked the data particularly carefully).
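Spelling out the arithmetic, meta as a share of the combined total is

$$\frac{4.2}{4.2 + 52} \approx 7.5\%$$

(or about 8% if you take meta as a fraction of direct budgets alone).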
I’ve also been counting x-risk organizations as not meta. That one’s a bit ambiguous—on the one hand they do a lot of “priorities research and marketing”, but on the other hand there isn’t really an object-level tier of organizations beneath them that works in the same areas.
As to what self-identified effective altruists are up to: a quick look at the 2014 EA survey only yields the number of donations to each organization, not the amounts of money… but if we go with that, 20% of the donations are to organizations I've counted as "meta".
So my working conclusion would be that if you favour a 50% split across the community, you’re looking good for putting all your eggs in meta. If you favour a 10-20% split, you may need to look a bit more carefully.
A final note of caution: you can only push in one direction. Suppose you favoured a 20% meta split, and it turned out that only 5% of donations in your reference class went to meta; that wouldn't automatically mean you should donate to meta. There might be some other category, e.g. direct animal welfare charities, that was also under-represented according to your ideal pie. It's then up to you to decide which needs increasing more urgently.
Multiple donors could form coalitions to fund a single donee
Or to fund multiple donees.
Let me know if you’re expecting a surge of Facebook joins (as a result of the Doing Good Better book launch and EA Global) and want help messaging people.
I’m guessing that for these to work, the ownership of certificates should end up reflecting who actually had what impact. I can think of two cases where that might not be so.
Regret swapping:
Person A donates $100 to charity X. Person B donates $100 to charity Y.
Five years later they both change their minds about which charity was better. They swap certificates.
So person A ends up owning a certificate for Y, and person B ends up owning a certificate for X, even though neither of them can really be said to have “caused” that particular impact.
Mistrust in the certificate system:
Foundation F buys impact certificates. It believes that by spending $1 on certificates, it is causing an equivalent amount of good as if it had donated $2 to charity X.
Person A is skeptical of the impact certificate system. She believes that foundation F is only accomplishing $0.50 worth of good with every $1 it spends on certificates (she believes the projects themselves are high value, but that if foundation F didn’t exist then the work would have got done anyway).
Person A has a $100 budget to spend on charity.
Person A borrows $50 from her savings account and donates $150 to charity X. She sells the entire certificate to foundation F for $50 and deposits this back in her savings account.
Why would person A do this? She doesn’t care about certificates, just about maximizing positive impact. As far as she is concerned, she has caused foundation F to give $50 to charity X, where otherwise that money would only have accomplished half as much good.
Why would foundation F do this? It believes in certificates, so as far as F is concerned, it has spent $50 to cause a $150 donation to charity X, where the other certificates it could have bought would only be equivalent to a $100 donation.
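To put rough numbers on both perspectives (my arithmetic, not part of the original example):

$$\text{A's view: } \underbrace{150}_{\text{actual good}} - \underbrace{(100 + 0.5 \times 50)}_{\text{her \$100 direct, plus F's \$50 at half value}} = \$25 \text{ of extra good}$$

$$\text{F's view: } \underbrace{150}_{\text{donation bought}} - \underbrace{2 \times 50}_{\text{best alternative certificates}} = \$50 \text{ of extra donation-equivalent}$$

Both sides perceive a surplus, which is why the trade goes through even though A assigns no value to the certificate itself.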
I’ve just found out that Paul Christiano and Katja Grace are already buying certificates of impact.
Just one comment: the essay asks “Why doesn’t the Gates foundation just close the funding gap of AMF and SCI?” but doesn’t seem to offer an answer. The closest seems to be 3b/c which suggests it’s a coordination problem or donor’s dilemma: everyone is expecting everyone else to fund these organizations.
If that’s the case, the relevant question would seem to be: what does the Gates foundation want? If the EA community finds something that GF wants that we can potentially offer (such as new high-risk high-return charities doing something totally innovative), then we can potentially do a moral trade with them.
Oh one other thing—I think the trickiest part of this system will be verifying whether someone has actually donated to a charity at the time they said they did. Every charity does it a different way.
I'm interested in moving moral economics forward in a different way: by creating some kind of online "moral market" and seeing what happens.
There are two possible systems I could implement:
Something based on certificates of impact (at least one person has asked for this)
A points-based system
I’ll describe the points-based system here, as it’s the one I’ve thought through a bit more. I presume it theoretically diverges from a certificate of impact system, but I haven’t thought through exactly how.
Users have points. The total number of points in the system is 1 billion.
At any time, a user with nonzero points can make a request that somebody else donate to a particular charity in exchange for some of those points.
Fundamentally that’s the only mechanic that I’m imagining right now. Other bells and whistles can be added, such as prediction markets, or other goodies that you can purchase with points such as volunteer time.
The requests stay on the table until someone takes them up, so a (new or existing) user can acquire points by seeing which requests are currently active and donating to one of the relevant charities.
Why would anyone want points? Points can be used to influence other people in which charities they give to, how much and when. (Although if everyone agrees that points are worthless then this leverage disappears).
What are some use cases? A charity is running a fundraiser, and its supporters all want each other to donate to the fundraiser as soon as possible, so that the charity’s staff aren’t tearing their hair out. If any of these supporters have points, they can use some of them to encourage other supporters to donate early, by raising the points-per-dollar value of the charity.
Any other use cases? Moral trade might be possible—donating to a charity becomes slightly more attractive if you get points in return, and those points are some reflection of how much other people like the charity. I don’t know how this would play out in practice though.
Trading points sounds like a lot of work. Yes it would be! Possibly enough to wipe out the value gained by moral trading. So the system would need one other major feature: automatic trading.
How does automatic trading work? Each user assigns a subjective utilons-per-dollar-donated value to each charity, as well as a value to holding onto the cash themselves. The system calculates a utilon-per-point value somehow. It can then automatically set the donation request price to be (utilon per dollar of charity divided by utilon per point). The system can also make suggestions to the user to donate when utility of charity + utility of points you’d get back > utility of holding onto the money.
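A minimal sketch of that rule in Python (every name here is my own illustration; none of it is from the actual moral-market code):

```python
# Sketch of the automatic-trading rule described above.
# All names and data structures are illustrative assumptions,
# not taken from the real moral-market codebase.
from dataclasses import dataclass

@dataclass
class User:
    utilons_per_dollar: dict        # subjective value of each charity, by name
    utilons_per_dollar_held: float  # subjective value of keeping the cash

def utilons_per_point(history):
    """Estimate utilons per point 'somehow': here, the best ratio of
    (utilons per dollar donated) to (points paid per dollar) seen in
    past trades."""
    return max(u / p for (u, p) in history)

def request_price(user, charity, u_point):
    """Points per dollar the system requests on the user's behalf:
    utilons per dollar of the charity, divided by utilons per point."""
    return user.utilons_per_dollar[charity] / u_point

def should_donate(user, charity, points_per_dollar_offered, u_point):
    """Suggest donating when the charity's utility plus the utility of
    the points received beats the utility of holding onto the money."""
    gain = (user.utilons_per_dollar[charity]
            + points_per_dollar_offered * u_point)
    return gain > user.utilons_per_dollar_held
```

The weakest link is the utilons_per_point estimate, for exactly the reasons below.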
Aren’t you glossing over some things here? Yes, several.
These prices and valuations are all at-the-margin, and will change as stuff gets bought and sold and spent. The system shouldn’t ever suggest that you donate a million dollars to charity X, because your marginal value of holding onto the money would have gone way up in the middle of that.
The utilons-per-point value is calculated “somehow”, possibly by looking at historical transactions and seeing which is the highest-utility charity whose donations can be bought with points.
This doesn't actually work though, because if you trade a donation for points, it doesn't mean 100% of that donation is a consequence of your points. The person may have donated anyway, or someone else may have offered up the points anyway.
How is this any use to me if I’m not a consequentialist and don’t believe in utilons? I haven’t thought about that yet.
This is all just chit-chat, and we’re never going to see this happen, right? Wrong. I’m working on it here, although it’s currently little more than a login page and a couple of database tables. Development help welcome! https://github.com/edkins/moral-market
I’m a little surprised by some of the other claims about what EAs are like, such as (quoting Singer): “they tend to view values like justice, freedom, equality, and knowledge not as good in themselves but good because of the positive effect they have on social welfare.”
It may be true, but if so I need to do some updating. My own take is that those things are all inherently valuable, but (leaving aside far future and x-risk stuff) welfare is a better buy. I can't necessarily assume many people in EA agree with me, though.
There’s also some confusion in the language between what people in EA do, and what their representatives in GW and GWWC do. I’m thinking of:
(Effective altruists) assess the scale of global problems by looking at major reports and publications that document their impact on global well-being, often using cost-effectiveness analysis.
There’s another response that EAs could have to the priority/ultrapoverty strand, which is to bend their utility functions so that ultrapoverty is rated as even more bad, and improvements at the ultrapoverty end would be calculated as more important. Of course, however concave the utility function is, you can still construct a scenario where the people at the ultrapoverty end would be ignored.
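To make that last point concrete (my own illustration, not from the essay): take an isoelastic utility function $u(c) = c^{1-\eta}/(1-\eta)$, where larger $\eta$ bends the function harder in favour of the worst off. Marginal utility is $u'(c) = c^{-\eta}$, so helping $N$ people at consumption $c_2$ outweighs helping one person in ultrapoverty at $c_1 < c_2$ whenever

$$N \cdot c_2^{-\eta} > c_1^{-\eta}, \quad \text{i.e.} \quad N > \left(\frac{c_2}{c_1}\right)^{\eta}$$

For any finite $\eta$ there's an $N$ big enough, which is exactly the scenario where the ultrapoverty end gets ignored.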
I think that the priority/ultrapoverty strand of this argument is one place where you can’t ignore nonhuman animals. My intuition says that they’re among the worst off, and relatively cheap to help.
My first thought on reading the “Two villages” thought experiment was that the village that was easier to help would be poorer, because of the decreasing marginal value of money. If this was so, you’d want to give all your money to the poorer one if your goal was to reduce “the influence of morally arbitrary factors on people’s lives”.
On the other hand, that gets reversed if the poorer village is the one that's harder to help. In that case fairness arguments would still seem to favour putting all your money in one village, just the opposite one to the one consequentialists would favour. (So this problem can't be completely separated from the Ultrapoverty one.)
One thing I find interesting about all the thought experiments is that they assume a one-donor, many-recipient model. That is, the morality of each situation is analyzed as if a single agent were making the decision.
Reality is many donors and many recipients, and I think this affects the analysis of the examples: firstly because donors influence each other's behaviour, and secondly because moral goods may aggregate on the donor end even if they don't aggregate on the recipient end. I'll try to explain with some examples:
Two villages (a): each village currently receives 50% of the donations from other donors. Enough of the other donors care about equality that this number will stay at 50% whichever one you donate to (because they’ll donate to whichever village receives less than 50% of the funds). So whether you care about equality or not, as a single donor your decision doesn’t matter either way.
Two villages (b): each village currently receives 50% of the donations from other donors, but this time it’s because the other donors are donating carelessly. Moral philosophers have decided that the correct allocation (balancing equality with overall benefit) is for one village to receive 60% of donations and the other to receive 40%. As a relatively small donor, your moral duty then is to give all your money to one village, to try and nudge that number up as close to 60% as you can.
Medicine (a): Philosophers have decided the ideal distribution is 90% condoms and 10% ARVs. Depending on what the actual distribution is, it might be best to put all your money into funding condoms, or all your money into funding ARVs, and only if it's already right on the mark should you favour a 90/10 split.
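The corner-solution logic in villages (b) and Medicine (a) is simple enough to sketch (a toy illustration of my own, with made-up numbers):

```python
# Toy sketch of the nudging logic above: a small donor pushing the
# overall split toward a philosopher-approved target allocation.
# Names and numbers are my own illustration.

def allocate(current_a, current_b, ideal_share_a, budget):
    """Give the whole budget to whichever option is below its ideal
    share; only split at the ideal ratio if it's already on the mark.
    Assumes the donor is small relative to total funding."""
    share_a = current_a / (current_a + current_b)
    if share_a < ideal_share_a:
        return {"A": budget}
    if share_a > ideal_share_a:
        return {"B": budget}
    return {"A": ideal_share_a * budget,
            "B": (1.0 - ideal_share_a) * budget}

# Condoms currently get 85% of funding but the ideal is 90%,
# so the small donor funds only condoms:
print(allocate(current_a=85.0, current_b=15.0,
               ideal_share_a=0.9, budget=100.0))  # {'A': 100.0}
```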
I don’t think the Ultrapoverty, Sweatshop and Participation examples are affected by this particular way of thinking though.
I just get the feeling that something like consequentialism will emerge, even if you start off with very different premises, once you take into account other donors giving to overlapping causes but with different agendas. Or at least, that this would be so for as long as people identifying with EA remain a tiny minority.
A Mindful Approach to Tackling those Yucky Tasks You’ve Been Putting Off
For many of us, procrastination is a problem. This can take many forms, but we’ll focus on relatively simple tasks that you’ve been putting off long-term.
Epistemic status: speculative, n=1 stuff.
Yucky Tasks
Yucky tasks may be thought of in several ways:
things you’ve been putting off
tasks which generate complex, negative emotions.
that vague thing that you know is there but it’s hard to get a grip on and you’re all like uhggggg
The connection to EA?
EA is not about following well-trodden paths. We’re all trying to do something different and new, and stepping out of comfort zones.
donating big sums of money to unusual causes
seeing the world through an unusual lens
reaching out to people we don’t know
planning our careers and our finances
and more
all while staying organized in our personal lives
Some of us may be exceptionally talented or productive in certain domains, yet find some of these tasks elusive or hard to get a grip on.
So what happens?
Most commonly, avoidance. This can go on until there's some kind of shift: maybe we avoid something until it becomes super urgent, or maybe we just wait until our feelings around it become clearer.
Forcing ourselves to jump right in, tackling the task "forcefully" using all our available willpower. Though this can get the job done, it can be unpleasant and unsustainable: we'll remember all that negativity next time, making the next task more difficult. It's especially disruptive when working with others.
What’s an alternative?
This talk is about discovering and mapping our mental landscapes surrounding a problem. Tasks, and their associated thoughts and emotions, can be mapped out in a rich web. Often, different sub-tasks will be associated with different emotions, and seeing this laid out can help with getting our emotional bearings, as well as practical problem-solving.
The result is unpacking a complex, muddied anxiety or resentment into something cleaner and truer. We’re still at early stages but we’re hoping to build this technique out into something robust that can help those of us in the EA movement overcome the blocks to personal effectiveness.
------------
(I would like to be part of the late session)