Is this the opportunity for GWWC to expand into the EU—i.e. to see if there’s a format that would enable transnational donations to be tax-deductible from anywhere in the EU?
I keep getting feedback that tax-deductibility isn’t a big deal. For example, in the UK, it seems to be limited to the charity being able to claim back an extra 25%. (I’m not an expert on this).
But my point is that in much of the EU, tax-deductibility allows you to roughly DOUBLE your net donation.
In Belgium, I could choose to give 100 euros to GWWC, or, at the same net cost, give 100 / (1 − 0.45) ≈ 182 euros—almost double—to a registered charity like the Red Cross. In Germany, I could donate up to 20% of my income and have it be fully tax-deductible, meaning that if I'm paying tax at 50% on my marginal income, I could donate twice as much for the same net cost. I could give 100 euros to GWWC, or give 200 euros to the Red Cross and receive 100+ euros back as a tax rebate (or a reduced tax bill).
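As a minimal sketch of that arithmetic (the 45% and 50% marginal rates are just the illustrative figures above; real rules vary by country and income):

```python
def gross_donation_for_net_cost(net_cost: float, marginal_rate: float) -> float:
    """Gross donation possible for a given net (after-rebate) cost,
    assuming the full donation is deductible at the donor's marginal rate."""
    return net_cost / (1 - marginal_rate)

# Belgium-style example: 45% rebate on registered-charity donations
print(gross_donation_for_net_cost(100, 0.45))  # ~181.8 euros

# Germany-style example: 50% marginal tax rate, donation fully deductible
print(gross_donation_for_net_cost(100, 0.50))  # 200.0 euros
```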
Even if EA arguments convince me personally that it would still be much better to support the most effective charities, whose impact may be 100 times greater, it's unlikely that most taxpayers will be persuaded.
But there's another thing. Being a registered charity is a mark of trust. If someone tells me to support Charity X, who are doing amazing work, and I don't know them, finding them on the list of registered charities reassures me that they are a reputable, vetted organisation.
I haven't done much research into this, but it would be interesting to see if a model exists to do this formally across all the EU countries. Failing that, it would be worth prioritising countries based on the potential benefits.
(To be clear, although I'm writing this comment here, I'm very conscious that both GWWC and Charity Entrepreneurship are aware of this opportunity—so really this comment is aimed at anyone else who might have knowledge or ideas.)
Denis
Thanks for sharing.
My conclusion from this is that there is still a massive opportunity, perhaps especially in Europe, to increase the funds going to effective charities by creating organisations like effectiv spenden—or by expanding your model. For example, there is no analogous charity in Belgium, and Belgians cannot donate tax-deductibly to effectiv spenden.
Probably a much better use of people's time than reading my post would be to listen to today's talk by Carl Robichaud about nuclear weapons at the EA Virtual conference:
A Turning Point in the Story of Nuclear Weapons? (swapcard.com)
Thanks Jason,
Some really good ideas there. The last paragraph is particularly interesting: indeed, my idea is that this should be an absolute last-resort scenario, and so, while I too would struggle to find a justification for it, it is the kind of scheme that would fit well.
Your second paragraph is the key challenge. All I can say is that I haven't investigated this in depth, especially since I'm not only not a tax expert, but also not US-based, and this point would be different in every country. But I believe it's not an impossibly difficult calculation to figure out a way to ensure this; the challenge might just be convincing anyone to add even more complexity to the tax laws.
Really appreciate your thoughtful input and ideas!
Cheers
Denis
Thanks Daniel,
This is all good perspective. Mostly I don’t disagree with what you wrote, just a few comments:
In terms of decisions, I'm not necessarily saying that the public should decide, but that the public should at least be aware and involved.
Your comment about alternative uses for the money is correct—my original point was a bit simplistic!
My original post didn't talk enough about deterrence, but in a response to another comment I mentioned the key point I missed: the US will still have 900 submarine-based missiles as its deterrent. Much as I personally would love a nuclear-weapon-free world, I am not suggesting that the US could safely get rid of these, and I believe they provide an adequate deterrent.
Your insight that some of the upgrades may increase safety is a good one—I hadn’t considered that.
Maybe I'm just idealistic, but I believe we need to see more efforts to further reduce nuclear arsenals, and that this might be a time to try. I totally agree it won't be easy!
Overall, thanks for this. It is always appreciated when someone takes the time and effort to critique a post in some depth. Cheers!
This is a fair push-back.
The article itself explains only one immediate spend: $100 billion on a new Sentinel missile, ordered in 2021. The precise details of the $1.5 trillion number are not outlined in the article, but are available at the following link, which references the original source. The estimate is based on a 30-year time-frame, with a low-end estimate of $1.25 trillion. It is true that the original source comes from a group in favour of arms control.
That said, my point in writing this post was not to focus on the precise quantity (even if it's "only" $1.0 trillion, that doesn't make it OK). Rather, it was to highlight that the US is spending huge amounts of money upgrading its nuclear weapons in a world which would be far better off without more nuclear weapons, and that there has been (to my knowledge) almost no public debate, or even political debate, over whether upgrading nuclear missiles is the right thing to do. It just goes on behind the scenes.
To be clear, this is not about some utopian vision of a world without the need for nuclear deterrence. The US still has 900 submarine-based nuclear missiles. So there is no credible argument that the new and improved land-based missiles are needed for deterrence, since the submarine-based missiles would be impossible to destroy in a first-strike attack.
Somebody is peddling the notion that, with the right missiles, the US could win a nuclear war.
I would love to see more public debate about this, rather than these matters being decided in secret discussions between politicians (looking for campaign funds), armed forces personnel (looking for relevance and power) and arms producers (looking for profit). I’m not sure which of these actors actually represents the interests of the majority of citizens, of America or of the world.
Really enjoyed and appreciated this wonderful piece of analysis. Thank you!
Considering this post was written 7 years ago, I wonder whether some of the insights you shared have still not been fully exploited by the EA community.
EAs do some of the vital things you identify extremely well. One of them is intellectual rigour, which fits with the academic angle that neoliberalism exploited. In an argument between an EA and a non-EA, you typically feel that the more intelligent and critical the audience, the better the chance that the EA will win, because we really test arguments to the nth degree. This is great.
One area where we may do less well is the Utopian aspect. I believe this may be because we do not necessarily recognise the importance of making our message "visionary" in a way that resonates with the general public. EAs are sometimes perceived as a group of nerdy, elitist intellectuals, which is not the reality. But it may be true that we allow this image to persist by not proactively changing it.
The tragedy of this is: EAs do have a very aspirational world-vision—a world without poverty or malaria or nuclear war or pandemics or animal suffering or existential AI risks… maybe we just don't talk about it enough. Maybe in addition to all the critical, quantitative arguments and the focus on risks and problems, we should have more "I have a dream" communication, talking about the kind of world EAs would create, using positive language (not "no poverty" but "everyone has a good standard of life and access to good education and health-care"; not "no animal suffering in factory farms" but "we have access to as much nutritious, delicious food as we want, while animals roam the fields in freedom with no worries about being slaughtered for our food"… well, we can find better words).
It could be that we do this already and I'm just not seeing it (I'm in Belgium!), but the press coverage of EA during the SBF trial was so negative and so divorced from the reality I actually see in the EA community.
Thank you Jason for this really helpful comment!
Part of the reason I posted here was to get feedback exactly like this, from people more knowledgeable than I am. So I really appreciate both the feedback and your ideas as to how it can still work.
Given the complexity you describe, I am tempted to suggest a two-pronged approach:
Short-term: Your donation is NOT initially tax-deductible, but (in return for that), if/when you decide to make part or all of a donation permanent, that part would then become tax-deductible. I'm not sure, but this would seem to be fair according to the way tax laws are supposed to work.
Long-term:
If this becomes a common phenomenon, big enough to warrant the effort, it would be interesting to renegotiate the way it works with the tax authorities, so that a donation would initially be tax-deductible but the amount that could be recovered would be only the non-tax-deductible part (e.g. the $80 in the example you described), and then (TBD) maybe the remaining $20 must be paid back to the tax authorities, or maybe not.
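A toy sketch of this long-term variant, purely to make the flow concrete (the $100 donation and the 20% deduction rate are assumptions standing in for the $80/$20 example referenced above):

```python
def reclaimable_amount(donation: float, deduction_rate: float) -> float:
    """The donation is tax-deductible up front; if the donor later reclaims
    it, only the non-tax-deductible part comes back to them."""
    tax_benefit = donation * deduction_rate   # e.g. a $20 rebate at 20%
    return donation - tax_benefit             # e.g. $80 reclaimable

# Assumed numbers: $100 donation, 20% deduction rate
print(reclaimable_amount(100, 0.20))  # 80.0
# Whether the remaining $20 is repaid to the tax authorities is TBD.
```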
I really appreciate your perspective of looking at this from the tax authorities' point of view, which indeed would probably have to be very cynical. And let's face it, in most cases I agree with them. It already bothers me that very rich people can give millions to a very rich church rather than pay taxes that would be used to provide better services for the poor—even when nobody breaks any laws.
But that’s my top-of-mind reaction—I will give this some more thought!
Cheers!
Thank you Quentin!
Your criticism is valid. For me this is maybe the single biggest watchout.
The reason I believe this can be managed comes from looking at the way companies that market consumer goods (everything from shampoo to cars to iPhones) manage what they call "cannibalisation".
Very simple example: let's say Company A has a shampoo S and a conditioner C on the market, each with a 10% share of its respective market and each earning $10m in profit. Now an inventor in Company A says, "Hey, I have a new product, SC, a shampoo with conditioner!" The company tests this product and discovers they could sell enough to make $12m in profit. So it seems like a no-brainer. But before they launch, they will first check the cannibalisation—how much of that $12m is actually coming from the profit they already make on products S and C.
In reality, the whole thing will be much more complex, but the gist of it is: Company A will have a time-tested methodology to ensure that the combination S plus C plus SC is a better business model than just S plus C (the current model).
In an analogous way, I don't think it's realistic to say that there will be no loss of pure donations, but there will be quantitative ways to make sure that the net total donations to charities will be higher than before. I didn't go into this in detail, but obviously doing so would be a critical step in designing any real model (see the sketch below). Depending on the calculations, you would then adjust the parameters of the model. For example, you might decide that only donations above a certain minimum "pure donation" would be eligible, if the model showed that this would ensure that the net total would necessarily increase.
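Here is that sketch, a deliberately simplified cannibalisation check using the toy shampoo figures above (the $8m cannibalisation figure is an assumption for illustration; for the donation analogue, substitute "pure donations" for S and C):

```python
def incremental_profit(new_profit: float, cannibalised: float) -> float:
    """Net gain from the new product SC: its own profit minus the profit
    it takes away from the existing products S and C."""
    return new_profit - cannibalised

# Toy numbers: S and C earn $10m each; SC would earn $12m,
# of which an assumed $8m is cannibalised from S and C.
sc_profit = 12.0
cannibalised_from_s_and_c = 8.0  # assumption; would come from market testing

gain = incremental_profit(sc_profit, cannibalised_from_s_and_c)
print(gain)  # 4.0 -> launching adds $4m: S + C + SC beats S + C alone
```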
Happy to discuss this in further detail if we get to the point of actually doing it. I'm sure there are econ majors and business majors who can cite papers and literature on the best way to do this; my experience is more in seeing the completed analysis and how cannibalisation was factored into every potential new launch.
As for your second point, I am definitely curious to learn more and will email you! Thanks!
Wow, that is cool. Thanks for this great connection!!
I didn’t know about this. But it is indeed close to what I had in mind, albeit a more modest version. Great minds think alike and all that :)
I will contact them and share my post and see if there’s anything in there that might be useful to them—or alternatively, if they have some feedback on the idea based on their first year of operation. I would be especially interested to see if they have any data to confirm or refute my ideas about expected value, testing, optimisation, etc.
When I first started thinking about this, a couple of years ago, I didn't find anyone doing anything similar, but it wasn't easy to search. And anyhow, I wouldn't have found Basefund, since they only started after that.
Thanks for sharing this info!
Thanks for this comment.
First, there are indeed parallels.
I think the difference is that a charitable remainder trust is a very major commitment; not many people will go for that. It seems geared towards people who have a lot of money and do not really intend to earn any more. I would imagine that many people who commit to this are older and retired, or else extremely rich.
(and I love the idea of charitable remainder trusts too, by the way!)
In principle, in my idea, the charity is NOT going to be giving you a regular income, or anything at all, barring unforeseen circumstances. So it is a more limited connection.
But to answer the central thrust of your comment: In an ideal world, these and other ideas would be very visible and popular options for people of all ages. In terms of managing them, insurance contracts could indeed play a role, whether that be an insurance taken out by the individual but paid from the funds, or an insurance taken out by the charity which would be used to pay for individual problems.
If I were to re-write this post, I would have included a reference to these—but frankly I’ve only just read about them now following your comment!
One point here is that insurance policies have negative expected value—we pay more in return for the company assuming the unpredictability. So in an ideal world this would be a large enough scheme that we could avoid anyone paying additional fees to insurance companies. For example, if 1000 people were doing this and it was expected that 5% would claim back, it might be more cost-effective to maintain at least 5% liquidity than to spend money on an insurance contract. But obviously, this is an executional detail.
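A rough sketch of that comparison, using the assumed figures above (1,000 participants, 5% expected claims; the average donation size and the insurer's 30% premium loading are made-up numbers):

```python
def self_insurance_reserve(n_donors: int, avg_donation: float,
                           claim_rate: float) -> float:
    """Liquidity to hold if the scheme covers expected claims itself."""
    return n_donors * avg_donation * claim_rate

def insurance_premium(n_donors: int, avg_donation: float,
                      claim_rate: float, loading: float) -> float:
    """Expected claims plus the insurer's margin (the negative-EV part)."""
    return n_donors * avg_donation * claim_rate * (1 + loading)

# Assumed: 1,000 donors giving $1,000 each, 5% expected to claim back,
# and a 30% premium loading charged by the insurer.
print(self_insurance_reserve(1000, 1000, 0.05))  # $50,000 reserve
print(insurance_premium(1000, 1000, 0.05, 0.30)) # $65,000 premium
```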
If I could give this post 20 upvotes I would.
Being relatively new to the EA community, this for me is the single biggest area of opportunity to make the community more impactful.
Communication within the EA community (and within the AI Safety community) is wonderful, clear, crisp, logical, calm, proportional. If only the rest of the world could communicate like that, how many problems we’d solve.
But unfortunately, many people, maybe even most, react to ideas emotionally, their gut reaction outweighing or even preventing any calm, logical analysis.
And it feels like a lot of people see EAs as "cold and calculating" because of the way we communicate—with numbers and facts and rationale.
There is a whole science of communication (in which I’m far from an expert) which looks at how to make your message stick, how to use storytelling to build on humans’ natural desire to hear stories, how to use emotion-laden words and images instead of numbers, and so on.
For example: thousands of articles were written about the tragic and perilous ways migrants would try to cross the Mediterranean to get to Europe. We all knew the facts, but few people acted. Then one photograph of a small boy who washed up dead on a beach almost single-handedly moved millions of people to realise that this was inhumane, that we can't let it go on (in the end, it's still going on). The photo was horrible and tragic, and it showed just one of thousands of similar tragedies—yet it did more than all the numbers.
We could ask ourselves what kind of images might represent the dangers of AI in a similarly emotional way. In 2001: A Space Odyssey, Stanley Kubrick achieved something like this. He captured the human experience of utter impotence against a very powerful AI. It was just one person, but we empathised with that person, just as we empathised with the tragic boy or with his parents and family.
What you’re describing is how others have used this form of communication—very likely fine-tuned in focus groups—to find out how to make their message as impactful as possible, as emotional as possible.
EAs need to learn how to do this more. We need to separate the calm, logical discussion about the best course of action from the challenge of making our communication effective in bringing it about. There are some groups who do this quite well, but we are still amateurs compared to the (often bad) actors pushing alternative viewpoints, who use sophisticated psychology and analysis to fine-tune their messaging.
(full disclosure: this is part of what I’m studying for the project I’m doing for the BlueDot AI Safety course)
This is really valuable research, thank you for investigating and sharing!
My take from this is that there are people out there who are very good (and sometimes lucky) at organising effective protests. As @Geoffrey Miller comments below (above?), the protesters were not always right. But this is how the world works. We do not have a perfect world government. We can still learn from what they did that worked!
I believe the world would be a much better place if EAs influenced more policy decisions. And if protesters were supporting EA positions, it's highly likely that they'd be at least mostly right! So I'm hoping this work will help us figure out how to get more EA-aligned protesters.
I will be investigating a related area in my BlueDot Research Project, so may post something more on this later. But this article has already been a great help!
This is some great research and a really nice summary. I read it on the High-Impact Engineers Portal, but great to see it being cross-posted here, as it’s just a very good summary of the state of nuclear risk in general, and the challenges we face.
It’s great how you highlight the real risks. I see a lot of analogy to AI here; you can go into this field with the best of intentions, and if you’re not careful, you could end up working on something that is increasing rather than reducing the risk.
Thanks for doing and writing up this research!
What a great build! Thank you for this!
I hadn't looked at it that way, but it makes so much sense. But I would still say (as you might too) that EAs tend to be more affectively empathetic than average. We do care.
There is this misrepresentation of EAs as if this were some kind of game for us, like Monopoly, but that is absolutely not representative of the EAs I've interacted with.
Thanks Jeff,
It’s helpful to have the facts. I will look for a better example next time!
Cheers
Denis
"Sam became a vegan and an effective altruist because he thought through the arguments and concluded they were correct, not because of feelings of guilt or empathy."
There is so much in this sentence that captures the entire EA / non-EA division. Most especially, though, it captures the general public’s misunderstanding of what empathy is.
First is the implication that "following the arguments" means you don't have empathy. This is patently false. If anything, EAs have more empathy—so much empathy that we want to do the most effective things for those who need help, rather than the things that give us personal satisfaction or assuage our guilty feelings.
The non-EA would say: "I saw a blind person today, with a guide dog. I felt so much empathy that I decided to donate $50K to the charity that trains guide dogs." (Net result: one more guide dog trained for a person in a wealthy country, but massive feelings of self-satisfaction for the donor, who may even get to personally meet the dog and be thanked by its new owner.)
The EA sees the same thing, and thinks “Imagine how terrible it must be to be blind and not even have a guide dog, maybe living in a country which doesn’t accommodate blind people the way the US does.” And, after some research, donates $50K to a charity that prevents blindness, saving the sight of maybe 1000 people in a poor country.
But still, in the eyes of many, the non-EA has shown more empathy, while the EA has just “followed the arguments”. People think empathy is about a warm, fuzzy feeling they get when they help someone. But it’s not. Empathy is about getting inside someone’s head, seeing the world from their perspective and understanding what they need.
Second is the focus on the giver rather than the receiver.
The EA understands that a person in need needs help. They do not need empathy or sympathy or guilt. They need help. If they get that help from a cynical crypto-billionaire or from a generous kid who gives away her birthday savings, it makes no difference to them.
The non-EA focuses on the generosity of the kid, giving up toys and chocolate to help (and that is wonderful and fully to be encouraged) and on the calculated logic of the billionaire who will not even notice the money donated.
The EA focuses on the receiver, and on whether that person’s needs are met. This is far closer to true empathy.
I wonder if there's a way for EAs to fight back against our critics by explaining (in a positive way) that what we do and the way we think is empathy to the power of n, and that the suggestion that we don't have empathy is utterly false.
This is a great talk. This and another recent post by Gemma Paterson highlight the importance of standing up for EA and funding it.
This is even more important today, in the context of the SBF trial, when it feels like EA is under attack from many sides.
In an ideal world, people would be supportive of, or at worst indifferent towards, a group of people trying to make the world better and safer for everyone. But in our bizarre world, no good deed goes unpunished, and the SBF affair seems to have put new wind in the sails of those who would find fault with EA. I couldn't resist doing an internet search for the term you mentioned, "Defective Altruism", and predictably I found more, and more recent, articles along a similar theme, referring to EA as "elitist" or even "repugnant". I am purposefully not including links to these diatribes.
But we need to be aware that this mentality is out there, and it’s impacting the narrative that many people hear.
I think it's important to realise that, outside of this community, many people either have no idea what EA is, or have only heard one of these negative views, often ridiculing an idea they are intentionally misunderstanding. It's rarely a good idea to try to argue with someone whose views are the opposite of yours, especially if you believe their views are misguided. Most people with negative views of EA are not bad people. But they may have been raised to view charity the way I was raised to view it—with a deep focus on the giver rather than the receiver.
Growing up as a Christian, I was taught that what matters is how much I personally sacrifice to support a good cause. If I suffer, all the better. Whether the net result of my suffering makes a difference is almost an afterthought. This is absolutely not to say that Christians did not care about the poor or the sick—I had so many wonderful role-models, including my parents and several priests who did great work to help the poor. In the Bible, Jesus Himself cares deeply for the poor, yet insists that the poor man who donates a tiny amount is better than the rich man who makes a huge donation.
So when EAs come along and put the focus firmly on impact, we are challenging some very deeply-held beliefs. Actually, we're not challenging them at all; rather, we're separating two different things. We're saying that we make no moral or ethical judgments about what or why you donate—be it time or money. We focus only on the question of how to maximise the good you can do with that donation.
But in the eyes of someone raised on traditional Christian values, this can easily be misperceived—or intentionally misunderstood—as our equating more impactful donors with “morally superior” people.
Cases like SBF's are extreme examples, which unfortunately get a lot of publicity. To an EA, before his fall, SBF was a great example of an effective altruist, whose giving was doing more good than that of 1000 average charity volunteers. Yet there he was, living in extreme luxury while they struggled to survive. How could this make sense, morally?
I have even read a description of EA as essentially a vehicle to enable mega-rich people like SBF to justify their wealth and even their criminality by claiming that they were earning to give. That the causality of this is backwards doesn't make it less harmful when a credible journalist writes it for an audience who know little of EA.
In our sound-bite world, EA is very easy to denigrate and quite difficult to defend: one look at the EA Forum shows the depth of thought and complex analysis that underlies EA positions, which does not lend itself to sound-bites.
In that context, I think videos like this one are wonderful. It shows a side of EA that is much closer to what people view as “charity”. It shows deep concern about people in poverty and a desire to help them. It speaks of people devoting their whole careers to doing good. It does not get tangled up in philosophical arguments, but just looks at how we can do more to help more people.
It would be great if there were a way to get this, or something similar, on TED, to help counter some of the misguided negativity and disinformation that is out there right now.
I think it’s clear from your search results and the answers below that there isn’t one representative position.
But even if there were, I think it’s more useful for you to just make your arguments in the way you’ve outlined, without focusing on one general article to disagree with.
There’s a very specific reason for this: Think about the target audience. These questions are now vital topics being discussed at government level, which impact national and international policies, and corporate strategies at major international companies.
If your argument is strong, surely you’d want it to be accessible to the people working on these policies, rather than just to a small group of EA people who will recognise all the arguments you’re addressing.
If you want to write in a way that is very accessible, it’s better not to just say “I disagree with Person X on this” but rather, “My opinion is Y. There are those, such as person X, who disagree, because they believe that … (explanation of point of disagreement).”
There is the saying that the best way to get a correct answer to any question these days is to post an incorrect answer and let people correct you. In the same spirit, if you outline your positions, people will come back with objections, whether original or citing other work, and eventually you can modify your document to address these. So it becomes a living document representing your latest thinking.
This is also valuable because AI itself is evolving, and we’re learning more about it every day. So even if your argument is accurate based on what we know today, you might want to change something tomorrow.
(Yes, I realise what I’ve proposed is a lot more work! But maybe the first version, outlining what you think, is already valuable in and of itself).