Here are some comments I sent to my family about the article.
In 1972 philosopher Peter Singer suggested using metrics rather than emotion to direct charitable giving.
Not sure what he’s talking about. I think the main point of Famine, Affluence, and Morality is that if you can help someone without a significant cost to yourself, you should.
Effective altruism also seems to be related to the “work to give” movement. Workers will rationalize high-paying jobs by giving most of their income away. Actually, when you work, you already give to society, but that is too complex for some to understand.
Earning to give is only a small part of EA, and I don’t think it’s typically a post hoc rationalization. And EAs understand very well that working directly on problems can give to society—see the first WSJ article I sent.
An organization known as GiveWell will tell you what charities are effective. I did a little digging, and I’m not so sure they’re effective at all. Yes, they direct money toward malaria nets and treatments for parasitic worms, but they also supply supplements for vitamin A deficiency, though genetically modified “golden” rice already provides vitamin A more effectively. Hmmm, seems like a move backward.
It’s plausible that the best way to reduce vitamin A deficiency is to invest in multiple strategies at once. But if the author gave a thorough argument that donating to “golden” rice infrastructure fights vitamin A deficiency more effectively per dollar than vitamin A supplementation, then I wouldn’t be surprised to see GiveWell change its recommendations.
William MacAskill, a major effective-altruism booster, told the Washington Post that more should be spent on “preparing for low-probability, high-cost events such as pandemics.” That’s a bit like closing the barn door after the horse has bolted.
The author’s comment seems quite silly to me. Spending on pandemic preparedness before the next pandemic arrives is closing the barn door before the horse bolts, not after.
And Mr. Bankman-Fried’s various entities, along with Cari Tuna and others, have put up about $19 million for a future California ballot measure, the California Pandemic Early Detection and Prevention Act, which would add a 0.75% tax on incomes over $5 million to raise up to $15 billion over 10 years. Catch that? Someone else pays. Effective, but not exactly selfless.
I don’t see anything wrong with SBF promoting a tax on extremely wealthy people to prevent pandemics (unless the resulting pandemic prevention efforts are less valuable than what the wealthy people would do with their money otherwise). In general, I’m sure some taxes are totally worth promoting.
I don’t care if altruists spend their own money trying to prevent future risks from robot invasions or green nanotech goo, but they should stop asking American taxpayers to waste money on their quirky concerns.
Pandemic prevention is not a “quirky” concern!
And “effective” is in the eye of the beholder. Effective altruism proponent Steven Pinker said last year, “I don’t particularly think that combating artificial intelligence risk is an effective form of altruism.”
Yes, EAs don’t agree on everything, nor do I think they should. There’s an emphasis within EA on updating your beliefs in response to new evidence, such as reasonable arguments from other people.
Development economist Lant Pritchett finds it “puzzling that people’s [sic] whose private fortunes are generated by non-linearity”—Facebook, Google and FTX can write code that scales to billions of users—“waste their time debating the best (cost-effective) linear way to give away their private fortunes.”
So the argument is that when deciding where to donate your money, you should use the same tactics that earned you that money in the first place? It’s unclear how “cost-effectiveness” is the same as “linearity.” Maybe he’s advocating for donating to interventions that are like unicorn startups—interventions that could be hugely beneficial if they succeed, but probably won’t do much. If so, this is kind of exactly what Open Philanthropy is doing (“hits-based giving”).
He notes that “national development” and “high economic productivity” drive human well-being. So true. History has proved that capitalism is the most effective and altruistic system.
It’s fully possible to believe in EA principles and support capitalism. But high economic productivity can come with damaging externalities, such as increased risk of global catastrophes from new technologies.
There are only four things you can do with your money: spend it, pay taxes, give it away or invest it. Only the last drives productivity and helps society in the long term.
Eric Hoffer wrote in 1967 of the U.S.: “What starts out here as a mass movement ends up as a racket, a cult, or a corporation.” That’s true even of allegedly altruistic ones.
This is one of the few points in the article that I like. EA (which EA headquarters likes to describe as “a project”) resembles a cult in some ways: people worry about future catastrophes, care about “doing good,” think about weird ideas, and dream about growing the movement.
On the claim that only investing your money helps society in the long term: that seems totally incorrect. GiveWell estimates that donations to its recommended charities have averted over 100,000 deaths.