Thanks. I was inspired yesterday to do a point-by-point addressing of the piece. It feels a little like “when you wrestle with a pig, you get muddy and the pig likes it”, but, spoiler alert, I think there’s nonzero worthy critique hiding in the bad writing.
Workers will rationalize high-paying jobs by giving most of their income away. Actually, when you work, you already give to society, but that is too complex for some to understand.
I think EAs live in the space between the extreme “capitalism is perfectly inefficient, such that a Wall Street compensation package is irrelevant to the (negligible) social value that a Wall Street worker produces” and the equally extreme “capitalism is perfectly efficient, such that a Wall Street compensation package is in direct proportion to the (evidently high) social value that a Wall Street worker produces”. Also, insofar as capitalism is designed and not emergent, is it really optimized for social value? It seems optimized for metrics which are proxies for social value, and very much subject to Goodhart’s law, but I’ll stop before I start riots in every last history and economics department. Moreover, how about we just want more number go up? If number go up is good, and working some arbitrary gig in fact makes number go up, then donating some of the proceeds will make number go up more, so E2G people are correct to do both!
Animal rights and veganism are big in the movement as well.
Sorry, this reads to me like applause lights for the “I hate those smug virtue-signaling vegans because I love bacon” crowd. OP’s thesis about EA doesn’t really relate to our unusually high vegan population; they might as well have pointed out our unusually high queer, Jewish, or computer-programmer population.
Yes, they direct money toward malaria nets and treatments for parasitic worms, but they also supply supplements for vitamin A deficiency, though genetically modified “golden” rice already provides vitamin A more effectively. Hmmm, seems like a move backward.
Sorry, one sec, I’m laughing a little at this “what have the Romans ever done for us?” moment. “Yeah, besides the malaria nets and deworming, which I admit are a plus, what have the EAs ever done for the poor?” It’s like Monty Python! Anyway, my friend, if you think golden rice is a neglected approach to vitamin A deficiency, are you rolling up your sleeves and advancing the argument? Do you even bother to cite evidence that it’s more effective? “Hmmm, seems like a move backward” is a completely unjustified and frivolous sentence.
That’s a bit like closing the barn door after the horse has bolted.
EAs do not subscribe to the interpretation of the theory of random variables that you imply! We do not believe that random variables conserve a supply of events out in the universe of potentiality, such that an event of a particular class drains the supply of that class from the future. We instead believe that the occurrence of an event of a class does not imply there’s less of that class of event available to occur in the future. In fact, if anything we believe the opposite: observing an event of a class should update us to think such events are more likely than we thought before we observed it! Moreover, EAs are widely on record advocating for pandemic preparedness well before COVID.
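To make the probability point concrete, here’s a minimal sketch (names and the deliberately simple Beta-Bernoulli model are illustrative, not anyone’s actual forecasting pipeline) of how observing an event moves the estimated rate up rather than “using up” the supply:

```python
from fractions import Fraction

def beta_update(alpha, beta, events, trials):
    """Conjugate Beta-Bernoulli update: each observed event increments
    alpha; each period without an event increments beta."""
    return alpha + events, beta + (trials - events)

def posterior_mean(alpha, beta):
    """Posterior mean of the per-period event probability."""
    return Fraction(alpha, alpha + beta)

# Uniform prior over the chance a pandemic-class event occurs in a period.
prior = (1, 1)

# One period elapses and one event occurs.
posterior = beta_update(*prior, events=1, trials=1)

# The estimate goes up, not down: observing an event is evidence that
# events of that class are MORE likely, not that a "supply" was drained.
print(posterior_mean(*prior))      # 1/2
print(posterior_mean(*posterior))  # 2/3
```

Under any such model, seeing a pandemic makes “another pandemic” more probable in expectation, which is the opposite of the barn-door framing.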
Partly as a result of his and his brother’s efforts, $30 billion for pandemic preparation was written into the Biden administration’s thankfully stalled Build Back Better porkfest.
Catch that? Someone else pays. Effective, but not exactly selfless. It’s the classic progressive playbook: Raise taxes to fund their pet projects but not yours or mine. I don’t care if altruists spend their own money trying to prevent future risks from robot invasions or green nanotech goo, but they should stop asking American taxpayers to waste money on their quirky concerns.
Not wrong. Policy efforts inevitably lead to this line (from this crowd at least), unless they’re, like, tax-cutting. Policy EAs are advancing a public goods argument, which opens us up to every lowering-my-taxes-is-ITN guy that every single public goods argument in the world is opened up to. I don’t need to point out that OP surely has pet projects that they think ought to be funded, by taxes even, and I can omit conjectures about what they are and how I personally feel about them. But this is a legitimate bit of information about EA policy efforts. (Obviously it’s subject to framing devices: tax increments are sufficiently complex that a hostile reader would call something an “increase by 0.75%” while another reader would say the 0.75% was pushed around from somewhere else on the page, so it’s not a real increment, and neither would be strictly lying.)
And “effective” is in the eye of the beholder. Effective altruism proponent Steven Pinker said last year, “I don’t particularly think that combating artificial intelligence risk is an effective form of altruism.”
I’ll omit what I actually think about Pinker, but in no world is this settled. Pinker is one guy whom lots of people disagree with!
There are other critics. Development economist Lant Pritchett finds it “puzzling that people’s [sic] whose private fortunes are generated by non-linearity”—Facebook, Google and FTX can write code that scales to billions of users—“waste their time debating the best (cost-effective) linear way to give away their private fortunes.” He notes that “national development” and “high economic productivity” drive human well-being.
Seems valid to me. Nonlinear returns on philanthropy would be awesome, wouldn’t they? It’s sort of like “if a non-engineer says ‘wouldn’t a heat-preserving engine be great?’, we don’t laud them as a visionary inventor” in this case, because I don’t expect OP to roll up their sleeves and start iterating on what that nonlinearly returning mechanism would look like! But that doesn’t mean we shouldn’t take a look ourselves.
There are only four things you can do with your money: spend it, pay taxes, give it away or invest it. Only the last drives productivity and helps society in the long term.
Eric Hoffer wrote in 1967 of the U.S.: “What starts out here as a mass movement ends up as a racket, a cult, or a corporation.” That’s true even of allegedly altruistic ones.
This seems underjustified and not of much substance. I think what OP has portrayed may qualify as a racket to people of a particular persuasion regarding government spending, or as a cult to the “I intuitively dislike virtue signaling and smugness, so I look for logical holes in anyone who tries to do good” crowd, but OP could have been more precise and explicit about which of those they think is important to end on. But alas, when you’re in a given memeplex that you know you share with your audience, you only have to handwave! lol
As Scott Alexander recently addressed, EAs are like the Borg: we assimilate critics of any quality bar whatsoever. As much as we respect Zvi “guys, I keep telling you I’m not an EA” Mowshowitz’s wishes not to carry a card with a lightbulb heart stamped on it, it’s pretty hard not to think of him as an honorary member. My point is we really should consider Borg-ing up the “taxation is theft” sort of arguments about public goods and the “investment beats aid” sort of arguments about raising global welfare.
I’ll oblige others who don’t want the WSJ op-ed to get more attention by only responding to the article in this comment thread. I don’t expect trait-feign specifically, or any other person, to respond to this comment, though it’d be appreciated if anyone could provide relevant information.
FTX bought the naming rights to the Miami Heat’s arena and lots of umpire and referee uniforms. Since May, he has been bailing out failing crypto firms
This is the first I’ve read of this. It doesn’t sound good or effective at face value, and it’s being used to make EA look bad, but that doesn’t mean the attempt has succeeded. Others have commented on how many of the bogus assumptions in this op-ed are easy to debunk. It’s also written in a sensational way that keeps this second claim from sounding too serious.
Surely SBF has some argument for why he has been, if the characterization is accurate, bailing out failing crypto firms. It could be a poorly reasoned argument, and the bailouts may end up having been a big, foreseeable mistake. But whether SBF’s giving really is effective matters more than how one op-ed spins it as ineffective. Knowing the reasons and evidence is what matters.
Assuming for the sake of argument that many of SBF’s and FTX’s donations or investments turn out to be very bad bets, it’s not hard for the EA community to make clear that the community at large neither advised nor endorsed such choices. Nobody would say buying the naming rights to the Miami Heat’s arena was one of EA’s top recommendations for what FTX should do with that money.
The EA community has already begun making distinctions like that clear. A few of the most upvoted articles of 2022 so far have been critical of FTX’s giving approach and methodology, or of the prospect of the EA community getting particularly involved with SBF’s political plays. Even given the assumptions of such a bad scenario, which could easily be false anyway, a future scenario like this, with EA having a PR team or whatever, seems like it wouldn’t be that hard to deal with.
And Mr. Bankman-Fried’s various entities, along with Cari Tuna and others, have put up about $19 million for a future California ballot measure, the California Pandemic Early Detection and Prevention Act, which would add a 0.75% tax on incomes over $5 million to raise up to $15 billion over 10 years. Catch that? Someone else pays. Effective, but not exactly selfless.
It’s the classic progressive playbook: Raise taxes to fund their pet projects but not yours or mine. I don’t care if altruists spend their own money trying to prevent future risks from robot invasions or green nanotech goo, but they should stop asking American taxpayers to waste money on their quirky concerns.
The author isn’t a grifter merely for doing his job, but he is kind of a hack, in that it’s evident he is pandering to a particular “own the libs” readership he has in mind.
Politically progressive or liberal multi-millionaires or billionaires are far from the only ones who advocate raising taxes or even back campaigns that will raise taxes.
There are a lot of billionaires who’ve advocated for their own taxes being raised. Warren Buffett has. Elon Musk has said he doesn’t want to pay higher taxes because he doesn’t trust the government to spend the money effectively, but that he’d be more supportive if the government were to spend it more effectively, e.g., on high-impact existential risk reduction. Even some conservative billionaires are in favour of the government expanding in at least some ways, even if they don’t want to have to pay higher taxes for it.
The author puts almost no effort into arguing why billionaires shouldn’t even have the right to support a campaign to raise taxes on other billionaires.
Those he is pandering to are readers who are themselves probably not super-wealthy either. He refers to “American taxpayers” as if it’s taxpayers in general, as opposed to the very small minority of them that earn more than $5 million per year.
The author conflates how some x-risk scenarios superficially sound like silly science fiction with pandemic preparedness being silly. Even before the pandemic, most people would have thought dismissing pandemic preparedness was ridiculous. After the pandemic, even most conservatives in the USA likely would.
The last couple of paragraphs are a couple of random quotes, taken out of context, critical of some aspects of EA. (Other claims made in the article have been commented on by others, and I’ve got nothing else to add.)
There seems to me to be a fallacy here that assumes every action SBF takes needs to be justifiable on its first-order EA merits.
The various stakes FTX has taken in crypto companies during this downturn are obviously not taken in lieu of donations; they are business decisions, presumably made with the intention of making more money, as part of the process of making FTX a success. Whether they are good decisions in this light is hard for me to say, but I’d be inclined to defer to FTX here.
I was thinking through such a possibility descriptively, and how the EA community might respond, without trying to prescribe what the EA community should do in a real-world scenario. I didn’t indicate that well, though, so please pardon me for the error.
To clarify: given the assumptions that criticisms of SBF’s or FTX’s investments or donations might be used to attack EA as a movement by association, and that the EA community has some responsibility to distance itself from those efforts, it wouldn’t be that hard to do so. I personally disagree with the second assumption.
I’m of the opinion that the EA community has no such responsibility, but it seems at least some others think it does.
SBF seems to have made some mistakes with his recent forays into politics, but they don’t strike me as having been as bad as at least a significant minority of the EA community believes. My opinion is that the need some felt for the EA community to distance itself from SBF’s political activities was excessive.
The various stakes FTX has taken in crypto companies during this downturn are obviously not taken in lieu of donations; they are business decisions, presumably made with the intention of making more money, as part of the process of making FTX a success. Whether they are good decisions in this light is hard for me to say, but I’d be inclined to defer to FTX here.
I agree with all of this. There are plenty of companies that have taken long(er)-term bets like the one FTX is making, and those bets turned out to be among the best business decisions of the 21st century. Facebook, Amazon, and companies Elon Musk has bought were not profitable for almost a decade. They were marred by criticisms and predictions that they were always on the brink of imminent collapse. That was all bogus.
It’s worth keeping survivorship bias in mind, and the fact that some bets like this wound up as catastrophic business decisions. Yet it’s not justified to assume by default that FTX’s investments will end up as bad decisions rather than good ones, especially in the absence of more information. The author hasn’t provided any such information and is not likely to have access to it either.
It seems like more pandering. I’m guessing the author is the kind who would’ve maligned Musk when he was a Democrat, but who now, because Musk is a Republican, defends decisions he might have criticized before.
Here are some comments on the article that I sent to my family.
In 1972 philosopher Peter Singer suggested using metrics rather than emotion to direct charitable giving.
Not sure what he’s talking about. I think the main point of Famine, Affluence, and Morality is that if you can help someone without a significant cost to yourself, you should.
Effective altruism also seems to be related to the “work to give” movement. Workers will rationalize high-paying jobs by giving most of their income away. Actually, when you work, you already give to society, but that is too complex for some to understand.
Earning to give is only a small part of EA, and I don’t think it’s typically a post hoc rationalization. And EAs understand very well that working directly on problems can give to society—see the first WSJ article I sent.
An organization known as GiveWell will tell you what charities are effective. I did a little digging, and I’m not so sure they’re effective at all. Yes, they direct money toward malaria nets and treatments for parasitic worms, but they also supply supplements for vitamin A deficiency, though genetically modified “golden” rice already provides vitamin A more effectively. Hmmm, seems like a move backward.
It’s plausible that the best way to reduce vitamin A deficiency is to invest in multiple strategies at once. But if he gave a thorough argument that donating to “golden” rice infrastructure fights vitamin A deficiency more effectively per dollar than vitamin A supplementation, then I wouldn’t be surprised to see GiveWell change its recommendations.
William MacAskill, a major effective-altruism booster, told the Washington Post that more should be spent on “preparing for low-probability, high-cost events such as pandemics.” That’s a bit like closing the barn door after the horse has bolted.
The author’s comment seems quite silly to me.
And Mr. Bankman-Fried’s various entities, along with Cari Tuna and others, have put up about $19 million for a future California ballot measure, the California Pandemic Early Detection and Prevention Act, which would add a 0.75% tax on incomes over $5 million to raise up to $15 billion over 10 years. Catch that? Someone else pays. Effective, but not exactly selfless.
I don’t see anything wrong with SBF promoting a tax on extremely wealthy people to prevent pandemics (unless the resulting pandemic prevention efforts are less valuable than what the wealthy people would do with their money otherwise). In general, I’m sure some taxes are totally worth promoting.
I don’t care if altruists spend their own money trying to prevent future risks from robot invasions or green nanotech goo, but they should stop asking American taxpayers to waste money on their quirky concerns.
Pandemic prevention is not a “quirky” concern!
And “effective” is in the eye of the beholder. Effective altruism proponent Steven Pinker said last year, “I don’t particularly think that combating artificial intelligence risk is an effective form of altruism.”
Yes, EAs don’t agree on everything, nor do I think they should. There’s an emphasis within EA on updating your beliefs in response to new evidence, such as reasonable arguments from other people.
Development economist Lant Pritchett finds it “puzzling that people’s [sic] whose private fortunes are generated by non-linearity”—Facebook, Google and FTX can write code that scales to billions of users—“waste their time debating the best (cost-effective) linear way to give away their private fortunes.”
So the argument is that when deciding where to donate your money, you should use the same tactics that earned you that money in the first place? It’s unclear how “cost-effectiveness” is the same as “linearity.” Maybe he’s advocating for donating to interventions that are like unicorn startups—interventions that could be hugely beneficial if they succeed, but probably won’t do much. If so, this is kind of exactly what Open Philanthropy is doing (“hits-based giving”).
He notes that “national development” and “high economic productivity” drive human well-being. So true. History has proved that capitalism is the most effective and altruistic system.
It’s fully possible to believe in EA principles and support capitalism. But high economic productivity can come with damaging externalities, such as increased risk of global catastrophes from new technologies.
There are only four things you can do with your money: spend it, pay taxes, give it away or invest it. Only the last drives productivity and helps society in the long term.
Eric Hoffer wrote in 1967 of the U.S.: “What starts out here as a mass movement ends up as a racket, a cult, or a corporation.” That’s true even of allegedly altruistic ones.
This is one of the few points in the article that I like. EA (which EA headquarters likes to describe as “a project”) resembles a cult in some ways: people worry about future catastrophes, care about “doing good,” think about weird ideas, and dream about growing the movement.
Mirror of “‘Effective Altruism’ Is Neither”, the article in question. As it is a non-direct mirror, it should not affect readership numbers.
From a writing-style perspective, this is blatant applause lights for the tribe of those who think Build Back Better is bad.
This should clearly be in our Overton window about how to do the most good. It almost alludes to the excellent Hauke Hillebrandt essay, doesn’t it?
Ironic that the title of his column is “Inside View”.
[content warning: sarcasm; less-than-charitable commentary on critics of EA]
It’s not that ironic if the title is meant to imply the inside view is the only one he cares about.
That seems totally incorrect. GiveWell estimates that donations to its recommended charities have averted over 100,000 deaths.