This seems like a kind of crazy assertion to me. Eg., in 1945, as part of the war against Japan, the US firebombed dozens of Japanese cities, killing hundreds of thousands of civilians. (The bombs were intentionally designed to set cities on fire.) Not being a general or historian, I don’t have an exact plan in mind for an alternative way for the past US to have spent its military resources. Maybe, if you researched all the options in enough detail, there really was no better alternative. But it seems entirely reasonable to say that the firebombing was bad, and to argue that (if you were around back then) people should maybe think about not doing that. (The firebombing is obviously not comparable to the pledge, I’m just arguing the general principle here.)
We may have an intractable disagreement here and it’s pretty tangential to the point at hand, but for posterity I’ll state my general position below anyway*.
More to the point at hand though, if you could actually spell out what you think should be done instead of the GWWC pledge, that’d really help direct the discussion. ‘Maybe there should be no pledge at all’ is a completely fine response, and for the avoidance of doubt I’m being completely not-sarcastic there.
The exact numbers aren’t important here, but the US federal budget is $3.8 trillion, and the US also has a great deal of influence over both private money and foreign money (through regulations, treaties, precedent, diplomatic pressure, etc.). There are three branches of government, of which Congress is one; Congress has two houses, and there are then 435 representatives in the lower house. Much of the money flow was committed a long time ago (eg. Social Security), and would be very hard to change; on the other hand, a law you pass may keep operating and directing money decades into the future. Averaged over everything, I think you get ~$1 billion a year of total influence, order-of-magnitude; 0.1% of that is $1 million, or 57x the $17,400 personal donation. This is fairly conservative, as it basically assumes that all you’re doing is appropriating federal dollars to GiveDirectly or something closely equivalent; there are probably lots of cleverer options.
The orders of magnitude here aren’t even comparable. This might reduce the net cost to your effectiveness from 5% to 2%, or something like that; it’s not going to reduce it to 0.0001%, or whatever the number would have to be for the math to work out.
I did do the fermi myself. A 0.1% improvement seemed crazy high to me for the time someone might spend deciding their annual donation, so I wouldn’t exactly call your calculation ‘conservative’, but I certainly concede it’s not crazy.
Re. outsourcing, your own calculation suggested a x57 difference. I had a x2 difference. rohinmshah elsewhere had a x3 difference. Given those numbers, I don’t see why outsourcing needs to cover more than a couple of orders of magnitude, and we both seem to think that it can credibly do that. I wouldn’t expect outsourcing to help once we’re above x50-ish and didn’t mean to imply otherwise. So I think we basically agree on the limits of what outsourcing can do; you just seem to have an implied multiplier well in excess of x1000 (otherwise I don’t know where 0.0001% and the ‘orders of magnitude’ comment come from), which I wasn’t at all anticipating. Taking that for granted though, your position seems reasonable.
Let’s compromise by not promoting the GWWC pledge to congresspeople or anyone else who can credibly influence billions of dollars?
I think the average federal dollar you can influence is quite a bit worse than GiveDirectly FWIW, though in my fermi I assumed they were equivalent as well.
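For concreteness, here is the Fermi arithmetic both sides are quoting, written out as a quick sketch. All of the numbers (the ~$1 billion/year of influence, the 0.1% slice, the $17,400 donation, and the 5%-vs-0.0001% net-cost figures) come from the comments above; nothing here is new:

```python
# Fermi estimate from the thread: a congressperson's annual influence.
influence_per_year = 1e9    # ~$1 billion/year of total influence (order of magnitude)
improvement = 0.001         # a 0.1% improvement in how that money is spent
personal_donation = 17_400  # the personal donation being compared against

gain = influence_per_year * improvement  # $1 million/year
multiplier = gain / personal_donation    # matches the x57 quoted above
print(round(multiplier))                 # -> 57

# The 'orders of magnitude' reading: shrinking the net cost to one's
# effectiveness from 5% to 0.0001% implies a factor of 50,000,
# i.e. 'well in excess of x1000'.
implied_factor = 0.05 / 0.000001
print(round(implied_factor))             # -> 50000
```

The x50,000 figure is just one reading of where the 0.0001% comment points; the thread itself only pins it down to somewhere above x1000.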
It seems extremely implausible that someone making a middle-class salary, or someone making an upper-middle-class salary but under very high time pressure and with high expenses, could give away 10% of their income for life and literally never think about it again.
Why not? Seriously. It’s not uncommon for people to move countries in the developed world and incur a 10% higher tax rate in the process. I really doubt most people in that situation ever think about that again after the first couple of years.
I can’t prove a negative. If they do exist, where are they? If you link to some, I’ll happily add them to the post, as I did for 80K’s metrics.

Ok, sure. Givewell, money moved: http://www.givewell.org/about/impact REG, money moved: https://reg-charity.org/reg-second-semi-annual-report-on-money-moved-2015/

There’s also a whole bunch of metrics in the EA Survey that people often reference: http://effective-altruism.com/ea/zw/the_2015_survey_of_effective_altruists_results/
The GWWC pledge count is used as a metric for EA as a whole, rather than for any specific org like MIRI, CFAR, etc.
Ah, this is interesting. Can you clarify what you mean by ‘a metric for EA as a whole’? Do you not think that, e.g., Givewell’s money moved numbers fill a similar function? If not, why not?
There’s a selection effect where pledge-takers are much less likely to be the type of people who’d be turned off by donating to a “weird” charity, taking a “weird” career, etc., since people like that would probably not pledge in the first place.
I sort of get this argument; it was when you said ‘risk-averse’ that I got stuck. To clarify, is this specific to the GWWC pledge or would any “weird” behaviour do? To take a slightly silly example, would you expect people who shower very irregularly (a ‘weird’ behaviour) to be more risk-seeking on similar grounds?
*
For firebombing to even happen, someone had to think it was the best of the available options. In fact, probably lots of someones had to think that. Those someones probably know lots more about the US military options than you or I. So to argue that firebombing is bad in the face of that probably-superior expertise, providing a concrete alternative (or set of alternatives) seems like the bare minimum you need to do.
Note that I said you need a counterfactual in the background. That caveat was there precisely to pre-empt cases like the one you gave, where the counterfactual is clearly and directly implied by the criticism. But as soon as you make multiple criticisms implicitly suggesting different counterfactuals, as you have here, it’s worth spelling out exactly what alternative you are suggesting. Discussions get terribly confusing otherwise.
The above points are practical rather than technical. But on a technical level, criticism is clearly meaningless in cases where there is no choice. Nobody criticizes gravity for pulling you to your death if you step off a cliff. So to criticize something you need to establish that it is not like gravity; it can be fixed/improved/eliminated. Put another way, meaningful criticism is not ‘this is not perfect’, rather it is ‘this is not optimal’. Which in turn requires a counterfactual, albeit potentially an implicit one.
So yeah, in short I’m generally pretty comfortable with the ‘all unconstructive criticism is meaningless’ approach. I consider it both technically true and practically useful.