PhD in Economics (focus on applied economics and climate & resource economics in particular) & MSc in Environmental Engineering & Science. Key interests: Interface of Economics, Moral Philosophy, Policy. Public finance, incl. optimal redistribution & tax competition. Evolution. Consciousness. AI/ML/Optimization. Debunking bad Statistics & Theories. Earn my living in energy economics & finance and by writing simulation models.
FlorianH
Incentivizing Donations through Mutual Matching
I find this a rather challenging post, even if I like the high-level topic a lot! I didn’t read the entire linked paper, but I’d be keen to understand whether you can make a concise, simple argument as to why my following view may be missing something crucial that immediately follows from the Harsanyi vs. Rawls debate (if you think it does; feel free to ignore):
The Harsanyi 1975 paper that your linked post also cites (and which I recommend to any EA) is a great and rather complete rebuttal of Rawls’ core maximin claim. The maximin principle, if taken seriously, can trivially be seen to lead to all sorts of preposterous choices that are quite miraculously improved by adding a smaller or larger dose of utilitarianism (one by no means needs to be a full utilitarian to agree with this), end of story.
Love the endeavor. But in my opinion, the calculation method really should be changed before anyone interested in quantifying the combined CO2 + animal-suffering harm uses it: a weighted product model is inappropriate for expressing the total level of two independent harms. You really want to not multiply the CO2 and animal-suffering harms, but instead sum them separately, with whichever weights the user chooses. In that sense, I fully agree with what MichaelStJules also mentioned. But I want to give an example that makes this very clear (and please let me know if instead it seems like I misread your calculation details at https://foodimpacts.org/methods ):
Imagine a product A with 0 CO2 but a huge animal-suffering impact, B with huge CO2 but 0 suffering, and C with non-zero but tiny impact on both dimensions. Your weighting would favor A or B (or both), while for any rational person C would necessarily be preferable. Your WPM may sound nicer in theory, but it cannot be applied here; I’d really want to see it changed before considering the model usable for quantitative indications of harm on a general level!
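To make the counterexample concrete, here is a minimal sketch with my own toy numbers (assuming equal weights of 0.5 on each dimension; the product names and harm values are illustrative, not taken from foodimpacts.org):

```python
# Toy products: (CO2 harm, animal-suffering harm), arbitrary units.
products = {
    "A": (0.0, 100.0),   # zero CO2, huge suffering
    "B": (100.0, 0.0),   # huge CO2, zero suffering
    "C": (1.0, 1.0),     # tiny on both dimensions
}

w_co2, w_suf = 0.5, 0.5  # assumed equal user weights

def wpm(co2, suf):
    """Weighted product model: the two harms multiply."""
    return (co2 ** w_co2) * (suf ** w_suf)

def weighted_sum(co2, suf):
    """Weighted sum: two independent harms add."""
    return w_co2 * co2 + w_suf * suf

for name, (c, s) in products.items():
    print(name, wpm(c, s), weighted_sum(c, s))

# The WPM scores A and B as zero total harm (any zero factor wipes out
# the other harm entirely) and so ranks both "better" than C, while the
# weighted sum correctly ranks C as by far the least harmful.
```

The point of the sketch: any aggregation where one harm dimension can zero out the other cannot represent two harms that exist independently of each other.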
NB: I actually have an interest in using your model in the medium-term future! We’re trying to set up an animal food welfare compensation scheme, and happen to have CO2 on our list in addition to animal suffering itself: www.foodoffset.org (very much a work in progress).
Just re anxiety prevalence: it seems to me that anxiety is a kind of continuum, and you could say that 50% of people suffer from anxiety, or 5%, depending on where you make the cutoff. Your description implicitly seems to support exactly this view (“Globally, 284 million people—3.8% of all people—have anxiety disorders. Other estimates suggest that this might be even higher: according to the CDC, 11% of U.S. adults report regular feelings of worry, nervousness, or anxiety and ~19% had any anxiety disorder in the past year according to the NIH and Anxiety and Depression Association of America.”), plus maybe the Anxiety and Depression Association people like to quote impressive numbers for their domain. ⇒ It could be useful if you found more tangible ways to express what’s going on anxiety-wise in how many heads.
There are two factors mixed up here: @kyle_fish writes about an (objective) amount of animal welfare. The concept @Jeff Kaufman refers to instead includes the weight we humans put on those animals’ welfare. For a meaningful conversation about the topic, we should not mix these two up.*
Let’s briefly assume a parallel world with humans2: just like us, but they simply never cared about animals at all (weight = 0). Concluding “we thus have no welfare problem” is indeed the logical conclusion for humans2, but it would not suffice to inform a genetically mutated human2x who happened to have developed care for animal welfare, or who simply happened to be curious about absolute welfare in his universe. In the same vein: there is no strict need to account for the usual human’s care when analyzing whether “net global welfare may be negative” (the title!). On the contrary, it would introduce an unnecessary bias, on top of the analysis’ necessarily huge uncertainty (which the author does not fail to emphasize, although, as others comment, it could deserve even stronger emphasis).
Research vegan cat food as an ideal EA cause!? It might also be ideal for a human vegan future as a ‘side’-effect.
Cats are obligate carnivores; according to typical recommendations they must eat meat (or animal products), and cats tend to refuse most non-animal foods. At least, there seems to exist no vegan cat food that is recommended as a main diet without further warnings; often cats would seem not to accept mostly non-animal foods.
I guess (but am not sure) that the animals fed to cats mean significantly more animals are raised in factory farms.
Somewhat counterintuitively, in the whole cat-food domain, the concept of animal welfare standards does not even seem to exist. You can find some seemingly higher-welfare products, but they are extremely rare.
Even if large shares of the ingredients are often “chicken meal”, “fish meal”, etc., I guess much of this meal could, one way or another, still have replaced some human foods in some places. What I have definitely seen: major shares of the ingredients in many cat foods are “meat”, not just inner organs or broth (although I cannot exclude that ‘meal’-based products dominate total sales volumes).
I guess we’re pretty good at feeding all sorts of animal parts to (i) ourselves, in sausages, chicken nuggets, and the like, and/or (ii) other food-industry animals. So my prior is that cats do not only get stuff that is completely redundant in the food industry.
I calculate* (very roughly), for ca. 220 million house cats worldwide and counting 50% of their meat food as extra meat production, 6,600 tonnes/day of quality-adjusted meat consumption, or around 0.9% of humans’ meat consumption.
The few articles I read online about the degree to which cats require a meat diet point mainly to nutrients that sound like ones we can easily mix/synthesize from non-animal foods and chemical processes (taurine, vitamin A, arginine, niacin, maybe some other fatty/amino acids).
Oddly, the pages tend to list these few nutrients, insisting that therefore the cat must eat meat, while I’d think: “Ehm, if it’s just that, it would seem simple to mix the right thing” ⇒ maybe the pages just do not go into the subtler details that are crucial for an obligate carnivore.
IMHO, we could very easily test out food/supplement mixtures to check how easily one can replace which share of meat for cats without impairing their health. Given the billions of factory-farmed animal lives at stake, even some risk for the corresponding “test animals” might be completely justifiable in the worst case, and naive me thinks we might make extremely quick progress on this front if we really want to.
If we nail this, the positive side effect could be: “Hey look, they even feed the obligate carnivores with this mix nowadays; surely you can also become vegan with zero hesitation with a human-adjusted formula!” I.e., the stories of your vegan friend who ends up at the doctor who recommends he eat meat (!) could finally really be stories of the past. (I know many think they already are; maybe you’re right; but I know that in practice, at least for many, this is simply not how they see it.)
In fact, for each of (i) sparing the animals cats would otherwise eat, and (ii) the side effect for human diets, I’d not be surprised if expediently trying to develop vegan food that even cats can eat would be justified on its own.
EA dietitians, am I just naive or could this be a thing?
I reckon one drawback of an ideal vegan cat diet could be that many more people might want to keep cats. I see a few possibilities for the net impact from cats + food then:
Only a few more cats: lower net animal consumption, lower net land use, and lower food costs for poor people (and for cat owners)
Many more cats: the vegan diet more than offsets the spared animal-food-industry footprint, i.e. larger net land-use change for agriculture, higher food prices for the poor
Whether house cats are at all net “happy” or not, I do not know.
* Calculation, based on rough values:
220 million domestic cats (ignoring ~480 million strays)
3 kg average weight (might be on the slightly low side)
2% of cat weight in meat food per day
= 60 g/cat daily meat = 30 g/cat daily “extra” animal meat, quality-adjusting with the 50% from the text above
= 6,600 t/day extra meat production
And with approx. 90 g meat/day per human (beef, veal, pork, poultry, and sheep acc. to OECD) for the 8 bn humans, i.e. ~720,000 t/day of human meat consumption, the cats’ share is
= 0.9%, a bit simplistically approximated.
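The back-of-the-envelope numbers above can be checked in a few lines (all inputs are the rough assumptions from the text, not measured data):

```python
# Rough inputs from the calculation above.
n_cats = 220e6            # domestic cats worldwide (strays ignored)
cat_weight_kg = 3.0       # average cat weight
meat_fraction = 0.02      # daily meat food as share of body weight
extra_share = 0.5         # share counted as truly "extra" meat production

extra_meat_g_per_cat = cat_weight_kg * 1000 * meat_fraction * extra_share  # 30 g
extra_meat_t_per_day = n_cats * extra_meat_g_per_cat / 1e6                 # tonnes/day

n_humans = 8e9
human_meat_g = 90         # g/day per person (rough OECD figure from the text)
human_meat_t_per_day = n_humans * human_meat_g / 1e6

print(f"{extra_meat_t_per_day:,.0f} t/day")                   # 6,600 t/day
print(f"{extra_meat_t_per_day / human_meat_t_per_day:.1%}")   # 0.9%
```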
You’re right. I see two situations here:
(i) The project has a strict upper limit on funding required. In this case, you may have to (a) limit the pool of participants, and/or (b) limit their allowed contribution scales, and/or (c) maybe indeed flatten the leverage progression, meaning you might incentivize people less strongly.
(ii) The project has strongly decreasing ‘utility’ returns to additional money (at some point). In this case, (a), (b), (c) from above may be used, or in theory you as organizer could simply not care: your funding-collection leverage still applies, but you let donors judge whether to discount the leverage for large contributions, as they judge money on the upper tail to be less valuable; they may then accordingly decide not to contribute, or to contribute less.
Finally, there is simply the possibility of a cutoff point, above which the scheme must be cancelled, to address the issue you raise, or the one I discuss in the text: to prevent individual donors from having to contribute excessive amounts when more commitments than expected are received. If that cutoff point is high enough that it is unlikely to be reached, you as organizer may be happy to accept it. Of course, one could then think about dynamics, e.g. a cooling-off period before the cancelled collection can be re-run, without (too strongly) undermining the true marginal effect in a far-sighted assessment of the entire situation.
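As a toy formalization of that last option (my own simplification, not the exact Mutual Matching scheme: I assume a single common leverage factor scaling each donor's base commitment, and a hard cutoff on the total):

```python
def settle_round(base_commitments, leverage, cutoff):
    """Scale each base commitment by the realized leverage; cancel the
    whole round (return None) if the total would exceed the cutoff."""
    totals = [b * leverage for b in base_commitments]
    if sum(totals) > cutoff:
        return None  # cancelled; e.g. re-run after a cooling-off period
    return totals

# Three donors, 3x leverage, cutoff of 1,000:
print(settle_round([50, 80, 100], 3.0, 1000))    # [150.0, 240.0, 300.0]
print(settle_round([200, 300, 400], 3.0, 1000))  # None: total 2,700 exceeds cutoff
```

The cutoff caps every donor's worst-case payment at base × leverage while leaving the incentive structure below the cutoff untouched.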
In reality, I fear that even with this scheme, if in some cases it hopefully turns out to be practical, many public-goods problems will remain underfunded (hopefully simply a bit less so) rather than overfunded, so I’m so far not too worried about that one.
Thank you! I was actually always surprised by Harsanyi’s mention of the taxation case as an example where maximin would be (readily) applicable.
IMHO, exactly what he explains in the rest of the article can also be used to see why optimal taxation/public finance should only in exceptional cases use a maximin principle as the proxy rule for a good redistributive process.
On the other hand, if you asked me whether I’d be happy if our actual, very flawed tax/redistribution systems were reformed so as to conform to maximin: yes, I’d possibly very happily agree, simply as the lesser of two evils. And maybe that’s part of the point; in this case, fair enough!
Thanks, interesting case!
1. We might have loved to see the cartel here succeed, but we should probably still be thankful for the more general principle underlying the ruling:
As background, it should be mentioned that it is common to use so-called green policies/standards as disguised protectionist measures, aka green protectionism: protecting local/domestic industry by imposing certain rules, often with minor environmental benefits (as here, at least according to the ruling), that help keep out (international) competition.
So for the ‘average’ citizen, say those for whom animal welfare may be relevant but not nearly as central as for many EAs, the principles underlying the ruling seem very sensible. Potentially even crucial for well-functioning international trade without an infinitude of arbitrary rules just to rip off local consumers.
Governmental policy (minimal welfare standards) is the place for addressing the public goods problem that Paul and other commentators describe: here that would mean binding animal welfare standards, agreed in the democratic process.
2. A potentially much larger issue w.r.t. trade laws preventing higher welfare standards is related to WTO/GATT rules, which make it (seemingly) ambiguous whether a country is even allowed to politically raise welfare standards and apply them to imports (which is necessary for the effectiveness of the domestic rule):
Free-trade rules are regularly used by industry lobbies to delegitimize proposals for higher domestic animal welfare standards, with the claim that imposing welfare restrictions on imported foods would be impossible as it would violate free-trade rules. In reality, the relevant paragraphs of the trade agreements and case rulings are not trivial to interpret, although I would imagine it to be difficult for anyone to attack a country for imposing high welfare standards in a reasonably transparent way; nevertheless, the uncertainty around the issue is successfully abused in the political discourse.
I miss a clear definition of economic growth here. The discussion strongly reminds me of the environmental-resource-focused critique of growth that started with the 1970s Club of Rome’s Limits to Growth; there might be value in examining the huge literature that has been produced around that topic ever since.
Economic growth = increase in market value, is a typical definition.
Market value can increase if we paint the grey houses pink, or indeed if we design good computer games, or if we find great drugs that constantly awe us in insanely great ways without downsides. Or maybe indeed if we can duplicate/simulate brains which derive a lot of value, say, literally out of thin air, and if we decide to take their blissful state into account in our growth measure too.
If we all have our basic needs met, and are rich way beyond them, willingness to pay for some new services may become extremely large, even for the least important services: merely because we have nothing else to do with our wealth, and because we’re willing to pay so little on the margin for the traditional ‘basic’ goods that are (in my scenario assumed to be) abundant and cheaply produced.
So the quantitative long-run extent of “economic growth” then becomes a somewhat arbitrary thing: economic growth potentially being huge, but the true extra value possibly being limited.
‘Economic growth’ may therefore be too intangible, too arbitrary a basis for discussing the long-run fate of human development (or whatever supersedes us).
Maybe we should revert to directly discussing limits to increases in utility (as some comments here already do).
Surprised to see nothing (did I overlook it?) about: The People vs. The Project/Job. The title, and the lead sentence,
Some people seem to achieve orders of magnitudes more than others in the same job.
suggest the work focuses essentially on people’s performance, but already in the motivational examples
For instance, among companies funded by Y Combinator the top 0.5% account for more than ⅔ of the total market value; and among successful bestseller authors [wait, it’s their books, no?], the top 1% stay on the New York Times bestseller list more than 25 times longer than the median author in that group.
(emphasis and [] added by me)
I think I have not seen explicitly discussed whether it is the people, or rather the exact project (the startup, the book(s)) they work on, that is the successful element, although the outcome is a sort of product of the two. Theoretically, in one (obviously wrong) extreme case: maybe all Y Combinator CEOs were similarly performing persons, but some of the startups simply were the right projects!
My gut feeling is that making this fundamental distinction explicit would make the discussion/analysis of performance more tractable.
Addendum:
Of course, you can say that book writers, scientists, and startup founders choose anew each time what book or paper to write next, etc., and that this choice is part of their ‘performance’, so looking at their output’s performance is all there is. But this would be at most half true in the more general sense of comparing persons’ general capabilities, as there are very many drivers that lead persons to very specific high-level domains (of business, of book genres, etc.) and/or very specific niches therein, and these may have at least as much to do with personal interest, haphazard personal history, etc.
I find “new enlightenment” very fitting. But I wonder whether it might at times be perceived as a not very humble name (which need not be a problem, but I wonder whether some, me included, might at times end up feeling uncomfortable calling ourselves part of it).
Indeed, I think I’m not the only one to whom the nudge towards eating more fully vegan would seem a highly welcome side-effect of a stay in the hotel.
Enjoyed the post, thanks! But it starts with an invalid deduction:
Since we don’t enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it’s clear that the benefits of doing so would significantly outweigh the costs.
(I added the emphasis)
Instead, it seems more reasonable to simply advocate for such action exactly if, in expectation, the benefits seem to [even just about] outweigh the costs. Of course, we have to take into account all types of costs, as you advocate in your post; maybe that even includes some unknown unknowns in terms of risks from an imposed pause. Still, in the end, we should be even-handed. That we don’t impose pauses on most technologies is surely not a strong reason to the contrary: we might (i) fail, for bad reasons, to impose pauses in other cases too, or, maybe more clearly, (ii) simply not see many other technologies with such a large potential downside that a pause becomes a major need; after all, that’s why we have started this debate in particular about this new technology, AI.
This is just a point about the rigor of the motivation you provide for the work; changing that beginning of your article would IMHO avoid an unnecessarily ‘tendentious’ passage.
I think what you’re describing is exactly (or almost exactly) the Mutual Matching I wrote about here on the forum a while ago: Incentivizing Donations through Mutual Matching
I find it a GREAT idea (have not tested it yet)!
This post calls out un-diversities in EA. Rather than being attributable to EA doing something wrong, I find these patterns mainly underline a basic fact about the type of people EA tends to attract. So I don’t find the post fair to EA and its structure in a very general way.
I seem to detect in the article an implicit, underlying view of the EA story as something like:
‘Person becoming EA → World giving that person EA privileges’
But imho, this completely turns upside down the real story, which I mostly see as:
‘Privileged person → becoming EA → trying to put their resources/privileges to good use, e.g. to help the most underprivileged in the world’,
whereby ‘privileged’ refers to the often somewhat geeky, intellectual-ish, well-off person we often find particularly attracted to EA.
In light of this story, the fact that white dudes are over-represented in EA organizations relative to the overall global population would be difficult to avoid in today’s world, a bit like it would be difficult to avoid a concentration of high-testosterone males in a soccer league.
Of course, this does not deny that many biases exist everywhere in the selection process for higher ranks within EA, and these may be a true problem. Call them out specifically, and we have a starting point to work from. Also in EA, people tend to abuse power, and this is not easy to prevent. Again, any enlightenment about how, specifically, to improve on this is welcome. Finally, that skin color is associated with privileges worldwide may be a huge issue in itself, but I’d not blame it specifically on ‘EA’ itself. Certainly, EAs should also be interested in this topic if they find cost-effective measures to address it (although, to some degree, these potential measures have tough competition, simply because there is so much poverty and inequality in the world, absorbing a good part of EA’s focus for not-only-bad reasons).
Examples of what I mean (I added the emphasis):
However, the more I learn about the people of EA, the more I worry EA is another exclusive, powerful, elite community, which has somehow neglected diversity. The face of EA appears from the outside to be a collection of privileged, highly educated, primarily young, white men.
Let’s talk once you have useful info on whether they focus on the wrong things, rather than on whether they have the wrong skin colors. In my model, and in my observations, there is simply a bias in who feels attracted to EA, and as much as anyone here would love the average human to care about EA, that is sadly not the case (although in my experience, it is more generally slightly geeky, young, logical, possibly well-off persons who like and join EA, and who can and want to direct resources toward it, rather than simply the “white men” you mention).
The EA organizations now manage billions of dollars, but the decisions, as far as I can tell, are made by only a handful of people. Money is power, and although the decisions might be carefully considered to doing the most good, it is acutely unfair this kind of power is held by an elite few. How can it be better distributed? What if every person in low-income countries were cash-transferred one years’ wage?
The link between the last bold part and the preceding bold parts surprises me. I see two possible readings:
a. ‘The rich few elite EAs get the money, but instead we should take that money to support the poorest?’ That would have to be answered with: this handful work with many, many EAs and other careful employees to try to figure out which causes to prioritize based on decent cost-benefit analysis, and they don’t use this money for themselves (and indeed, at times, cash transfers to the poorest show up among promising candidates for funding, but these still compete with other ways of trying to help the poorest beings, or those most at risk in the future).
b. ‘Give all the poorest some money, so some of them could become some of the “handful of people” with the power (to decide on the EA budget allocation)’. I don’t know. That seems a somewhat distorted view of the most pressing reason for alleviating the most severe poverty in the world.
While it might be easy to envy some famous persons in our domain, no one chose ‘oh, whom could we give the big privilege of running the EA show’; instead there is a process, however imperfect, trying to select some of the people who seem most effective, also for the higher-rank EA positions. And as many skills useful for this correlate with privileged education, I’d not necessarily want to force more randomization or anything, other than through compelling, specific ways to avoid biases.
Interesting. Curious: if such hair is a serious bottleneck/costly, do some hairdressers collect cut hair by default and sell/donate it for such use?
I also wonder about the same thing. The Further Pledge does not answer this particular desire of committing to limited personal annual consumption while potentially saving for particular, or yet-to-be-defined, causes later on. This can make sense also if one believes one’s future view on what to donate towards will be significantly more enlightened.
I could see such a pledge not to consume above X/year being valued not overly much by third parties, as we cannot trust our future selves so much, I guess, and even investing in one’s own endeavors, even if officially EA, might at times be quite self-indulgent in some ways.
Still, I guess it would be possible to invest one’s money into an EA-aligned fund that would later be able to disburse money only to aligned causes, incl. possibly your own project. That could provide some value in some situations.
Maybe it’d be easier, and worthwhile, to simply have an organization collecting pledges (and accompanying verification) not to spend more than X/year; I think there might be a bunch of people interested in that.
Thanks, I think antipathy effects towards the name “Effective Altruism”, or worse, “I’m an effective altruist”, are difficult to overstate.
Also, somewhat related to what you write, I happened to think to myself just today: “I am (and most of us are) just as much an effective egoist as an effective altruist”; after all, even the holiest of us probably cannot always help putting a significantly higher weight on our own welfare than on that of average strangers.
Nevertheless, there is some potential upside to the current term. Equally, I’m not sure it matters much at all, but I attribute a small chance to this being really important: if some people are kept away by the name’s somewhat geeky/partly unfashionable connotation, maybe these are exactly the people that would mostly have been distractors anyway. I think the somewhat narrow EA community has an extraordinary vibe along a few really important dimensions, and it seems invaluable (in that sense, while RyanCarey mentions we may not attract the core audience with different names, I find the problem might be more the other way round: we might simply dilute the core).
Maybe I’m completely overestimating this, and maybe it doesn’t at all outweigh the downside of attracting/appealing to fewer people. But in a world where a lack of fruitful communication threatens entire social systems, maybe having a particularly strong core in that regard is highly valuable.