PhD in Economics (focus on applied economics and climate & resource economics in particular) & MSc in Environmental Engineering & Science. Key interests: Interface of Economics, Moral Philosophy, Policy. Public finance, incl. optimal redistribution & tax competition. Evolution. Consciousness. AI/ML/Optimization. Debunking bad Statistics & Theories. Earn my living in energy economics & finance and by writing simulation models.
FlorianH
Florian Habermacher’s Quick takes
From what I read, Snowdrift is not quite “doing this”, at least insofar as the main aim here in Mutual Matching is to ask more from a participant only if the leverage increases! But there are close links, thanks for pointing out the great project!
Snowdrift has people contribute as an increasing function of the number of co-donors, but the leverage, which remains implicit, stays constant at 2 (except in those cases where it even declines, once others’ chosen upper bounds are surpassed), if my quick calculation is right (pretty sure*). This may or may not be a good idea with more-or-less rational contributors (either way, I think it would be valuable for transparency to state this leverage explicitly to readers of the Snowdrift page; it’s a crucial factor for donors imho). Pragmatically, it may turn out to be a really useful simplification though.
Here instead, Mutual Matching tries to motivate people by ensuring that they donate more only as the leverage really increases. I see this as the key innovation, also relative to Buchholz et al. (maybe worth looking at that paper; it might be closer to Snowdrift, as it also does not make donations directly conditional on leverage, I think, tbc). As I discuss, this has pros and cons; the main risk being that the requested donation increases quickly with the leverage and thus with the number of participants.
Thanks to your links I just saw the Rational Street Performer Protocol, which I should also look at, even if it equally seems to focus on donating more as more is given in total, rather than, as here, explicitly as leverage increases. It does make the timing question very explicit, which is a dimension I have not looked at much here yet.
Will expand the text & make the connections to both asap!
*Snowdrift: Each gives 0.1 ct per participant, meaning with 1,000 (or 5,000) participants you give $1 (or $5), and thanks to you, all these others together give $1 (or $5) more than they would without you, i.e. a constant extra leverage of 1 on top of your own contribution, meaning the total leverage of your contribution is always 2.
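For concreteness, a minimal sketch of that back-of-the-envelope calculation (the 0.1 ct/participant rate and participant counts are just the illustrative numbers from above, not necessarily Snowdrift’s actual parameters, and chosen upper bounds are ignored):

```python
# Quick check of the claim that a Snowdrift-style pledge has a constant
# total leverage of ~2, under the illustrative rate from above.
RATE = 0.001  # dollars per co-participant (0.1 cent)

def my_payment(n: int, rate: float = RATE) -> float:
    """What one participant pays when n participants have pledged."""
    return n * rate

def total_leverage(n: int, rate: float = RATE) -> float:
    """(My payment + the extra my joining induces in others) / my payment."""
    mine = my_payment(n, rate)
    others_extra = (n - 1) * rate  # each of the n-1 others pays `rate` more
    return (mine + others_extra) / mine

for n in (1000, 5000, 100_000):
    print(n, my_payment(n), round(total_leverage(n), 4))
# n=1000: I pay $1.00, leverage ~1.999; it approaches exactly 2 as n grows.
```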
You’re right. I see two situations here:
(i) The project has a strict upper limit on the funding required. In this case, you must (a) limit the pool of participants, and/or (b) limit their allowed contribution scales, and/or (c) maybe indeed flatten the leverage progression, meaning you might incentivize people less strongly.
(ii) The project has strongly decreasing ‘utility’-returns to additional money (at some point). In this case, (a), (b), (c) from above may be used, or in theory you as organizer could simply not care: your funding-collection leverage still applies, but you let donors judge whether to discount the leverage for large contributions, given that they consider the money less valuable on the upper tail; they may then accordingly decide not to contribute, or to contribute less.
Finally, there is simply the possibility of using a cutoff point, above which the scheme must be cancelled, to address the issue you raise, or the one I discuss in the text: preventing individual donors from having to contribute excessive amounts when more commitments than expected are received. If that cutoff point is high enough that it is unlikely to be reached, you as organizer may be happy to accept it. Of course one could then think about dynamics, e.g. a cooling-off period before the cancelled collection can be re-run, so as not to (too strongly) undermine the true marginal effect in a far-sighted assessment of the entire situation.
In reality: I fear that even with this scheme, if it hopefully turns out to be practical in some cases, many public goods problems will remain underfunded (hopefully simply a bit less severely) rather than overfunded, so I’m so far not too worried about that one.
Incentivizing Donations through Mutual Matching
Agree with the “easily tens of millions a year”, which, however, could also be seen to underline part of what I meant: it is really tricky to know how much we can expect from what exact effort.
I half agree with all your points, but see implicit speculative elements in them too, and hence remain with a maybe all-too-obvious statement: let’s consider the idea seriously, but let’s also not forget that we’re obviously not the first ones to think of this, and, in addition to all the other uncertainties, keep in mind that no one seems to have made very much serious progress in that domain, despite the possibly enormous value even private firms could have captured had they made serious progress on it.
I miss a clear definition of economic growth here, and the discussion strongly reminds me of the environmental-resources-focused critique of growth that started with the 1970s Club of Rome report The Limits to Growth; there might be value in examining the huge literature around that topic that has been produced ever since.
Economic growth = increase in market value is a typical definition.
Market value can increase if we paint the grey houses pink, or indeed if we design good computer games, or if we find great drugs that constantly awe us in insanely great ways without downsides. Or maybe indeed if we can duplicate/simulate brains which derive a lot of value, say, literally out of thin air, and if we decide to take their blissful state into account in our growth measure too.
If we all have our basic needs met, and are rich way beyond that, willingness to pay for some new services may become extremely large, even for the least important services: merely because we have nothing else to do with our wealth, and because we’re willing to pay so little on the margin for the traditional ‘basic’ goods which are (in my scenario assumed to be) abundant and cheaply produced.
So the quantitative long-run extent of “economic growth” then becomes a bit of an arbitrary thing: economic growth is potentially huge, but the true extra value possibly limited.
‘Economic growth’ may therefore be too intangible, too arbitrary a basis for discussing the long-run fate of human development (or of whatever supersedes us).
Maybe we should revert to directly discussing limits to the increase in utility (as some comments here already do).
I see enormous value in it and think it should be considered seriously.
On the other hand, the huge amount of value in it is also a reason I’m skeptical about it being obviously achievable: there are already individual giant firms that would internally reap multi-million annual savings (not to speak of the many billions the first firm marketing something like that would immediately earn) from having a convenient, simple, secure stack ‘for everything’, yet none seems to have something close to it (though I guess many may have something like that in some sub-systems/niches).
So I’m just wondering whether we might underestimate the cost of development/use, despite my gut feeling strongly agreeing that it seems like such a tractable problem.
I find it a GREAT idea (have not tested it yet)!
Thank you! I was actually always surprised by H’s mention of the taxation case as an example where maximin would be (readily) applicable.
IMHO, exactly what he explains in the rest of the article can also be used to see why optimal taxation/public finance should use a maximin principle as the proxy rule for a good redistributive process only in exceptional cases.
On the other hand, if you asked me whether I’d be happy if our actual, very flawed tax/redistribution systems were reformed so as to conform to the maximin: yes, I’d possibly very happily agree, simply as the lesser of two evils. And maybe that’s part of the point; in this case, fair enough!
I find this a rather challenging post, even if I like the high-level topic a lot! I didn’t read the entire linked paper, but I’d be keen to understand whether you think you can make a concise, simple argument as to why my following view may be missing something crucial that immediately follows from the Harsanyi vs. Rawls debate (if you think it does; feel free to ignore):
The Harsanyi 1975 paper which your linked post also cites (and which I recommend to any EA) is a great and rather complete rebuttal of Rawls’ core maximin claim. The maximin principle, if taken seriously, can trivially be seen to lead to all sorts of preposterous choices that are quite miraculously improved by adding a smaller or larger portion of utilitarianism (one by no means needs to be a full utilitarian to agree with this); end of story.
Just re anxiety prevalence: It seems to me that anxiety is a kind of continuum, and you may be able to say 50% of people suffer from anxiety, or 5%, depending on where you make the cutoff. Your description implicitly seems to support exactly this view (“Globally, 284 million people—3.8% of all people—have anxiety disorders. Other estimates suggest that this might be even higher: according to the CDC, 11% of U.S. adults report regular feelings of worry, nervousness, or anxiety and ~19% had any anxiety disorder in the past year according to the NIH and Anxiety and Depression Association of America.”), plus maybe that the anxiety-association folks like to quote impressive numbers for their domain. ⇒ It could be useful if you found more tangible ways to express what’s going on anxiety-wise in how many heads.
Part of your critique is mostly valid in cases where donors have a fixed donation budget and allocate it to the best cause they come across, taking into account a potential leverage factor. I wonder whether instead a lot of donors (mind, EAs are rare) donate on a whim, incentivized by the announcement of the matching, without any particularly high probability that they would otherwise have donated that money anywhere else.
I see another critique that applies to the schemes that match “up to a specified level, say $500,000”, and I think you have not mentioned exactly this one explicitly. It runs as follows: if that $500k level is expected to be reached in due time anyway, then anyone whose donation was matched before the fund ran dry has in fact caused a total donation increase not greater than his personal contribution, but smaller (in the most extreme case, zero). Because of his donation, the fund ran dry a bit earlier, leaving room for one fewer person to donate within the scheme; the matchmaker’s total contribution remains $500k either way, but one additional person was not incentivized to contribute (because you ‘dried out’ the matching fund earlier). So the matching in reality means your donation had less impact rather than more, even if you and all other donors had no alternative donation opportunities, i.e. even independently of what I see as one of the main critiques you mention.
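To illustrate with a toy calculation (all numbers hypothetical: the $500k cap from above, a 1:1 match, uniform $1k gifts, and full crowding-out, i.e. donors only give while matching is still available and demand exceeds the cap):

```python
# Toy model of a capped 1:1 matching fund that is expected to run dry anyway.
CAP, GIFT = 500_000, 1_000
slots = CAP // GIFT  # 500 gifts can be matched before the fund is exhausted

# If donor demand exceeds the cap, the same 500 matched gifts happen with
# or without me; my gift merely displaces the donor behind me in the queue.
total_without_me = slots * GIFT + CAP            # $1,000,000 moved in total
total_with_me = GIFT + (slots - 1) * GIFT + CAP  # still $1,000,000

print(total_with_me - total_without_me)  # 0: my marginal impact, not +$2,000
```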
I also wonder about the same thing. The Further Pledge does not answer this particular desire of committing to a limited personal annual consumption while potentially saving for particular, or yet-to-be-defined, causes later on. This can also make sense if one believes one’s future view on what to donate towards will be significantly more enlightened.
I could see such a pledge not to consume above X/year being valued not overly highly by third parties, as we cannot trust our future selves that much, I guess; and even investing in one’s own endeavors, even officially EA ones, might at times be quite self-indulgent in some ways.
Still, I guess it would be possible to invest one’s money into an EA-aligned fund that would later be able to disburse money only to aligned causes, incl. possibly your own project. That could provide some value in some situations.
Maybe it’d be easier, and worthwhile, to simply have an organization collecting pledges (and accompanying verification) not to spend more than X/year; I think there might be a bunch of people interested in that.
Surprised to see nothing (did I overlook it?) about The People vs. The Project/Job: The title, and the lead sentence,
Some people seem to achieve orders of magnitudes more than others in the same job.
suggest the work focuses essentially on people’s performance, but already in the motivational examples
For instance, among companies funded by Y Combinator the top 0.5% account for more than ⅔ of the total market value; and among successful bestseller authors [wait, it’s their books, no?], the top 1% stay on the New York Times bestseller list more than 25 times longer than the median author in that group.
(emphasis and [] added by me)
I think I have not seen it explicitly discussed whether it is the people, or rather the exact project (the startup, the book(s)) they work on, that is the successful element, although the outcome is a sort of product of the two. Theoretically, in one (obviously wrong) extreme case: maybe all Y Combinator CEOs perform similarly, but some of the startups simply are the right projects!
My gut feeling is that making this fundamental distinction explicit would make the discussion/analysis of performance more tractable.
Addendum:
Of course, you can say that book writers, scientists, and startup founders choose anew each time what book or paper to write next, etc., and this choice is part of their ‘performance’, so looking at their output’s performance is all there is. But this would be at most half-true in the more general sense of comparing the persons’ general capabilities, as there are very many drivers that lead persons to very specific high-level domains (of business, of book genres, etc.) and/or to very specific niches therein, and these may have at least as much to do with personal interest, haphazard personal history, etc.
Thanks, I think antipathy effects towards the name “Effective Altruism”, or worse, “I’m an effective altruist”, are difficult to overstate.
Also, somewhat related to what you write, I happened to think to myself just today: “I am (and most of us are) just as much an effective egoist as an effective altruist”; after all, even the holiest of us probably cannot always help putting a significantly higher weight on our own welfare than on that of the average stranger.
Nevertheless, a potential upside of the current term (equally, I’m not sure it matters much at all, but I attribute a small chance to it being really important): if some people are kept away by the name’s slightly geeky, partly unfashionable connotation, maybe these are exactly the people who would anyway mostly be distractors. I think the somewhat narrow EA community has an extraordinary vibe along a few really important dimensions, and that seems invaluable (in that sense, while RyanCarey mentions we may not attract the core audience with different names, I find the problem might rather be the other way round: we might simply dilute the core).
Maybe I’m completely overestimating this, and maybe it doesn’t at all outweigh the downside of attracting/appealing to fewer people. But in a world where the lack of fruitful communication threatens entire social systems, maybe having a particularly strong core in that regard is highly valuable.
Love the endeavor. But the calculation method really should be changed before anyone interested in quantifying the combined CO2 + animal-suffering harm uses it, in my opinion: a weighted product model is inappropriate for expressing the total level of two independent harms. I really think you want to not multiply the CO2 and animal-suffering harms, but instead sum them separately, with whichever weights the user chooses. In that sense, I fully agree with what MichaelStJules also mentioned. But I want to give an example that makes this very clear, and please let me know if it instead seems like I misread your calculation details at https://foodimpacts.org/methods :
Imagine a product A with 0 CO2 but a huge animal-suffering impact, B with huge CO2 but 0 suffering, and C with a non-zero but tiny impact on both dimensions. Your weighting would favor either A or B (or both), while any rational person would necessarily prefer C. Your WPM may sound nicer in theory, but it cannot be applied here; I’d really want to see it changed before considering the model usable for quantitative indications of harm on a general level!
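A minimal sketch of that comparison (the harm numbers for A, B, and C are made up, and the multiplicative formula is just my reading of the methods page):

```python
# Products as (co2_harm, suffering_harm); made-up numbers for A, B, C above.
products = {"A": (0.0, 100.0), "B": (100.0, 0.0), "C": (1.0, 1.0)}
w = 0.5  # user-chosen weight on CO2; (1 - w) on animal suffering

def weighted_product(co2: float, suffering: float) -> float:
    # Multiplicative aggregation: any zero component zeroes the whole score.
    return (co2 ** w) * (suffering ** (1 - w))

def weighted_sum(co2: float, suffering: float) -> float:
    return w * co2 + (1 - w) * suffering

for name, (co2, suf) in products.items():
    print(name, weighted_product(co2, suf), weighted_sum(co2, suf))
# WPM: A=0.0, B=0.0, C=1.0 -> the two extreme products look *harmless*.
# Sum: A=50.0, B=50.0, C=1.0 -> C correctly comes out as least harmful.
```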
NB: I actually have an interest in using your model in the medium-term future! We’re trying to set up an animal food welfare compensation scheme, and happen to have CO2 on our list in addition to animal suffering itself, www.foodoffset.org (very much work in progress).
I find “new enlightenment” very fitting. But I wonder whether it might at times be perceived as a not very humble name (that need not be a problem, but I wonder whether some, me included, might at times end up feeling uncomfortable calling ourselves part of it).
Spontaneously, I find “Broad Rationality” a plausible candidate (I found it used as a very specific concept mainly by Elster 1983, but I get only 46 Google hits on ‘“broad rationality” elster’, though there are of course more hits on the word combination more generally).
Thanks, interesting case!
1. We might have loved to see the cartel here succeed, but we should probably still be thankful for the more general principle underlying the ruling:
As background, it should be mentioned that it is common to use so-called green policies/standards as disguised protectionist measures, aka green protectionism: protecting local/domestic industry by imposing certain rules, often with minor environmental benefits (as here, at least according to the ruling), but helping to keep out (international) competition.
So for the ‘average’ citizen, say those for whom animal welfare may be relevant but not nearly as central as for many EAs, the principles underlying the ruling seem very sensible. Potentially even crucial for well-functioning international trade without an infinitude of arbitrary rules just to rip off local consumers.
Governmental policy (minimal welfare standards) is the place for addressing the public goods problem that Paul and other commentators describe: here that would mean binding animal welfare standards, agreed in the democratic process.
2. A potentially much larger issue w.r.t. trade laws preventing higher welfare standards is related to WTO/GATT rules, which make it (seemingly) ambiguous whether a country is even allowed to politically raise welfare standards and apply them to imports (which is necessary for the effectiveness of the domestic rule):
Free trade rules are regularly used by industry lobbies to delegitimize proposals for higher domestic animal welfare standards, with the claim that imposing welfare restrictions on imported foods would be impossible as it would violate free trade rules. In reality, it is not trivial to interpret the relevant paragraphs of the trade agreements & case rulings, although I would imagine it difficult for anyone to attack a country for imposing high welfare standards in a reasonably transparent way; nevertheless, the uncertainty around the issue is successfully abused in the political discourse.
Great question!
Achieving direct matching by the gvmt (even if only with, say, a +25% match factor or so, to keep it roughly in line with what tax deductibility means), instead of tax deductibility, could indeed be more just, removing the bias you mention that unnecessarily favors the rich. Spot on imho.
That said, democracies seem to love “tax deductibility”: stealing from the state that way feels a bit less like stealing. So deductibility may be the single most easily acceptable policy. If so, it might pragmatically be worthwhile to support it, despite the negative redistributional consequence just mentioned.
The best action might be to try to set up local EA charities that can easily get certified for tax deductibility in Belgium, and use the money either directly to support the international EA organizations, or, if that is difficult, then in the worst case simply to support similar work in parallel (?).
Mostly independently of 1. vs. 2.: whether the (donation-elasticity-adjusted) average donation is more effective than gvmt tax collection (or than reducing the standard gvmt tax burden*) feels difficult to say, and will depend a lot on how much you value different types of social goods. At the moment, most donations are not EA-type donations, but one might expect people to give to causes often significantly better than what tax revenues fund, so I’d personally rather err towards the pro-deduction (or pro-match) side so far.
*It would be wrong to consider only the ‘tax revenue lost’ for the gvmt as the effect of tax deductibility. In expectation, in a simple model, the gvmt will in the medium term partly respond to higher tax deductions with (i) lower expenditures, and (ii) increased standard tax rates.
Btw, I personally would not worry about the €25 threshold. Avoiding having to register/count overly small sums seems a reasonable thing, even if you’re right that it becomes less relevant in the digital world.