PhD in Economics (focus on applied economics and climate & resource economics in particular) & MSc in Environmental Engineering & Science. Key interests: Interface of Economics, Moral Philosophy, Policy. Public finance, incl. optimal redistribution & tax competition. Evolution. Consciousness. AI/ML/Optimization. Debunking bad Statistics & Theories. Earn my living in energy economics & finance and by writing simulation models.
FlorianH
This post calls out un-diversities in EA. Rather than being attributable to EA doing something wrong, these patterns, I find, mainly underline a basic fact about what type of people EA tends to attract. So I don’t find the post fair to EA and its structure in a very general way.
I detect in the article an implicit, underlying view of the EA story as something like:
‘Person becoming EA → World giving that person EA privileges’
But IMHO this turns the real story completely upside down; I mostly see it as:
‘Privileged person → becoming EA → trying to put their resources/privileges to good use, e.g. to help the most underprivileged in the world’,
where ‘privileged’ refers to the often somewhat geeky, intellectual-ish, well-off person we often find particularly attracted to EA.
In light of this story, the fact that white dudes are over-represented in EA organizations relative to the overall global population would be difficult to avoid in today’s world, a bit like it would be difficult to avoid a concentration of high-testosterone males in a soccer league.
Of course, this does not deny that many biases exist everywhere in the selection process for higher ranks within EA, and these may be a true problem. Call them out specifically, and we have a starting point to work from. In EA too, people tend to abuse power, and this is not easy to prevent; again, any enlightenment about how, specifically, to improve on this is welcome. Finally, that skin color is associated with privileges worldwide may be a huge issue in itself, but I’d not reproach EA itself for it specifically. Certainly, EAs should also be interested in this topic if they find cost-effective measures to address it (although, to some degree, these potential measures face tough competition, simply because there is so much poverty and inequality in the world, absorbing a good part of EA’s focus, and not only for bad reasons).
Examples of what I mean (emphasis mine):
However, the more I learn about the people of EA, the more I worry EA is another exclusive, powerful, elite community, which has somehow neglected diversity. The face of EA appears from the outside to be a collection of privileged, highly educated, primarily young, white men.
Let’s talk once you have useful info on whether they focus on the wrong things, rather than on whether they have the wrong skin colors. In my model, and in my observations, there is simply a bias in who feels attracted to EA, and as much as anyone here would love the average human to care about EA, that is sadly not the case (although in my experience, it is more generally the slightly geeky, young, logical, possibly well-off persons, rather than simply the “white men” you mention, who like and join EA and can and want to put resources towards it).
The EA organizations now manage billions of dollars, but the decisions, as far as I can tell, are made by only a handful of people. Money is power, and although the decisions might be carefully considered to doing the most good, it is acutely unfair this kind of power is held by an elite few. How can it be better distributed? What if every person in low-income countries were cash-transferred one years’ wage?
The link between the last bold part and the preceding bold parts surprises me. I see two possible readings:
a. ‘The rich few elite EAs get the money, but instead we should take that money to support the poorest?’ That would have to be answered by: this handful work with many, many EAs and other careful employees to figure out which causes to prioritize based on decent cost-benefit analysis, and they don’t use this money for themselves (and indeed, at times, cash transfers to the poorest show up among the promising candidates for funding, but these still compete with other ways of trying to help the poorest beings, or those most at risk in the future).
b. ‘Give all the poorest some money, so that some of them could become some of the “handful of people” with the power (to decide on the EA budget allocation)’. I don’t know. Seems a somewhat distorted view of the most pressing reason for alleviating the most severe poverty in the world.
While it might be easy to envy some famous persons in our domain, no one has decided ‘oh, to whom could we give the big privilege of running the EA show’; instead there is a process, however imperfect, trying to select some of the people who seem most effective, also for the higher-rank EA positions. And as many of the skills useful for these correlate with privileged education, I’d not necessarily want to force more randomization or anything, other than through compelling, specific ways to avoid biases.
I have experience with that: eating meat at home but, rather strictly, not at restaurants, for exactly the reasons you mention: it tends to be almost impossible to find a restaurant that seems to serve meat from animals that were not crazily mistreated.
Doing that as vegan-in-restaurants (instead of vegetarian-in-restaurants) is significantly more difficult, but from my experience, one can totally get used to remaining veg* outside while non-veg* at home, where one can choose food with some expectation of net-positive animal lives.
A few particular related experiences:
Even people who knew me rather well would intuitively not understand the principle at all. At times I kind of felt bad buying meat when they were around, as I knew they thought I was vegan and would be confused, even though I had told them time and again that I simply avoid conventional meat, in restaurants and/or at their place etc.
I’m always astonished at how many people who supposedly care about animals do it the other way round: in restaurants they eat meat, but not at home. Weird, given that the restaurants so obviously serve the worst stuff (and these people are not the kind of perfect EA for whom a dollar saved would go towards the most effective causes, which could naturally complicate the choice).
Restaurants indeed do, behaviorally, not care about animal welfare at all. For a food animal welfare compensation project, we tried to get a bunch of restaurants to accept that we source higher-welfare meat for them, without them having to pay anything for it. In almost all places it was not possible at all: (i) even just the slightest potential extra logistical step, and/or (ii) a potential reputational fear of anything about their usual sourcing being leaked to the unaware public, seemed to make them reluctant to participate.
(That said, I don’t want to praise my habits; I hope I find the courage to become more vegan again sometime, as everything else feels like inflicting unacceptable suffering and/or wasting a lot of money on expensive food, and I’m not sure my ‘maybe it helps my health’ justifies it; there must be better ways. All my sympathy if someone calls health a bad excuse for non-veganism, but I definitely maintain: health questions aside, once one gets used to avoiding meat and/or animal products, whether only outside or also at home, it only becomes easier over time, in terms of logistics and of getting to know tasty alternatives.)
Surprised. Maybe worth giving it another try, looking longer for good imitations, given today’s wealth of really good ones (besides, admittedly, a ton of bad ones; and that is assuming you really need them to imitate the original that closely): I’ve had friends taste veg* burgers and chicken nuggets, and they were rather surprised when I told them afterwards that these had not been meat. I once had to double-check with the counter at a restaurant, as I could not believe that what I had on my plate was really not chicken. Maybe that speaks against my fine taste, and that of some others, but I really find it rather easily possible to find truly great textures too, if one really cares.
Also, I personally don’t know any “uncanny valley” in that domain; make it feel a bit more or less fake and it doesn’t really matter much to me, so maybe you really do experience that very differently.
*I don’t know/remember whether vegan or vegetarian.
Interesting. Curious: if such hair is a serious bottleneck/costly, do some hairdressers, as a default, collect cut hair and sell/donate it for such use?
I tried to account for the difficulty of pinning down all relevant effects in our CBA by adding the somewhat intangible feeling that the gun might backfire (standing in for your point that there may be more general/typical but less easily quantifiable benefits of not censoring etc.). Sorry if that was not clear.
More importantly:
I think your last paragraph gets to the essence: You’re afraid the cost-benefit analysis is done naively, potentially ignoring the good reasons for which we most often may not want to try to prevent the advancement of science/tech.
This does not, however, imply that for pausing we’d require Pause Benefit >> Pause Cost. Instead, it simply means you’re wary that certain values of E[Pause Benefit] (or of E[Pause Cost]) may be biased in a particular direction, so that you don’t trust conclusions based on them. Of course, if we expect a particular bias in our benefit or cost estimate, we cannot just use the wrong estimates.
When I advocate being even-handed, I refer to a cost-benefit comparison that is non-naive. That is, if we have priors that there may exist positive effects that we’ve just not yet managed to pin down or quantify, we have (i) used reasonable placeholders for these, avoiding bias as well as we can, and (ii) duly widened our uncertainty intervals. It is because of this that, in the end, we can remain even-handed, i.e. pause roughly iff E[Pause Benefit] > E[Pause Cost]. Or, if you like, iff E[Pause Benefit*] > E[Pause Cost*], with * = accounting with all due care for the fact that you’d usually not want to stop your professor, or tech advancements generally, because of yada yada.
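To make that concrete, here is a minimal sketch of what such a non-naive but still even-handed comparison could look like; the distributions and all numbers below are purely hypothetical placeholders, not estimates of anything:

```python
import random

random.seed(0)

# Hypothetical placeholder distributions for the pause's benefit and cost,
# in arbitrary welfare units. The wide sigmas stand in for the "duly widened
# uncertainty intervals", incl. placeholders for hard-to-quantify effects.
def sample_pause_benefit():
    return random.gauss(1.0, 3.0)

def sample_pause_cost():
    return random.gauss(0.9, 3.0)

N = 100_000
e_benefit = sum(sample_pause_benefit() for _ in range(N)) / N
e_cost = sum(sample_pause_cost() for _ in range(N)) / N

# Even-handed decision rule: pause iff E[Pause Benefit] > E[Pause Cost],
# with no extra ">>" hurdle on top of the debiased estimates.
print(f"E[benefit]={e_benefit:.2f}  E[cost]={e_cost:.2f}  "
      f"pause={e_benefit > e_cost}")
```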
I have some sympathy with ‘a simple utilitarian CBA doesn’t suffice’ in general, but I do not end up at your conclusion; your intuition pump also doesn’t lead me there.
It doesn’t seem to require any staunch utilitarianism to arrive at: ‘if a quick look at the gun design suggests it has a 51% chance to shoot in your own face, and only a 49% chance to shoot the tiger you want to hunt because you’d otherwise starve to death’*, then drop the project of its development. Or halt until a more detailed examination allows you to update to a more precise understanding.
You mention that with AI we have ‘abstract arguments’, to which my gun’s simple failure probability may not do full justice. But I think not much changes even if your skepticism about the gun were as abstract or intangible as: ‘err, somehow it just doesn’t seem quite right, I cannot even quite pin down why, but overall the design doesn’t make me trust it; maybe it explodes in my hand, it burns me, its smoke might make me fall ill, whatever, I just don’t trust it; I really don’t know, but HAVING TAKEN ALL EVIDENCE AND LIVED EXPERIENCE, incl. the smartest EA and LW posts and all, I guess 51% I get the harm and only 49% the equivalent benefit, one way or another’, as long as that is still truly the best estimate you can make at the moment.
The (potential) fact that we have more typically found new technologies to advance us does very little work in changing that conclusion, though of course, in a complicated case such as AI, this observation itself may have informed some of our cost-benefit reflections.
*Yes, you guessed correctly: I had better implicitly assume something like a 50% chance of survival without catching the tiger and 100% with it (and that you only care about your own survival) to really arrive at the intended ‘slightly negative in the cost-benefit comparison’; so take the thought experiment as an unnecessarily complicated quick-and-dirty one, but I think it still makes the simple point.
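Spelling out the footnote’s numbers (and assuming, in addition, that a backfire is fatal): P(survive | use gun) = 0.49 × 1 + 0.51 × 0 = 0.49 < 0.50 = P(survive | don’t use gun), so using the gun comes out slightly negative in expectation, as intended.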
There are two factors mixed up here: @kyle_fish writes about an (objective) amount of animal welfare. The concept @Jeff Kaufman refers to instead includes the weight we humans put on the animals’ welfare. For a meaningful conversation about the topic, we should not mix these two up.*
Let’s briefly assume a parallel world with humans2: just like us, but they simply never cared about animals at all (weight = 0). Concluding “we thus have no welfare problem” is indeed the logical conclusion for humans2, but it would not suffice to inform a genetically mutated human2x who happened to have developed care about animal welfare, or who simply happened to be curious about absolute welfare in his universe. In the same vein: there’s no strict need to account for usual humans’ care when analyzing whether “Net global welfare may be negative” (the title!). On the contrary, it would introduce an unnecessary bias, which would just come on top of the analysis’s necessarily huge uncertainty (which the author does not fail to emphasize, although, as others comment, it could deserve even stronger emphasis).
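A toy numerical illustration of the two readings; every number here is made up purely for illustration, not a claim about actual welfare:

```python
# Toy numbers: per-group population and average per-capita welfare
# (negative = net suffering). Purely illustrative placeholders.
groups = {
    "humans":         (8e9,  +1.0),
    "farmed_animals": (8e10, -0.5),
}

# "Objective" aggregate: just population x per-capita welfare.
objective = sum(n * w for n, w in groups.values())

# Weighted aggregate: multiply by the weight the evaluator puts on each
# group; humans2 from the example set the animal weight to 0.
weights_humans2 = {"humans": 1.0, "farmed_animals": 0.0}
weighted = sum(n * w * weights_humans2[g] for g, (n, w) in groups.items())

print(objective)  # negative: a welfare problem on the objective reading
print(weighted)   # positive: "no problem" only because the weight is 0
```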
One of my favorite passages is your remark that AI is, in some ways, rather white-boxy, while humans are rather black-boxy and difficult to align. There’s some often-ignored truth in that (even if, in the end, what really matters is arguably that we’re so familiar with human behavior that, overall, the black-boxiness of our inner workings may matter less).
Enjoyed the post, thanks! But it starts with an invalid deduction:
Since we don’t enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it’s clear that the benefits of doing so would significantly outweigh the costs.
(I added the emphasis)
Instead, it seems more reasonable to advocate for such action exactly if, in expectation, the benefits seem to [even just about] outweigh the costs. Of course, we have to take into account all types of costs, as you advocate in your post; maybe that even includes some unknown unknowns in terms of risks from an imposed pause. Still, in the end, we should be even-handed. That we don’t impose pauses on most technologies is surely not a strong reason to the contrary: we might (i) fail, for bad reasons, to impose pauses in other cases too, or, maybe more clearly, (ii) simply not see many other technologies with a potential downside so large as to make a pause a major need; after all, that’s why we have started this debate about this particular new technology, AI.
This is just a point about the stringency of your stated motivation for the work; changing that beginning of your article would IMHO avoid an unnecessarily tendentious passage.
Agree with the testing question. I think there’s a lot of scope for trying to implement (versions of) Mutual Matching at small or large scale, though I have not yet stumbled upon the occasion to test it in real life.
I would not say my original version of Mutual Matching is in every sense more general. But it does indeed allow the organizer some freedom to set up the scheme in a way they deem conducive. It provides each contributor the ability to set (or know) her monotonically increasing contribution directly as a function of the leverage, which I think really is a core criterion for an effective ‘leverage increase’. I’m not yet 100% sure whether we have the same knowable relationship here between i’s leverage and her contribution.
Thought-provoking in any case, and looking forward to hopefully studying it in more detail at some point! Every way we can improve on Quadratic Funding is good. IMHO, QF really deserves to be called an information-eliciting or funds-allocation mechanism rather than a funding mechanism: while it sounds good to be able to reach the ‘first best’, asymptotically all the money has to come from the central funder when there are many ‘small’ donors.
I think what you’re describing is exactly (or almost exactly) the Mutual Matching I wrote about here on the forum a while ago: Incentivizing Donations through Mutual Matching
Great! I propose a concise one-sentence summary that gets to the core of one of the main drawbacks of QF, and link to Mutual Matching, a ‘decentralized donor matching on steroids’ that overcomes some of QF’s issues and might be interesting for readers of this article.
QF really is an information eliciting mechanism, but much less a mechanism for solving the (obviously!) most notorious problem with public goods: the lack of funding due to free-riding and lacking incentives to contribute.
Yes, QF elicits the WTP, helping to inform about the value & optimal size of the public good (PG). Is that what prevented us from solving the PG issues? Nope. It’s our lack of the central funder. As shown here, this funder would require deep pockets, sponsoring nearly 100% of the cost when the number of donors n grows large (!), see the (∑ᵢ√cᵢ)² total vs. the ∑ᵢcᵢ from donors in the text. Lacking that funder, people again have insufficient incentive to contribute.
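A quick numerical illustration of that asymptotic, using the standard QF formula (total funding = (∑ᵢ√cᵢ)²) and assuming, for simplicity, n equal small donors:

```python
import math

# Standard QF: total funding = (sum of sqrt of contributions)^2.
# With n equal donors giving c each: total = (n*sqrt(c))^2 = n^2 * c,
# donors supply n*c, so the central funder covers a 1 - 1/n share.
for n in [2, 10, 100, 10_000]:
    c = 1.0
    total = (n * math.sqrt(c)) ** 2
    donors = n * c
    subsidy_share = (total - donors) / total
    print(f"n={n:>6}: central funder pays {subsidy_share:.2%} of the total")
```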
Mutual Matching, fully ‘decentralized donor matching on steroids’: with it, I address some core issues of QF. Donors mutually co-incentivize each other to donate more, through hard, direct incentives created purely by their own conditional donations.
Arbitrarily high matching factors (→ incentives) are theoretically achievable; in practice, everything depends on the statistical distribution of contributors etc. It is the first attempt I’m aware of that most directly tries to scale up the simple idea of incentivizing with “if you give, I give” to many people, each one with each one, without any requirement to negotiate.
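For intuition only, here is a toy sketch of that ‘each one with each one’ fixed point; the linear-capped schedules and the consistency rule below are my simplified placeholders, not the exact scheme from the Mutual Matching post:

```python
# Toy sketch of "if you give, I give", generalized to each-with-each.
# Each donor pledges a schedule g_i(x): the amount they give if the common
# leverage is x, i.e. if the total raised is at least x times their own gift.
# CAUTION: these schedules and this consistency rule are simplified
# placeholders for illustration, not the original mechanism.

def schedule(base, cap):
    return lambda x: min(cap, base * x)  # non-decreasing in x, capped

donors = [schedule(10, 100), schedule(5, 80), schedule(20, 60)]

best = None
for step in range(1, 2001):          # scan candidate leverages 0.01 .. 20.00
    x = step / 100
    gifts = [g(x) for g in donors]
    total = sum(gifts)
    # Consistent iff every donor's own gift is leveraged at least x-fold.
    if all(total >= x * gift for gift in gifts):
        best = (x, gifts, total)

x, gifts, total = best
print(f"Max consistent leverage: {x}x, gifts {gifts}, total {total}")
```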
Thanks for the post, resonates a lot with my personal experience.
Couldn’t agree more with
In my impression, the most influential argument of the camp against the initiative was that factory farming just doesn’t exist in Switzerland.[2] Even if it was only one of but not the most influential argument, I think this speaks volumes about both the (current) debate culture
In a similar direction, there’s more that struck me as rather discouraging in terms of intelligent public debate:
In addition to the lie you pointed to apparently being popular*, from my experience in discussions about the initiative, the population also showed a basic inability to follow the most basic logical principles:
Even many of the kindest people, who would not want to harm animals, believed, as a sort of fundamental principle, that it’d be bad to prescribe what animals people can and cannot eat, and that it is therefore fundamentally not okay to impose (such) animal welfare protection measures.
All while not a single person (there will be the odd exception, but it really is an exception) would have claimed that the existing animal welfare laws are unwarranted or should be abolished/relaxed.
Fair point, even if my personal feeling is that it would be the same even without the killing (though indeed the killing alone would suffice too).
We can amend the RC2 attempt to avoid the killing: start with the world containing the seeds for huge numbers of lives worth-living-even-if-barely-so, and propose to destroy that world for the sake of creating a world for a very few really rich and happy (obviously with the nuance that it is the rich few whose net happiness is slightly larger than the sum of the others’).
My gut feeling does not change: this RC2 would still feel repugnant to many, though I admit I’m less sure, and I might also be biased now, as in not wanting to feel different, oops.
Might a big portion of status-quo bias and/or omission bias (here both with a similar effect) also simply be at play, helping to explain the typical classification of the conclusion as repugnant?
I think this might be the case when I ask myself whether many people who classify the conclusion as repugnant would not also have classified the ‘opposite’ conclusion as just as repugnant, had they instead been offered the same experiment ‘the other way round’:
Start with a world counting huge numbers of lives worth-living-even-if-barely-so, and propose to destroy them all for the sake of making a very few really rich and happy (obviously with the nuance that it is the rich few whose net happiness is slightly larger than the sum of the others’). It is just a gut feeling, but I’d guess this would very often evoke similar feelings of repugnance (maybe even more so than in the original RC experiment?): a sort of Repugnant Conclusion 2.
Interesting suggestion! It sounds plausible that “barely worth living” might intuitively be mistaken for something more akin to ‘so bad they’d almost want to kill themselves, i.e. their lives might well even be net negative’ (which I think would be a poignant way to put what you write).
What about this to reduce the probably often overwhelming stigma attached to showcasing one’s own donations?!
Maybe the main issue is that I’d be showing off the amount I donate, rather than the causes I donate to. So: just show where to, or maybe which share goes where, avoiding showing the absolute amount of donations.
OK, so you make your donations, I make some other donations. But: you showcase my donations, I showcase yours. No clue whether that’s stupid and not much better than simply not showing any personal donations. Maybe, then, a bunch of us donate to whatever each of us likes, but each of us simply showcases the group’s aggregate donations: “Hey, I’m in this group of five, and it donated these amounts to here and here; I gave only a small share, but you could participate in something similar.”
Research into vegan cat food as an ideal EA cause!? Might be ideal for a vegan human future as a ‘side’ effect, too.
Cats are obligate carnivores; according to typical recommendations they must eat meat (or animal products), and cats tend to refuse most non-animal foods. At least, there seems to exist no vegan cat food that is recommended as a main diet for cats without further warnings.
I guess, but am not sure (?), that the animals fed to cats mean significantly more animals are raised in factory farms
Somewhat counterintuitively, in the whole cat food domain the concept of animal welfare standards does not even seem to exist. You can find some seemingly higher-welfare products, but they are extremely rare
Even if large shares of the ingredients are often “chicken meal”, “fish meal” etc., I guess much of this meal could, one way or another, still have replaced some human foods in some places. What I have definitely seen: major shares of the ingredients in cat foods are often “meat”, and not just inner organs or broth (although I cannot exclude that ‘meal’-based products dominate total sales volumes)
I guess we’re pretty good at feeding all sorts of animal pieces to (i) ourselves, in sausages, chicken nuggets, and the like, and/or (ii) other food-industry animals. So my prior is that cats do not only get stuff that is completely redundant in the food industry.
I calculate* (very roughly), for ca. 220 million house cats worldwide, and counting 50% of their meat food as extra meat production, 6,600 t/day of quality-adjusted meat consumption, or around 0.9% of humans’ meat consumption.
The few articles I have read online about the degree to which cats require a meat diet point mainly to elements that sound like ones we can easily mix/synthesize from non-animal foods and chemical processes (taurine, vitamin A, arginine, niacin, maybe some other fatty/amino acids)
Oddly, the pages tend to list these few elements while insisting that the cat must therefore eat meat, whereas I’d think: “Ehm, if it’s just that, it would seem simple to mix the right thing” ⇒ maybe the pages just do not go into the more subtle details that are crucial for an obligate carnivore
IMHO, we could very easily test food/supplement mixtures to check how easily one can replace which share of meat for cats without impairing their health. Given the billions of factory-farmed animal lives at stake, even some risk for the corresponding “test animals” might be completely justifiable in the worst case, and naive me thinks we might make extremely quick progress on this front if we really wanted to
If we nail this, the positive side effect could be: “Hey look, they even feed the obligate carnivores with this mix nowadays; surely you can also become vegan with zero hesitation with a human-adjusted formula!” That is, the stories of your vegan friend who ends up at the doctor who recommends he eat meat (!) etc. could finally really become stories of the past. (I know many think they already are; maybe you’re right; but I know that in practice, for many, this is simply not how they see it.)
In fact, for each of (i) the cats not eating animals and (ii) the side effect on human diets, I’d not be surprised if expediently trying to create vegan food that even cats can eat would be justified
EA dietitians, am I just naive or could this be a thing?
I reckon one drawback of an ideal vegan cat diet could be that many more people might want to keep cats. I then see these possibilities for the net direct impact from cats + food:
Only a few more cats: lower net animal consumption, lower net land use, and lower food costs for poor people (and for cat holders)
Many more cats: the vegan diet more than offsets the spared animal-food-industry footprint, i.e. larger net land-use change for agriculture and higher food prices for the poor
Whether house cats are at all net “happy” or not, I do not know.
* Calculation, based on rough values:
220 million domestic cats (ignoring the 480 million strays)
3 kg avg. weight (might be slightly on the low side)
2% of cat weight in meat food per day
= 60 g/cat daily meat = 30 g/cat daily “extra” animal meat when quality-adjusting with 50% (see text above)
= 6,600 t/day extra meat production
And with approx. 90 g meat/day per human (beef, veal, pork, poultry, and sheep acc. to OECD) for the 8 bn humans, i.e. 720,000 t/day of human meat consumption, the cats’ share is
= 0.9%, a bit simplistically approximated.
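The same back-of-the-envelope in code, with the rounded inputs from above:

```python
cats = 220e6                 # domestic cats worldwide (strays ignored)
meat_per_cat_kg = 0.02 * 3   # kg/day: 2% of a 3 kg body weight
extra_share = 0.5            # quality-adjustment: 50% counted as extra production

extra_meat_t = cats * meat_per_cat_kg * extra_share / 1000  # tonnes/day
human_meat_t = 8e9 * 0.090 / 1000                           # tonnes/day, 90 g/person

print(f"{extra_meat_t:,.0f} t/day extra meat, "
      f"{extra_meat_t / human_meat_t:.1%} of human meat consumption")
```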
You were of course right; I have now fixed the A, B & C rounds to make them consistent. Thanks!