Brazilian legal philosopher and financial supervisor
May I use the doc on definitions to talk about IIDM with outsiders? For instance, in a study group on Political Philosophy?
Good point, thanks. However, even if EE and wild animal welfare advocates do not conflict in their intermediary goals, their ultimate goals do collide, right? For the former, habitat destruction is an evil and habitat restoration is a good, even if it's not immediately effective.
Well, if your EA were particularly well placed to tackle this problem, then the answer is likely yes: they would probably realize it's scalable and (partially) neglected. Plus, if God is reliable, then the Holy Advice would likely dominate other matters: AGI and x-risks are uncertain futures, and reducing present suffering would be greatly affected by the financial crisis.
In addition, maybe this is not quite the answer you’re looking for, but I believe personal features (like fit and comparative advantages) would likely trump other considerations when it comes to choosing a cause area to work on (but not to donate to).
Obviously. But then, first, Effective Environmentalists are doing great harm, right? We should be arguing more about it.
On the other hand, if your basic welfare theory is hedonistic (at least for animals), then one good long life compensates for thousands of short miserable ones—because what matters is qualia, not individuals. And though I don’t deny animals suffer all the time, I guess their “default welfare setting” must be positive if their reward system (at least for vertebrates) is to function properly.
So I guess it’s more likely that we have some sort of instance of the “repugnant conclusion” here.
Of course, this doesn't imply we shouldn't intervene in wild environments to reduce suffering or increase happiness. What is at stake is whether U(destroying habitats) > U(restoring habitats).
Is there some tension between population ethics + hedonic utilitarianism and the premises people in wild animal suffering use (e.g., negative utilitarianism, or the negative welfare expectancy of wild animals) to argue against rewilding (and in favor of environment destruction)?
Plus, “Julia the Wise” would evoke Saruman. Too risky.
Thanks for the post. Are there concrete examples of organizations that use quadratic voting for collective decisions?
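For readers unfamiliar with the mechanism: the core of quadratic voting is that casting n votes on a single issue costs n² voice credits, so expressing intensity of preference is increasingly expensive. A minimal sketch (the function names and ballot format are my own, for illustration):

```python
def qv_cost(votes: int) -> int:
    """Quadratic voting: casting n votes on one issue costs n**2 credits."""
    return votes ** 2

def tally(ballots):
    """Sum signed votes per option across all voters."""
    totals = {}
    for voter_votes in ballots:  # e.g. {"proposal_a": 3, "proposal_b": -2}
        for option, n in voter_votes.items():
            totals[option] = totals.get(option, 0) + n
    return totals

# A voter with 25 credits can cast at most 5 votes on one issue (5**2 = 25),
# or spread them: 3 votes on one issue (cost 9) plus 4 on another (cost 16).
ballots = [{"a": 3, "b": -2}, {"a": -1, "b": 4}]
print(qv_cost(5))      # 25
print(tally(ballots))  # {'a': 2, 'b': 2}
```

The quadratic cost is what distinguishes it from one-person-one-vote: buying a marginal vote gets linearly more expensive, which in theory makes voters reveal how much they care.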
What I miss when I read about the morality of discounting is a disanalogy that explains why hyperbolic or exponential discount rates might be reasonable for individuals with limited lifespans and such and such opportunity costs, but not for intertemporal collective decision-making. Then we could understand why pure discount is tempting, and maybe even realize there’s something that temporal impartiality doesn’t capture. If there’s any literature about it, I’d like to know. Please, not the basic heuristics & bias stuff—I did my homework.
For instance, if human welfare were something that could grow like compound interest, it'd make sense to talk about pure exponential discounting. If you could guarantee that all of the dead in the battle of Marathon would have, in expectation, added good to the overall happiness (or whatever you use as a goal function) in the world and transmitted it to their descendants, then you could say that those deaths are a greater evil than the millions of casualties in WW2; you could think of that welfare as "investment" instead of "consumption". But that's implausible.
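To make the "welfare as investment" point concrete: pure exponential discounting at rate r treats past welfare as if it had compounded at r ever since, so welfare from t years ago counts (1 + r)^t times as much as the same welfare today. A toy calculation (the casualty figures and the 1% rate are rough illustrative assumptions, not estimates):

```python
def present_value(welfare: float, years_ago: float, r: float) -> float:
    """Under pure exponential discounting at rate r, welfare that occurred
    `years_ago` years in the past counts (1 + r)**years_ago times as much
    as the same welfare today, as if it had been invested and compounded."""
    return welfare * (1 + r) ** years_ago

# Even a tiny 1% rate makes the ~6,400 dead at Marathon (~2,500 years ago)
# outweigh tens of millions of WW2 casualties (~80 years ago), which is
# exactly why treating welfare as "investment" is implausible.
marathon = present_value(6_400, 2_500, 0.01)
ww2 = present_value(70_000_000, 80, 0.01)
print(marathon > ww2)  # True
```

The exponential term dominates everything else over historical timescales; that is the intuition the comment above is pushing back on.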
On the other hand, there’s a small grain of truth here: a tragedy happening in the past will reverberate longer in the world historical trajectory. That’s just causality + temporal asymmetry.
This makes me think about cluelessness… I tend to think that good facts lead to better consequences, in general; you don't have to be an optimist about it: bad facts just tend to lead to worse consequences, too. The opposite thesis, that a good/bad fact is as likely to cause good as evil, seems quite implausible. So you might be able to think about goodness as investment a little bit; instead of pure discounting, maybe we should have something like a proxy for "relative impact on world trajectories"?
I was thinking about Urukagina, the first monarch ever mentioned for his benevolence instead of his military prowess. Are there any common traits among such rulers? Should we write something like that Forum post on dark-trait rulers, but with the opposite sign?
I googled a bit about benevolent kings (I thought it would provide more insight than looking at 20th-century biographies), but, except maybe for the enlightened despots, most of the figures in these lists (like Suleiman the Magnificent) are conquerors who just weren't brutal and were kind law-givers to their own people, which you could also say about Napoleon. I was thinking more about people like Ashoka and Marcus Aurelius, who seem to have despised the hunger for conquest in others and were actually willing to improve human welfare for moral reasons.
I love the subject, and thanks for the post. I'd even include some sort of "manslaughter-like" humanicide, i.e., assuming a high risk of destroying humanity. But I don't even dream of anything like that before we criminalize nuclear (or WMD in general) first strikes.
In the "voice of God" example, we're guaranteed to minimize error by applying this reasoning; i.e., if God asks this question of every possible human created, and they all answer this way, most of them will be right.

Now, I'm really unsure about the following, but imagine each new human predicts Doomsday through DA reasoning; in that case, I'm not sure it minimizes error the same way. We often assume human population will increase exponentially and then suddenly go extinct; but then it seems like most people will end up mistaken in their predictions. Maybe we're using the wrong priors?
As I see it, the point is to estimate when extinction would occur by estimating the distribution of population across time, right? So we use a Rule of Succession-like reasoning… I'm OK with that, so far. N humans have lived, so we can expect N more humans to live, and we can update our estimate each time a new one is born.

But then, why don't we use the time humans have already lived on Earth as input instead? I mean, that's Toby Ord's Precipice argument, right? So 200k years without extinction leads you to a very different guesstimate.
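The divergence between the two reference classes is easy to see with rough numbers (the ~100 billion births, the ~130 million births per year, and the 200,000-year figure below are the usual ballpark assumptions, not precise data):

```python
def remaining_by_doomsday(n_so_far: float, rate_per_year: float) -> float:
    """Rule-of-succession-style median estimate: expect roughly as many
    future units as past units, converted to years at the current rate."""
    return n_so_far / rate_per_year

# Counting births: ~100 billion humans so far, ~130 million births/year now.
years_left_births = remaining_by_doomsday(100e9, 130e6)

# Counting elapsed time (Ord's framing): ~200,000 years so far,
# and the "rate" is trivially one year per year.
years_left_time = remaining_by_doomsday(200_000, 1)

print(round(years_left_births))  # 769
print(years_left_time)           # 200000
```

Same reasoning pattern, wildly different horizons (centuries vs. hundreds of millennia), purely because of which quantity you feed in as the "sample" — which is the prior-choice worry in the comment above.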
Thanks for the post. I'm often very surprised that people ignore income distribution when arguing about economics and welfare. Which leads me to ask:

1) Which is the better (or more robust) estimate of inequality-adjusted income for welfare analysis: median income or Gini-adjusted average income? Or are they supposed to converge (which does not seem to be the case, according to this article)? (I guess one advantage of Gini adjustment is that it's used in other welfare metrics, like HDI.)
2) How relevant is wealth distribution—vis-à-vis income distribution? I can see how it’s important for distribution of power in society (if you’re comparing different groups, for instance), and I suppose wealth is important for one’s own life evaluation and as a hedge against uncertainty and economic shocks… but it’s hard for me to “put a number” on that.
Actually, this was the argument behind the OCC's threat last year to strike down some banks' blacklisting policies against polluters.
Animal welfare has been an interesting case where pressure on corporations concerned with ESG policies has had some results. That's an area where changes in antitrust law would be welcome; I think ESG regulations should make explicit reference to this area, lest regulators proscribe some animal welfare policies as collusion.
Your post has inspired me to investigate whether EAs should contribute to public consultations issued by financial regulators on ESG standards, to argue for explicitly inserting mentions of animal welfare (and maybe to post something about it later). For instance, would the EBA include something like this in European banking regulations? That's why Mercy for Animals (and others) have recently asked the Brazilian SEC (CVM) to mention animal welfare in regulatory norms on financial disclosures (we could provide a translation if necessary).
Sorry if this is a lame question, but do you think that regulations and standards on ESG that explicitly mention animal welfare (something more like soft law, or "comply or explain", e.g., "companies must disclose animal welfare policies", or "social and environmental risks include losses due to… animal cruelty") could be enough to start a change in how US antitrust law is interpreted regarding the blacklisting of products out of animal welfare concerns?
you may be willing to incur a loss of (say) 50% on the value of the bad egg in order to achieve a benefit of (say) 3% on all of the rest of the portfolio
Curiously, I saw the idea of "universal ownership" (without this name) mentioned in this post (courtesy of Scott Alexander's March links) about how investments have been super correlated lately and how diversified investment funds own a piece of every part of the whole economy. It's the closest I've seen to computing "how much will X lose if this company drops 50%, but everyone else increases by 3%".

That would explain why BlackRock (and the financial sector, since the TCFD's creation) has been acting so responsibly lately.
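That computation is straightforward for a diversified portfolio; a sketch with hypothetical weights (the 2%/98% split is made up for illustration):

```python
def portfolio_change(weights, returns):
    """Weighted portfolio return: sum of weight_i * return_i."""
    return sum(w * r for w, r in zip(weights, returns))

# A universal owner holding 2% in the "bad egg" and 98% in everything else:
# a 50% loss on the bad egg plus a 3% gain on the rest is a net win.
weights = [0.02, 0.98]
returns = [-0.50, 0.03]
net = portfolio_change(weights, returns)
print(f"{net:+.4f}")  # +0.0194, roughly +1.9% on the whole portfolio
```

So as long as the problem holding is a small slice of the portfolio, a universal owner rationally accepts large losses on it in exchange for small economy-wide gains, which is the quoted trade-off in the post.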
Btw, could you link the Symposium you mentioned in the text relating universal ownership and fiduciary duty?
Super thanks for this post. I've seen some people arguing over this subject, yet nothing so well articulated so far. I'll post my remarks separately. But I'd like to begin with a very simple question:

- Is there some sort of "EA ESG Group" or "EA Financial Ethics Group"? Would it be interesting to have one? And to link it with other groups and areas (like IIDM, or Legal Topics)?
It reminds me of this weird sonnet ("On Fate & Future") I drafted for some friends working with Generation Pledge (I'll have to share it; sorry for any lousy rhyme or offense I may have caused this beautiful language, but I'm not a native speaker):
Unhealing stains, sons to be slain / As it’s written: jihad and submission / We let Samsara ourselves drain / While Lord Shiva stated a mission.
Mystics, and yet, we don’t believe / For no told miracles anticipate / What brought us luck, skill and fate / The true great wonder we might live:
In a century – in History, just a moment – / The length of happiness has grown six-fold / And more than doubled the expected life /
Now, let it be your faith and my omen / As their fears and promises grow old / No more be bound to ancestors’ strife.