It’s well within the bounds of possibility the electric shock is excruciating and the cold numbing, yes. Or indeed that they’re both neutral compared with slaughter methods that produce clear physiological stress indicators, like asphyxiation in carbon-dioxide-rich water. Or that they’re different for different water-dwelling species depending on their natural hardiness to icy water, which also seems to be a popular theory. Rightly or wrongly, ice-cold slurry is sometimes recommended as the humane option, although obviously the fish farming industry is more concerned with its ability to preserve the fish marginally better than killing prior to insertion into the slurry...
Thanks for the response, Vasco, and apologies for the tardy reply :)
The necessity of making funding decisions means interventions in animal welfare and global health and development are compared at least implicitly. I think it is better to make them explicit for reasoning transparency, and for having discussions which could eventually lead to better decisions. Saying there is too much uncertainty and there is nothing we can do will not move things forward.
I agree on the first part. But it appears OP is perfectly transparent about their reasoning. They acknowledge that the level of uncertainty permits differences of opinion; that they believe a portfolio allocation approach incorporating different views on utilities, moral priorities and risk tolerance is better than adopting a single set of weights and fanatically optimising for them; and that the implicit moral weights are therefore a residual resulting from the preference heterogeneity of people whose decision making OP/Dustin/Cari value, rather than an unjustifiable knowledge claim about the absolute intensity of animals’ experiences which others must prove wrong if they are to consider allocating budget in any other way.
It is, of course, perfectly reasonable to disagree with the preferences of any/all individuals at OP and with the net result of that funding allocation, and there are many individual funding decisions OP have made which could be improved upon (including for relatively non-contentious reasons like “they didn’t achieve their aims”). But I don’t tend to think that polemical arguments with suspicious convergence, like “donating to most things in cause area X is many times more effective than everything in cause area Y”, are particularly helpful in moving things forward, particularly when they’re based not on spotting a glaring error or possible conflict of interest but on a preference for the moral weights proposed by another organization OP are certainly aware of.
What do you think about humane slaughter interventions, such as the electrical stunning interventions promoted by the Centre for Aquaculture Progress? “Most sea bream and sea bass today are killed by being immersed in an ice slurry, a process which is not considered acceptable by the World Organisation for Animal Health”. “Electrical stunning reliably renders fish unconscious in less than one second, reducing their suffering”. Rough analogy, but a human dying in an electric chair suffers less than one dying in a freezer?
Honestly, I have no idea whether it would be more uncomfortable to die in an electric chair or in a freezer, and I’m actually pretty familiar with the experience of human discomfort and with descriptions of electric shocks and hypothermia written from human perspectives. I’m not volunteering to test it experimentally either! Needless to say, I have even less knowledge about the experience of a cold-blooded, water-dwelling creature with a completely different physiology and nervous system and plausibly no conscious experience at all.
A consequence of this is that I don’t think transferring all the money currently spent on eradicating malaria to funding campaigns of indeterminate efficacy to promote an alternative slaughter method which has an indeterminate impact on the final moments of fish can be stated with a high degree of certainty to be a net positive use of resources.
Relatedly, I estimated the Shrimp Welfare Project’s Humane Slaughter Initiative is 43.5 k times as cost-effective as GiveWell’s top charities. I would be curious about which changes to the parameters you would make to render the ratio lower than 1.
This is a good question, and my honest answer is probably all of them, and the fundamental premise. I’ve discussed in my previous post how lobbying organizations’ funding isn’t well measured at the margin and doesn’t scale well; I don’t think the evidence base for ice slurry being a particularly painful slaughter method is particularly robust,[1] I don’t think RP’s numbers or your upward revisions of the pain scales they use are particularly authoritative, and above all I’m not sure it’s appropriate to use DALYs to trade human lives for thousand-point-scale estimates of the fleeting suffering of organisms where there isn’t even a scientific consensus that they have any conscious experience at all. Titotal’s post does a much better job than I could of explaining how easy it is to end up with orders of magnitude of difference in outcomes even if one accepts the basic premises, and there’s no particular reason to believe that premises like “researchers have made some observations about aversion to what is assumed to be pain stimuli, amidst an absence of evidence of other traits associated with consciousness, and attached a number to it” are robust.
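To illustrate how quickly stacked uncertainties compound, here’s a minimal Monte Carlo sketch. The parameter names, ranges and distributions are all invented for illustration; this is not anyone’s actual model, just a demonstration that multiplying a few order-of-magnitude-uncertain factors yields an answer spanning many orders of magnitude:

```python
import random

random.seed(0)

# Hypothetical cost-effectiveness estimate built, BOTEC-style, as a
# product of several log-uniform uncertain factors (all illustrative).
def sample_ratio():
    p_sentience   = 10 ** random.uniform(-2, 0)  # 0.01 .. 1
    welfare_range = 10 ** random.uniform(-3, 0)  # relative to a human
    pain_duration = 10 ** random.uniform(-1, 2)  # relative units
    animals_per_dollar = 10 ** random.uniform(-1, 1)
    return p_sentience * welfare_range * pain_duration * animals_per_dollar

samples = sorted(sample_ratio() for _ in range(100_000))
p5, p50, p95 = samples[5_000], samples[50_000], samples[95_000]
print(f"5th / 50th / 95th percentile: {p5:.2e} / {p50:.2e} / {p95:.2e}")
print(f"95th-to-5th ratio: {p95 / p5:,.0f}x")  # tens of thousands
```

Reasonable-sounding disagreements about any one factor shift the headline number by a multiple; disagreements about all of them shift it by orders of magnitude.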
For related reasons, I don’t think fanaticism is the best approach to budget allocation.
One does not need to worry about the meat eater problem to think the best animal welfare interventions are way more cost-effective than the best in global health and development. Neglecting that problem, I estimated corporate campaigns for chicken welfare are 1.51 k times as cost-effective as GiveWell’s top charities, and Shrimp Welfare Project’s Humane Slaughter Initiative is 43.5 k times as cost-effective as GiveWell’s top charities.
There’s a reason why I used the word universal. Yes, it is entirely reasonable to believe that a couple of causes from one area are clearly and obviously better than the best known in another area, though shrimp welfare certainly isn’t the one I’d pick. But that’s not the framing of the debate (which is the debate week’s, not yours specifically): it’s on Cause Area X vs Cause Area Y, not “is Charity Z the most effective charity overall”.
And if I did believe your numbers were a fairly accurate representation of reality and that fanaticism was better for budget allocation than a portfolio strategy, I’d be concerned that chicken charities were using money specifically allocated to AW despite being ~28x worse than shrimp.[2] There’s more money in the GHW buckets, but the chicken ⇒ shrimp reallocation decision is more easily made.
Imagine a relay race team before a competition. The second-leg runner on the team thinks—let us assume correctly—‘If I run my leg faster than 12 seconds, then my team will finish first; if I don’t, then my team won’t finish first.’ She then runs her leg faster than 12 seconds. As the fourth-leg runner on her team crosses the finish line first, the second-leg runner thinks, ‘I won the race.’ Is she right?
Yes, of course she’s right. Even if she’s the weakest member of the team. They don’t give Olympic relay teams 1⁄4 of a medal each.
-
For the record I don’t describe myself as an EA and don’t really hang out in EA circles. I’m far too old to be susceptible to arguments that I’m going to save the world with the power of my intellect and good intentions. If the bios of EA’s founding fathers are accurate, I discovered Peter Singer’s solution to world poverty slightly before they did, thought he had a [somewhat overstated] point and haven’t done anywhere near enough to suggest I absorbed the lesson. I think utilitarianism’s utility is limited but don’t have the academic pedigree to argue about it for any length of time, and I think a lot of EA utilitarian maths is a bit shoddy.[1] So I don’t think I’m making a particularly partisan argument here.
But you aren’t half leading with your weakest arguments.[2] GiveWell’s estimation that if x bednets are distributed, on average about y% of Malawian mothers receiving the nets will succeed in using them to protect their kids, so z% fewer kids will die, isn’t stealing credit from Malawian mothers or Chinese manufacturers in a zero-sum karmic accounting game; it’s a simple counterfactual (with or without appropriately sized error bars). Or put another way, if a Malawian kid thanks her mother for going hungry for two days to pay for a malaria net herself,[3] the mother shouldn’t feel obliged to say “no, don’t thank me, thank the Chinese people who manufactured it and the supply chain that brought it all the way here, and the white Westerners for doing enough research into malaria nets to convince vendors in my village to stock it.” The argument that inserting a few more stakeholders in the way introduces a qualitative difference between donating and diving into a pond might make Peter Singer’s thought experiment a little bit trite, but it isn’t an argument against the quantitative outcomes of donating at all.
- ^
in particular, the tendency to confuse marginal and average costs, and wildly speculative guesses with robust expected value estimation. I don’t actually think this is bad per se: people overestimating how much their next fiver can help a chicken or prevent Armageddon certainly isn’t worse than people overestimating how much they want the next beer. I just think it looks a lot like the “donor illusion” certain leading EAs used to chastise mainstream charity for; actually the average “child sponsorship” scheme is probably more accurate, in accounting terms, about how much your recurring contribution to the charity pool is helping Jaime from Honduras than many EA causes are. (I guess not liking that type of charity either is where you and the median EA agree and I differ :))
- ^
Judging by your book reviews, you’ve researched sufficiently to be able to offer more nuanced criticisms of development aid. So I’m not sure why you’d lead with this, or in other articles with anecdotes about how profoundly the whinging of a single drunk teenage voluntourist crushed your dreams of changing the world. It’s not even like there aren’t much better glib criticisms of EA or charity in general...
- ^
maybe because donations dried up...
- ^
I can’t speak for OP but I thought the whole point of its “worldview diversification buckets” was to discourage this sort of comparison by acknowledging the size of the error bars around these kinds of comparisons, and that fundamentally prioritisation decisions between them are influenced more by different worldviews than by the possibility of acquiring better data or making more accurate predictions around outcomes. This could be interpreted as an argument against the theme of the week and not just this post :-)
But I don’t think neuron counts are by any means the most unfavourable [reasonable] comparison for animal welfare causes: the heuristic that we have a decent understanding of human suffering and gratification, whereas any judgement about whether a particular intervention has a positive, negative or neutral impact on the welfare of a fish is guesswork, seems very reasonable and very unfavourable to many animal-related causes (even granting that fish have significant welfare ranges and that hedonic utilitarianism is the appropriate method for moral resource allocation). And of course there are non-utilitarian moral arguments in favour of one group of philanthropic causes or another (prioritise helping fellow moral beings vs prioritise stopping fellow moral beings from actively causing harm) which feel a little less fuzzy but aren’t any less contentious.
There are also of course error bars wrapped around individual causes within the buckets, which is part of the reason why GHW funds both GiveWell-recommended charities and neartermist policy work that might affect more organism life-years per dollar than Legal Impact for Chickens (but might actually be more likely to be counterproductive or ineffectual).[1] But that’s another reason why I think blanket comparisons are unhelpful. A related issue is that it’s much more difficult to estimate the marginal impact of research and policy work than of dispensing medicine or nets. The marginal impact of $100k more nets is easy to predict; the marginal impact of $100k more to a lobbying organization is not, even if you entirely agree with the moral weight they apply to their cause, and average cost-effectiveness is not always a reliable guide to scaling up funding (see the sketch below), particularly not for small, scrappy organizations doing an admirable job of prioritising quick wins, which are also likely to face increased opposition if they scale.[2] Some organizations which fit that bill sit in the GHW category, but the profile is much more representative of the typical EA-incubated AW cause. Some of them will run into diminishing returns as they run out of companies actually willing to engage with their welfare initiatives, others may become locked in positional stalemates, and some are much more capable of absorbing significant extra funding and putting it to good use than others. Past performance really doesn’t guarantee future returns to scale, and some types of organization are much more capable of achieving it than others, which happens to include many of the classic GiveWell-type GHW charities, and not many of the AW or speculative “ripple effect” GHW charities.[3]
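To make the marginal-vs-average distinction concrete, here’s a minimal sketch under an assumed, purely illustrative logarithmic returns curve; the functional form and numbers are mine, not any charity’s actual data:

```python
import math

# Toy diminishing-returns curve (illustrative): cumulative impact grows
# with the log of funding, so the average cost-effectiveness of past
# spending overstates the value of the marginal dollar.
def impact(funding):
    return 1_000 * math.log1p(funding / 100_000)  # welfare points (assumed)

for f in (100_000, 1_000_000, 10_000_000):
    avg = impact(f) / f
    marginal = (impact(f + 1_000) - impact(f)) / 1_000
    print(f"${f:>10,}: average {avg:.2e}/$ vs marginal {marginal:.2e}/$")
```

Under any concave returns curve the gap between average and marginal cost-effectiveness widens with scale, which is why a great track record at $100k tells you little about the value of the next $10m.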
I guess there are sound reasons why people could conclude that AW causes funded by OP were universally more effective than GHW ones or vice versa, but those appear to come more from strong philosophical positions (meat-eater problems, or disagreement with the moral relevance of animals) than from evidence and measurement.
- ^
For the avoidance of doubt, I’m acknowledging that there’s probably more evidence about the negative welfare impacts of practices Legal Impact for Chickens is targeting, and about their theory of change, than about the positive welfare impacts and efficacy of some reforms promoted in the GHW bucket, even given my much higher level of certainty about the magnitude of human welfare. And by extension I’m pointing out that comparisons between individual AW and GHW charities sometimes run the opposite way from the characteristic “AW helps more organisms but with more uncertainty” pattern.
- ^
There are much more likely to be well-funded campaigns to negate the impact of an organization targeting factory farming than ones to negate the impact of campaigns against malaria. Though on the other hand, animal cruelty doesn’t have as many proponents as the other side of virtually any economic or institutional reform debate.
- ^
There are diminishing returns to healthcare too: malaria nets’ cost-effectiveness is broadly proportional to malaria prevalence. But that’s rather more predictable than the returns to scale of anti-cruelty lobbying, which aren’t even necessarily positive beyond a certain point if the well-funded meat lobby gets worried enough.
- ^
“Indistinguishable from magic” is an Arthur C Clarke quote about “any sufficiently advanced technology”, and I think you’re underestimating the complexity of building a generation ship and keeping it operational for hundreds, possibly thousands, of years in deep space. Propulsion is pretty low on the list of problems if you’re skipping FTL travel, though you’re not likely to cross the galaxy with a solar sail or a 237 mN thruster using xenon as propellant. (FWIW I actually work in the space industry and spent the last week speaking with people about projects to extract oxygen from lunar regolith and assemble megastructures in microgravity, so it’s not like I’m just dismissing the entire problem space here.)
I think that the distinction between killing all and killing most people is substantially less important than those people (and you?) believe.
I’m actually in agreement with that point, but more because I put more weight on the first 8 billion than on the orders of magnitude more hypothetical future humans. (I think in a lot of catastrophe scenarios technological knowledge and ambition rebounds just fine eventually, possibly stronger.)
This is an absurd claim.
Why is it absurd? If humans can solve the problem of sending a generation ship to Alpha Centauri, an intelligence smart (and malevolent) enough to destroy 8 billion humans in their natural environment surely isn’t going to be stymied by the complexities involved in sending some weapons after them or transmitting a copy of itself to their computers...
Positing an interstellar civilization seems to be exactly what Thorstad might call a “speculative claim” though. Interstellar civilization operating on technology indistinguishable from magic is an intriguing possibility with some decent arguments against (Fermi, lightspeed vs current human and technological lifespans) rather than something we should be sufficiently confident of to drop our credences in the possibility of humans becoming extinct down to zero in most years after the current time of perils,[1] and even if it were achieved I don’t see why nukes and pandemics and natural disaster risk should be approximately constant per planet or other relevant unit of volume for small groups of humans living in alien environments[2]
Certainly this doesn’t seem like a less speculative claim than one sometimes offered as a criticism of longtermism’s XR-focus: that the risk of human extinction (as opposed to significant near-term utility loss) from pandemics, nukes or natural disasters is already zero[3] because of things that already exist. Nuclear bunkers, isolation and vaccination, and the general resilience of even unsophisticated lifeforms to natural disasters are congruent with our current scientific understanding in a way which faster-than-light travel isn’t, and the farthest reaches of the galaxy aren’t a less hostile environment for human survival than a post-nuclear earth.
And of course any AGI determined to destroy humans is unlikely to be less capable than relatively stupid, short-lived, oxygen-breathing lifeforms in space, so the AGI that destroys humans after they acquire interstellar capabilities is no more speculative than the AI that destroys humans next Tuesday. A persistent stable “friendly AI” might insulate humans from all these risks if sufficiently powerful (with or without space travel) as you suggest but that feels like an equally speculative possibility—and worse still one which many courses of action aimed at mitigating AI risk have a non-zero possibility of inadvertently working against....
- ^
if the baseline rate after the current time of peril is merely reduced a little by the nonzero possibility that interstellar travel could mitigate x-risk but remains nontrivial, the expected number of future humans alive still drops off sharply the further we go into the future (at least without countervailing assumptions about increased fecundity or longevity)
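For the arithmetic behind this: with any constant nontrivial per-period extinction risk, cumulative survival probability decays exponentially. A minimal sketch (the risk figures below are purely illustrative assumptions, not estimates):

```python
# P(humanity still around after t centuries) = (1 - r) ** t
# for a constant per-century extinction risk r (illustrative values).
for r in (0.001, 0.01):
    for t in (10, 100, 1_000, 10_000):
        p = (1 - r) ** t
        print(f"risk {r:.1%}/century, after {t:>6} centuries: P = {p:.3g}")
```

Even a 0.1% per-century risk leaves survival odds of roughly 1 in 22,000 after a million years, so the expected number of far-future people shrinks sharply unless the risk itself goes to ~zero.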
- ^
Individual human groups seem significantly less likely to survive a given generation the smaller they are, the further they are from earth and the more they have to travel, to the point where the benefit against catastrophe of having humans living in other parts of the universe might be pretty short-lived. If we’re not disregarding small possibilities, there’s also the possibility of a novel existential risk from provoking alien civilizations...
- ^
I don’t endorse this claim FWIW, though I suspect that making humans extinct as opposed to severely endangered is more difficult than many longtermists predict.
- ^
This feels like an isolated demand for rigour, since as far as I can see Thorstad’s[1] central argument isn’t that a particular course of the future is more plausible, but that [popular representations of] longtermist arguments themselves don’t consider the full range of possibilities, don’t discount for uncertainty, and that apparently modest-sounding claims that existential risk is non-zero and that humanity could last a long time if we survive near-term threats are compatible only if one makes strong claims about the hinginess of history[2]
I don’t see him trying to build a more accurate model of the future[3] so much as pointing out how very simple changes completely change longtermist models. As such, his models are intentionally simple, and Owen’s expansion above adds more value for anyone actively trying to model a range of future scenarios. But I’m not sure why it would be incumbent on the researcher arguing against choosing a course of action based on long-term outcomes to be the one who explicitly models the entire problem space. I’d turn that around and question why longtermists, who presumably don’t consider the whole endeavour of building long-term predictions into our decision theory to be futile, generally reject out of hand low-probability outcomes with Pascalian payoffs that favour the other option, or simply assume the asymmetry of outcomes works in their favour.
Now personally I’m fine with “no, actually I think catastrophes are bad”, but that’s because I’m focused on the near term, where it really is obvious that nuclear holocausts aren’t going to have a positive welfare impact. Once we’re insisting that our decisions ought to be guided by tiny subjective credences in far-future possibilities with uncertain likelihood but astronomical payoffs, and that it’s an error not to factor unlikely interstellar civilizations into our calculations of what we should do if they’re big enough, it seems far less obvious that the astronomical stakes skew in favour of humanity.
The Tarsney paper even explicitly models the possibility of non-human galactic colonization, but with the unjustified assumption that no non-humans will be capable of converting resources to utility at a higher rate than [post]humans, so their emergence as competitors for galactic resources merely “nullifies” the beneficial effects of humanity surviving. But from a total welfarist perspective, the problem here isn’t just that maximizing the possible welfare across the history of the universe may not be contingent on the long-term survival of the human species,[4] but that humans surviving to colonise galaxies might diminish galactic welfare. Schwitzgebel’s argument that human extinction might actually be net good for total welfare is only a mad hypothetical if you reject fanaticism: otherwise it’s the logical consequence of accepting the possibility, however small, that a nonhuman species might convert resources to welfare much more efficiently than us.[5] Now a future of decibillions of aliens building Dyson Spheres all over the galaxy because there are no pesky humans in their way sounds extremely unlikely, and perhaps even less likely than a galaxy filled with the same fantastic tech to support quadrillions of humans—a species we at least know exists and has some interest in inventing Dyson Spheres—but despite this the asymmetry of possible payoff magnitudes may strongly favour not letting us survive to colonise the galaxy.[6]
In the absence of any particular reason for confidence that the EV of one set of futures is definitely higher than the others, it seems like you end up reaching for heuristics like “but letting everyone die would be insane”. I couldn’t agree more, but the least arbitrary way to do that is to adjust the framework to privilege the short term, with discount rates sufficiently high to neuter payoffs so speculative and astronomical that we can’t rule out the possibility they exceed the payoff from [not] letting eight billion humans die.[7] Since that discount rate reflects extreme uncertainty about what might happen and what payoffs might look like, it also feels more epistemically humble than basing an entire worldview on the long-tail outcome of some low-probability far futures whilst dismissing other equally unsubstantiated hypotheticals because their implications are icky. And I’m pretty sure this is what Thorstad wants us to do, not to place high credence in his point estimates or give up on X-risk mitigation altogether.
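A minimal sketch of why even a modest constant discount rate neuters astronomical payoffs (the payoff size and horizon are illustrative assumptions): present value is PV = P / (1 + d)^t, computed in logs here to avoid overflow:

```python
import math

# PV of a payoff P realised t years from now at annual discount rate d.
# 10^50 welfare units a million years out (illustrative numbers only).
P, t = 1e50, 1_000_000
for d in (0.001, 0.01, 0.05):
    log10_pv = math.log10(P) - t * math.log10(1 + d)
    print(f"d = {d:.1%}: PV = 10^{log10_pv:.0f}")
```

Even at 0.1% a year the 10^50 payoff discounts to effectively nothing long before the horizon, which is exactly the “neutering” effect described above.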
- ^
For the avoidance of doubt I am a different David T ;-)
- ^
which doesn’t of course mean that hinginess is untrue, but does make it less a general principle of caring about the long term and more a relatively bold and specific claim about the distribution of future outcomes.
- ^
in the arguments referenced here anyway. He has also written stuff which attempts to estimate different XR base rates from those posited by Ord et al, which I find just as speculative as the longtermists’
- ^
there are of course ethical frameworks other than maximizing total utility across all species which give us reason to prefer 10^31 humans over a similarly low-probability von Neumann civilization involving 10^50 aliens or a single AI utility monster (I actually prefer them, so no proposing destroying humanity as a cause area from me!) but they’re different from the framework Tarsney and most longtermists use, and they open the door to other arguments for weighting current humans over far-future humans.
- ^
We’re a fairly stupid, fragile and predatory species capable of experiencing strongly negative pain and emotional valences at regular intervals over fairly short lifetimes, with competitive social dynamics, very specific survival needs and a wasteful approach to consumption, so it doesn’t seem obvious or even likely that humanity and its descendants will be even close to the upper bound for converting resources to welfare...
- ^
Of course, if you reject fanaticism, the adverse effects of humans not dying in a nuclear holocaust on alien utility monsters are far too remote and unlikely and frankly a bit daft to worry about. But if you accept fanaticism (and species-neutral total utility maximization), it seems as inappropriate to disregard the alien Dyson spheres as the human ones...
- ^
Disregarding very low probabilities which are subjective credences applied to future scenarios we have too little understanding of to exclude (rather than frequencies inferred from actual observation of their rarity) is another means to the same end, of course.
- ^
Ultimately the safety of the space domain and safety on earth from space debris are linked by both overlapping technologies for monitoring and mitigation and the overlapping principle that entities ought to take responsibility for what they put into space. And from that perspective it would be pretty hard to lecture foreign universities on why they should spend a few grand on safely deorbiting their Cubesat to mitigate a very small risk of hitting other satellites whilst being the entity that decided to abdicate responsibility for safely deorbiting the ISS to mitigate a very small risk of hitting a densely populated urban area—to save a lot more money but still only about 4 months of ISS budget.
Ultimately they’re optimising for technological potential rather than saving lives, and the budget for this is far more closely linked to debates like “but we can’t trust the Russians to manage the deorbiting process, can we”, “does it have commercialization potential” and “could it be turned into an ASAT weapon” than “would it save more lives than the debris could possibly threaten if we bought $843m worth of medicine instead?”
The ISS itself isn’t particularly likely to create space debris (its orbit is already lower than the major constellations, anything with thrusters is going to move out of the way, and if it breaks up as it hits the upper atmosphere the pieces will rain over earth rather than remain in orbit). But the tens of thousands of other satellites being launched this decade have plenty of potential to create space debris; space is a commons, and space law is set by international treaty with lots of blank spaces (unlike, for example, heavily-regulated airspace).
If the deorbiting strategy for the ISS is “we decided that, to save a third of the annual budget we usually put in, we’d do a reentry with limited control from its onboard thrusters because only a few islands might get hit, and in fact even though we missed the target we didn’t hurt anything except an abandoned chicken shed”, or “we left it to Roscosmos to figure out”,[1] nobody is going to listen to NASA’s guidelines for a safer space (not even Congress). Especially since all the precautions everyone else might need to take will cost them significant money.
- ^
there are other political considerations to leaving it to Roscosmos to figure out, of course, even though they’re hardly likely to target California with it, and tech developed to deorbit the ISS isn’t going to be more useful as an antisatellite weapon than the dozens of existing civil projects to create tugs for deorbiting and servicing defunct smaller satellites
- ^
Just to clarify on the orbital debris problem: it’s not just the risk of the ISS specifically hitting things on the way down (which is non-zero but at the same time not that likely: the ISS is too big to overlook and will move in a reasonably predictable manner, so things will generally adjust their orbits in advance to move out of the way, and most of them have higher orbits anyway). It’s also that when the operators of thousands of other satellites[1]—from Starlink to university cubesats—are being advised/required to have specific end-of-life deorbiting strategies to avoid creating more orbital debris, all of which cost them money in terms of additional man-hours and launch mass, and lots of research dollars are being spent on addressing the problem of orbital debris, the world’s major space agencies can hardly state that their end-of-life strategy for the ISS is “everyone else gets out of the way, and when it breaks apart in the upper atmosphere the pieces will land somewhere like Australia or the sea and probably won’t do any real harm”. It’s really bad politics to demand everyone else be a responsible citizen whilst shrugging your shoulders about the fate of your flagship. And nearly all the alternatives—especially those discussed in the white paper—would cost more.
And yes, in the context of ISS operations $843m isn’t even that big a number, which I realise may seem obscene in a country where that sum of money would buy the entire population a couple of malaria nets.
(FWIW I still think you can [i] make a good case that the project is premature, the wrong approach or poor value for money and [ii] make a good case that SpaceX has done unusually well in turning pork-barrel projects into useful, value-for-money services and may do so again despite the project being premature, the wrong approach and/or poor value for money)
- ^
most of which were launched in the past few years, which is why history isn’t a reliable guide...
- ^
Needless to say, NASA does not use EA math in its budgeting ;-)
The world’s major space agencies abandoning the biggest thing we ever put into space in an uncontrolled deorbit is a politically untenable option (and the project represents only around a third of the estimated $3bn annual budget to keep the ISS operational, although there’s an argument that this spend has more ROI...). That’s even more the case against a backdrop of increasing calls for more regulation around everyone else’s launches and orbits and deorbits to prevent collisions in space[1]
The potential risk to human life of uncontrolled ISS reentry therefore isn’t the only factor in the decision, and probably not even the main one, though I don’t think the deorbiting of stuff that is generally orders of magnitude smaller gives much of a guide to the magnitude of that risk.[2] (There are of course also other arguments against spending money on this project, such as the desirability of maintaining the ISS or the possibility of raising it to a graveyard orbit for future reuse/recycling instead of destroying it; and other arguments in favour, such as the likelihood that at least some of SpaceX’s R&D can be deployed to more productive projects in future.) Space agencies usually aren’t especially rigorous in analysing cost effectiveness anyway, and cost-per-life-saved is a pretty minor factor in why such contracts are awarded. Space funding is industrial policy targeting notionally large medium-term returns from technology, not evidence-based philanthropy trying to find the most cost-effective way to remedy problems.
- ^
this potentially compounds: each debris impact creates more orbital debris, with the theoretical possibility of rendering some orbits unusable in future. Avoiding this scenario might still seem wasteful from the point of view of a Ugandan farmer whose neighbourhood could be fed for years on the research budgets being devoted to maintaining congestion-free orbits, but rather a lot of the developed world depends on access to satellite technology, and I suspect even some NGOs in Uganda make some use of GPS and satcomms.
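A toy sketch of the compounding dynamic (the Kessler-syndrome intuition; every parameter below is invented for illustration, not a real orbital population model):

```python
# Collision frequency scales roughly with the square of the object
# count, and each collision spawns many new fragments, so growth is
# super-linear and eventually runs away. All numbers are illustrative.
n = 10_000.0   # objects in a congested orbital shell (assumed)
k = 1e-8       # collisions per object-pair per year (assumed)
frag = 100     # trackable fragments per collision (assumed)
for year in range(181):
    if year % 30 == 0:
        print(f"year {year:3}: ~{n:,.0f} objects")
    n += (k * n * n / 2) * frag
```

With these made-up parameters the population roughly decuples over the simulated period, and the growth rate itself keeps accelerating, which is the “compounding” worry in a nutshell.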
- ^
but that risk is probably still low: even with it re-entering via gradual orbital decay, operators would presumably still have sufficient control via onboard thrusters to direct it to scatter its debris over thousands of km of ocean or sparsely populated land, as with Skylab...
- ^
The point of credentialism is that the ideal circumstances for an individual to evaluate ideas don’t exist very often. Medical practitioners aren’t always right, and homeopaths or opinion bloggers aren’t always wrong, but bearing in mind that I’m seldom well enough versed in the background literature to make my own mind up, trusting the person with solid credentials over the person with zero or quack credentials is likely to be the best heuristic in the absence of any solid information to the contrary.
And yes, of course sometimes it isn’t, and sometimes the bar is completely arbitrary (the successful applicant will have some sort of degree from some sort of top-20 university) or the level of distinction irrelevant (his alma mater is more reputable than hers), and sometimes the credentials themselves are suspect.
I’d add that the transitional effects of climate change look like they would have particularly negative effects on poor crop farmers in places like the Indian subcontinent, who are unlikely to source much (if any) of their diet from factory farms, and relatively little effect on wealthy Western consumers, who eat particularly large quantities of factory-farmed meat (it’s even conceivable that price pressures resulting from shortages of some staple crops in some countries could benefit Western factory farms’ profitability...), so it’s really difficult to see the negative animal welfare impact of slowing climate change down a bit.
For the record, I agree that evolutionary mechanisms need not hold any moral force over us, and lean personally towards considering acts to save human lives of being approximately equal value irrespective of distance and whether anyone actually notices or not. But I still think it’s a fairly strong counterargument to point out that the vast majority of humanity does attach moral weight to proximity and community links, as do the institutions they design to do good, and for reasons.
This argument is understandably unpopular because it’s inconsistent with core principles of EA.
But the principle of reciprocity (and adjacent kin selection arguments) absolutely is the most plausible argument for why the human species evolved to behave in an apparently altruistic[1] manner and value it in others in the first place, long before we started on abstract value systems like utilitarianism, and in many cases people still value or practice some behaviours that appear altruistic despite indifference to or active disavowal of utilitarian or deontological arguments for improving others’ welfare.
- ^
there’s an entire literature on “reciprocal altruism”
- ^
I think there’s plenty of place for argument in moral reflection, but part of that argument includes accepting that things aren’t necessarily “obvious” or “irrefutable” because they’re intuitively appealing. Personally I think the drowning child experiment is pretty useful as thought experiments go, but human morality in practice is so complicated that even Peter Singer doesn’t act consistently with it, and I don’t think it’s because he doesn’t care.
If being thoughtful, sincere and selfless is a core value, it seems like it would be more of a problem if influential people in the community felt they had to embrace the label even when they didn’t think it was valuable or accurate.
I suspect a lot of the ‘EA adjacent’ description comes from question marks over particular EA stances or aspects of its image, rather than from doubting that some of their friends could benefit from participating in the community; and part of it is less a rejection of EA altogether and more an acknowledgement that they often find themselves at least as closely aligned with people doing great work outside the community.
(Fwiw I technically fit into the “adjacent” bracket from the other side: never been significantly active in the community, like some of its ideas and values—many of which I believed in before ‘EA’ was a thing—and don’t identify with, or actively disagree with, other ideas commonly associated with EA, so it wouldn’t really make much sense to call myself an EA.)
I think you raise an important point: people legitimately have different opinions on what the scale should mean, and there might also be cultural factors that skew how people perceive they should respond on aggregate. If there is such a thing as a true hedonic scale for how people actually feel about their life that can be compared from person to person, survey data isn’t an ideal proxy for it.
But I don’t think the average person responding assumes the valence symmetry that you probably assume. Most people do want to go on living and so it’s not unreasonable to assume that the bottom half of the scale which goes all the way up to the “best possible life” isn’t supposed to represent different degrees of unbearable torture. I imagine most of the large fraction of the world’s population who awarded themselves a 4⁄10 on that scale would be utterly horrified by the idea that this might imply their life wasn’t worth living.
Yep. A significant portion of the relevant health economics literature GiveWell researchers will be familiar with uses measures which do treat lives as non-equal, typically the “value of a statistical life”, which represents how much society is willing to pay to save a life and is broadly proportional to GDP per capita. The rationale is basically that survivors in richer societies are capable of generating enough wealth to cover the costs of their treatment, but if you’re valuing lives from an altruistic perspective then you really, really don’t want to weight them based on future ability to pay...
That “value of a statistical life” obviously factors in differences in opportunities and values the positive externalities generated from surviving, but it vastly overweights differences in actual quality of life—and even on value-of-a-statistical-life grounds, malaria nets and vitamin supplementation in Sub-Saharan Africa are generally still seen as cost-effective.[1] From a pure hedonic utilitarian perspective you might want to use some sort of subjective wellbeing factor instead. Multiply that by the expected future life of the person saved and you get the WELLBY as an alternative metric.[2]
But the difference in average self-reported subjective wellbeing on a linear scale is… really not very big compared with the differences in costs between countries, and probably isn’t going to change their recommendations very much. Taking the example of the Democratic Republic of the Congo: Congolese people polled do indeed rate their happiness lower than many other countries on the World Happiness Report’s nominally linear scale, at only 3.3 out of 10. But India and Bangladesh, highlighted in the post as countries which don’t have ongoing conflict and plausibly have better economic opportunities, score only 4.1 and 3.8 respectively, so factoring in the weightings of subjective wellbeing—if you believe them to be accurate—would change very little. (The main reason why comparatively few nets are dispensed in India and Bangladesh is that the local malaria variety is a lot less prevalent and a lot less lethal. The life expectancy difference to Congo shrinks if you factor out malaria too...) And if children survive infancy, their lives are typically lived over spans of 60-70 years. It’s unlikely the global distribution of happiness will be identical 30 years from now, and it’s entirely possible that the countries with the lowest happiness will see the biggest improvements.
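To put rough numbers on that (a back-of-envelope sketch using the survey scores above; the 60-year remaining lifespan is an assumption, not a demographic estimate):

```python
# WELLBYs per life saved ≈ average wellbeing score × expected
# remaining life-years (60 assumed for a surviving child).
scores = {"DR Congo": 3.3, "Bangladesh": 3.8, "India": 4.1}
years = 60
for country, score in scores.items():
    print(f"{country}: ~{score * years:.0f} WELLBYs per life saved")
# The spread between the extremes is only 4.1 / 3.3 ≈ 1.24x, far
# smaller than typical cross-country differences in cost per life saved.
```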
So whilst GiveWell may have made the judgement to weight lives equally on ideological grounds, the actual data you’d need to build a robust argument for doing things differently tends either not to exist or to broadly support what they’re already doing...
- ^
people in richer countries not only face proportionally higher healthcare costs in general, but also diminishing returns since the treatments they’re at risk of missing out on tend to be expensive and complex surgery and new experimental drugs, rather than vitamins and nets...
- ^
using national life expectancy figures which are significantly affected by malaria prevalence in infants as weights which discourage supplying malaria nets is questionable, but in theory life expectancy measures could be adjusted to factor malaria out....
- ^
If the slow death involves no pain, of course it’s credible. (The electric shock is, incidentally, generally insufficient to kill; the problem of the fish reviving is usually solved by immersion in ice slurry...) It’s also credible that neither is remotely as painful as a two-week malaria infection, or a few years of malaria infections, which is (much of) what sits on the other side of the trade here.