Scriptwriter for RationalAnimations! Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc. Also a big fan of EA / rationalist fiction!
Jackson Wagner
Kind of a funny selection effect going on here: if you pick sufficiently promising / legible / successful orgs (like Against Malaria Foundation), isn’t that just funging against OpenPhil funding? This leads me to want to upweight new and not-yet-proven orgs (like the several new AIM-incubated charities), plus things like PauseAI and Wild Animal Initiative that OpenPhil feels they can’t fund for political reasons. (Same argument would apply for invertebrate welfare, but I personally don’t really believe in invertebrate welfare. Sorry!)
I’m also somewhat saddened by the inevitable popularity-contest nature of the vote; I feel like people are picking orgs they’ve heard of and picking orgs that match their personal cause-prioritization “team” (global health vs x-risk vs animals). I like the idea that EA should be experimental and exploratory, so (although I am a longtermist myself), I tried to further upweight some really interesting new cause areas that I just learned about while reading these various posts:
- Accion Transformadora’s crime-reduction stuff seems like a promising new space to explore for potential effective interventions in middle-income countries.
- One Acre Fund is potentially neat, I’m into the idea of economic-growth-boosting interventions and this might be a good one.
- It’s neat that Observatorio de Riesgos Catastroficos is doing a bunch of cool x-risk-related projects throughout Latin America; their nuclear-winter-resilience-planning stuff in Argentina and Brazil seems like a particularly well-placed bit of local lobbying/activism.
But alas, there can only be three top-three winners, so I ultimately spent my top votes on Team Popular Longtermist Stuff (Nucleic Acid Observatory, PauseAI, MATS) in the hopes that one of them, probably PauseAI, would become a winner.
(longtermist stuff)
1. Nucleic Acid Observatory
2. Observatorio de Riesgos Catastroficos
3. PauseAI
4. MATS
(interesting stuff in more niche cause areas, which i sadly doubt can actually win)
5. Accion Transformadora
6. One Acre Fund
7. Unjournal
(if longtermism loses across the board, I prefer wild animal welfare to invertebrate welfare)
8. Wild Animal Initiative
9. Faunalytics
I don’t know anything about One Acre Fund in particular, but it seems plausible to me that a well-run intervention of this sort could potentially beat cash transfers (just as many Givewell-recommended charities do).
Increasing African agricultural productivity has been a big cause area for groups like the Bill & Melinda Gates Foundation for a long time. Hannah Ritchie, of OurWorldInData, explains here why this cause seems so important—it just seems kinda mathematically inevitable that if labor productivity doesn’t improve, these regions will be trapped in poverty forever. (But improving productivity seems really easy—just use fertilizer, use better crop varieties, use better farming methods, etc.) So this seems potentially similar to cash transfers, insofar as if we did cash transfers instead, we’d hope to see people spending a lot of the money on better agricultural inputs!
Notably, people who are into habitat / biodiversity preservation and fighting climate change really like the positive environmental externalities of improving agricultural productivity. (The more productive the world’s farmland gets, the less pressure there is to chop into jungle and farm more land.) So if you are really into the environment, maybe those positive eco externalities make a focused intervention like this much more appealing than cash transfers, which are more about the benefits to the direct recipients and local economy.
One could look at this as a kind of less-libertarian, more top-down alternative to cash transfers, which makes it look bad. (Basically—give people the cash, and wouldn’t they end up making these agricultural improvements themselves eventually? Wouldn’t cash outperform, since central planning underperforms?) But you could also look at it as a very pro-libertarian, economic-growth-oriented intervention designed to provide public goods and create stronger markets, which makes it look good. (Hence all the emphasis about educating farmers to store crops and sell when prices are high, or preemptively transporting agricultural inputs around to local villages where they can then be sold. Through this lens I feel like “they’re solving coordination problems and providing important information to farmers. Of course a sufficiently well-run version of this charity has the potential to outperform cash!”) This is basically me rephrasing your second bullet point.
Just a feeling, but I think your first bullet point (loans are more efficient because the money is paid back) wouldn’t obviously make this more efficient than cash transfers? (Maybe you are alluding to this with your use of “you believe”.) Yes, making loans is “cheaper than it first seems” because the money is paid back. But giving cash transfers is also “better than it first seems” because the money (basically stimulus) has a multiplier effect as it percolates throughout the local economy. Whether it’s better for people to buy farming tools with cash they’ve been loaned (and then you get the money back and make more loans to more people who want to buy tools), versus cash they’ve been given (and then the cash percolates around the local economy and again other people make purchases), seems like a complicated macroeconomics question that might vary based on the local unemployment & inflation rate or etc. It’s not clear to me that one strategy is obviously better.
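If it helps to make that concrete, here’s a toy back-of-the-envelope sketch of the two mechanisms. Every number in it (the repayment rate, the number of re-lending cycles, the local fiscal multiplier) is an assumption I invented purely for illustration, not One Acre Fund data:

```python
# Toy comparison: revolving loan fund vs. one-time cash transfer.
# All parameters are invented illustrative assumptions, not real data.

def revolving_loan_value(initial_fund, repayment_rate, n_cycles):
    """Total value of inputs purchased with loaned money, assuming whatever
    gets repaid is lent out again next cycle (the fund shrinks by defaults)."""
    total, fund = 0.0, initial_fund
    for _ in range(n_cycles):
        total += fund            # recipients buy tools/seeds/fertilizer with the loan
        fund *= repayment_rate   # repaid portion gets re-lent next cycle
    return total

def cash_transfer_value(transfer, local_multiplier):
    """Total local spending from a one-time cash transfer, using a crude
    multiplier for the money percolating around the local economy."""
    return transfer * local_multiplier

print(revolving_loan_value(100, repayment_rate=0.9, n_cycles=5))  # ~409.5
print(cash_transfer_value(100, local_multiplier=2.5))             # 250.0
```

Depending on which made-up parameters you plug in, either mechanism can come out ahead—which is basically my point that this is an empirical macro question, not something you can settle from the armchair.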
But these are all just thoughts, of course—I too would be curious if One Acre Fund has some real data they can share.
Hi! Jackson Wagner here, former aerospace engineer—I worked as a systems engineer at Xona Space Systems (which is trying to develop next-gen GPS technology, and is recently getting involved in a military program to create a kind of backup for GPS). I am also a big fan of the ALLFED concept.
Here are some thoughts on the emergency satellite concept mentioned—basically I think this is a bad idea! I am sorry that this is a harsh and overly negative rant that harps on one small detail of the post; I think the other ideas you mention are pretty good!
1. No way will you be able to build and launch a satellite for $300K?? Sure, if you are SpaceX, with all the world’s most brilliant engineers, and you can amortize your satellite design costs over tens of thousands of identical Starlink copies, then maybe you can eventually get marginal satellite construction cost down to around $300K. But for the rest of us mere mortals, designing and building individual satellites, that is around the price of building and launching a tiny cubesat (like the pair I helped build at my earlier job at a tiny Virginia company called SpaceQuest).
2. I’m pretty skeptical that a tiny cubesat would be able to talk directly to cellphones? I thought direct-to-cell satellites were especially huge due to the need for large antennas. Although I guess Lynk Global’s satellites don’t seem so big, and probably you can save on power when you’re just transmitting the same data to everybody instead of trying to send and receive individual messages. Still, I feel very skeptical that a minimum-viable cubesat will have enough power to do much of use. (Many cubesats can barely fit enough batteries to stay charged through eclipse!)
3. How are you going to launch and operate this satellite amid a global crisis?? Consider that even today’s normal cubesat projects, happening in a totally benign geopolitical / economic environment, have something like a 1/3 rate of instant, “dead on arrival” mission failure (ie the ground crew is never able to contact the cubesat after deployment). In the aftermath of nuclear war or other worldwide societal collapse, you are going to have infinitely more challenges than the typical university cubesat team. Many ground stations will be offline because they’re located in countries that have collapsed into anarchy, etc! Who will be launching rockets, aside from perhaps the remnants of the world’s militaries? Your satellite’s intended orbit will be much more radioactive, so failure rates of components will be much higher! Basically, space is hard and your satellite is not going to work. At the very least, you’d want to make three satellites—one to launch and test, another to keep in underground storage for a real disaster (maybe buy some rocket, like a RocketLab Electron, to go with it!), and probably a spare.
(If the disaster is local rather than global, then you’d have an easier time launching from eg the USA to help address a famine in Africa. But in this scenario you don’t need a satellite as badly—militaries can airdrop leaflets, neighboring regions can set up radio stations, we can ship food aid around on boats, etc.)
4. Are you going to get special permission from all the world’s governments and cell-network providers, so that you can just broadcast texts to anyone on earth at any time? Getting FCC licensed for the right frequencies, making partnerships with all the cell-tower providers (or doing whatever else is necessary so that phones are pre-configured to be able to receive your signal), etc, seems like a big ask!
5. Superpower militaries are already pretty invested in maintaining some level of communications capability through even a worst-case nuclear war. (Eg, the existing GPS satellites are some of the most radiation-hardened satellites ever, in part because they were designed in the 1980s to remain operational through a nuclear war. Modern precision ASAT weapons could take out GPS pretty easily—hence the linked Space Force proposal for backup “resilient GPS” systems. I know less about military comms systems, but I imagine the situation is similar.) Admittedly, most of these communications systems aren’t aimed at broadcasting information to a broad public. But still, I expect there would be some important communications capability left even during/after an almost inconceivably devastating war, and I would bet that crucial information could be disseminated surprisingly well to places like major cities.
6. Basically instead of building satellites yourselves, somebody should just double-check with DARPA (or Space Force or whoever) that we are already planning on keeping a rocket’s worth of Starlink satellites in reserve in a bunker somewhere. This will have the benefit of already being an important global system (many starlink terminals all around the world), reliable engineering, etc.
Okay, hopefully the above was helpful rather than just seeming mean! If you are interested in learning more about satellites (or correcting me if it turns out I’m totally wrong about the feasibility of direct-to-cellphone from a cubesat, or etc), feel free to message me and we could set up a call! In particular I’ve spent some time thinking about what a collapse of just the GPS system would look like (eg if China or Russia did a first-strike against western global positioning satellites as part of some larger war), which might be interesting for you guys to consider. (Losing GPS would not be totally devastating to the world by itself—at most it would be an economic disruption on the scale of covid-19. But the problem is that if you lose GPS, you are probably also in the middle of a world war, or maybe an unprecedented worst-case solar storm, so you are also about to lose a lot of other important stuff all at once!)
Concluding by repeating that this was a hastily-typed-out, kinda knee-jerk response to a single part of the post, which doesn’t impugn the other stuff you talk about!
Personally, of the other things you mentioned, I’d be most excited about both of the “#1” items you list—continuing research on alternative foods themselves, and lobbying naturally well-placed-to-survive-disaster governments to make better plans for resiliency. Then #4 and #5 seem a little bit like “do a bunch of resiliency-planning research ourselves”, which initially struck me as less good than “lobbying governments to do resiliency planning” (since I figure governments will take their own plans more seriously). But of course it would also be great to be able to hand those governments detailed, thoughtful information for them to start from and use as a template, so that makes #4 and #5 look good again to me. Finally, I would be really hyped to see some kind of small-scale trials of ideas like seaweed farming, papermill-to-sugar-mill conversions, etc.
Cross-posting a lesswrong comment where I argue (in response to another commenter) that not only did NASA’s work on rocketry probably benefit military missile/ICBM technology, but its work on satellites/spacecraft also likely contributed to military capabilities:
Satellites were also plausibly a very important military technology. Since the 1960s, some applications have panned out, while others haven’t. Some of the things that have worked out:
GPS satellites were designed by the air force in the 1980s for guiding precision weapons like JDAMs, and only later incidentally became integral to the world economy. They still do a great job guiding JDAMs, powering the style of “precision warfare” that has given the USA a decisive military advantage since 1991’s first Iraq war.
Spy satellites were very important for gathering information on enemy superpowers, tracking army movements, etc. They were especially good for helping both nations feel more confident that their counterpart was complying with arms agreements about the number of missile silos, etc. The Cuban Missile Crisis was kicked off by U-2 spy-plane flights photographing partially-assembled missiles in Cuba. For a while, planes and satellites were both in contention as the most useful spy-photography tool, but eventually even the U-2’s successor, the incredible SR-71 Blackbird, lost out to the greater utility of spy satellites.
Systems for instantly detecting the characteristic gamma-ray flashes of nuclear detonations that go off anywhere in the world (I think such systems are included on GPS satellites), and for giving early warning by tracking ballistic missile launches during their boost phase (the Soviet version of this system famously misfired and almost caused a nuclear war in 1983, which was fortunately forestalled by one Lieutenant Colonel Stanislav Petrov).
Some of the stuff that hasn’t:
The air force initially had dreams of sending soldiers into orbit, maybe even operating a military base on the moon, but could never figure out a good use for this. The Soviets even test-fired a machine-gun built into one of their Salyut space stations: “Due to the potential shaking of the station, in-orbit tests of the weapon with cosmonauts in the station were ruled out. The gun was fixed to the station in such a way that the only way to aim would have been to change the orientation of the entire station. Following the last crewed mission to the station, the gun was commanded by the ground to be fired; some sources say it was fired to depletion”.
Despite some effort in the 1980s, we were unable to figure out how to make “Star Wars” missile defense systems work anywhere near well enough to defend us against a full-scale nuclear attack.
Fortunately we’ve never found out if in-orbit nuclear weapons, including fractional orbit bombardment weapons, are any use, because they were banned by the Outer Space Treaty. But nowadays maybe Russia is developing a modern space-based nuclear weapon as a tool to destroy satellites in low-earth orbit.
Overall, lots of NASA activities that developed satellite / spacecraft technology seem like they had a dual-use effect advancing various military capabilities. So it wasn’t just the missiles. Of course, in retrospect, the entire human-spaceflight component of the Apollo program (spacesuits, life support systems, etc) turned out to be pretty useless from a military perspective. But even that wouldn’t have been clear at the time!
Rethink’s weights unhedged in the wild: the most recent time I remember seeing this was when somebody pointed me towards this website: https://foodimpacts.org/, which uses Rethink’s numbers to set the moral importance of different animals. They only link to where they got the weights in a tiny footnote on a secondary page about methods, and they don’t mention any other ways that people try to calculate reference weights, or anything about what it means to “assume hedonism” or etc. Instead, we’re told these weights are authoritative and scientific because they’re “based on the most elaborate research to date”.
IMO it would be cool to be able to swap between Rethink’s weights, versus squared neuron count or something, versus everything-is-100%. As is, they do let you edit the numbers yourself, and also give a checkbox that makes everything equal 100%. Which (perhaps unintentionally) is a pretty extreme framing of the discussion!! “Are shrimp 3% as important as a human life (30 shrimp = 1 person)? Or 100%? Or maybe you want to edit the numbers to something in-between?”
I think the foodimpacts calculator is a cool idea, and I don’t begrudge anyone an attempt to make estimates using a bunch of made-up numbers (see the ACX post on this subject) -- indeed, I wish the calculator went more out on a limb by trying to include the human health impacts of various foods (despite the difficulties / uncertainties they mention on the “methods” page). But this is the kind of thing that I was talking about re: the weights.
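To illustrate the kind of weight-swapping toggle I mean, here’s a rough sketch. The species weights and neuron counts below are placeholder numbers I made up for illustration—they are not Rethink Priorities’ actual published welfare ranges, and the “squared neuron count” scheme is just one arbitrary alternative among many:

```python
# Hypothetical sketch of a "weighting scheme" toggle for a calculator like foodimpacts.
# All weights and neuron counts are rough placeholders, NOT Rethink's actual figures.

NEURON_COUNTS = {"human": 86e9, "chicken": 2.2e8, "shrimp": 1e5}  # orders of magnitude only

WEIGHT_SCHEMES = {
    "rethink_style": {"human": 1.0, "chicken": 0.3, "shrimp": 0.03},  # placeholder values
    "neuron_count_squared": {
        k: (v / NEURON_COUNTS["human"]) ** 2 for k, v in NEURON_COUNTS.items()
    },
    "all_equal": {"human": 1.0, "chicken": 1.0, "shrimp": 1.0},
}

def human_equivalent_harm(animal, animal_days_of_suffering, scheme):
    """Convert animal-days of suffering into 'human-equivalent' days under a chosen scheme."""
    return animal_days_of_suffering * WEIGHT_SCHEMES[scheme][animal]

for scheme in WEIGHT_SCHEMES:
    print(scheme, human_equivalent_harm("shrimp", 1000, scheme))
```

Even a crude toggle like this would make it much more obvious to users how much the bottom line depends on which weighting scheme you pick.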
Animal welfare feeling more activist & less truth-seeking:
This post is specifically about vegan EA activists, and makes much stronger accusations of non-truthseeking-ness than I am making here against the broader animal welfare movement in general: https://forum.effectivealtruism.org/posts/qF4yhMMuavCFrLqfz/ea-vegan-advocacy-is-not-truthseeking-and-it-s-everyone-s
But I think that post is probably accurate in the specific claims that it makes, and indeed vegan EA activism is part of overall animal welfare EA activism, so perhaps I could rest my case there.
I also think that the broader animal welfare space has a much milder version of a similar ailment. I am pretty “rationalist” and think that rationalist virtues (as expounded in Yudkowsky’s Sequences, or Slate Star Codex blog posts, or Secular Solstice celebrations, or just sites like OurWorldInData) are important. I think that global health places like GiveWell do a pretty great job embodying these virtues, that longtermist stuff does a medium-good job (they’re trying! but it’s harder since the whole space is just more speculative), and animal welfare does a worse job (but still better than almost all mainstream institutions, eg way better than either US political party). Mostly I think this is just because a lot of people get into animal EA without ever first reading rationalist blogs (which is fine, not everybody has to be just like me); instead they sometimes find EA via Peter Singer’s more activist-y “Animal Liberation”, or via the yet-more-activist mainstream vegan movement or climate movements. And in stuff like climate protest movements (Greta Thunberg, Just Stop Oil, Sunrise, etc), being maximally truthseeking and evenhanded just isn’t a top priority like it is in EA! Of course the people that come to EA from those movements are often coming specifically because they recognize that, and they prefer EA’s more rigorous / rationalist vibe. (Kinda like how when Californians move to Texas, they actually make Texas more Republican and not more Democratic, because California is very blue but Californians-who-choose-to-move-to-Texas are red.) But I still think that (unlike the CA/TX example?) the long-time overlap with those other activist movements makes animal welfare less rationalist and thereby less truthseeking than I’d like.
(Just to further caveat… Not scoring 100/100 on truthseekingness isn’t the end of the world. I love the idea of Charter Cities and support that movement, despite the fact that some charter city advocates are pretty hype-y and use exaggerated rhetoric, and a few, like Balaji, regularly misrepresent things and feel like outright hustlers at times. As I said, I’d support animal welfare over GHD despite truthseeky concerns if that was my only beef; my bigger worries are some philosophical disagreements and concern about the relative lack of long-term / ripple effects.)
David Mathers makes a similar comment, and I respond, here. Seems like there are multiple definitions of the word, and EA folks are using the narrower definition that’s preferred by smart philosophers. Whereas I had just picked up the word based on vibes, and assumed the definition by analogy to racism and sexism, which does indeed seem to be a common real-world usage of the term (eg, supported by top google results in dictionaries, wikipedia, etc). It’s unclear to me whether the original intended meaning of the word was closer to what modern smart philosophers prefer (and everybody else has been misinterpreting it since then), or closer to the definition preferred by activists and dictionaries (and it’s since been somewhat “sanewashed” by philosophers), or if (as I suspect) it was mushy and unclear from the very start—invented by savvy people who maybe deliberately intended to link the two possible interpretations of the word.
Good to know! I haven’t actually read “Animal Liberation” or etc; I’ve just seen the word a lot and assumed (by the seemingly intentional analogy to racism, sexism, etc) that it meant “thinking humans are superior to animals (which is bad and wrong)”, in the same way that racism is often used to mean “thinking europeans are superior to other groups (which is bad and wrong)”, and sexism about men > women. Thus it always felt to me like a weird, unlikely attempt to shoehorn a niche philosophical position (are nonhuman animals’ lives of equal worth to humans?) into the same kind of socially-enforced consensus whereby things like racism are near-universally condemned.
I guess your definition of speciesism means that it’s fine to think humans matter more than other animals, but only if there’s a reason for it (like that we have special quality X, or we have Y percent greater capacity for something, therefore we’re Y percent more valuable, or because the strong are destined to rule, or whatever). Versus it would be speciesist to say that humans matter more than other animals “because they’re human, and I’m human, and I’m sticking with my tribe”.
Wikipedia’s page on “speciesism” (first result when I googled the word) is kind of confusing and suggests that people use the word in different ways, with some people using it the way I assumed, and others the way you outlined, or perhaps in yet other ways:
The term has several different definitions.[1] Some specifically define speciesism as discrimination or unjustified treatment based on an individual’s species membership,[2][3][4] while others define it as differential treatment without regard to whether the treatment is justified or not.[5][6] Richard D. Ryder, who coined the term, defined it as “a prejudice or attitude of bias in favour of the interests of members of one’s own species and against those of members of other species”.[7] Speciesism results in the belief that humans have the right to use non-human animals in exploitative ways which is pervasive in the modern society.[8][9][10] Studies from 2015 and 2019 suggest that people who support animal exploitation also tend to have intersectional bias that encapsulates and endorses racist, sexist, and other prejudicial views, which furthers the beliefs in human supremacy and group dominance to justify systems of inequality and oppression.
The 2nd result on a google search for the word, this Britannica article, sounds to me like it is supporting “my” definition:
Speciesism, in applied ethics and the philosophy of animal rights, the practice of treating members of one species as morally more important than members of other species; also, the belief that this practice is justified.
That makes it sound like anybody who thinks a human is more morally important than a shrimp is, by definition, speciesist, regardless of their reasons. (Later on the article talks about something called Singer’s “principle of equal consideration of interests”. It’s unclear to me if this principle is supposed to imply humans == shrimps, or if it’s supposed to be saying the IMO much more plausible idea that a given amount of pain-qualia is of equal badness whether it’s in a human or a shrimp. So you could say something like: humans might have much more capacity for pain, making them morally more important overall, but every individual teaspoon of pain is the same badness, regardless of where it is.)
Third google result: this 2019 philosophy paper debating different definitions of the term—I’m not gonna read the whole thing, but its existence certainly suggests that people disagree. Looks like it ends up preferring to use your definition of speciesism, and uses the term “species-egalitarianists” for the hardline humans == shrimp position.
Fourth: Merriam-Webster, which has no time for all this philosophical BS (lol) -- speciesism is simply “prejudice or discrimination based on species”, and that’s that, apparently!
Fifth: this animal-ethics.org website—long page, and maybe it’s written in a sneaky way that actually permits multiple definitions? But at least based on skimming it, it seems to endorse the hardline position that not giving equal consideration to animals is like sexism or racism: “How can we oppose racism and sexism but accept speciesism?”—“A common form of speciesism that often goes unnoticed is the discrimination against very small animals.”—“But if intelligence cannot be a reason to justify treating some humans worse than others, it cannot be a reason to justify treating nonhuman animals worse than humans either.”
Sixth google result is PETA, who says “Speciesism is the human-held belief that all other animal species are inferior… It’s a bias rooted in denying others their own agency, interests, and self-worth, often for personal gain.” I actually expected PETA to be the most zealously hard-line here, but this page definitely seems to be written in a sneaky way that makes it sound like they are endorsing the humans == shrimp position, while actually being compatible with your more philosophically well-grounded definition. Eg, the website quickly backs off from the topic of humans-vs-animals moral worth, moving on to make IMO much more sympathetic points, like that it’s ridiculous to think farmed animals like pigs are less deserving of moral concern than pet animals like dogs. And they talk about how animals aren’t ours to simply do absolutely whatever we please with zero moral consideration of their interests (which is compatible with thinking that animals deserve some-but-not-equal consideration).

Anyways. Overall it seems like philosophers and other careful thinkers (such as the editors of the EA Forum wiki) would prefer a minimal definition, whereas perhaps the more common real-world usage is the ill-considered maximal definition that I initially assumed it had. It’s unclear to me what the intention was behind the original meaning of the term—were early users of the word speciesism trying to imply that humans == shrimp and you’re a bad person if you disagree? Or were they making a more careful philosophical distinction, and then, presumably for activist purposes, just deliberately chose a word that was destined to lead to this confusion?
No offense meant to you, or to any of these (non-EA) animal activist sources that I just googled, but something about this messy situation is not giving me the best “truthseeking” vibes...
Excerpting from and expanding on a bit of point 1 of my reply to akash above. Here are four philosophical areas where I feel like total hedonic utilitarianism (as reflected in common animal-welfare calculations) might be missing the mark:
Something akin to “experience size” (very well-described by that recent blog post!)
The importance of sapience—if an experience of suffering is happening “all on its own”, floating adrift in the universe with nobody to think “I am suffering”, “I hope this will end soon”, etc, does this make the suffering experience worse-than, or not-as-bad-as, human suffering where the experience is tied together with a rich tapestry of other conscious experiences? Maybe it’s incoherent to ask questions like this, or I am thinking about this in totally the wrong way? But it seems like an important question to me. The similarities between layers of “neurons” in image-classifying AIs, and the actual layouts of literal neurons in the human retina + visual cortex (both humans and AIs have a layer for initial inputs, then for edge-detection, then for corners and curves, then simple shapes and textures, then eventually for higher concepts and whole objects), make me think that possibly image-classifiers are having a genuine “experience of vision” (ie qualia), but an experience that is disconnected (of course) from any sense of self or sense of wellbeing-vs-suffering or wider understanding of its situation. I think many animals might have experiences that are intermediate in various ways between humans and this hypothetical isolated-experience-of-vision that might be happening in an AI image classifier.
How good of an approximation is it to linearly “add up” positive experiences when the experiences are near-identical? ie, are two identical computer simulations of a suffering emulated mind any worse than one simulation? what about a single simulation on a computer with double-thick wires? what about a simulation identical in every respect except one? I haven’t thought super hard about this, but I feel like these questions might have important real-world consequences for simple creatures like blackflies or shrimp, whose experiences might not add linearly across billions/trillions of creatures, because at some point the experiences become pretty similar to each other and you’d be “double-counting”. (See the toy sketch after this list.)
Something about “higher pleasures”, or Nietzscheanism, or the complexity of value—maybe there’s more to life than just adding up positive and negative valence?? Personally, if I got to decide right now what happens to the future of human civilization, I would definitely want to try and end suffering (insomuch as this is feasible), but I wouldn’t want to try and max out happiness, and certainly not via any kind of rats-on-heroin style approach. I would rather take the opposite tack, and construct a smaller number of god-like superhuman minds, who might not even be very “happy” in any of the usual senses (ie, perhaps they are meditating on the nature of existence with great equanimity), but who in some sense are able to, like… maximize the potential of the universe to know itself and explore the possibilities of consciousness. Or something...
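Here’s the toy sketch referenced in the third bullet above, just to show how much the bottom line can move if near-identical experiences aggregate sub-linearly rather than linearly. The functional form (n raised to some exponent) and every number here are invented purely for illustration; I’m not claiming this is the right model:

```python
# Toy illustration: linear vs. sub-linear aggregation of near-identical experiences.
# The exponent and all numbers are invented purely for illustration.

def linear_total(value_per_individual, n_individuals):
    # Standard total-hedonic-utilitarian aggregation: just multiply.
    return value_per_individual * n_individuals

def discounted_total(value_per_individual, n_individuals, exponent=0.8):
    # Sub-linear aggregation: near-duplicate experiences partially "double-count",
    # so the total grows like n**exponent rather than n.
    return value_per_individual * n_individuals ** exponent

n_shrimp = 1e12
print(linear_total(0.03, n_shrimp))      # 3e10 "human-equivalent" units
print(discounted_total(0.03, n_shrimp))  # ~1.2e8, i.e. hundreds of times smaller
```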
Yeah, I wish they had clarified how many years the $100m is spread out over. See my point 3 in reply to akash above.
Yup, agreed that the arguments for animal welfare should be judged by their best proponents, and that probably the top EA animal-welfare organizations have much better views than the median random person I’ve talked to about this stuff. However:
I don’t have a great sense of the space, though (for better or worse, I most enjoy learning about weird stuff like stable totalitarianism, charter cities, prediction markets, etc, which doesn’t overlap much with animal welfare), so to some extent I am forced to just go off the vibes of what I’ve run into personally.
In my complaint about truthseekingness, I was kinda confusedly mashing together two distinct complaints—one is “animal-welfare EA sometimes seems too ‘activist’ in a non-truthseeking way”, and another is more like “I disagree with these folks about philosophical questions”. That sounds really dumb since those are two very different complaints, but from the outside they can kinda shade into each other… who’s tossing around wacky (IMO) welfare-range numbers because they just want an argument-as-soldier to use in favor of veganism, versus who’s doing it because they disagree with me about something akin to “experience size”, or the importance of sapience, or how good of an approximation it is to linearly “add up” positive experiences when the experiences are near-identical[1]. Among those who disagree with me about those philosophical questions, who is really being a True Philosopher and following their reason wherever it leads (and just ended up in a different place than me), versus whose philosophical reasoning is a little biased by their activist commitments? (Of course one could also accuse me of being subconsciously biased in the opposite direction! Philosophy is hard...)
All that is to say, that I would probably consider the top EA animal-welfare orgs to be pretty truthseeking (although it’s hard for me to tell for sure from the outside), but I would probably still have important philosophical disagreements with them.
Maybe I am making a slightly different point as from most commenters—I wasn’t primarily thinking “man, this animal-welfare stuff is gonna tank EA’s reputation”, but rather “hey, an important side effect of global-health funding is that it buys us a lot of goodwill and mainstream legibility; it would be a shame to lose that if we converted all the global-health money to animal-welfare, or even if the EA movement just became primarily known for nothing but ‘weird’ causes like AI safety and chicken wellbeing.”
I get that the question is only asking about $100m, which seems like it wouldn’t shift the overall balance much. But see section 3 below.
To directly answer your question about social perception: I wish we could completely discount broader social perception when allocating funding (and indeed, I’m glad that the EA movement can pull off as much disregarding-of-broader-social-perception as it already manages to do!), but I think in practice this is an important constraint that we should take seriously. Eg, personally I think that funding research into human intelligence augmentation (via iterated embryo selection or germline engineering) seems like it perhaps should be a very high-priority cause area… if it weren’t for the pesky problem that it’s massively taboo and would risk doing lots of damage to the rest of the EA movement. I also feel like there are a lot of explicitly political topics that might otherwise be worth some EA funding (for example, advocating Georgist land value taxes), but which would pose a similar risk of politicizing the movement.
I’m not sure whether the public would look positively or negatively on the EA farmed-animal-welfare movement. As you said, veganism seems to be perceived negatively and treating animals well seems to be perceived positively. Some political campaigns (eg for cage-free ballot propositions), admittedly designed to optimize positive perception, have passed with big margins. (But other movements, like for improving the lives of broiler chickens, have been less successful?) My impression is that the public would be pretty hostile to anything in the wild-animal-welfare space (which is a shame because I, a lover of weird niche EA stuff, am a big fan of wild animal welfare). Alternative proteins have become politicized enough that Florida was trying to ban cultured meat? It seems like a mixed bag overall; roughly neutral or maybe slightly negative, but definitely not like intelligence augmentation, which is guaranteed-hugely-negative perception. But if you’re trading off against global health, then you’re losing something strongly positive.
“Could you elaborate on why you think if an additional $100 million were allocated to Animal Welfare, it would be at the expense of Global Health & Development (GHD)?”—well, the question was about shifting $100m from animal welfare to GHD, so it does quite literally come at the expense (namely, a $100m expense) of GHD! As for whether this is a big shift or a tiny drop in the bucket, depends on a couple things:
- Does this hypothetical $100m get spent all at once, and then we hold another vote next year? Or do we spend like $5m per year over the next 20 years?
- Is this the one-and-only final vote on redistributing the EA portfolio? Or maybe there is an emerging “pro-animal-welfare, anti-GHD” coalition who will return for next year’s question, “Should we shift $500m from GHD to animal welfare?”, and the question the year after that...
I would probably endorse a moderate shift of funding, but not an extreme one that left GHD hollowed out. Based on this chart from 2020 (idk what the situation looks like now in 2024), taking $100m per year from GHD would probably be pretty devastating to GHD, and AW might not even have the capacity to absorb the flood of money. But moving $10m each year over 10 years would be a big boost to AW without changing the overall portfolio hugely, so I’d be more amenable to it.
[1] (ie, are two identical computer simulations of a suffering emulated mind any worse than one simulation? what about a single simulation on a computer with double-thick wires? what about a simulation identical in every respect except one? I haven’t thought super hard about this, but I feel like these questions might have important real-world consequences for simple creatures like blackflies or shrimp, whose experiences might not add linearly across billions/trillions of creatures, because at some point the experiences become pretty similar to each other and you’d be “double-counting”.)
The animal welfare side of things feels less truthseeking, more activist, than other parts of EA. Talk of “speciesism” that implies animals’ and humans’ lives are of ~equal value seems farfetched to me. People frequently do things like taking Rethink’s moral weights project (which kinda skips over a lot of hard philosophical problems about measurement and what we can learn from animal behavior, and goes all-in on a simple perspective of total hedonic utilitarianism which I think is useful but not ultimately correct), and just treat the numbers as if they are unvarnished truth.
If I considered only the immediate, direct effects of $100m spent on animal welfare versus global health, I would probably side with animal welfare despite the concerns above. But I’m also worried about the relative lack of ripple / flow-through effects from animal welfare work versus global health interventions—both positive longer-term effects on the future of civilization generally, and more near-term effects on the sustainability of the EA movement and social perceptions of EA. Going all-in on animal welfare at the expense of global development seems bad for the movement.
I’d especially welcome criticism from folks not interested in human longevity. If your priority as a human being isn’t to improve healthcare or to reduce catastrophic/existential risks, what is it? Why?
Personally, I am interested in longevity and I think governments (and other groups, although perhaps not EA grantmakers) should be funding more aging research. Nevertheless, some criticism!
I think there are a lot of reasonable life goals other than improving healthcare or reducing x-risks. These things are indeed big, underrated threats to human life. But the reason why human life is so worthwhile and in need of protection, is because life is full of good experiences. So, trying to create more good experiences (and conversely, minimize suffering / pain / sorrow / boredom etc) is also clearly a good thing to do. “Create good experiences” covers a lot of things, from mundane stuff like running a restaurant that makes tasty food or developing a fun videogame, to political crusades to reduce animal suffering or make things better in developing countries or prevent wars and recessions or etc, to anti-aging-like moonshot tech projects like eliminating suffering using genetic engineering or trying to build Neuralink-style brain-computer interfaces or etc. Basically, I think the Bryan Johnson style “the zeroth rule is don’t-die” messaging where antiaging becomes effectively the only thing worth caring about, is reductive and will probably seem off-putting to many people. (Even though, personally, I totally see where you are coming from and consider longevity/health a key personal priority.)
This post bounces around somewhat confusingly among a few different justifications for / defenses of aging research. I think this post (or future posts) would be more helpful if it had a more explicit structure, acknowledging that there are many reasons one could be skeptical of aging research. Here is an example outline:
Some people don’t understand transhumanist values at all, and think that death is essentially good because “death gives life meaning” or etc silliness.
Other people will kinda-sorta agree that death is bad, but also feel uncomfortable about the idea of extending lifespans—people are often kinda confused about their own feelings/opinions here simply because they haven’t thought much about it.
Some people totally get that death is bad, insofar as they personally would enjoy living much longer, but they don’t think that solving aging would be good from an overall societal perspective.
Some people think that a world of extended longevity would have various bad qualities that would mean the cure for aging is worse than the disease—overpopulation, or stagnant governments/culture (including perpetually stable dictatorships), or just a bunch of dependent old people putting an unsustainable burden on a small number of young workers, or conversely that if people never got to retire this would literally be a fate worse than death. (I think these ideas are mostly silly, but they are common objections. Also, I do think it would be valuable to try and explore/predict what a world of enhanced longevity would look like in more detail, in terms of the impact on culture / economy / governance / geopolitics / etc. Yes, the common objections are dumb, and minor drawbacks like overpopulation shouldn’t overshadow the immense win of curing aging. But I would still be very curious to know what a world of extended longevity would look like—which problems would indeed get worse, and which would actually get better?)
Most of this category of objections is just vague vibes, but a subcategory here is people actually running the numbers and worrying that an increase in elderly people will bankrupt Medicare, or whatever—this is why, when trying to influence policy and public research funding decisions, I think it’s helpful to point out that slowing aging (rather than treating disease) would actually be positive for government budgets and the economy, as you do in the post. (Even though in the grand scheme of things, it’s a little absurd to be worried about whether triumphing over death will have a positive or negative effect on some CBO score, as if that should be the deciding factor of whether to cure aging!!)
Other people seem to think that curing death would be morally neutral from an external top-down perspective—if in 2024 there are 8 billion happy people, and in 2100 there are 8 billion happy people, does it really matter whether it’s the same people or new ones? Maybe the happiness is all that counts. (I have a hard time understanding where people are coming from when they seem to sincerely believe this 100%, but lots of philosophically-minded people feel this way, including many utilitarian EA types.) More plausibly, people won’t be 100% committed to this viewpoint, but they’ll still feel that aging and death is, in some sense, less of an ongoing catastrophe from a top-down civilization-wide perspective than it is for the individuals making up that civilization. (I understand and share this view.)
Some people agree that solving aging would be great for both individuals and society, but they just don’t think that it’s tractable to work on aging. IMO this has been the correct opinion for the vast majority of human history, from 10,000 B.C. up until, idk, 2005 or something? So I don’t blame people for failing to notice that maybe, possibly, we are finally starting to make some progress on aging after all. (Imagine if I wrote a post arguing for human expansion to other star systems, and eventually throughout the galaxy, and made lots of soaring rhetorical points about how this is basically the ultimate purpose of human civilization. In a certain sense this is true, but also we obviously lack the technology to send colony-ships to even the nearest stars, so what’s the point of trying to convince people who think civilization should stay centered on the Earth?)
I really like the idea of ending aging, so I get excited about various bits of supposed progress (rapamycin? senescent cell therapy? idk). Many people don’t even know about these small promising signs (eg the ongoing mouse longevity study).
Some people know about those small promising signs, but still feel uncertain whether these current ideas will pan out into real benefits for healthy human lifespans. Reasonable IMO.
Even supposing that something like rapamycin, or some other random drug, indeed extends lifespan by 15% or something—that would be great, but what does that tell me about the likelihood that humanity will be able to consistently come up with OTHER, bigger longevity wins? It is a small positive update, but IMO there is potentially a lot of space between “we tried 10,000 random drugs and found one that slows the progression of Alzheimer’s!” and “we now understand how Alzheimer’s works and have developed a cure”. Might be the same situation with aging. So, even getting some small wins doesn’t necessarily mean that the idea of “curing aging” is tractable, especially if we are operating without much of a theory of how aging works. (Seems plausible to me that humanity might be able to solve, like, 3 of the 5 major causes of aging, and lifespan goes up 25%, but then the other 2 are either impossible to fix for fundamental biological reasons, or we never manage to figure them out.)
A lot of people who appear to be in the “death is good” / “death isn’t a societal problem, just an individual problem” categories above, would actually change their tune pretty quickly if they started believing that making progress on longevity was actually tractable. So I think the tractability objections are actually more important to address than it seems, and the earlier stuff about changing hearts and minds on the philosophical questions is actually less important.
Probably instead of one giant comprehensive mega-post addressing all possible objections, you should tackle each area in its own more bite-sized post—to be fancy, maybe you could explicitly link these together in a structured way, like Holden Karnofsky’s “Most Important Century” blog posts.
I don’t really know anything about medicine or drug development, so I can’t give a very detailed breakdown of potential tractability objections, and indeed I personally don’t know how to feel about the tractability of anti-aging.
Of course, to the extent that your post is just arguing “governments should fund this area more, it seems obviously under-resourced”, then that’s a pretty low bar, and your graph of the NIH’s painfully skewed funding priorities basically makes the entire argument for you. (Although I note that the graph seems incorrect?? Shouldn’t $500M be much larger than one row of pixels?? Compare to the nearby “$7B” figures; the $500M should of course be 1/14th as tall...) For this purpose, it’s fine IMO to argue “aging is objectively very important, it doesn’t even matter how non-tractable it is, SURELY we ought to be spending more than $500M/year on this; at the very least we should be spending more than we do on Alzheimer’s, which we also don’t understand but which is an objectively smaller problem.”
But if you are trying to convince venture-capitalists to invest in anti-aging with the expectation of maybe actually turning a profit, or win over philanthropists who have other pressing funding priorities, then going into more detail on tractability is probably necessary.
You might be interested in some of the discussion that you can find at this tag: https://forum.effectivealtruism.org/topics/refuges
People have indeed imagined creating something like a partially-underground town, which people would already live in during daily life, precisely to address the kinds of problems you describe (working out various kinks, building governance institutions ahead of time, etc). But on the other hand, it sounds expensive to build a whole city (and would you or I really want to uproot our lives and move to a random tiny town in the middle of nowhere just to help be the backup plan in case of nuclear war?), and it’s so comparatively cheap to just dig a deep hole somewhere and stuff a nuclear reactor + lots of food + whatever else inside, which after all will probably be helpful in a catastrophe.

In reality, if the planet was to be destroyed by nuclear holocaust, a rogue comet, a lethal outbreak none of these bunkers would provide the sanctity that is promised or the capability to ‘rebuild’ society.
I think your essay does a pretty good job of pointing out flaws with the concept of bunkers in the Fallout TV + videogame universe. But I think that in real life, most actual bunkers (eg constructed by militaries, the occasional billionaire, cities like Seoul which live in fear of enemy attack or natural disasters, etc) aren’t intended to operate indefinitely as self-contained societies that could eventually restart civilization, so naturally they would fail at that task. Instead, they are just supposed to keep people alive through an acute danger period of a few hours to weeks (ie, while a hurricane is happening, or while an artillery barrage is ongoing, or while the local government is experiencing a temporary period of anarchy / gang rule / rioting, or while radiation and fires from a nearby nuclear strike dissipate). Then, in 9 out of 10 cases, probably the danger passes and some kind of normal society resumes (FEMA shows up after the hurricane, or a new stable government eventually comes to power, etc—even most nuclear wars probably wouldn’t result in the comically barren and devastated world of the Fallout videogames). I don’t think militaries or billionaires are necessarily wasting their money; they’re just buying insurance against medium-scale catastrophes, and admitting that there’s nothing they can do about the absolute worst-case largest-scale catastrophes.
Few people have thought of creating Fallout-style indefinite-civilizational-preservation bunkers in real life, and to my knowledge nobody has actually built one. But presumably if anyone did try this in real life (which would involve spending many millions of dollars, lots of detailed planning, etc), they would think a little harder and produce something that makes a bit more sense than the bunkers from the Fallout comedy videogames, and indeed do something like the partially-underground-city concept.
This is a great idea and seems pretty well thought-through; one of the more interesting interventions I’ve seen proposed on the Forum recently. I don’t have any connection to medicine or public policy, but it seems like maybe you’d want to talk to OpenPhil’s “Global Health R&D” people, or maybe some of the FDA-reform people including Alex Tabarrok and Scott Alexander?
Of course both candidates would be considered far-right in a very left-wing place (like San Francisco?), and they’d be considered far-left in a right-wing place (like Iran?), neoliberal/libertarian in a protectionist/populist place (like Turkey or Peronist Argentina?), and protectionist/populist in a neoliberal/libertarian place (like Singapore or Argentina under Milei?).
But I think the question is why neither party seems capable of offering up a more electable candidate, with fewer of the obvious flaws (old age and cognitive decline for Biden; sleaziness and transparent willingness to put self-interest over the national interest for Trump) and perhaps closer to the median American voter in terms of their positions (in fact, Biden and Trump are probably closer to the opinions of the median Democrat / Republican, respectively, than they are to the median overall US citizen).
Some thoughts:
Promising donations, or even endorsements, to politicians in exchange for their signing up to the dominant-assurance-contract-style scheme would almost certainly be perceived as sketchy / corruption-adjacent, even if it isn’t a violation of campaign finance law. (I think promising conditional donations, even if not done in writing, would indeed be a violation.) It would be better to just have people signing up because they thought it was a good idea, with no money or other favors changing hands.
I don’t think having people sign a literal dominant assurance contract is the load-bearing part of this proposal; therefore the part where people sign a literal contract should be dropped. First, how would you enforce the contract? Sue them if they aren’t sufficiently enthusiastic supporters of the centrist candidate?? This world of endorsements and candidate selection doesn’t run on formal legal rules, it runs on political coalition-building. So instead of having a literal contract at the center of your scheme, you should just have a “whisper-network” style setup, where one central organization (perhaps the No Labels campaign) runs the dominant-assurance-contract logic informally (but still with a high degree of trust and rigor). ie, No Labels would individually talk to different congressmen, explain the scheme, ask if they are interested, etc. If the congressmen like the idea of making a coordinated switch to endorsing a No Labels candidate once enough other congressmen have signed on, then No Labels would keep that in mind, and meanwhile keep their support secret. A problem here is that the organization running this scheme would ideally want to have lots of credibility, authority, etc, which as far as I know, No Labels doesn’t currently have.
(There are other situations, like the national popular vote compact, where a literal legal mechanism is the best way to implement the dominant assurance contract idea. But it’s not right for this situation.)
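For concreteness, here’s a minimal sketch of the conditional-commitment logic I’m imagining the organizing group would track (informally, in someone’s head or a spreadsheet, not as actual software or a signed contract). The names and threshold are hypothetical; the compensation-on-failure payment is the piece that makes a literal dominant assurance contract “dominant”, and it’s exactly the piece I’m suggesting could be dropped or handled informally:

```python
# Minimal sketch of conditional-commitment ("dominant assurance contract") logic.
# Names, threshold, and compensation amounts are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class ConditionalEndorsementDrive:
    threshold: int                        # commitments needed before anyone goes public
    compensation_on_failure: float = 1.0  # abstract payoff to signers if the drive fizzles
    signers: list = field(default_factory=list)

    def pledge(self, name: str):
        """Record a private, conditional commitment; keep it secret until resolution."""
        if name not in self.signers:
            self.signers.append(name)

    def resolve(self):
        """If the threshold is met, everyone flips their endorsement publicly at once;
        otherwise nobody does (and, in the literal-contract version, signers get paid)."""
        if len(self.signers) >= self.threshold:
            return ("flip together", list(self.signers))
        return ("stay quiet", [(s, self.compensation_on_failure) for s in self.signers])

drive = ConditionalEndorsementDrive(threshold=3)
for member in ["Senator A", "Representative B", "Senator C"]:
    drive.pledge(member)
print(drive.resolve())  # ('flip together', ['Senator A', 'Representative B', 'Senator C'])
```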
You and I have been talking about flipping senators and congressmen to support a third-party presidential candidate; but is this really the best plan? Won’t congressmen rationally be extremely hesitant to betray their party like this, even if the scheme succeeds? Imagine that, say, two thirds of the Senate and House (and whoever else) decide to flip their endorsements to a centrist candidate, and that candidate wins the election. There will still be partisan republican-vs-democrat elections for every other role, including the members’ own reelection campaigns. The party organizations (DNC / RNC) and surrounding infrastructure (think tanks, NGOs, etc) of the democrats and republicans will still exist—these party organizations will want to preserve their own existence (after all, they have to keep fighting for all the downballot races, and they have to be ready to run another more-partisan presidential election in 2028!), so they’ll want to punish these No-Labels-dominant-assurance-scheme defectors by ostracizing them, refusing to fund their campaigns, funding primary challengers, etc. So, I think trying to get everyone to flip to a temporary third party just for one presidential election would be a doomed prospect—you’d instead have to go even bigger, and somehow try to get everyone to flip to a permanent third party that would endure as a new, dominant political force in American politics for years to come. This, in turn, seems like way too big of a project and too much of a longshot for anyone to pull off in the next few months.
Probably a better idea would be to just try and get EITHER democrats OR republicans to pull off a smaller-scale realignment WITHIN their party—ie, getting a cabal of democrats to agree to switch their endorsement (and their electors at the party convention) from Biden to some more-electable figure like Gavin Newsom (or ideally, someone more centrist than Newsom), or getting a cabal of republicans to switch from Trump to Haley (or, again, someone more centrist). Instead of trying to transform the entire political landscape and summon an entire third-party winning coalition ex nihilo, for this plan you only need a wee bit of elite coordination, similar to how you describe Biden’s surprise comeback in the 2020 primary election. Plus, now you get two shots on goal, since either the republicans or democrats could use this strategy (personally I’d be more optimistic about the democrats’ ability to pull this off, but if moderate republicans somehow manage an anti-Trump coup at their convention, more power to them!).
Finally, you might find this blog post by Matthew Yglesias helpful for understanding some of the political details that have led to this weird situation where both parties seem to be making huge unforced errors by nominating unpopular and weak candidates: https://www.slowboring.com/p/why-the-parties-cant-decide
Yglesias’s writing in general has influenced my comments above, insofar as he emphasizes the importance of internal coalition politics, dives into the nitty-gritty details of the bargaining / politics behind major decisions, and highlights “elite persuasion” as a good way of trying to achieve change. Personally, I am a huge fan of nerdy poli-sci schemes like approval voting and quadratic voting, dominant assurance contracts, georgist land-value taxes and carbon taxes, charter cities, “base realignment and closure”-inspired ideas for optimal budget reform, and so forth. But reading a bunch of Slow Boring has given me more of an appreciation for the fact that often the most practical way to get things done is indeed to do a bunch of normal grubby politics/negotiation/bargaining/persuasion (and just try to do politics well). Thus, even when trying to implement some kind of idealized poli-sci scheme, I think it’s important to pay attention to the detailed politics of the situation and craft a hybrid approach, to build something with the best chance of winning.
I don’t understand this post. It seems to be parodying Anthropic’s Responsible Scaling Policies (ie, saying that the RSPs are not sufficient), but the analogy to nuclear power is confusing, since IMO nuclear power has in fact been harmfully over-regulated. Advocating for a “balanced, pragmatic approach to mitigating potential harms from nuclear power” actually does seem good compared to the status quo, where society hugely overreacted to the risks of nuclear power without properly weighing the costs against the benefits.
Maybe you can imagine how confused I am if we use another example of an area where I think there is a harmful attitude of regulating entirely with a view towards avoiding visible errors of commission, while completely ignoring errors of omission:

“Hi, we’re your friendly local pharma company. Many in our community have been talking about the need for ‘vaccine safety.’… We will conduct ongoing evaluations of whether our new covid vaccine might cause catastrophic harm (conservatively defined as >10,000 vaccine-side-effect-induced deaths).

“We aren’t sure yet exactly whether the vaccine will have rare serious side effects, since of course we haven’t yet deployed the vaccine in the full population, and we’re rushing to deploy the vaccine quickly in order to save the lives of the thousands of people dying of covid every day. But fortunately, our current research suggests that our vaccine is unlikely to cause unacceptable harm. The frequency and severity of side effects seen so far in medical trials of the vaccine are far below our threshold of concern… the data suggest that we don’t need to adopt additional safety measures at present.”
To me, vaccine safety and nuclear safety seem like the least helpful possible analogies to the AI situation, since the FDA and NRC regulatory agencies are both heavily infected with an “avoid deaths of commission at nearly any cost” attitude, which ignores tradeoffs and creates a massive “invisible graveyard” of excess deaths-of-omission. What we want from AI regulation isn’t an insanely one-sided focus that greatly exaggerates certain small harms. Rather, for AI it’s perfectly sufficient to take the responsible, normal, common-sensical approach of balancing costs and benefits. The problem is just that the costs might be extremely high, like a significant chance of causing human extinction!!
Another specific bit of confusion: when you mention that Chernobyl only killed 50 people, is this supposed to convey:
1. This sinister company is deliberately lowballing the Chernobyl deaths in order to justify continuing to ignore real risks, since a linear-no-threshold model suggests that Chernobyl might indeed have caused tens of thousands of excess cancer deaths around the world? (I am pretty pro-nuclear-power, but nevertheless the linear-no-threshold model seems plausible to me personally; see the back-of-the-envelope sketch after this list.)
2. That Chernobyl really did kill only 50 people, and therefore the company is actually correct to note that nuclear accidents aren’t a big deal? (But then I’m super-confused about the overall message of the post...)
3. That Chernobyl really did kill only 50 people, but NEVERTHELESS we need stifling regulation on nuclear power plants in order to prevent other rare accidents that might kill 50 people tops? (This seems like extreme over-regulation of a beneficial technology, compared to the much larger number of people who die from the smoke of coal-fired power plants and other power sources.)
4. That Chernobyl really did kill only 50 people, but NEVERTHELESS we need stifling regulation, because future accidents might indeed kill over 10,000 people? (This seems like it would imply some kind of conversation about first-principles reasoning and tail risks and stuff, but this isn’t present in the post?)
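In case it’s unclear how “only ~50 direct deaths” and “tens of thousands of excess deaths” can both be on the table, here’s the back-of-the-envelope linear-no-threshold arithmetic behind option 1. The ~5%-per-sievert risk coefficient is the standard ICRP-style figure; the collective-dose number is purely an illustrative placeholder I picked for the example, not a sourced estimate:

```python
# Back-of-the-envelope LNT arithmetic (illustrative numbers only).
# Linear-no-threshold assumption: expected excess cancer deaths scale linearly with
# collective dose, with no "safe" lower cutoff below which the risk drops to zero.

risk_per_person_sievert = 0.05        # ~5% fatal-cancer risk per sievert (standard ICRP-style coefficient)
collective_dose_person_sv = 300_000   # HYPOTHETICAL global collective dose, in person-sieverts

expected_excess_deaths = risk_per_person_sievert * collective_dose_person_sv
print(f"Expected excess cancer deaths under LNT: {expected_excess_deaths:,.0f}")
# prints 15,000 -- i.e. "tens of thousands" territory, versus ~50 direct/acute deaths
```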
As you mention, the scale seems small here relative to the huge political lift necessary to get something like MAID passed in the USA. I don’t know much about MAID or how it was passed in Canada, but I’m picturing that in the USA this would become a significant culture-war issue at least 10% as big as the pro-life-vs-pro-choice wars over abortion rights. If EA decided to spearhead this movement, I fear it would risk permanently politicizing the entire EA movement, ruining a lot of great work that is getting done in other cause areas. (Maybe in some European countries this kind of law would be an easier sell?)
If I was a negative utilitarian, besides focusing on longtermist S-risks, I would probably be most attracted to campaigns like this one to try and cure the suffering of cluster-headache patients. This seems like a much more robustly-positive intervention (ie, regular utilitarians would like it too) and much less politically dangerous, for a potentially similar-ish (???) reduction in suffering (idk how many people suffer cluster headaches versus how many people would use MAID who wouldn’t otherwise kill themselves, and idk how to compare the suffering of cluster headaches to that of depression).
In terms of addressing depression specifically, I’d think that you could get more QALYs per dollar (even from a fully negative-utilitarian perspective) by doing stuff like:
- funding Strongminds-style mental health charities in LMICs (and other semi-boring public-health-policy stuff that reduces depression on a population level, including interventions like “get people to exercise more”, or “put lithium in the drinking water”, or whatever)
- literally just trying to use genetic engineering to end all suffering
- using AI to try and discover amazing new classes of antidepressants (actually, big pharma is probably already on the case, so EA doesn’t have to take this on)
- trying to find various ways to lower the birthrate, and especially to disproportionately lower the birthrate of people likely to have miserable lives (ie children likely to grow up impoverished / mentally ill / etc), or perhaps improving future people’s mental health via IVF polygenic selection for low neuroticism and low depression.
Finally, I would have a lot of questions about the exact theory of impact here and the exact pros/cons of enacting a MAID-style law in more places. From afar (I don’t know much about suicide methods), it seems like there are plenty of reasonably accessible ways that a determined person could end their life. So, for the most part, a MAID law wouldn’t be enabling the option of suicide for people who previously couldn’t possibly commit suicide in any way—it’s more like it would be doing some combination of 1. making suicide logistically easier / more convenient, and 2. making suicide more societally acceptable. This seems dicier to me, since I’d be worried about causing a lot of collateral damage / getting a lot of adverse selection—who exactly are the kinds of people who would suicide if it was marginally more societally acceptable, but wouldn’t suicide otherwise?
Of course this is an April Fools’ Day post, but I actually think that Lockheed Martin isn’t a great choice for this parody. Unlike something like a cigarette company, where the social impact of pretty much any job at the company is going to be “marginally more cigarettes get sold”, some of the military stuff that Lockheed works on is probably very positively impactful on the world, and other stuff is negatively impactful. So it seems there would be huge variance in social impact depending on the individual job.
Some examples of how it’s tricky to assess whether a given military tech is net positive or negative: Lockheed Martin makes the GPS satellites, which:
- contribute massive levels of positive economic externalities for the whole world (a large positive impact on global development: literally like 2% of global GDP is directly enabled by GPS...)
- also enable precision-guided weapons like JDAM bombs instead of the dumb bombs of yesteryear (ambiguous impact: great that you can cause less collateral damage to hit a given target, but obviously that perhaps encourages you to bomb more targets)
- little-known fact: the GPS satellites also contain some nuclear-detonation detection hardware (ambiguous impact, since I don’t even know the details of what this system does, but probably good for the USA to know ASAP if there are surprise nukes going off somewhere in the world??)
Not sure if Lockheed specifically makes submarines or submarine-based nuclear missiles, but these were actually immensely helpful for reducing nuclear risk, by creating a robust “second strike” capability and reducing the “use it or lose it” pressure to launch a preemptive first strike. So it strikes me that working on stealthier submarine technology could actually be a great, morally virtuous career choice for reducing nuclear risk.
Similarly, I’ve heard that spy satellites (which Lockheed does make, I think?) were helpful for nuclear risk in the cold war, since once the USA and Soviet Union could see each other’s nuclear silos from space, each nation now had an additional way to verify that the other was adhering to arms-control agreements. This made it easier to make new arms control agreements and ultimately reduce nuclear stockpiles.
Anti-ballistic-missile defenses for intercepting nukes in flight: are these good (because after all, you are preventing some city from being nuked) or bad (because now you’ve broken the balance of deterrence, and maybe encouraged your enemy to build and launch twice as many nukes to overwhelm your missile defenses)? Probably bad, but idk.
Most of the above examples are nuclear-related, which is kind of a topsy-turvy world where sometimes bad-seeming things can be good and vice versa. Meanwhile, in the domain of normal weapons, like fighter jets or bombs or tanks or machine guns or whatever, it seems more straightforward that filling the world with more weapons → ultimately more people dying, somewhere, somehow. But even here, there are lots of uncertainties and big questions. The US sent a lot of weapons to Ukraine to help them fight against Russia. Is this bad (longer war = more Ukrainians and Russians dying, would’ve been better to just let Ukraine get defeated quickly and mercifully?), or good (making Russia struggle and pay a heavy price for their war of aggression = maybe deters nations from fighting other offensive wars of conquest in the future)?
Lockheed spends a lot of R&D money pushing the envelope on cutting-edge technology like drones and hypersonic missiles, which I often think is bad because it is probably just promoting an arms race and encouraging China / Russia / everyone else to try and match our investments in killer drones or whatever. But if you are sufficiently enthusiastic about America’s role in geopolitics, you can always make the classic argument that American hegemony is good for the world (ensure trade, promote democracy, whatever) → therefore anything that makes America stronger relative to its adversaries is good. I don’t think this argument is strong enough to justify harmful arms races in things like “slaughterbot”-style drones or hypersonics. But I do think that the US is on net a force for good in the world (at least in the sense of value-over-replacement-superpower), so I do think this argument is worth something.
All the above isn’t a criticism of your post at all; I’ve just had this military-jobs-related rant pent up in my head for a while, and your post happened to remind me to write it up. I unironically think it would be interesting and helpful (albeit not a top priority) for an EA organization like 80K to engage more deeply with some of these topics (the general quality of discourse around Lockheed-style jobs is very rudimentary and dumb, basically just overall “military-bad” vs “military-good”), and to give people some detailed, considered advice about navigating situations like this, where the stakes seem high in terms of both the upside and downside of potential career impact.
One crucial consideration that might actually end up vindicating the overall “military-bad” vs “military-good” framing—maybe I do all this detailed thinking and decide to become an engineer working on submarine stealth technology, which is great for reducing nuclear risk. But maybe if I do that, I actually just free up another Lockheed engineer who isn’t a super-well-informed 80,000 Hours fan, and instead of submarine stealth tech, they get a job working on submarine detection technology (which is correspondingly destabilizing to nuclear risk), or hypersonic missiles that are fueling an arms race, or some other terrible thing. Since most Lockheed engineers aren’t EAs, maybe this means the career impact of individual roles really does just reduce to the average career impact of the Lockheed company (or career specialization, like “stealth technology engineer”) as a whole.
Final random note: Lockheed salaries are, to my knowledge, not actually exceptional… programmer salaries at most military-industrial places are actually about half that of programmer salaries at “tech” companies like Google and Microsoft: https://www.levels.fyi/?compare=Microsoft,Google,Lockheed%20Martin&track=Software%20Engineer
Thinking about my point #3 some more (how do you launch a satellite after a nuclear war?). I realized that if you put me in charge of making a plan for DIYing this (instead of lobbying the US military to do it for me, which would be my first choice), and if SpaceX also wasn’t answering my calls to see if I could buy any surplus Starlinks...
You could do worse than partnering with Rocket Lab, a satellite and rocket company based in New Zealand, and developing the emergency satellite based on their “Photon” platform (the design has flown before, is small enough to still be kinda cheap, and is big enough to generate much more power than a cubesat). Then Rocket Lab could launch their Electron rocket from New Zealand in the event of a nuclear war, and (in a real crisis like that) the whole company would help make sure the mission happened. The idea of partnering with someone rather than just buying a satellite is key, IMO, because then it’s mostly THEIR end-of-the-world plan, and in a crisis the mission would benefit from their expertise / workforce.
I’d try to talk to the CEO and get him on board. This seems like the kind of flashy, Elon-esque, altruistic-in-a-sexy-way mission that could help make Rocket Lab seem “cool” and help recruit eager, mission-driven employees. (Rocket Lab’s CEO currently has ambitions to do some similar flashy missions, like sending their own probe to Venus.)
But this would definitely be more like a $30M project than a $300K project.