Scriptwriter for RationalAnimations! Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc. Also a big fan of EA / rationalist fiction!
Jackson Wagner
I’d especially welcome criticism from folks not interested in human longevity. If your priority as a human being isn’t to improve healthcare or to reduce catastrophic/existential risks, what is it? Why?
Personally, I am interested in longevity and I think governments (and other groups, although perhaps not EA grantmakers) should be funding more aging research. Nevertheless, some criticism!
I think there are a lot of reasonable life goals other than improving healthcare or reducing x-risks. Those things are indeed big, underrated threats to human life. But the reason human life is so worthwhile and in need of protection in the first place is that life is full of good experiences. So trying to create more good experiences (and, conversely, to minimize suffering / pain / sorrow / boredom, etc.) is also clearly a good thing to do. “Create good experiences” covers a lot of ground, from mundane stuff like running a restaurant that makes tasty food or developing a fun videogame, to political crusades to reduce animal suffering, improve conditions in developing countries, or prevent wars and recessions, to anti-aging-like moonshot tech projects like eliminating suffering via genetic engineering or building Neuralink-style brain-computer interfaces. Basically, I think the Bryan Johnson-style “the zeroth rule is don’t die” messaging, where anti-aging becomes effectively the only thing worth caring about, is reductive and will probably seem off-putting to many people. (Even though, personally, I totally see where you are coming from and consider longevity/health a key personal priority.)
This post bounces around somewhat confusingly among a few different justifications for / defenses of aging research. I think this post (or future posts) would be more helpful if it had a more explicit structure, acknowledging that there are many reasons one could be skeptical of aging research. Here is an example outline:
Some people don’t understand transhumanist values at all, and think that death is essentially good because of “death gives life meaning”-type silliness.
Other people will kinda-sorta agree that death is bad, but also feel uncomfortable about the idea of extending lifespans—people are often kinda confused about their own feelings/opinions here simply because they haven’t thought much about it.
Some people totally get that death is bad, insofar as they personally would enjoy living much longer, but they don’t think that solving aging would be good from an overall societal perspective.
Some people think that a world of extended longevity would have various bad qualities that would mean the cure for aging is worse than the disease—overpopulation, or stagnant governments/culture (including perpetually stable dictatorships), or just a bunch of dependent old people putting an unsustainable burden on a small number of young workers, or conversely that if people never got to retire this would literally be a fate worse than death. (I think these ideas are mostly silly, but they are common objections. Also, I do think it would be valuable to try and explore/predict what a world of enhanced longevity would look like in more detail, in terms of the impact on culture / economy / governance / geopolitics / etc. Yes, the common objections are dumb, and minor drawbacks like overpopulation shouldn’t overshadow the immense win of curing aging. But I would still be very curious to know what a world of extended longevity would look like—which problems would indeed get worse, and which would actually get better?)
Most of this category of objections is just vague vibes, but a subcategory here is people actually running the numbers and worrying that an increase in elderly people will bankrupt Medicare, or whatever—this is why, when trying to influence policy and public research funding decisions, I think it’s helpful to address this by pointing out that slowing aging (rather than treating disease) would actually be positive for government budgets and the economy, as you do in the post. (Even though in the grand scheme of things, it’s a little absurd to be worried about whether triumphing over death will have a positive or negative effect on some CBO score, as if that should be the deciding factor of whether to cure aging!!)
Other people seem to think that curing death would be morally neutral from an external top-down perspective—if in 2024 there are 8 billion happy people, and in 2100 there are 8 billion happy people, does it really matter whether it’s the same people or new ones? Maybe the happiness is all that counts. (I have a hard time understanding where people are coming from when they seem to sincerely believe this 100%, but lots of philosophically-minded people feel this way, including many utilitarian EA types.) More plausibly, people won’t be 100% committed to this viewpoint, but they’ll still feel that aging and death is, in some sense, less of an ongoing catastrophe from a top-down civilization-wide perspective than it is for the individuals making up that civilization. (I understand and share this view.)
Some people agree that solving aging would be great for both individuals and society, but they just don’t think that it’s tractable to work on aging. IMO this has been the correct opinion for the vast majority of human history, from 10,000 B.C. up until, idk, 2005 or something? So I don’t blame people for failing to notice that maybe, possibly, we are finally starting to make some progress on aging after all. (Imagine if I wrote a post arguing for human expansion to other star systems, and eventually throughout the galaxy, and made lots of soaring rhetorical points about how this is basically the ultimate purpose of human civilization. In a certain sense this is true, but also we obviously lack the technology to send colony-ships to even the nearest stars, so what’s the point of trying to convince people who think civilization should stay centered on the Earth?)
I really like the idea of ending aging, so I get excited about various bits of supposed progress (rapamycin? senescent cell therapy? idk). Many people don’t even know about these small promising signs (eg the ongoing mouse longevity study).
Some people know about those small promising signs, but still feel uncertain whether these current ideas will pan out into real benefits for healthy human lifespans. Reasonable IMO.
Even supposing that something like rapamycin, or some other random drug, indeed extends lifespan by 15% or something—that would be great, but what does that tell me about the likelihood that humanity will be able to consistently come up with OTHER, bigger longevity wins? It is a small positive update, but IMO there is potentially a lot of space between “we tried 10,000 random drugs and found one that slows the progression of Alzheimer’s!” and “we now understand how Alzheimer’s works and have developed a cure”. Might be the same situation with aging. So, even getting some small wins doesn’t necessarily mean that the idea of “curing aging” is tractable, especially if we are operating without much of a theory of how aging works. (Seems plausible to me that humanity might be able to solve, like, 3 of the 5 major causes of aging, and lifespan goes up 25%, but then the other 2 are either impossible to fix for fundamental biological reasons, or we never manage to figure them out.)
A lot of people who appear to be in the “death is good” / “death isn’t a societal problem, just an individual problem” categories above, would actually change their tune pretty quickly if they started believing that making progress on longevity was actually tractable. So I think the tractability objections are actually more important to address than it seems, and the earlier stuff about changing hearts and minds on the philosophical questions is actually less important.
Probably instead of one giant comprehensive mega-post addressing all possible objections, you should tackle each area in its own more bite-sized post—to be fancy, maybe you could explicitly link these together in a structured way, like Holden Karnofsky’s “Most Important Century” blog posts.
I don’t really know anything about medicine or drug development, so I can’t give a very detailed breakdown of potential tractability objections, and indeed I personally don’t know how to feel about the tractability of anti-aging.
Of course, to the extent that your post is just arguing “governments should fund this area more, it seems obviously under-resourced”, then that’s a pretty low bar, and your graph of the NIH’s painfully skewed funding priorities basically makes the entire argument for you. (Although I note that the graph seems incorrect?? Shouldn’t $500M be much larger than one row of pixels?? Compare to the nearby “$7B” figures; the $500M should of course be 1/14th as tall...) For this purpose, it’s fine IMO to argue “aging is objectively very important, it doesn’t even matter how non-tractable it is, SURELY we ought to be spending more than $500m/year on this, at the very least we should be spending more than we do on Alzheimer’s, which we also don’t understand but is an objectively smaller problem.”
But if you are trying to convince venture-capitalists to invest in anti-aging with the expectation of maybe actually turning a profit, or win over philanthropists who have other pressing funding priorities, then going into more detail on tractability is probably necessary.
You might be interested in some of the discussion that you can find at this tag: https://forum.effectivealtruism.org/topics/refuges
People have indeed imagined creating something like a partially-underground town, which people would already live in during daily life, precisely to address the kinds of problems you describe (working out various kinks, building governance institutions ahead of time, etc). But on the other hand, it sounds expensive to build a whole city (and would you or I really want to uproot our lives and move to a random tiny town in the middle of nowhere just to help be the backup plan in case of nuclear war?), and it’s so comparatively cheap to just dig a deep hole somewhere and stuff a nuclear reactor + lots of food + whatever else inside, which after all will probably be helpful in a catastrophe.
In reality, if the planet was to be destroyed by nuclear holocaust, a rogue comet, a lethal outbreak none of these bunkers would provide the sanctity that is promised or the capability to ‘rebuild’ society.
I think your essay does a pretty good job of pointing out flaws with the concept of bunkers in the Fallout TV + videogame universe. But I think that in real life, most actual bunkers (eg constructed by militaries, the occasional billionaire, cities like Seoul which live in fear of enemy attack or natural disasters, etc) aren’t intended to operate indefinitely as self-contained societies that could eventually restart civilization, so naturally they would fail at that task. Instead, they are just supposed to keep people alive through an acute danger period of a few hours to weeks (ie, while a hurricane is happening, or while an artillery barrage is ongoing, or while the local government is experiencing a temporary period of anarchy / gang rule / rioting, or while radiation and fires from a nearby nuclear strike dissipate). Then, in 9 out of 10 cases, probably the danger passes and some kind of normal society resumes (FEMA shows up after the hurricane, or a new stable government eventually comes to power, etc—even most nuclear wars probably wouldn’t result in the comically barren and devastated world of the Fallout videogames). I don’t think militaries or billionaires are necessarily wasting their money; they’re just buying insurance against medium-scale catastrophes, and admitting that there’s nothing they can do about the absolute worst-case largest-scale catastrophes.
Few people have thought of creating Fallout-style indefinite-civilizational-preservation bunkers in real life, and to my knowledge nobody has actually built one. But presumably if anyone did try this in real life (which would involve spending many millions of dollars, lots of detailed planning, etc), they would think a little harder and produce something that makes a bit more sense than the bunkers from the Fallout comedy videogames, and indeed do something like the partially-underground-city concept.
This is a great idea and seems pretty well thought-through; one of the more interesting interventions I’ve seen proposed on the Forum recently. I don’t have any connection to medicine or public policy or etc, but it seems like maybe you’d want to talk to OpenPhil’s “Global Health R&D” people, or maybe some of the FDA-reform people including Alex Tabarrok and Scott Alexander?
Of course both candidates would be considered far-right in a very left-wing place (like San Francisco?), and they’d be considered far-left in a right-wing place (like Iran?), neoliberal/libertarian in a protectionist/populist place (like Turkey or Peronist Argentina?), and protectionist/populist in a neoliberal/libertarian place (like Singapore or Argentina under Milei?).
But I think the question is why neither party seems capable of offering up a more electable candidate, with fewer of the obvious flaws (old age and cognitive decline for Biden, sleaziness and transparent willingness to put self-interest over the national interest for Trump) and perhaps closer to the median American voter in terms of their positions (in fact, Biden and Trump are probably closer to the opinions of the median democrat / republican, respectively, than they are to the median overall US citizen).
Some thoughts:
Promising donations, or even endorsements, to politicians in exchange for their signing up to the dominant-assurance-contract-style scheme, would almost certainly be perceived as sketchy / corruption-adjacent, even if it isn’t a violation of campaign finance law. (I think promising conditional donations, even if not done in writing, would indeed be a violation.) It would be better to just have people signing up because they thought it was a good idea, with no money or other favors changing hands.
I don’t think having people sign a literal dominant assurance contract is the load-bearing part of this proposal; therefore the part where people sign a literal contract should be dropped. First, how will you enforce the contract? Sue them if they aren’t sufficiently enthusiastic supporters of the centrist candidate?? This world of endorsements and candidate selection doesn’t run on formal legal rules; it runs on political coalition-building. So instead of having a literal contract at the center of your scheme, you should just have a “whisper-network” style setup, where one central organization (perhaps the No Labels campaign) runs the dominant-assurance-contract logic informally (but still with a high degree of trust and rigor) — for what I mean by that logic, see the toy payoff sketch at the end of this comment. ie, No Labels would individually talk to different congressmen, explain the scheme, ask if they are interested, etc. If the congressmen like the idea of making a coordinated switch to endorsing a No Labels candidate once enough other congressmen have signed on, then No Labels will keep that in mind, and meanwhile keep their support secret. A problem here is that the organization running this scheme would ideally want to have lots of credibility, authority, etc, which as far as I know, No Labels doesn’t currently have.
(There are other situations, like the national popular vote compact, where a literal legal mechanism is the best way to implement the dominant assurance contract idea. But it’s not right for this situation.)
You and I have been talking about flipping senators and congressmen to support a third-party presidential candidate; but is this really the best plan? Won’t congressmen rationally be extremely hesitant to betray their party like this, even if the scheme succeeds? Imagine that, say, two thirds of the senate and congress and whoever, decide to flip their endorsements to a centrist candidate, and that candidate wins the election. There will still be partisan republican-vs-democrat elections for every other role, including the members’ own reelection campaigns. The party organizations (DNC / RNC) and surrounding infrastructure (think tanks, NGOs, etc), of the democrats and republicans will still exist—these party organizations will want to preserve their own existence (after all, they have to keep fighting for all the downballot races, and they have to be ready to run another more-partisan presidential election in 2028!), so they’ll want to punish these No-Labels-dominant-assurance-scheme defectors by ostracising them, refusing to fund their campaigns, funding primary challengers, etc. So, I think trying to get everyone to flip to a temporary third party just for one presidential election would be a doomed prospect—you’d instead have to go even bigger, and somehow try to get everyone to flip to a permanent third party that would endure as a new, dominant political force in American politics for years to come. This, in turn, seems like way too big of a project and too much of a longshot for anyone to pull off in the next few months.
Probably a better idea would be to just try and get EITHER democrats OR republicans to pull off a smaller-scale realignment WITHIN their party—ie, getting a cabal of democrats to agree to switch their endorsement (and their electors at the party convention) from Biden to some more-electable figure like Gavin Newsom (or ideally, someone more centrist than Newsom), or getting a cabal of republicans to switch from Trump to Haley (or, again, someone more centrist). Instead of trying to transform the entire political landscape and summon an entire third-party winning coalition ex nihilo, for this plan you only need a wee bit of elite coordination, similar to how you describe Biden’s surprise comeback in the 2020 primary election. Plus, now you get two shots-on-goal, since either the republicans or democrats could use this strategy (personally I’d be more optimistic about the democrats’ ability to pull this off, but if moderate republicans somehow manage an anti-Trump coup at their convention, more power to them!).
Finally, you might find this blog post by Matthew Yglesias helpful for understanding some of the political details that have led to this weird situation where both parties seem to be making huge unforced errors by nominating unpopular and weak candidates: https://www.slowboring.com/p/why-the-parties-cant-decide
Yglesias’s writing in general has influenced my comments above, insofar as he emphasizes the importance of internal coalition politics, dives into the nitty-gritty details of the bargaining / politics behind major decisions, and emphasizes “elite persuasion” as a good way of trying to achieve change. Personally, I am a huge fan of nerdy poli-sci schemes like approval voting and quadratic voting, dominant assurance contracts, georgist land-value taxes and carbon taxes, charter cities, “base realignment and closure”-inspired ideas for optimal budget reform, and so forth. But reading a bunch of Slow Boring has given me more of an appreciation for the fact that often the most practical way to get things done is indeed to do a bunch of normal grubby politics/negotiation/bargaining/persuasion (and just try to do politics well). Thus, even when trying to implement some kind of idealized poli-sci scheme, I think it’s important to pay attention to the detailed politics of the situation and try to craft a hybrid approach, to build something with the best chance of winning.
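(Appendix to the dominant-assurance-contract discussion above, since I keep leaning on that phrase: below is a minimal, purely illustrative payoff sketch of the mechanism. The names and numbers are made up for illustration: V is the value each member gets if the coordinated switch happens, C is the political cost a signer bears when it does, and B is the assurance bonus the organizer pays signers when the threshold is missed. In the political version, B would be favors or reputation rather than cash.)

```python
# Toy payoff sketch of a dominant assurance contract (illustrative numbers only).
# V: value to each member if the coordinated switch happens.
# C: cost a signer bears when the switch happens.
# B: assurance bonus paid to signers if the threshold is NOT reached.

def signer_payoff(signs: bool, other_signers: int, threshold: int,
                  V: float = 10.0, C: float = 4.0, B: float = 1.0) -> float:
    """Payoff to one potential signer, given how many others sign."""
    total_signers = other_signers + (1 if signs else 0)
    if total_signers >= threshold:
        # Switch goes ahead: everyone gets V, and signers also bear cost C.
        return (V - C) if signs else V
    # Threshold missed: status quo persists, but signers still collect bonus B.
    return B if signs else 0.0

if __name__ == "__main__":
    threshold = 5
    print("others signing | payoff if you sign | payoff if you abstain")
    for others in range(8):
        print(f"{others:14d} | {signer_payoff(True, others, threshold):18.1f}"
              f" | {signer_payoff(False, others, threshold):21.1f}")
```

The table this prints shows the property that matters for the scheme: whenever the threshold would otherwise be missed, you are strictly better off signing (you either trigger the switch or pocket the bonus), so conditional commitments stop being risky. The residual free-rider temptation in the "threshold met without you" row is the part the bonus alone doesn't fix, which is one more reason the informal whisper-network version still needs some social enforcement.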
I don’t understand this post. It seems to be parodying Anthropic’s Responsible Scaling Policies (ie, saying that the RSPs are not sufficient), but the analogy to nuclear power is confusing, since IMO nuclear power has in fact been harmfully over-regulated. Advocating for a “balanced, pragmatic approach to mitigating potential harms from nuclear power” actually does seem good compared to the status quo, where society hugely overreacted to the risks of nuclear power without properly weighing costs against benefits.
Maybe you can imagine how confused I am if we use another example of an area where I think there is a harmful attitude of regulating entirely with a view towards avoiding visible errors of commission, and completely ignoring errors of omission:
Hi, we’re your friendly local pharma company. Many in our community have been talking about the need for “vaccine safety.”… We will conduct ongoing evaluations of whether our new covid vaccine might cause catastrophic harm (conservatively defined as >10,000 vaccine-side-effect-induced deaths).
We aren’t sure yet exactly whether the vaccine will have rare serious side effects, since of course we haven’t yet deployed the vaccine in the full population, and we’re rushing to deploy the vaccine quickly in order to save the lives of the thousands of people dying to covid every day. But fortunately, our current research suggests that our vaccine is unlikely to cause unacceptable harm. The frequency and severity of side effects seen so far in medical trials of the vaccine are far below our threshold of concern… the data suggest that we don’t need to adopt additional safety measures at present.
To me, vaccine safety and nuclear safety seem like the least helpful possible analogies to the AI situation, since the FDA and NRC regulatory agencies are both heavily infected with an “avoid deaths of commission at nearly any cost” attitude, which ignores tradeoffs and creates a massive “invisible graveyard” of excess deaths-of-omission. What we want from AI regulation isn’t an insanely one-sided focus that greatly exaggerates certain small harms. Rather, for AI it’s perfectly sufficient to take the responsible, normal, common-sensical approach of balancing costs and benefits. The problem is just that the costs might be extremely high, like a significant chance of causing human extinction!!
Another specific bit of confusion: when you mention that Chernobyl only killed 50 people, is this supposed to convey:
1. This sinister company is deliberately lowballing the Chernobyl deaths in order to justify continuing to ignore real risks, since a linear-no-threshold model suggests that Chernobyl might indeed have caused tens of thousands of excess cancer deaths around the world? (I am pretty pro-nuclear-power, but nevertheless the linear-no-threshold model seems plausible to me personally.)
2. That Chernobyl really did kill only 50 people, and therefore the company is actually correct to note that nuclear accidents aren’t a big deal? (But then I’m super-confused about the overall message of the post...)
3. That Chernobyl really did kill only 50 people, but NEVERTHELESS we need stifling regulation on nuclear power plants in order to prevent other rare accidents that might kill 50 people tops? (This seems like extreme over-regulation of a beneficial technology, compared to the much larger number of people who die from the smoke of coal-fired power plants and other power sources.)
4. That Chernobyl really did kill only 50 people, but NEVERTHELESS we need stifling regulation, because future accidents might indeed kill over 10,000 people? (This seems like it would imply some kind of conversation about first-principles reasoning and tail risks and stuff, but this isn’t present in the post?)
As you mention, the scale seems small here relative to the huge political lift necessary to get something like MAID passed in the USA. I don’t know much about MAID or how it was passed in Canada, but I’m picturing that in the USA this would become a significant culture-war issue at least 10% as big as the pro-life-vs-pro-choice wars over abortion rights. If EA decided to spearhead this movement, I fear it would risk permanently politicizing the entire EA movement, ruining a lot of great work that is getting done in other cause areas. (Maybe in some European countries this kind of law would be an easier sell?)
If I were a negative utilitarian, besides focusing on longtermist S-risks, I would probably be most attracted to campaigns like this one to try and cure the suffering of cluster-headache patients. This seems like a much more robustly-positive intervention (ie, regular utilitarians would like it too), much less politically dangerous, for a potentially similar-ish (???) reduction in suffering (idk how many people suffer cluster headaches versus how many people would use MAID who wouldn’t otherwise kill themselves, and idk how to compare the suffering of cluster headaches to that of depression).
In terms of addressing depression specifically, I’d think that you could get more QALYs per dollar (even from a fully negative-utilitarian perspective) by doing stuff like:
funding StrongMinds-style mental health charities in LMICs (and other semi-boring public-health-policy stuff that reduces depression on a population level, including interventions like “get people to exercise more”, or “put lithium in the drinking water”, or whatever)
literally just trying to use genetic engineering to end all suffering
using AI to try and discover amazing new classes of antidepressants (actually, big pharma is probably already on the case, so EA doesn’t have to take this on)
trying to find various ways to lower the birthrate, and especially of disproportionately lowering the birthrate of people likely to have miserable lives (ie children likely to grow up impoverished / mentally ill / etc), or perhaps improving future people’s mental health via IVF polygenic selection for low neuroticism and low depression.
Finally, I would have a lot of questions about the exact theory of impact here and the exact pros/cons of enacting a MAID-style law in more places. From afar (I don’t know much about suicide methods), it seems like there are plenty of reasonably accessible ways that a determined person could end their life. So, for the most part, a MAID law wouldn’t be enabling the option of suicide for people who previously couldn’t possibly commit suicide in any way—it’s more like it would be doing some combination of 1. making suicide logistically easier / more convenient, and 2. making suicide more societally acceptable. This seems dicier to me, since I’d be worried about causing a lot of collateral damage / getting a lot of adverse selection—who exactly are the kinds of people who would suicide if it was marginally more societally acceptable, but wouldn’t suicide otherwise?
Of course this is an April Fool’s day post, but I actually think that Lockheed Martin isn’t a great choice for this parody, since (unlike something like a cigarette company, where the social impact of pretty much any job at the company is going to be “marginally more cigarettes get sold”), some of the military stuff that Lockheed works on is probably very positively impactful on the world, and other stuff is negatively impactful. So it seems there would be a huge variance in social impact depending on the individual job.
Some examples of how it’s tricky to assess whether a given military tech is net positive or negative:
Lockheed Martin makes the GPS satellites, which:
contribute massive levels of positive economic externalities for the whole world (large positive impact on global development—literally like 2% of global GDP is directly enabled by GPS...)
also enables precision-guided weapons like JDAM bombs instead of the dumb bombs of yesteryear (ambiguous impact—great that you can cause less collateral damage to hit a given target, but obviously that perhaps encourages you to bomb more targets)
little-known fact, the GPS satellites also contain some nuclear-detonation detection hardware—ambiguous impact since I don’t even know the details of what this system does, but probably good for the USA to know ASAP if there are surprise nukes going off somewhere in the world??
Not sure if Lockheed specifically makes submarines or submarine-based nuclear missiles, but these were actually immensely helpful for reducing nuclear risk, by creating a robust “second strike” capability, and reducing the “use it or lose it” pressure to preemptively first-strike. So it strikes me that working on stealthier submarine technology could actually be a great, morally virtuous career choice for reducing nuclear risk.
Similarly, I’ve heard that spy satellites (which Lockheed does make, I think?) were helpful for nuclear risk in the cold war, since once the USA and Soviet Union could see each other’s nuclear silos from space, each nation now had an additional way to verify that the other was adhering to arms-control agreements. This made it easier to make new arms control agreements and ultimately reduce nuclear stockpiles.
Anti-ballistic-missile defenses for intercepting nukes in-flight—is this good (because after all, you are preventing some city from being nuked) or bad (because now you just broke the balance of deterrence, and maybe encourage your enemy to build and launch twice as many nukes to overwhelm your missile defenses)? Probably bad, but idk.
Most of the above examples are nuclear-related, which is kind of a topsy-turvy world where sometimes bad-seeming things can be good and vice versa. Meanwhile, in the domain of normal weapons, like fighter jets or bombs or tanks or machine guns or whatever, it seems more straightforward that filling the world with more weapons --> ultimately more people dying, somewhere, somehow. But even here, there are lots of uncertainties and big questions. The US sent a lot of weapons to Ukraine to help them fight against Russia. Is this bad (longer war = more Ukrainians and Russians dying, would’ve been better to just let Ukraine get defeated quickly and mercifully?), or good (making Russia struggle and pay a heavy price for their war of aggression = maybe deters nations from fighting other offensive wars of conquest in the future)?
Lockheed spends a lot of R&D money pushing the envelope on cutting-edge technology like drones and hypersonic missiles, which I often think is bad because it is probably just promoting an arms race and encouraging China / Russia / everyone else to try and match our investments in killer drones or whatever. But if you are sufficiently enthusiastic about America’s role in geopolitics, you can always make the classic argument that American hegemony is good for the world (ensure trade, promote democracy, whatever) → therefore anything that makes America stronger relative to its adversaries is good. I don’t think this argument is strong enough to justify harmful arms races in things like “slaughterbot”-style drones or hypersonics. But I do think that the US is on net a force for good in the world (at least in the sense of value-over-replacement-superpower), so I do think this argument is worth something.
All the above isn’t a criticism of your post at all—I’ve just had this military-jobs-related rant pent up in my head for a while and your post happened to remind me to write it up. I unironically think it would be interesting and helpful (albeit not a top priority) for an EA organization like 80K to engage more deeply about some of these topics (the general quality of discourse around Lockheed-style jobs is very rudimentary and dumb, basically just overall “military-bad” vs “military-good”), and give people some detailed, considered advice about navigating situations like this where the stakes seem high in terms of both upside and downside of potential career impact.
One crucial consideration that might actually end up vindicating the overall “military-bad” vs “military-good” framing—maybe I do all this detailed thinking and decide to become an engineer working on submarine stealth technology, which is great for reducing nuclear risk. But maybe if I do that, I actually just free up another Lockheed engineer who isn’t a super-well-informed 80,000 Hours fan, and instead of submarine stealth tech, they get a job working on submarine detection technology (which is correspondingly destabilizing to nuclear risk), or hypersonic missiles that are fueling an arms race, or some other terrible thing. Since most Lockheed engineers aren’t EAs, maybe this means the career impact of individual roles really does just reduce to the average career impact of the Lockheed company (or career specialization, like “stealth technology engineer”) as a whole.
Final random note: Lockheed salaries are, to my knowledge, not actually exceptional… programmer salaries at most military-industrial places are actually about half that of programmer salaries at “tech” companies like Google and Microsoft: https://www.levels.fyi/?compare=Microsoft,Google,Lockheed%20Martin&track=Software%20Engineer
Fort Collins EA / Rationalist meetup at Wolverine Farm
Yeah, I wondered what threshold to set things at -- $10m is a pretty easy bar for some of these areas, since of course some of my listed cause areas are more niche / fringe than others. I figure that for the highest-probability markets, where $10m is considered all but certain, maybe I can follow up with a market asking about a $50m or $100m threshold.
I agree that $10m isn’t “mainstream” in the sense of joining the pantheon alongside biosecurity, AI safety, farmed animal welfare, etc. But it would still be a big deal to me if, say, OpenPhil doubled their grantmaking to “land use” and split the money equally between YIMBYism and Georgism. Or if mitigating stable totalitarianism risk got as much support as “progress studies”-type stuff. $10m of grants towards studying grabby aliens or the simulation hypothesis or etc would definitely be surprising!
Wild animal welfare? Stable totalitarianism? Predict which new EA cause area will go mainstream!
There are definitely a lot of examples of places where some rich people wanted to try to create a kinda dumb, socially-useless tax haven, and then they accomplished that goal, and then the resulting entity had either negative impact or close-to-zero impact on the surrounding area. (I don’t know much about Monaco or the Cayman Islands, but these seem like potentially good examples?) But there have also been times when political leaders have set out to create sustained, long-term, positive-sum economic growth, and this has also occasionally been achieved! (Dubai, South Korea, Guangzhou… I’m not as familiar with the stories of places like Rwanda or Botswana or Bangladesh, but there are a lot of countries which are trying pretty hard to follow a kind of best-practices economic development playbook, and often seeing decent results.)
Both these phenomena predate the “charter cities” concept… as I understand it, the goal of orgs like the Charter Cities Institute is not to blindly cheerlead the creation of new cities of all kinds (as we mention in the video, lots of new cities are being built already, across the rapidly-urbanizing global South), but rather to encourage a specific model of development that looks more like the Dubai / South Korea / etc story, rather than simply building more cities as relatively useless tax-havens, or small and limited SEZs that won’t be able to build their own economic momentum, or as mere infrastructure projects with no economic/legal reform aspect.
I could definitely see myself agreeing with a criticism like “Sure, charter cities advocates do a LITTLE bit of work to avoid accidentally letting their ideas get used as an excuse to actually create useless tax havens, but actually they need to do a LOT MORE work to guard against this failure mode”. Right now I guess I feel like I don’t know enough about the status of specific projects to confidently identify what exact mistakes various charter-city groups are making. But we did try to allude to this failure mode in the video when we talked about Paul Romer’s complaints about the Honduras charter cities law.
Re: the idea that creating more competition can lead to more good things, but also makes it harder to coordinate to prevent negative externalities—yup, this is definitely something that I think about. I tend to think that since there are already almost 200 countries in the world, coordination on the most important topics—stuff like nuclear nonproliferation, the ongoing global moratorium on slavery, international agreements about climate or potentially soon about AI—already has to deal with lots of competing stakeholders, and hopefully won’t be impeded too much by adding some charter cities to the mix. (This is one area where it definitely helps that, at the end of the day, charter cities ultimately lack top-level national sovereignty!) I think charter cities in particular have a lot of potential benefits that could even help with these risks, namely by helping pioneer new styles of governance / regulation / institutions that could find better ways of dealing with some of these problems. Nevertheless, I agree it’s a real trade-off… we’re actually working on a draft script about “risks of stable totalitarianism” at RationalAnimations, and in that video we’re planning to spend a lot more time talking about a similar tradeoff space. It’s obviously extremely helpful to have global coordination / relatively unified world governance to solve important problems, so the best ways of reducing stable totalitarianism risk are things like differential technological development, or maybe influencing cultural norms or etc, not just decentralizing stuff, since blindly decentralizing stuff makes coordination harder!
“What if we could redesign society from scratch? The promise of charter cities.” [Rational Animations video]
Hyperbolic discounting, despite its reputation for being super-short-term and irrational, is actually better in this context, and doesn’t run into the same absurd “value an extra meal in 10,000 years more than a thriving civilization in 20,000 years” problems of exponential discounting.
Here is a nice blog post arguing that hyperbolic discounting is actually more rational than exponential: hyperbolic discounting is what you get when you have uncertainty over what the correct discount rate should be.
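To sketch the argument from that post (my notation, not theirs, and assuming for illustration that the unknown rate r is exponentially distributed with mean k): a fixed rate r discounts time t by e^(-rt), but averaging that discount factor over the uncertainty in r gives

```latex
% Expected discount factor under an uncertain rate r, assuming (for illustration)
% that r is exponentially distributed with mean k, i.e. p(r) = (1/k) e^{-r/k}:
\mathbb{E}\!\left[e^{-rt}\right]
  = \int_0^{\infty} e^{-rt}\,\frac{1}{k}\,e^{-r/k}\,dr
  = \frac{1}{k}\cdot\frac{1}{t + 1/k}
  = \frac{1}{1 + kt}
```

which is exactly the standard hyperbolic discount curve 1/(1 + kt): near-term tradeoffs still get discounted at roughly the mean rate, but the far future is never driven all the way down to ~zero the way a fixed exponential rate forces, which is how it avoids the "extra meal in 10,000 years beats a thriving civilization in 20,000 years" absurdity.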
Nice! I like this a lot more than the chaotic multi-choice markets trying to figure out exactly why he was fired.
Very interested to find out some of the details here:
Why now? Was there some specific act of wrongdoing that the board discovered (if so, what was it?), or was now an opportune time to make a move that the board members had secretly been considering for a while, or etc?
Was this a pro-AI-safety move that EAs should ultimately be happy about (ie, initiated by the most EA-sympathetic board members, with the intent of bringing in more x-risk-conscious leadership)? Or is this a disaster that will end up installing someone much more focused on making money than on talking to governments and figuring out how to align superintelligence? Or is it relatively neutral from an EA / x-risk perspective? (Update: first speculation I’ve seen is this cautiously optimistic tweet from Eliezer Yudkowsky)
Greg Brockman, chairman of the board, is also stepping down. How might this be related, and what might this tell us about the politics of the board members and who supported/opposed this decision?
Definitely agree that empathy and other social feelings provide indirect evidence for self-awareness (ie, “modeling stuff about yourself” in your brain) in a way that optimism/pessimism or pain-avoidance doesn’t. (Although wouldn’t a sophisticated-enough RL circuit, interacting with other RL circuits in some kind of virtual evolutionary landscape, also develop social emotions like loyalty, empathy, etc? Even tiny mammals like mice/rats display sophisticated social behaviors...)
I tend to assume that some kind of panpsychism is true, so you don’t need extra “circuitry for experience” in order to turn visual-information-processing into an experience of vision. What would such extra circuitry even do, if not the visual information processing itself? (Seems like maybe you are a believer in what Daniel Dennett calls the “fallacy of the second transduction”?)
Consequently, I think it’s likely that even simple “RL algorithms” might have a very limited, very shallow, non-self-aware kinds of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated “experiences of vision”! But of course it would not have any awareness of itself as being a thing-that-sees, nor would those isolated experiences of vision be necessarily tied together into a coherent visual field, etc.
So, I tend to think that fish and other primitive creatures probably have “qualia”, including something like a subjective experience of suffering, but that they probably lack any sophisticated self-awareness / self-model, so it’s kind of just “suffering happening nowhere” or “an experience of suffering not connected to anything else”—the fish doesn’t know it’s a fish, doesn’t know that it’s suffering, etc, the fish is just generating some simple qualia that don’t really refer to anything or tie into a larger system. Whether you call such a disconnected & shallow experience “real qualia” or “real suffering” is a question of definitions.
I think this personal view of mine is fairly similar to Eliezer’s from the Sequences: there are no “zombies” (among humans or animals), there is no “second transduction” from neuron activity into a mythical medium-of-consciousness (no “extra circuitry for experience” needed), rather the information-processing itself somehow directly produces (or is equivalent to, or etc) the qualia. So, animals and even simpler systems probably have qualia in some sense. But since animals aren’t self-aware (and/or have less self-awareness than humans), their qualia don’t matter (and/or matter less than humans’ qualia).
...Anyways, I think our core disagreement is that you seem to be equating “has a self-model” with “has qualia”, versus I think maybe qualia can and do exist even in very simple systems that lack a self-model. But I still think that having a self-model is morally important (atomic units of “suffering” that are just floating in some kind of void, unconnected to a complex experience of selfhood, seem of questionable moral relevance to me), so we end up having similar opinions about how it’s probably fine to eat fish.
I guess what I am objecting to is that you are acting like these philosophical problems of qualia / consciousness / etc are solved and other people are making an obvious mistake. I agree that I see a lot of people being confused and making mistakes, but I don’t think the problems are solved!
Why would showing that fish “feel empathy” prove that they have inner subjective experience? It seems perfectly possible to build a totally mechanical, non-conscious system that nevertheless displays signs of empathy. Couldn’t fish just have some kind of built-in, not-necessarily-conscious instinct to protect other fish (for instance, by swimming together in a large school) in order to obtain some evolutionary benefit?
Conversely, isn’t it possible for fish to have inner subjective experience but not feel empathy? Fish are very simple creatures, while “empathy” is a complicated social emotion. Especially in a solitary creature (like a shark, or an octopus), it seems plausible that you might have a rich inner world of qualia alongside a wide variety of problem-solving / world-modeling skills, but no social instincts like jealousy, empathy, loyalty, etc. Fish-welfare advocates often cite studies that seem to show fish having an internal sense of pain vs pleasure (eg, preferring water that contains numbing medication), or that bees can have an internal sense of being optimistic/risky vs pessimistic/cautious—if you think that empathy proves the existence of qualia, why are these similar studies not good enough for you? What’s special about the social emotion of empathy?
Personally, I am more sympathetic to the David Chalmers “hard problem of consciousness” perspective, so I don’t think these studies about behaviors (whether social emotions like jealousy or more basic emotions like optimism/pessimism) can really tell us that much about qualia / inner subjective experience. I do think that fish / bees / etc probably have some kind of inner subjective experience, but I’m not sure how “strong”, or vivid, or complex, or self-aware, that experience is, so I am very uncertain about the moral status of animals.
(Personally, I also happily eat fish & shrimp all the time—this is due to a combination of me wanting to eat a healthy diet without expending too much effort, and me figuring that the negative qualia experienced by creatures like fish is probably very small, so I should spend my efforts trying to improve the lives of current & future humans (or finding more-leveraged interventions to reduce animal farming) instead of on trying to make my diet slightly more morally clean.)
In general, I think this post is talking about consciousness / qualia / etc in a very confused way—if you think that empathy-behaviors are ironclad proof of empathy-qualia, you should also think that other (pain-related, etc) behaviors are ironclad proof of other qualia.
April Fools’ Day request:
I was reading the openai blog post “learning to summarize with human feedback” from the AI Safety Fundamentals course (https://openai.com/research/learning-to-summarize-with-human-feedback), especially the intriguing bit at the end about how if they try to fully optimize the model for maximum reward, they actually overfit and get lower-quality responses.
My ill-advised request is that I would just LOVE to see the EA Forum’s “summaryBot” go similarly haywire for a day and start summarizing every post in the same repetitive / aggressive tone as the paper:
“28yo dude stubbornly postponees start pursuing gymnastics hobby citing logistics reasons despite obvious interest??? negatively effecting long term fitness progress both personally and academically thoght wise? want change this dumbass shitty ass policy pls”
The animal welfare side of things feels less truthseeking, more activist, than other parts of EA. Talk of “speciesism” that implies animals’ and humans’ lives are of ~equal value seems far-fetched to me. People frequently do things like taking Rethink’s moral weights project (which kinda skips over a lot of hard philosophical problems about measurement and what we can learn from animal behavior, and goes all-in on a simple perspective of total hedonic utilitarianism, which I think is useful but not ultimately correct), and just treat the numbers as if they are unvarnished truth.
If I considered only the immediate, direct effects of $100m spent on animal welfare versus global health, I would probably side with animal welfare despite the concerns above. But I’m also worried about the relative lack of ripple / flow-through effects from animal welfare work versus global health interventions—both positive longer-term effects on the future of civilization generally, and more near-term effects on the sustainability of the EA movement and social perceptions of EA. Going all-in on animal welfare at the expense of global development seems bad for the movement.