We truly do live in interesting times
kbog
On AI Weapons
Extinguishing or preventing coal seam fires is a potential cause area
Overview of Capitalism and Socialism for Effective Altruism
Response to a Dylan Matthews article on Vox about bipartisanship
Love seems like a high priority
An Informal Review of Space Exploration
Four practices where EAs ought to course-correct
Welfare stories: How history should be written, with an example (early history of Guam)
Reasons to eat meat
An integrated model to evaluate the impact of animal products
American policy platform for total welfare
Super-exponential growth implies that accelerating growth is unimportant in the long run
Vox’s “Future Perfect” column frequently has flawed journalism
Vocational Career Guide for Effective Altruists
There are more problems with The Sunrise Movement (TSM) which don’t seem to have been raised yet in this discussion.
First, I think they have an underappreciated propensity to actively oppose progress in environmental policy. Others have brought up their opposition to a carbon tax in Washington, as well as their hostility to nuclear power, but here one Sunrise local group is opposing cap-and-trade in Oregon, and here Sunrise is opposing carbon capture on fossil fuel emissions. Also, the environmentalist-NIMBY problem we have seen with nuclear power is likely to repeat with geothermal energy: certain kinds of geothermal power are somewhat controversial because they use technology similar to fracking, and as geothermal technology and industry mature, this will likely become a bigger battleground where Sunrise may work for the wrong side. I also have reservations about how Sunrise-type activists react to natural gas and waste-to-energy technologies, two things which are legitimately controversial but still might be net positive. I can't find a source for whether Sunrise has actually opposed waste-to-energy, but it seems probable (others like them have). They also gave Biden an F for his climate plan; personally, I thought Biden deserved 2.2 points on air pollution on a −3 to +3 scale. Giving an F to someone with a pretty good environmental plan is a big red flag.
Second, TSM is not very focused on climate change; they perform activism and lobbying for a wider range of political issues. Insofar as TSM spends time and energy on other stuff besides climate change, this probably reduces their effectiveness on climate issues relative to more focused groups. Some of those specific political activities are discussed below.
Third, TSM’s non-climate-change impacts are plausibly harmful.
Housing policy—TSM has engaged in NIMBY opposition to upzoning, and here is Sunrise Honolulu commenting that all housing investment should be banned. I've heard that they have a bigger pattern of this. Such behavior is certainly bad for both economic and environmental reasons; see my writeup on residential zoning. At the same time, they have promoted new housing in other contexts, so it's not clear whether the good outweighs the bad.
Police reform—TSM has promoted Defund the Police. As I describe here, defunding police departments is a bad policy idea; in fact, hiring more police officers is probably a good idea. That said, Sunrise has also promoted Black Lives Matter and perhaps some more reasonable forms of police reform, and this is more likely to be a good thing.
Deliberate electoral politics—TSM has endorsed political campaigns with farther-reaching impacts beyond climate policy, generally because they are a progressive left-wing group that wants to achieve a variety of progressive left-wing political goals. Some notable ones that stick out to me:
They supported an unsuccessful primary campaign against Sen. Dianne Feinstein, which was probably good because Feinstein is a pretty bad senator, though defeating her would likely have achieved nothing for climate policy. In fact, Feinstein has sponsored a carbon tax bill.
They supported a successful primary campaign against Rep. Eliot Engel, who had been a strong congressional proponent of effective foreign aid programs including PEPFAR. Removing Engel had no discernible impact on the climate. He has since been replaced as chair of the Foreign Affairs Committee by Rep. Gregory Meeks, who has no such record on foreign aid, although hopefully he will become more active in his new position.
They supported Sen. Ed Markey against a primary challenge. Again, this had no discernible impact on the climate, nor, frankly, on most other policy issues. I am happy that Markey won, but it is not a big deal.
They supported Bernie Sanders in his 2020 presidential primary campaign. On the merits, Sanders was pretty comparable to other Democratic candidates including Biden. But in terms of electability, he was inferior (see this essay where I use his campaign as a case study of electability). So this was a bad decision.
Inadvertent electoral politics—as other commentators have touched upon, some of Sunrise’s advocacy can inadvertently harm the Democratic Party. This is especially a consequence of calls to defund the police. As I argue here, the Democratic Party is generally superior to the Republican Party, so preventing the Democratic Party from winning elections constitutes harm.
Deprioritization of other issues—if TSM’s mechanism of change is to make Democratic politicians expend more political capital on climate change, that implies that the politicians will expend less political capital on other issues. It’s one thing to say that we need more action on climate change, but quite another to say that Democratic politicians should focus on climate policy before or instead of other things like healthcare, immigration and tax policy. I do lean towards saying that air pollution should indeed get more priority on the margin, but the downside for other issues still chips away at the expected value. Additionally, insofar as TSM pressures Democratic politicians to place more priority on other issues like criminal justice and public housing, that similarly detracts from alternative priorities, and here I’d be still less optimistic about the impact.
Certainly there is a difference between everything that TSM does and the marginal impact of Giving Green's (GG's) recommendation for their education fund. And certainly it is possible that the good parts of TSM's environmental activism outweigh these downsides. And you might disagree with me on some of these political issues. But we must see strong arguments along these lines before prioritizing TSM for donations. And while I haven't taken a close or systematic look at TSM's activities, given all the red flags I tentatively expect that the Sunrise Movement does more harm than good.
Other commenters here have framed this as a tension between the left and conservatives/moderates, but there are plenty of Democrats who criticize TSM too. Here's Matt Yglesias saying "The problem with funding Sunrise is not that there is an objective scarcity of funds and other people need the money more, it's that Sunrise is bad and should get $0." And such views about TSM are pretty common, at least on left-leaning Twitter. Recommending TSM without awareness of, and counterarguments to, these criticisms does not imply a need to listen more to conservatives or moderates (though I don't necessarily oppose that idea); it suggests a more general need to keep closer tabs on the current political discourse. The synthesis of "EA should generally strive to be apolitical" and "some good causes are inherently political" should not be for us to naively support interventions because of the way they attack one political problem while we ignore the risky impacts of those interventions on other parts of the political system.
Finally, I am less confident about this point, but I suspect that GG is being too credulous about TSM achieving change. Just because TSM demands that Democratic politicians do something, the politicians do that something, and TSM claims credit for making them do it, doesn't mean TSM actually was responsible for the change. If a Democratic politician does major climate work in office after being criticized by TSM during their election campaign for something symbolic, like not bringing up the Green New Deal, that's only very weak evidence that TSM actually changed the politician's behavior; it is better evidence for the claim that Democratic politicians are generally both serious on climate policy and savvy at election messaging, and that TSM was making unfounded criticisms all along.
Here it is worth distinguishing two theories of how the Democratic Party works. Some people (like TSM and others on the progressive left) think the elites of the Democratic Party are centrist corporatists who don’t really want to implement leftist policies but will do it if their base pressures them hard enough. Other people think that Democratic Party elites are actually very ideologically liberal and would intrinsically like to implement ambitious reforms on the environment and other issues, but are stymied by right-wing and centrist political forces. AFAICT the second theory is much more accurate, and David Shor (the leftist data whiz) seems to agree.
I hope this does not come across as too negative, since I am glad Giving Green exists; I just think this recommendation is a mistake.
Lethal autonomous weapons systems are an early test for AGI safety, arms race avoidance, value alignment, and governance
OK, so this makes sense, and in my writeup I argued a similar thing from the point of view of software development. But it means that banning AWSs altogether would be harmful, as it would involve sacrificing this opportunity. We don't want to lay the groundwork for a ban on AGI; we want to lay the groundwork for safe, responsible development. What you actually suggest, contra some other advocates, is to prohibit certain classes but not others… I'm not sure whether that would be helpful or harmful in this dimension. It certainly would be helpful if we simply worked to ensure higher standards of safety and reliability.
I’m skeptical that this is a large concern. Have we learned much from the Ottawa Treaty (which technically prohibits a certain class of AWS) that will help us with AGI coordination? I don’t know. Maybe.
Seeking to govern deeply unpopular AWSs (which also presently lack strong interest groups pushing for them) provides the easiest possible opportunity for a “win” in coordination amongst military powers.
I don't think this is true at all. Defense companies could support AWS development, and the overriding need for national security could be a formidable force that manifests in domestic politics in a variety of ways. Surely it would be easier to achieve wins on coordination over things like civilian AI, supercomputing, internet connectivity, or many other tech governance issues which affect military (and other) powers?
Compared to other areas of military coordination among military powers, I guess AI weapons look like a relatively easy area right now but that will change in proportion to their battlefield utility.
While these concerns are not foremost from the perspective of overall expected utility, for these and other reasons we believe that delegating the decision to take a human life to machine systems is a deep moral error, and doing so in the military sets a terrible precedent.
I thought your argument here was just that we need to figure out how to implement autonomous systems in ways that best respond to these moral dilemmas, not that we need to avoid them altogether. AGI/ASI will almost certainly be making such decisions eventually, right? We better figure it out.
In my other post I had detailed responses to these issues, so let me just say briefly here that the mere presence of a dilemma in how to design and implement an AWS doesn’t count as a reason against doing it at all. Different practitioners will select different answers to the moral questions that you raise, and the burden of argument is on you to show that we should expect practitioners to pick wrong answers that will make AWSs less ethical than the alternatives.
Lethal autonomous weapons as WMDs
At this point, it’s been three years since FLI released their slaughterbots video, and despite all the talk of how it is cheap and feasible with currently available or almost-available technology, I don’t think anyone is publicly developing such drones—suggesting it’s really not so easy or useful.
A mass drone swarm terror attack would be limited by a few things. First, distances. Small drones don't have much range, so if they are released from one or a few shipping containers, the vulnerable area will be limited. These $100 micro drones have a range of only around 100 meters. The longest-range consumer drones apparently go 1-8 km but cost several hundred or several thousand dollars. Of course you could do better if you optimize for range, but these slaughterbots cannot be optimized for range; they must have many other features, like a military payload, autonomous computing, and so on.
Covering these distances will take time. I don't know how fast these small drones are supposed to go—is 20 km/h a good guess, taking into account buildings posing obstacles? If so, then it will take half an hour to cover a 10-kilometer radius. If the drones are going to start attacking immediately, they will make a lot of noise (from those explosive charges going off) which will alert people, and pretty soon alarm will spread via phones and social media. If they are going to loiter until the swarm is dispersed, then people will see the density of drones and still be alerted. Specialized sensors or crowdsourced data might also be used to automatically detect unusual upticks in drone density and send an alert.
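As a quick sanity check on those numbers, here is a back-of-envelope sketch; the speed and radius are the assumed values from above, not sourced figures:

```python
# Back-of-envelope transit time for a drone swarm from a single release point.
# Both parameter values are assumptions for illustration, not sourced figures.

drone_speed_kmh = 20.0      # assumed effective speed amid urban obstacles
dispersal_radius_km = 10.0  # assumed radius covered from one shipping container

transit_time_min = dispersal_radius_km / drone_speed_kmh * 60
print(f"Time to reach the edge of the area: {transit_time_min:.0f} minutes")
# ~30 minutes: ample time for alarm to spread by phone and social media
```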
So if the adversary has a single dispersal point (like a shipping container) then the amount of area he can cover is fundamentally pretty limited. If he tries to use multiple dispersal points to increase area and/or shorten transit time, then logistics and timing get complicated. (Timing and proper dispersal will be especially difficult if a defensive EW threat prevents the drones from listening to operators or each other.) Either way, the attack must be in a dense urban area to maximize casualties. But few people are actually outside at any given time. Most are either in a building, in a car or public transport, even during rush hour or lunch break. And for every person who gets killed by these drones, there will be many other people watching safely through car or building windows who can see what is going on and alert other people. So people’s vulnerability will be pretty limited. If the adversary decides to bring large drones to demolish barriers then it will be a much more expensive and complex operation. Plus, people only have to wait a little while until the drones run out of energy. The event will be over in minutes, probably.
If we imagine that drone swarms are a sufficiently large threat that people prepare ahead of time, then it gets still harder to inflict casualties. Sidewalks could have light coverings (also good for shade and insulation); people could carry helmets, umbrellas, or cricket bats; but most of all, people would just spend more time indoors. It's not realistic to expect this in an ordinary peacetime scenario, but people will be quite adept at doing this during military bombardment.
Also, there are options for hard countermeasures which don’t use technology that is more complicated than that which is entailed by these slaughterbots. Fixtures in crowded areas could shoot anti-drone munitions (which could be less lethal against humans) or launch defensive drones to disable the attackers.
Now, obviously this could all change as drones get better. But defensive measures including defensive drones could improve at the same time.
I should also note that the idea of delivering a cheap deadly payload like toxins or a dirty bomb via shipping container has been around for a while, yet no one has carried it out.
Finally, an order for hundreds of thousands of drones, designed as fully autonomous killing machines, is quite industrially significant. It's just not something that a nonstate actor can pull off. And the idea that the military would directly construct mass murder drones and then lose them to terrorists is not realistic.
The unfortunate flip-side of these differences, however, is that anti-personnel lethal AWSs are much more likely to be used. In terms of “bad actors,” along with the advantages of being safe to transport and hard to detect, the ability to selectively attack particular types of people who have been identified as worthy of killing will help assuage the moral qualms that might otherwise discourage mass killing.
I don’t think the history of armed conflict supports the view that people become much more willing to go to war when their weapons become more precise. After all the primary considerations in going to war are matters of national interest, not morality. If there is such a moral hazard effect then it is small and outweighed by the first-order reduction in harm.
Autonomous WMDs would pose all of the same sorts of threats that other ones do,[12]
Just because drones can deploy WMDs doesn't mean they are anything special—you can also combine chem/bio/nuke weapons with tactical ballistic missiles, with hypersonics, with torpedoes, with bombers, etc.
Lethal autonomous weapons as destabilizing elements in and out of war
I stand by the point in my previous post that it is a mistake to conflate a lower threshold for conflict with a higher (severity-weighted) expectation of conflict, and military incidents will be less likely to escalate (ceteris paribus) if fewer humans are among the initial losses.
Someone (maybe me) should take a hard look at these recent arguments you cite claiming increases in escalation risk. The track record for speculation on the impacts of new military tech is not good so it needs careful vetting.
A large-scale nuclear war is unbelievably costly: it would most likely kill 1-7Bn in the first year and wipe out a large fraction of Earth's economic activity (i.e. of order one quadrillion USD or more, a decade's worth of world GDP). Some current estimates of the likelihood of global-power nuclear war over the next few decades range from ~0.5-20%. So just a 10% increase in this probability, due to an increase in the probability of conflict that leads to nuclear war, costs in expectation ~500K − 150m lives and ~$0.1-10Tn (not counting huge downstream life-loss and economic losses).
The mean expectations are closer to the lower ends of these ranges.
Currently, 87,000 people die in state-based conflicts per year. If automation cuts this by 25% then in three decades it will add up to 650k lives saved. That’s still outweighed if the change in probability is 10%, but for reasons described previously I think 10% is too pessimistic.
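To make the comparison explicit, here is a rough sketch of the arithmetic; the conflict-death figures are the ones above, and the nuclear-war ranges are those quoted from the post:

```python
# Rough expected-value comparison between lives saved by automation in
# conventional conflict and expected lives lost from added nuclear risk.

conflict_deaths_per_year = 87_000  # current annual deaths in state-based conflicts
assumed_reduction = 0.25           # assumed fractional reduction from automation
years = 30

lives_saved = conflict_deaths_per_year * assumed_reduction * years
print(f"Lives saved over {years} years: {lives_saved:,.0f}")  # ~652,500

# Expected deaths from a 10% relative increase in nuclear-war probability,
# using the quoted ranges: P(war) ~0.5-20%, deaths ~1-7 billion.
relative_increase = 0.10
for p_war, deaths in [(0.005, 1e9), (0.20, 7e9)]:
    expected_loss = relative_increase * p_war * deaths
    print(f"P(war)={p_war:.1%}, deaths={deaths:.0e}: expected loss {expected_loss:,.0f}")
# -> ~500,000 to ~140,000,000, which brackets the ~650,000 lives saved
```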
The third is simply that this is "somebody else's problem," and low-impact relative to other issues to which effort and resources could be devoted.[21] We've argued above against all three positions: the expected utility of widespread autonomous weapons is likely to be highly negative (due to increased probability of large-scale war, if nothing else), the issue is addressable (with multiple examples of past successful arms-control agreements), currently tractable if difficult, and success would also improve the probability of positive results in even more high-stakes arenas including global AGI governance.
At a minimum, to address #3, I think advocacy on AWSs should be compared to advocacy on other new military tech, like hypersonics and AI-enabled cyber weapons, which come with their own fair share of similar worries.
We leave out disingenuous arguments against straw men such as “But if we give up lethal autonomous weapons and allow others to develop them, we lose the war.” No one serious, to our knowledge, is advocating this – the whole point of multilateral arms control agreements is that all parties are subject to them.
If you stigmatize them in the Anglosphere popular imagination as a precursor to a multilateral agreement, then that’s basically what you’re doing.
I would like to again mention the Ottawa Treaty. I don't know much about it, but it seems like a rich subject to explore for lessons that can be applied to AWS regulation.
I find this whole genre of post tedious and not very useful. If you think climate change is a good cause area, just write an actual cause prioritization analysis directly comparing it to other cause areas, and show how it’s better! If that’s beyond your reach, you can take an existing one and tweak it. This reads like academic turf warring, a demand that your cause area should get more prestige, instead of a serious attempt to help us decide which cause areas are actually most important.
1) There is a lack of evidence for the more severe impacts of climate change, rather than evidence that the impacts will not be severe.
OK, but I don't know that anyone here was previously assuming the impacts will definitely not be severe. The EA community has long recognized the risk of more severe impacts. So this doesn't seem like a point that challenges what we currently believe.
One of the central ideas in effective altruism is that some interventions are orders of magnitude more effective than others. There remain huge uncertainties and unknowns which make any attempt to compute the cost effectiveness of climate change extremely challenging. However, the estimates which have been completed so far don’t make a compelling case that mitigating climate change is actually order(s) of magnitude less effective compared to global health interventions, with many of the remaining uncertainties making it very plausible that climate change interventions are indeed much more effective.
I haven't read those previous posts you've written, but the burden of argument is on showing that a cause is effective, not on proving that it's ineffective. We have many causes to choose from, and the Optimizer's Curse means we must focus on ones where we have pretty reliable arguments. Merely speculating "what if climate change is worse than the best evidence suggests?" does nothing to show that we've neglected it. It just shows that further cause prioritization analysis could be warranted.
The EA importance, tractability, neglectedness (ITN) framework discounts climate change because it is not deemed to be neglected (e.g. scoring 2⁄12 on 80K Hours). I have previously disagreed with this position because it ignores whether the current level of action on climate change is anywhere close to what is actually required to solve the problem (it’s not).
This criticism doesn’t make sense to me. The mere fact that a problem will be unsolved doesn’t mean it’s more important for us to work on it. What matters is how much we can actually accomplish by trying to solve it.
The 80K Hours problem profile makes no mention of the concept of a carbon budget—the amount of carbon which we can emit before we are committed to a particular level of warming.
That’s fine. Marginal/social cost of carbon is the superior way to think about the problem.
4) EA often ignores or downplays the impact of mainstream climate change, focusing on the tail risk instead
I've seen EAs talk about 'mainstream' costs many times. GWWC's early analysis of climate change did this in detail. In any case, my estimate of the long-term economic costs of climate change (detailed writeup in Candidate Scoring System: http://bit.ly/ea-css) aggregates over the various scenarios.
5) EA appears to dismiss climate change because it is not an x-risk
This phrasing suggests to me that you didn't read, or perhaps don't care about, what is actually in many of the links you're citing. We do not believe that climate change is irrelevant because it's not an x-risk. We do, however, believe that the arguments in favor of mitigating x-risks do not apply to climate change. So that provides one reason to prioritize x-risks over climate change. This is clearly a correct conclusion, and you haven't provided arguments to the contrary.
6) EA is in danger of making itself a niche cause by loudly focusing on topics like x-risk
If you think that people will like EA more when they see us addressing climate change, why don't you highlight the many examples of EAs actually addressing climate change, instead of writing yet another post (we've had many) accusing us of neglecting it?
7) EA tries to quantify problems using simple models, leading to undervaluing of action on climate change
Other problems have complex, far-reaching negative consequences too, so it’s not obvious that simplistic modeling leads to an under-prioritization of climate change. It is very easy to think of analogous secondary effects for things like poverty.
In any case, estimating the damages of climate change to the human economy has already been addressed by multiple economic meta-analyses. Estimating the short- and medium-term deaths has been done by GWWC. Estimating the impacts on wildlife is generally sidelined because we have no idea whether they are net positive or net negative for wild animal welfare.
Global health interventions have a climate footprint, which I’ve never seen accounted for in EA cost effectiveness calculations.
I briefly addressed it in Candidate Scoring System, and determined that it was very small. If you look at CO2 emissions per person and compare it to the social cost of carbon, you can see that it’s not much for a person in the United States, let alone for people in (much-lower-emissions) developing countries.
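As a hedged illustration of that comparison (the per-capita emissions and the social cost of carbon below are round assumed values for the sketch, not the figures from Candidate Scoring System):

```python
# Illustrative climate externality of one person-year of emissions,
# under assumed round-number values.

assumed_scc_usd_per_tonne = 50.0  # assumed social cost of carbon, USD/tCO2

assumed_emissions_tco2_per_year = {
    "United States": 16.0,      # assumed per-capita emissions
    "low-income country": 0.3,  # assumed per-capita emissions
}

for place, tonnes in assumed_emissions_tco2_per_year.items():
    externality = tonnes * assumed_scc_usd_per_tonne
    print(f"{place}: ~${externality:,.0f} of climate damage per person-year")
# Even the US figure (~$800/yr) is modest, and the developing-country
# figure (~$15/yr) is tiny next to the benefit of a life-saving intervention.
```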
Climate change is a problem which is getting worse with time and is expected to persist for centuries. Limiting warming to a certain level gets harder with every year that action is not taken. Many of the causes compared by EA don’t have the same property. For example, if we fail to treat malaria for another ten years, that won’t commit humanity to live with malaria for centuries to come. However, within less than a decade, limiting warming to 1.5C will become impossible.
Climate change being expected to persist for centuries is conditional upon the absence of major geoengineering. But we could quite plausibly see that in the later 21st century or anytime in the 22nd century.
Failing to limit warming to a certain level is a poor way of defining the problem. If we can't stay under 1.5C, we might stay under 2.0C, which is not that much worse. The right way to frame the problem is to estimate how much accumulated damage will be caused by some additional GHGs hanging around the atmosphere for, probably, a century or more. That is indeed a long-term cost.
But other cause areas also have major long-run impacts. There is plenty of evidence and arguments for long-run benefits of poverty relief, health improvements and economic growth.
10) Case study: Climate is visibly absent or downplayed within some key EA publications and initiatives
Pick another cause area that’s currently highlighted, compare it to climate change, and show how climate change is a more effective cause area.
There is a lot of guesswork involved here. How much would it cost for someone, like the CEA, to run a survey to find out how popular perception differs depending on these kinds of names? It would be useful to many of us who are considering branding for EA projects.
I don't have any arguments over cancel culture or anything general like that, but I am a bit bothered by a view that you and others seem to have. I don't consider Robin Hanson an "intellectual ally" of the EA movement. I've never seen him publicly praise it or make public donation decisions, but he has claimed that do-gooding is controlling and dangerous, that altruism is all signaling with selfish motivations, that we should just save our money and wait for some unspecified future date to give it away, and that poor faraway people are less likely to exist according to simulation theory, so we should be less inclined to help them. On top of that, he made some pretty uncharitable statements about EA Munich and CEA after this affair. And some of his pursuits suggest that he doesn't care if he turns himself into a super controversial figure who brings negative attention to EA by association. These things can be understandable on their own; you can rationalize each one. But put it all together and it paints a picture of someone who basically doesn't care about EA at all. It just happens that he was big in the rationalist blogosphere and lots of EAs (including me) think he's smart in some ways and has some good ideas. He's just here for the ride; we don't owe him anything.
I'm definitely not trying to character-assassinate or 'cancel' him. I'm just saying that he only deserves as much community respect from us as any other decent academic does; we shouldn't give him the kind of special anti-cancelling loyalty that we would reserve for people who have really worked as allies for us.