We truly do live in interesting times
kbog
I moved my comment to an answer after learning that the index was directly funded by an Open Phil grant. You’d do better to repost your reply to me there. Sorry about the confusion.
The Global Health Security Index looks like a misfire. This isn’t directly about performance during the pandemic, but Nuclear Threat Initiative, funded by Open Phil for this purpose (h/t HowieL for pointing this out) and collaborating with the Johns Hopkins Center for Health Security, made the 2019 Global Health Security Index, which seems invalidated by COVID-19 outcomes and may have encouraged actors to make the wrong moves. This ThinkGlobalHealth article describes how its ratings did not predict good performance against the virus. The article relies on official death counts rather than excess mortality, but I made that correction and reached similar results.
Looking through the index, there are some indicators which don’t make sense, like praising countries for avoiding travel restrictions (which is perverse), praising them for stricter ethical regulations on surveillance and clinical trials (which may be ethically justified but is likely to make it harder to fight a pandemic), and praising them for gender equality (a noble sentiment, but not directly relevant to pandemics).
Even cutting some of those dubious measures out, I found the index was not predictive of excess mortality. In general it appears that effective pandemic response is not about preparation and this may have been systematically overlooked by EA efforts and funding recipients in the realm of biorisk.
Some people have also criticized the index for rating China moderately highly on prevention of pathogen release, considering that COVID-19 came from China. But COVID-19 is just one data point of virus emergence or lab leak, and China is a very large country, so I don’t think this criticism is right.
EAs have voted in various elections in the United States. This study adjusted for various factors and found that Republican Party power at the state level was associated with modestly higher amounts of death from COVID-19. Since the majority of EA voters have picked the Democratic Party, this can be taken as something of a vindication. Of course, there are many other issues for deciding your vote besides pandemics, and that study might be wrong. It’s not even peer reviewed.
The difference might be entirely explained by politically motivated differences in social distancing behavior between Democratic and Republican citizens, although if that’s the case it could still somewhat vindicate opposition to the Republican Party.
Also, the study was done before the vaccine rollout; it will be interesting to see a similar analysis from a later date.
I discuss the GHS index at greater length in my answer.
Edit: I’ve reposted this comment as an answer, and am self-downvoting this.
OK, sorry for misunderstanding.
I make an argument here that marginal long run growth is dramatically less important than marginal x-risk. I’m not fully confident in it. But the crux could be what I highlight—whether society is on an endless track of exponential growth, or on the cusp of a fantastical but fundamentally limited successor stage. Put more precisely, the crux of the importance of x-risk is how good the future will be, whereas the crux of the importance of progress is whether differential growth today will mean much for the far future.
I would still ceteris paribus pick more growth rather than less, and from what I’ve seen of Progress Studies researchers, I trust them to know how to do that well.
It’s important to compare with long-term political and social change too. Arguably a higher priority than either effort, but also something that can be indirectly served by economic progress. One thing the progress studies discourse has persuaded me of is that there is some social and political malaise that arises when society stops growing. Healthy politics may require fast nonstop growth (though that is a worrying thing if true).
“EA/XR” is a rather confusing term. Which do you want to talk about, EA or x-risk studies?
It is a mistake to consider EA and progress studies as equivalent or mutually exclusive. Progress studies is strictly an academic discipline. EA involves building a movement and making sacrifices for the sake of others. And progress studies can be a part of that, like x-risk.
Some people in EA who focus on x-risk may have differences of opinion with those in the field of progress studies.
I think I don’t really buy your conceptual logic, as the mitigation obstruction argument is about the degree to which particular solutions will be over- or underestimated relative to their actual value, not about how absolutely good/cheap/fast/etc. they are. Considered through that lens, it’s not clear (at least to me) what to make of distinctions between big and small actions, or easy and hard actions.
Geoengineering is cheap but Halstead argues that it’s not such a bargain as was suggested by earlier estimates.
I fear that we need to do geoengineering right away or we will be locked into never undoing the warming. The problem is that a few countries, like Russia, massively benefit from warming. Once they see that warming and take advantage of the newly opened land, they will see any attempt to artificially lower temperatures as an attack to be met with force, and they have enough fossil fuels to maintain the warm temperatures even if everyone else stops carbon emissions (which they can easily scuttle).
Deleted my previous comment—I have some small doubts and don’t think the international system will totally fail, but some problems along these lines seem plausible to me.
I’m not sure if immediacy of the problem really would lead to a better response: maybe it would lead to a shift from prevention to adaptation, from innovation to degrowth, and from international cooperation to ecofascism. Immediacy could clarify who will be the minority of winners from global warming, whereas distance makes it easier to say that we are all in this together.
At the very least, geoengineering does make the future more complicated, in that on top of the traditional combination of atmospheric uncertainties and emission uncertainties, we have to add uncertainty about how the geoengineering regime will proceed. And most humans don’t do a great job of responding to uncertain problems like this.
But I don’t think we understand these psychological and political dynamics very well. This all reminds me of public health researchers, pre-COVID, theorizing about the consequences of restricting international travel during a pandemic.
I’ll think a bit more on this.
Hm, I suppose I don’t have reason to be confident here. But as I understand it:
Stratospheric aerosol injection removes a certain wattage of solar radiation per square meter.
The additional greenhouse effect from human emissions constitutes only a tiny part of our overall temperature balance, shifting us from, say, 289 K to 291 K. SAI, by contrast, subtracts from nearly the entire energy input from the Sun (except the portion absorbed above the stratosphere). So maybe SAI could be slightly more effective in terms of watts per square meter or CO2 tonnes offset under a high-emissions scenario, but it would be a very small difference.
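The rough scale of that claim can be sanity-checked with a back-of-envelope Stefan–Boltzmann calculation. This is a deliberately crude sketch that treats the surface as a blackbody and ignores atmospheric structure; the 289 K and 291 K figures are just the illustrative numbers above, not real climatology:

```python
# Back-of-envelope check: how big is a ~2 K warming relative to the
# total radiative balance? (Blackbody approximation; illustrative only.)
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_flux(temp_k):
    """Blackbody flux at a given temperature, in W/m^2."""
    return SIGMA * temp_k ** 4

baseline = radiated_flux(289)          # roughly 395 W/m^2
warmed = radiated_flux(291)            # roughly 407 W/m^2
extra_fraction = (warmed - baseline) / baseline

print(f"extra flux: {warmed - baseline:.1f} W/m^2 "
      f"({100 * extra_fraction:.1f}% of the baseline balance)")
```

On these toy numbers the 2 K shift corresponds to only a few percent of the overall flux, which is the sense in which the anthropogenic increment is small compared to the total solar input that SAI acts on.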
Would like to see an expert chime in here.
Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal
Hi Tommaso,
If I think about the poor record the International Criminal Court has of bringing war criminals to justice, and the fact that the use of cluster bombs in Laos or Agent Orange in Vietnam did not lead to major trials, I am skeptical about whether anyone would be held accountable for crimes committed by LAWs.
But the issue here is whether responsibility and accountability are handled worse with LAWs than with normal killing. You need a reason to be more skeptical about crimes committed with LAWs than about crimes committed without them. That there is so little accountability for crimes committed without LAWs even suggests that we have nothing to lose.
What evidence do we have that international lawmaking follows suit when a lethal technology is developed, as the writer assumes it will?
I don’t think I make such an assumption? Please remind me (it’s been a while since I wrote the essay). You may be thinking of a part where I assume that countries will figure out safety and accountability for their own purposes: they will figure out how to hold people accountable for bad robot weapons just as they hold people accountable for bad equipment and bad human soldiers, without reference to international law.
However, in order for the comparison to make more sense I would argue that the different examples should be weighted according to the number of victims.
I would agree if we had a greater sample of large wars, otherwise the figure gets dominated by the Iran-Iraq War, which is doubly worrying because of the wide range of estimates for that conflict. You could exclude it and do a weighted average of the other wars. Either way, seems like civilians are still just a significant minority of victims on average.
Intuitively to me, the case for LAWs increasing the chance of overseas conflicts such as the Iraq invasion is a very relevant one, because of the magnitude of civilian deaths.
Yes, this would be similar to what I say about the 1991 Gulf War—the conventional war was relatively small but imposed large indirect costs, mostly on civilians. Then, “One issue with this line of reasoning is that it must also be applied to alternative practices besides warfare...” For Iraq in particular, while the 2003 invasion certainly did destabilize it, I also think it’s a mistake to think that things would have been decent otherwise (imagine Iraq turning out like Syria in the Arab Spring; Saddam had already committed democide once, and he could have done it again if Iraqis acted on their grievances with his regime).
From what the text says I do not see why the conclusion is that banning LAWs would have a neutral effect on the likelihood of overseas wars, given that the text admits it is an actual concern.
My ‘conclusion’ paragraph states it accurately, with the clarification of ‘conventional conflicts’ versus ‘overseas counterinsurgency and counterterrorism’.
I think the considerations about counterinsurgency operations being positive for the population are at the very least biased toward favoring Western intervention.
Well, the critic of AI weapons needs to show that such interventions are negative for the population. My position in this essay was that it’s unclear whether they are good or bad. Yes, I didn’t give comprehensive arguments in this essay. But since then I’ve written about these wars in my policy platform where you can see me seriously argue my views, and there I take a more positive stance (my views have shifted a bit in the last year or so).
The considerations about China and the world order in this section seem simplistic and rely on many assumptions.
Once more, I’ve got you covered! See my more recent essay here about the pros and cons (predominantly cons) of Chinese international power. (Yes, it’s high time I rewrote and updated this article.)
But the answers to a survey like that wouldn’t be easy to interpret. We should give the same message under different organization names to group A and group B and see which group is then more likely to endorse the EA movement or commit to taking a concrete altruistic action.
No I agree on 2! I’m just saying even from a longtermist perspective, it may not be as important and tractable as improving institutions in orthogonal ways.
Review: “Why It’s OK To Ignore Politics” by Christopher Freiman
I think it’s really not clear that reforming institutions to be more longtermist has an outsized long run impact compared to many other axes of institutional reform.
We know what constitutes good outcomes in the short run, so if we can design institutions to produce better short run outcomes, that will be beneficial in the long run insofar as those institutions endure into the long run. Institutional changes are inherently long-run.
I saw OSINT results frequently during the Second Karabakh War (October 2020). The OSINT evidence of war crimes from that conflict has been adequately recognized and you can find info on that elsewhere. Beyond that, it seems to me that certain things would have gone better if certain locals had been more aware of what OSINT was revealing about the military status of the conflict, as a substitute for government claims and as a supplement to local RUMINT (rumor intelligence). False or uncertain perceptions about the state of a war can be deadly. But there is a language barrier and an online/offline barrier, so it is hard to get that intelligence seen and believed by the people who need it.
Beyond that, OSINT might be used to actually influence the military course of conflicts if you can make a serious judgment call about which side deserves help, although this partisan effort wouldn’t really fit the spirit of “civilian” OSINT. Presumably the US and Russia already know the location of each other’s missile silos, but if you look for things that are less important, or part of a conflict between minor groups who lack good intelligence services, then you might produce useful intelligence. For a paramount example of dual-use risks: during this war, someone geolocated Armenia’s Iskander missile base and shared it on Twitter, and it seems unlikely to me that anyone in Azerbaijan had found it already. I certainly don’t think it was responsible of him, and Azerbaijan did not strike the base anyway, but it suggests that there is a real potential to influence conflicts. You also might feed that intelligence to the preferred party secretly rather than openly, though that definitely violates the spirit of civilian OSINT. Regardless, OSINT may indeed shine when it is rushed in the context of an active military conflict where time is of the essence, errors notwithstanding. Everyone likes to make fun of Reddit for the Boston Bomber incident, but to me it seems like the exception that tests the rule. While there were a few OSINT conclusions during the war which struck me as dubious, never did I see evidence that someone’s geolocation later turned out to be wrong.
Also, I don’t know if structure and (formal) training are important. Again, you can pick on those Redditors, but lots of other independent open source geeks have been producing reliable results. Imposing a structure takes away some of the advantages of OSINT. That’s not to say that groups like Bellingcat don’t also do good work, of course.
To me, OSINT seems like a crowded field due to the number of people who do it as a hobby. So I doubt that the marginal person makes much difference. But since I haven’t seriously tried to do it, I’m not sure.
There is a lot of guesswork involved here. How much would it cost for someone, like the CEA, to run a survey to find out how popular perception differs depending on these kinds of names? It would be useful to many of us who are considering branding for EA projects.
I don’t think most people would consider prevention a type of preparation. EA-funded biorisk efforts presumably did not consider it that way. And more to the point, I do not want to lump prevention together with preparation because I am making an argument about preparation that is separate from prevention. So it’s not about just semantics, but precision on which efforts did well or poorly.
Conventional wisdom is worth little when it is the product of armchair speculation rather than experience. If people live through half a dozen pandemics and still have that conventional wisdom then we can have a different conversation.
Wouldn’t preparation seem to be a part of the story of COVID-19 outcomes given a similarly superficial level of inquiry?
Forget semantics. Did EA funding efforts and recipients design systems that made good decisions about COVID-19? Did anyone who talked about “pandemic preparation” pre-2020 use the term to encompass the design of systems like that?
Well you can’t just define preparation as “good plans”, that’s a no-true-Scotsman argument. If you have some way of ensuring that your preparation will be good preparation then it’s a different story.
That isn’t necessarily due to physical preparation, it could easily be intangible changes in the culture and political system, granting that there is in fact a causal connection as opposed to East Asia and Australasia just being better at this stuff.
iirc there was a study which found that American cities that lived through the Spanish Flu (1918–19) suffered fewer deaths early in the COVID-19 outbreak. I cannot find the study now, but if it’s really true then that would be hard to explain through preparation.
I’m not sure exactly what anti-fragile means, but that doesn’t sound right: decision systems in the US/UK, for instance, didn’t fall apart. They were simply apathetic and unresponsive to good ideas, just as they are for mundane problems that aren’t big crises. In other words, they calmly kept operating the way they always do.
I don’t have reason to believe that there is a positive interaction between good leadership and good preparation. Maybe good preparation and good leadership act more as substitutes for each other than as complements.
Not sure it is useful to say ‘prevention helps’, since we cannot wish away viruses; we can only take measures to attempt to prevent viruses from emerging. While those measures may be cost-effective, that is a different conversation, to which I have nothing to contribute.
I would summarize my view by saying that smart actions by government and civil society in the moment make the most difference, and if plans and preparation are to be helpful they will have to be done in careful ways to avoid the failures documented during COVID-19.