Hi Cari,
I would certainly agree that they are quite similar, and I think Good Ventures is closer to how we would do things than almost all existing foundations, and tremendously good news for the world.
This makes sense. Nick has worked at GiveWell, and I have been following and interacting with GiveWell since its founding. We share access to much of our respective knowledge bases and surrounding intellectual communities, as well as an interest in effective altruism, so we should expect a lot of overlap in our approaches to solving similar problems.
Growing the EA community’s capabilities and quality of decision-making, pursuing high value-of-information questions about the available philanthropic options, and similar efforts are robustly valuable.
It’s harder for me to pin down differences with GV because of my uncertainty about Good Ventures’ reasoning behind some of its choices. Posting conversations makes it easier to see what information GV has access to, but I feel I know a lot more about GW’s internal thinking than GV’s.
Relative to GiveWell, I think we may care more about protecting the long-term trajectory of civilization relative to short-term benefits. And, speaking for myself at least, I am more skeptical that optimizing for short-term QALYs or similar measures will turn out to be very close to optimizing for long-term metrics. I’m not sure about GV’s take on those questions.
At the tactical level, and again speaking for myself and not for Nick, based on my current state of knowledge I don’t see how GV’s ratio of learning-by-granting to granting that funds direct learning efforts is optimal for learning.
For example, GiveWell and Good Ventures now provide the vast majority of funding for AMF. I am not convinced that moving from $15 MM to $20 MM of AMF funding provides information close in value to what could be purchased if one spent that additional $5 MM directly on information-gathering. GiveWell’s main argument on this point has been that, until recently, it was unable to hire using cash, but it seems to me that existing commercial and other services can be used to buy valuable knowledge.
I’ll mention a few examples that come to mind. ScienceExchange, a marketplace that connects funders and scientific labs willing to take on projects for hire, is being used by the Center for Open Science to commission replications of scientific studies of interest. Polling firms can perform polls and surveys, of relevant experts or of the general public or donors, for hire in a standardized fashion. Consulting firms with skilled generalists or industry experts can be commissioned at market rates to acquire data and perform analysis in particular areas. Professional fundraising firms could have been commissioned to try street fundraising or direct mail and the like for AMF to learn whether those approaches are effective for GiveWell’s top charities.
Also, in buying access to information from nonprofit organizations, it’s not easy for me to understand the relationship between the extent of access/information and the size of the grant, e.g. why make a grant sufficient to fund multiple full-time staff-years in exchange for one staff-day of time? I can see various reasons why one might do this, such as wariness from nonprofits about sharing potentially embarrassing information, compensating for extensive investments required to produce the information in the first place, testing RFMF hypotheses, and building a reputation, but given what I know now I am not confident that the price is right if some of these grants really are primarily about gaining information. [However, the grants are still relatively small compared to your overall resources, so such overpayment, if it is a problem, is not a severe one.]
Zooming out back to the big picture, I’ll reiterate that we are very much on the same page and are great fans of GV’s work.
There are many object-level lines of evidence to discuss, but this is not the place for great detail (I recommend Nick Bostrom’s forthcoming book). One of the most information-dense is a set of surveys sent to the top 100 most-cited individuals in AI (identified using Microsoft’s academic search tool), which yielded a median estimate comfortably within the century, including substantial and non-negligible probability for the next few decades. The results were presented at the Philosophy and Theory of AI conference earlier this year and are on their way to publication.
Expert opinion is not terribly reliable on such questions, and we should probably widen our confidence intervals (extensive research shows that naive individuals give overly narrow intervals), assigning more weight than the experts do to AI arriving surprisingly soon or surprisingly late. We might also try to correct for a possible optimism bias (which would push toward shorter timelines and lower risk estimates).
The surveyed experts also assigned credences in very bad or existentially catastrophic outcomes that, if taken literally, would suggest that AI poses the largest existential risk (although some respondents may have interpreted the question to include comparatively lesser harms).
Extinction-level asteroid strikes, volcanic eruptions, and other natural catastrophes are relatively well-characterized and pose extremely low annual risk based on empirical evidence of past events. GiveWell’s shallow analysis pages discuss several of these, and the edited volume “Global Catastrophic Risks” has more on these and others.
Climate scientists and the IPCC have characterized the risk of conditions threatening human extinction as very unlikely conditional on nuclear winter or severe continued carbon emissions, i.e. these are far more likely to cause large economic losses and death than to permanently disrupt human civilization.
Advancing biotechnology may make artificial diseases an existential threat, if intentionally engineered to cause human extinction by large and well-resourced biowarfare programs, although there is a very large gap between the difficulty of creating a catastrophic pathogen and that of creating a civilization-ending one.
An FHI survey of experts at an Oxford Global Catastrophic Risks conference asked participants to assign credences to various levels of harm from different sources in the 21st century, including over 1 billion deaths and extinction. Median estimates assigned greater credence to human extinction from AI than from conventional threats such as nuclear war or engineered pandemics, but greater credence to casualties of at least 1 billion from the conventional threats.
So the relative importance of AI is greater in terms of existential risk than global catastrophic risk, but seems at least comparable in the latter area as well.