The Human Diagnosis Project (disclaimer: I currently work there). If successful, it will be a major step toward accurate medical diagnosis for all of humanity.
[Question] Is anyone in EA currently looking at short-term famine alleviation as a possible high-impact opportunity this year?
I’m late to the party with this reply, but I’ll answer as if I’m writing in late 2020.
Yes, I’m more engaged than I was in 2019, and that’s saying something considering that I was pretty engaged in 2019: working at an EA-aligned org (the Human Diagnosis Project), participating in EAG, joining Modeling Cooperation, building other collaborations, writing blog posts, etc.
What changed?
1. The Human Diagnosis Project continues to make headway toward the possibility of (very) significant impact, and my role there has grown substantially in responsibility.
2. During 2020 I systematically pursued knowledge related to some of my key interests (e.g., International Relations and game theory), and this exposure seems to have opened a lot of conceptual doors for me. It substantially increased my belief that I can make significant contributions to EA, and thus increased my motivation.
An update here: This COVID-19 forward triage tool now also allows anyone to get a doctor to look at their particular case for an extremely low fee ($12 USD—though free service is currently available if needed).
COVID-19 Assessment Tool by the Human Diagnosis Project
Thanks for this piece, I thought it was interesting!
A small error I noticed while checking one of the references: the line “For example, France’s GDP per capita is around 60% of US GDP per capita.[7]” incorrectly summarizes the cited material. The figure should be 67%; the 60% figure in the source refers to consumption per person, not GDP per capita. The relevant passage is: “As an example, suppose we wish to compare living standards in France and the United States. GDP per person is markedly lower in France: France had a per capita GDP in 2005 of just 67 percent of the U.S. value. Consumption per person in France was even lower — only 60 percent of the U.S., even adding government consumption to private consumption.”
I believe that regional talent pools could also be another factor in favor of the multiple organization scenario. For example, something I think a lot about is how the USA could really use an institution like the Future of Humanity Institute (FHI) in the long run. In addition to all of the points made in the original post, I think that such an institution would improve the overall health of the ecosystem of “FHI-like research” by drawing on a talent pool that is at least somewhat non-overlapping with that drawn upon by FHI.
I think that the talent pools are at least somewhat distinct because a) crossing borders is often logistically challenging or impossible, depending on the scenario; and b) not all job candidates can relocate to the United Kingdom for a variety of personal reasons.
If anyone is interested in discussing an “FHI-like institution in the USA” further, please get in touch with me, either via direct message or via ben.harack at visionofearth.org.
This line of inquiry (that rebuilding after wars is quite different from other periods of time) is explored in G. John Ikenberry’s After Victory: Institutions, Strategic Restraint, and the Rebuilding of Order After Major Wars. A quick and entertaining summary of the book—and how it has held up since its publication—was written by Ikenberry in 2018: Reflections on After Victory.
While I’m sympathetic to this view (since I held it for much of my life), I have also learned that there are very significant risks to developing this capacity naively.
To my knowledge, one of the first people to talk publicly about this was Carl Sagan, who raised it in his television show Cosmos (1980) and in these publications:
Harris, A., Canavan, G., Sagan, C. and Ostro, S., 1994. The Deflection Dilemma: Use Vs. Misuse of Technologies for Avoiding Interplanetary Collision Hazards.
Ben’s summary:
Their central point is that a system built to defend humanity from natural asteroids would actually expose us to more risk (of anthropogenic origin) than it would mitigate (of natural origin).
Opportunities for misuse of the system depend almost solely on its capability to produce delta-V changes in asteroids (equivalently framed as “response time”). A system capable of ~1 m/s of delta-V would see roughly 100 opportunities for misuse for every legitimate opportunity to defend Earth from an asteroid (a rough illustrative sketch follows this summary).
They say that a high-capability system (capable of deflection with only a few days’ notice) would be imprudent to build at this time.
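To give a rough sense of why delta-V capability and response time are interchangeable framings, here is a back-of-the-envelope sketch. It is my own illustration, not a calculation from Harris et al.; it assumes the common rule of thumb that a velocity change applied a time t before impact shifts the arrival position by roughly 3 times that velocity change times t, and that a successful deflection must move the impact point by about one Earth radius.

```python
# Back-of-the-envelope illustration (my own assumptions, not numbers from the
# paper): a velocity change dv applied a time t before impact shifts the
# asteroid's arrival position by roughly 3 * dv * t (a common rule of thumb
# for along-track deflections), and averting an impact requires shifting the
# impact point by about one Earth radius.

EARTH_RADIUS_M = 6.4e6  # ~1 Earth radius, in meters

def required_delta_v(warning_time_s: float) -> float:
    """Approximate delta-V (m/s) needed to deflect an impactor, given the
    warning time (in seconds) before impact."""
    return EARTH_RADIUS_M / (3 * warning_time_s)

YEAR = 3.15e7  # seconds
DAY = 8.64e4   # seconds

for label, t in [("10 years", 10 * YEAR), ("1 year", 1 * YEAR), ("3 days", 3 * DAY)]:
    print(f"{label:>8}: ~{required_delta_v(t):.2g} m/s")

# Rough output:
# 10 years: ~0.0068 m/s   (millimeters per second suffice with long warning)
#   1 year: ~0.068 m/s
#   3 days: ~8.2 m/s      (a "few days' notice" system needs ~m/s-scale delta-V)
```

On these (assumed) numbers, deflecting a well-tracked asteroid a decade in advance takes only millimeters per second of delta-V, while acting on a few days’ notice takes meters per second, roughly the capability regime the summary above flags as disproportionately open to misuse.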
Sagan, C. and Ostro, S.J., 1994. Dangers of asteroid deflection. Nature, 368(6471), p.501.
Sagan, C., 1992. Between enemies. Bulletin of the Atomic Scientists, 48(4), p.24.
Sagan, C. and Ostro, S.J., 1994. Long-range consequences of interplanetary collisions. Issues in Science and Technology, 10(4), pp.67-72.
Two interesting quotes from the last one:
“There is no other way known in which a small number of nuclear weapons can destroy global civilization.”
“No matter what reassurances are given, the acquisition of such a package of technologies by any nation is bound to raise serious anxieties worldwide.”
More recently, my collaborator Kyle Laskowski and I have reviewed the relevant technologies (and likely incentives) and have come to a somewhat similar position, which I would summarize as: the advent of asteroid manipulation technologies exposes humanity to catastrophic risk; if left ungoverned, these technologies would open the door to existential risk. If governed, this risk can be reduced to essentially zero. (However, other approaches, such as differential technological development and differential engineering projects, do not seem capable of entirely closing off this risk. Governance seems to be crucial.)
So we presented a poster at EAG 2019 SF, Governing the Emerging Risk Posed By Asteroid Manipulation Technologies, where we summarized these ideas. We’re currently expanding this into a paper. If anyone is keenly interested in this topic, reach out to us (contact info is on the poster).
Epistemic status: I don’t have a citation handy for the following arguments, so any reader should consider them merely the embedded beliefs of someone who has spent a significant amount of time studying the solar system and the risks of asteroids.
No, I believe that dark Damocloids will be largely invisible (when they are far away from the sun) even to the new round of telescopes that are being deployed for surveying asteroids. They’re very dark and (typically) very far away.
Luckily, I think the consensus is that they’re only a small portion of the risk. Most of the risk comes from the near-Earth asteroids (NEAs): orbital mechanics gives them many opportunities (~1 per year or so) to strike the Earth, while comets pass through the inner solar system extremely rarely. So as we move toward finding all of the really big NEAs, we move a long way toward knowing about the vast majority of possible “civilization ending” or “mass extinction” events in our near future. A (very) long tail of real risk will remain from objects like the Damocloids, but most of the natural asteroid risk will be addressed once we completely understand the NEAs.
Thanks for taking a look at the arguments and taking the time to post a reply here! Since this topic is still pretty new, it benefits a lot from each new person taking a look at the arguments and data.
I agree completely regarding information hazards. We’ve been thinking about these extensively over the last several months (and consulting with various people who are able to hold us to task about our position on them). In short, we chose every point on that poster with care. In some cases we’re talking about things that have been explored extensively by major public figures or sources, such as Carl Sagan or the RAND Corporation. In other cases, we’re in new territory. We’ve definitely considered keeping our silence on both counts (also see https://forum.effectivealtruism.org/posts/CoXauRRzWxtsjhsj6/terrorism-tylenol-and-dangerous-information if you haven’t seen it yet). As it stands, we believe that the arguments in the poster (and the information undergirding those points) are of pretty high value to the world today and would actually be more dangerous if publicized at a later date (e.g., when space technologies are already much more mature and there are many status quo space forces and space industries that will fight regulation of their capabilities).
If you’re interested in the project itself, or in further discussions of these hazards/opportunities, let me know!
Regarding the “arms race” terminology concern, you may be referring to https://www.researchgate.net/publication/330280774_An_AI_Race_for_Strategic_Advantage_Rhetoric_and_Risks which I think is a worthy set of arguments to consider when weighing whether and how to speak on key subjects. I do think that a systematic case needs to be made in favor of particular kinds of speech, particularly around 1) constructively framing a challenge that humanity faces and 2) fostering the political will needed to show strategic restraint in the development and deployment of transformative technologies (e.g., through institutionalization in a global project). I think information hazards are an absolutely crucial part of this story, but they aren’t the entire story. With luck, I hope to contribute more thoughts along these lines in the coming months.
After reviewing the literature pretty extensively over the last several months for a related project (the risks of human-directed asteroids), it seems to me that there is a strong academic consensus that we’ve found most of the big ones (though definitely not all—and many people are working hard to create ways for us to find the rest). See this graphic for a good summary of our current status circa 2018: https://www.esa.int/spaceinimages/Images/2018/06/Asteroid_danger_explained
Recently, I’ve been part of a small team that is working on the risks posed by technologies that allow humans to steer asteroids (opening the possibility of deliberately striking the Earth). We presented some of these results in a poster at EA Global SF 2019.
At the moment, we’re expanding this work into a paper. My current position is that this is an interesting and noteworthy technological risk that is (probably) strictly less dangerous/powerful than AI, but working on it can still be useful. My reasons include: mitigating a risk that is largely orthogonal to AI is still worthwhile; succeeding at preemptive regulation of a technological risk would improve our ability to do the same for more difficult cases (e.g., AI); and it offers a way to popularize the X-risk concept through a manifestation that is more concrete than the abstract risks from technologies like AI and biotech (most people understand the prevailing theory of the extinction of the dinosaurs and can fairly easily imagine such a disaster in the future).
Factfulness by Hans Rosling is currently my go-to recommendation for the most important single book I could hand to a generic person.
Why do I hold it in such high regard? I think it does a good job of teaching us about the world and about ourselves at the same time. It helps the reader gain knowledge and think more clearly (and so come to more accurate beliefs about the world). It’s also very hopeful, despite tackling head-on some of the darker aspects of our world.
Under “Decision-making and Forecasting” I would add these two:
Superforecasting: The Art and Science of Prediction
Factfulness: Ten Reasons We’re Wrong About the World—and Why Things Are Better Than You Think
(Though Factfulness also touches on numerous other categories in the list.)
Following up on this more than a year later, I can vouch for some but not all of these conclusions based on my experience at the high-impact organization I work for, the Human Diagnosis Project (www.humandx.org).
We’ve found it very difficult to recruit high-quality value-aligned engineers despite the fact that none of the above items really apply to us.
Our software engineering team performs very challenging work all over the stack—including infrastructure, backend, and mobile.
Working here is probably great for career development (in part because we’re on the bleeding edge of numerous technologies and give our engineers exposure to many technologies).
We pay similar salaries to other early-stage startups in Silicon Valley (and New York).
One problem I can identify right now is that I’ve attempted to recruit from the EA community a few times with very limited success. Perhaps I’ve gone about this via the wrong fora or have made other mistakes, but generally the candidates I did find were not good fits for the roles we have to offer.
This problem continues to this day. Given that we don’t have the issues identified above (to my knowledge), my best hypothesis right now is that we’re simply unable to reach the right people in the right way—and I’m not sure how to fix that. If anyone has any particular ideas on this front, I’d love to hear them.
That said, if anyone wants to help us out, we’re still actively recruiting for a host of roles, including a lot of engineering positions. To learn more, take a look at https://www.humandx.org/team
I’ll try to directly answer some of the questions raised.
I’m generally interested in this project. If such a system existed, I’d probably issue certificates for research artifacts (papers, blog posts, software, datasets, etc.) and would advocate for the usage of impact certificates more broadly.
If I were able to reliably buy arbitrary fractions of certificates on an open market, I’d probably do so somewhat often (every several weeks) in order to send signals of value. My personal expenditures would be very small (probably a few hundred dollars per year, unless something significantly changes), but I’d also try to get others involved in a similar way.
As for concerns, I’m very uncertain about my position on the diverging concerns raised and argued by RyanCarey and gwern in this thread. As a creator, I can imagine wanting access to the entirety (or at least the majority) of the value of certificates attached to my work. As an observer of a market, I’d like it to generally remain open for speculation, revaluation, etc. Perhaps I’d be in favor of a system that splits the difference, e.g., via smart contracts that enforce a split of resale royalties (most going to the creator, some going to the prior owner)?
Relatedly, I’d love to see a workable, understandable, and intuitive system for revaluing a certificate as various parties end up owning various fractions of it, bought at differing prices (if such a thing is possible). I can imagine wanting to send a signal that a cert should be valued more highly by buying a small fraction of it for more than the going rate. I may also just be unfamiliar with existing pricing schemes for this kind of fractional ownership; a rough sketch of one possible scheme is below.
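To make the kind of scheme I have in mind more concrete, here is a minimal, purely hypothetical sketch. The names, the 70/30 royalty split, and the “last trade sets the value” rule are placeholders I’m inventing for illustration, not anything that exists or that I’m firmly proposing.

```python
# A minimal, purely hypothetical sketch of fractional impact-certificate
# resales with a royalty split (most to the creator, some to the prior owner)
# and a naive "last trade price" revaluation rule. Names and percentages are
# placeholders for illustration only.

from dataclasses import dataclass, field

CREATOR_ROYALTY = 0.70   # share of resale proceeds routed to the creator (assumed)
SELLER_SHARE = 0.30      # share kept by the prior owner (assumed)

@dataclass
class Certificate:
    creator: str
    holdings: dict = field(default_factory=dict)   # owner -> fraction of the certificate
    last_price_per_unit: float = 0.0               # implied price of the whole certificate

    def issue(self, initial_price_per_unit: float):
        """The creator initially holds the whole certificate."""
        self.holdings = {self.creator: 1.0}
        self.last_price_per_unit = initial_price_per_unit

    def resell(self, seller: str, buyer: str, fraction: float, price: float):
        """Sell `fraction` of the certificate for `price`, splitting the proceeds."""
        assert self.holdings.get(seller, 0.0) >= fraction, "seller lacks that fraction"
        self.holdings[seller] -= fraction
        self.holdings[buyer] = self.holdings.get(buyer, 0.0) + fraction
        # Naive revaluation: the whole certificate is revalued from this
        # trade's implied per-unit price.
        self.last_price_per_unit = price / fraction
        # Royalty split: most of the proceeds go to the creator, the rest to
        # the prior owner (they coincide when the creator is the seller).
        proceeds = {}
        proceeds[self.creator] = proceeds.get(self.creator, 0.0) + CREATOR_ROYALTY * price
        proceeds[seller] = proceeds.get(seller, 0.0) + SELLER_SHARE * price
        return proceeds

    def implied_value(self) -> float:
        """Value of the whole certificate implied by the most recent trade."""
        return self.last_price_per_unit

# Example: signaling a higher valuation by paying above the going rate
# for a small fraction.
cert = Certificate(creator="alice")
cert.issue(initial_price_per_unit=1000.0)
cert.resell(seller="alice", buyer="bob", fraction=0.10, price=150.0)
print(cert.implied_value())  # 1500.0: the small trade revalued the certificate upward
```

The naive revaluation rule makes the signaling move I described possible (a small purchase at a high per-unit price revalues the whole certificate), but it is also easy to game, which is exactly the kind of problem I’d want a more thoughtful scheme to address.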