If someone were looking to work for OPP, would an honours* or master's program be more beneficial than an undergraduate degree alone?
Are there particular questions or areas that an honours/master's research project could focus on that would be directly helpful to OPP, or that would develop the right kinds of skills (especially in economics, philosophy, or cognitive science)?
(“Honours” in Australia is a 1 year research/coursework program)
Saved these all to pocket, thanks for the recommendations!
Digging into this a bit, I may have gotten the original argument for nuclear wrong: it does seem like some countries would struggle to source their energy from renewables due to space constraints (arguably less of a problem in Australia). “I’m not even sure it’s physically possible with 100% renewables… if you were to try and just replace oil in a country like Korea or Japan, so a densely populated country without huge amounts of spare land, you have to take up a significant proportion of the entire nation with solar panels… In the UK… if you want to replace our oil consumption, you’d have to cover over one and a half times the size of Wales with solar just for oil; never mind about decarbonizing the electricity grid and all the rest of it.”—Mark Lynas on the 80,000 Hours podcast
Thanks, I’ve found this helpful (if a little embarrassing)!
I think my (updated based on the comments so far) conclusion is the same as yours!
I’m all for pricing in carbon and sensible policy that regulates in proportion to our best estimate of the risk!
Thank you! I really appreciate the encouragement!
Thanks, it looks like you’ve put a lot of effort into summarising this information (it actually looks better and higher effort than my original post, oop).
Thanks for the post; this seems like a really important contribution! [Caveat: I am not at all an expert on this and just spent some time googling.] Producing snake antivenom actually requires milking venom from a snake, and I wonder how much this contributes to the high cost ($55–$640) of antivenom. I wonder if R&D would be a better investment, especially given the potentially high storage and transport costs for antivenom (see below). It would be interesting to see someone investigate this more thoroughly.
Storage costs are pretty low in the cost-effectiveness estimate you cite, but it seems pretty plausible to me that storage and transportation costs would be much higher if you wanted to administer antivenom at smaller clinics closer to the victims of snake bites. The cost was based on a previous estimate, which says: “The cost of shipping from abroad where the antivenoms are manufactured, transportation within Nigeria and freezing of antivenom (including use of supplementary diesel power electric generators in addition to national power grid) is estimated at N3,000 ($18.75) from prior experience and expert opinion. But it was assumed that appropriate storage facilities already exist at the local level through immunization/drug services and that no additional capital investment would be required to adequately store the antivenom in the field”. I’m not sure exactly what facilities are required or how expensive they would be, but this seems like it could be an important consideration.
Brown NI (2012) Consequences of neglect: analysis of the sub-Saharan African snake antivenom market and the global context. PLOS Neglected Tropical Diseases 6: e1670.
 Hamza M, Idris MA, Maiyaki MB, Lamorde M, Chippaux JP, et al. (2016) Cost-Effectiveness of Antivenoms for Snakebite Envenoming in 16 Countries in West Africa. PLOS Neglected Tropical Diseases 10(3): e0004568. https://doi.org/10.1371/journal.pntd.0004568
 Habib AG, Lamorde M, Dalhat MM, Habib ZG, Kuznik A (2015) Cost-effectiveness of Antivenoms for Snakebite Envenoming in Nigeria. PLOS Neglected Tropical Diseases 9(1): e3381. https://doi.org/10.1371/journal.pntd.0003381
This is great!
Recently, I was reading David Thorstad’s new paper “Existential risk pessimism and the time of perils”. In it, he models the value of reducing existential risk under a range of different assumptions.
The headline results are that 1) on the most plausible assumptions, existential risk reduction is not overwhelmingly valuable: it may still be quite valuable, but it probably doesn’t swamp all other cause areas; and 2) thinking that extinction is more likely tends to weaken the case for existential risk reduction rather than strengthen it.
One of the results struck me as particularly interesting; I call it the repugnant solution:
If we could reduce existential risk to 0% per century across all future centuries, that act would be infinitely valuable, even if the initial risk was already absolutely tiny. It would therefore be better than basically anything else we could do.
Perhaps, in a Pascalian way, if we think there is any tiny chance that some particular action leads to a permanent reduction of existential risk to zero, that act too is infinitely valuable in expectation, and everything breaks.
This holds even if we lower the value of each century from “really amazingly great” to “only just net positive”.
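The arithmetic behind this result can be sketched in a few lines. Under the simple model the total expected value of the future is the value of each century weighted by the probability civilisation survives long enough to reach it: with any constant positive per-century risk r, the series converges to roughly v/r, but with risk exactly zero it grows without bound as the horizon extends. (The numbers below are made up purely for illustration; this is my own toy rendering, not the model from the paper.)

```python
def total_expected_value(v_per_century, risk_per_century, horizon):
    """Sum of per-century value over `horizon` centuries, where each
    century's value is discounted by the probability civilisation
    survives every earlier century."""
    total = 0.0
    survival = 1.0
    for _ in range(horizon):
        total += v_per_century * survival
        survival *= 1.0 - risk_per_century
    return total

# With any positive per-century risk, the series converges (≈ v / r):
print(total_expected_value(1.0, 0.01, 100_000))  # ≈ 100, no matter how long the horizon
# With risk reduced to exactly zero, value just grows linearly with the horizon,
# so over infinitely many centuries it is unbounded:
print(total_expected_value(1.0, 0.0, 100_000))
```

Even shrinking the value of each century to something barely positive only scales the finite v/r figure down; it is driving r to exactly zero that produces the infinity.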
This is really great to see!
I think economic growth is rated too highly by this framework. It gets a very high rating on the first criterion because many organisations think it’s worth considering, but none of them (to my knowledge) rate it as their top priority, or even a particularly high one. My intuition is that it wouldn’t score so highly if the criterion were importance rather than consensus that it’s one of the issues worth considering; and isn’t importance what matters here?
Here are some articles I think would make good scripts (I’ll also be submitting one script of my own). Summaries of the following papers:
The Epistemic Challenge to Longtermism
A Paradox for Tiny Probabilities and Enormous Values
The Case for Strong Longtermism
Forecasting transformative AI: the “biological anchors” method in a nutshell
Are we living at the hinge of History?
In defence of fanaticism
Longtermist Institutional Reform
Doomsday Rings Twice
Asymmetry, Uncertainty, and the Longterm
Simulation in Expectation
Moral Uncertainty about population axiology
Existential risk pessimism and the time of perils
I’d also suggest the following papers which I haven’t seen a summary of:
The Potato’s Contribution to Population and Urbanization: Evidence from a Historical Experiment
A Bayesian Truth Serum for Subjective Data
Improving Judgments of Existential Risk: Better Forecasts, Questions, Explanations, Policies
The Parliamentary Approach to Moral Uncertainty
Is Power-Seeking AI an Existential Risk?
I’d also suggest all of WWOTF’s supplementary materials, especially Significance, Persistence and Contingency.
I am writing these 8 summaries; message me if you want to see them early.
Thanks, this is really helpful information about trusts and the 4% rule!

On self-trust: a common pattern might be that when you’re young, you’re ‘idealistic’ and want to do things like donate, and when you’re older, you feel like spending your money (if you have it) in ways that might not make you particularly happy. I might even decide I would rather give it all to my kids (if I have some). This makes me think there’s a good chance I won’t donate it later if I haven’t pre-committed.

On safety: I am from Australia, so my context is probably quite different from many others’. (On the whole, Australia tends to look after you if you get severely injured or run entirely out of money, which makes quick access a less pressing consideration for me.) But to the extent that quick access is an important consideration, why not keep a little money easily accessible and put most of it in a trust?
Thanks for pointing this out, the version on the GPI website has been corrected.
Thanks for writing such a thoughtful comment. The post has to reflect the content of the paper, so I’m glad your comment can provide extra context. The post now reflects that the paper was written in 2019, and I plan to address the 30x figure soon.
I really appreciate the feedback!
Great to see attempts to measure impact in such difficult areas. I’m wondering if there’s a problem of attribution that looks like this (I’m not up to date on this discussion):
An organisation like the Future Academy or 80,000 Hours or someone says “look, we probably got this person into a career in AI safety, which has a higher impact, and it cost us $x, so our cost-effectiveness is $x per probable career moved into AI safety”.
The person then does a training program, which says “we trained this person to do good work in AI safety, which allows them to have an impact, and it only cost us $y to run the program, so our cost-effectiveness is $y per impactful career in AI safety”.
The person then goes on to work at a research organisation, which says “we spent $z including salary and overheads on this researcher, and they produced a crucial-seeming alignment paper, so our cost-effectiveness is $z per crucial-seeming alignment paper”.
When you account for this properly, it’s clear that each of these estimates makes the organisation look too cost-effective, because part of the impact (and part of the cost) has to be attributed elsewhere.
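To make the double-counting concrete, here is a minimal sketch with entirely made-up costs and organisation names. Each organisation naively claims the whole unit of impact (one career), so the claims sum to three careers' worth of impact when only one career exists; splitting credit in proportion to cost is just one simple illustrative discounting scheme, not a recommendation.

```python
# Hypothetical numbers: three organisations each spend money on the same
# person, who ultimately produces one unit of impact (one AI safety career).
costs = {"talent_pipeline": 10_000, "training_program": 30_000, "employer": 60_000}
actual_impact = 1.0  # one impactful career

# Naive accounting: every organisation claims the whole unit of impact,
# so the claims add up to three times the impact that actually occurred.
naive_claims = {org: actual_impact for org in costs}
print(sum(naive_claims.values()))  # 3.0 careers claimed, 1.0 career produced

# One simple discounted alternative: split credit in proportion to spend,
# so the attributed impact sums back to the actual impact.
total_cost = sum(costs.values())
shared_claims = {org: actual_impact * c / total_cost for org, c in costs.items()}
print(shared_claims)
```

Under the cost-proportional split, each organisation's cost per attributed career is the same (the pipeline's total cost per career), which shows how much the naive per-organisation figures flatter each stage.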
A few off the cuff thoughts:
It seems there should be a more complicated, discounted measure of impact for each organisation that takes into account the costs incurred by the other organisations in the pipeline.
It certainly could be the case that at each stage the impact is high enough to justify the program at the discounted rate.
This might be a misunderstanding of what you’re actually doing, in which case I would be excited to learn that you (and similar organisations) already accounted for this!
I don’t mean to pick on any organisation in particular, and perhaps no one is actually doing this; it’s just a thought about how these measures could be improved in general.