If a project is partially funded by, e.g., Open Philanthropy, would you take that as a strong signal about the project's value (e.g. that it's not worth funding at higher levels)?
rileyharris
Thesis Summary and Open Questions: Normative Uncertainty and Information Value
Paper summary––Protecting future generations: A global survey of legal academics
Savings as donations
Thanks for the post—this seems like a really important contribution!
[Caveat: I am not at all an expert on this and just spent some time googling.] Snake antivenom actually requires milking venom from a snake to produce, and I wonder how much this contributes to the high cost ($55–$640) of antivenom [1]. I wonder if R&D would be a better investment, especially given the potentially high storage and transport costs for antivenom (see below). It would be interesting to see someone investigate this more thoroughly.

Storage costs are pretty low in the cost-effectiveness estimate you cite [2], but it seems pretty plausible to me that storage and transportation costs would be much higher if you wanted to administer antivenom at smaller clinics closer to the victims of snake bites. That cost was based on a previous estimate, in which the authors say: "The cost of shipping from abroad where the antivenoms are manufactured, transportation within Nigeria and freezing of antivenom (including use of supplementary diesel power electric generators in addition to national power grid) is estimated at N3,000 ($18.75) from prior experience and expert opinion. But it was assumed that appropriate storage facilities already exist at the local level through immunization/drug services and that no additional capital investment would be required to adequately store the antivenom in the field" [3].
I’m not sure exactly what facilities are required and how expensive they would be, but this seems like it could be an important consideration.
[1] Brown NI (2012) Consequences of neglect: analysis of the sub-Saharan African snake antivenom market and the global context. PLOS Neglected Tropical Diseases 6: e1670.
[2] Hamza M, Idris MA, Maiyaki MB, Lamorde M, Chippaux JP, et al. (2016) Cost-Effectiveness of Antivenoms for Snakebite Envenoming in 16 Countries in West Africa. PLOS Neglected Tropical Diseases 10(3): e0004568. https://doi.org/10.1371/journal.pntd.0004568
[3] Habib AG, Lamorde M, Dalhat MM, Habib ZG, Kuznik A (2015) Cost-effectiveness of Antivenoms for Snakebite Envenoming in Nigeria. PLOS Neglected Tropical Diseases 9(1): e3381. https://doi.org/10.1371/journal.pntd.0003381
This is a fantastic initiative! I'm not personally vegan, but I believe the "default" for catering should be vegan (or at least free of meat and eggs), with the option for participants to declare special dietary requirements. This would lower consumption of animal products, as most people just go with the default option, and it would push the burden of responsibility onto the people going out of their way to eat meat.
[Link] New data poisoning tool lets artists fight back against generative AI
Summary of “The Precipice” (1 of 4): Asteroids, volcanoes and exploding stars
Announcing Million Year View
I think Thorstad’s “Against the singularity hypothesis” might complement the week 10 readings.
Great to see attempts to measure impact in such difficult areas. I’m wondering if there’s a problem of attribution that looks like this (I’m not up to date on this discussion):
An organisation like the Future Academy or 80,000 Hours says "look, we probably got this person into a career in AI safety, which has higher impact, and it cost us $x, so our cost-effectiveness is $x per probable career shifted into AI safety".
The person then does a training program, which says "we trained this person to do good work in AI safety, which allows them to have an impact, and it only cost us $y to run the program, so our cost-effectiveness is $y per impactful career in AI safety".
The person then goes on to work at a research organisation, which says "we spent $z, including salary and overheads, on this researcher, and they produced a crucial-seeming alignment paper, so our cost-effectiveness is $z per crucial-seeming alignment paper".
When you account for this properly, it's clear that each of these estimates overstates the organisation's cost-effectiveness, because part of the impact (and part of the total cost) has to be attributed elsewhere.
A few off-the-cuff thoughts:
- It seems there should be a discounted measure of impact for each organisation that takes into account the costs incurred elsewhere in the pipeline (see the toy sketch below).
- It certainly could be the case that, at each stage, the impact is high enough to justify the program even at the discounted rate.
- This might be a misunderstanding of what you're actually doing, in which case I would be excited to learn that you (and similar organisations) already account for this!
- I don't mean to pick on any organisation in particular, and perhaps no one is actually doing this; it's just a thought about how these measures could be improved in general.
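To make the double counting concrete, here is a toy sketch in Python. The organisations' roles, the dollar figures, and the pro-rata crediting rule are all invented for illustration; this is not a description of how any real organisation accounts for its impact.

```python
# Toy sketch of the attribution problem (all numbers invented for illustration).
# Three organisations each touch the same single impactful AI safety career.

costs = {
    "outreach org": 20_000,      # got the person interested ("$x")
    "training program": 50_000,  # trained them ("$y")
    "research org": 150_000,     # employed them while they wrote the paper ("$z")
}
careers = 1  # the one shared outcome

# Naive accounting: each org divides only its own cost by the shared outcome,
# so the three claims overlap and each looks better than it really is.
for org, cost in costs.items():
    print(f"{org}: claims ${cost / careers:,.0f} per career")

# Pooled accounting: the pipeline as a whole spent the sum of all budgets
# on that same career.
total = sum(costs.values())
print(f"whole pipeline: ${total / careers:,.0f} per career")

# One crude discount: give each org credit in proportion to its share of the
# total spend, so the credited careers sum to exactly one.
for org, cost in costs.items():
    share = cost / total
    print(f"{org}: {share:.0%} of the credit, i.e. {share * careers:.2f} careers")
```

The pro-rata split is only one possible rule (Shapley-style splits are another option people discuss), but any rule under which the credited impact sums to the actual outcome avoids the triple counting.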
Summary of “The Precipice” (4 of 4): Our place in the story of humanity
Summary: Existential risk from power-seeking AI by Joseph Carlsmith
Updating on Nuclear Power
Thanks for posting. I have a few quick comments:
- I recently got into a top program in philosophy despite having a clear association with EA (though I didn't cite "EA sources" in my writing sample, only published papers and OUP books). I agree that you should be careful, especially about relying on "EA sources" that are not widely viewed as credible.
- I totally agree that prospects are very bad outside of the top 10, and I lean towards "even outside of the top 5, seriously consider other options".
- On the other hand, if you really would be okay with failing to find a job in philosophy, it might be reasonable to do a PhD just because you want to. It's nice to spend a few years thinking hard about philosophy, especially if you have a funded place and your program is outside of the US (and therefore shorter).
My understanding is that, at a high level, this effect is counterbalanced by the fact that a high rate of extinction risk means the expected value of the future is lower. In this example, we only reduce the risk this century to 10%; next century it will be 20%, the century after that 20%, and so on, so the background risk is ten times higher than in the 2%-to-1% scenario. And in general, higher background risk lowers the expected value of the future.
In this simple model, the two effects perfectly counterbalance each other for proportional reductions of existential risk: the value of reducing risk is determined entirely by the proportion of the risk removed and the value of future centuries. (The model is very simplified, and Thorstad explores more complex scenarios in the paper.)
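To make the counterbalancing concrete, here is a quick numerical sketch of how I read the simple model, under illustrative assumptions (each surviving century is worth one unit of value, and every later century carries the same risk as today's baseline):

```python
# Quick numerical sketch of the simple model (illustrative assumptions: one
# unit of value per surviving century, the same risk in every later century).

def expected_value(first_century_risk, later_risk, horizon=100_000):
    """Expected number of valuable centuries, truncating the sum at `horizon`."""
    survival = 1.0 - first_century_risk  # chance of making it through this century
    total = 0.0
    for _ in range(horizon):
        total += survival                # value contributed if we reach this century
        survival *= 1.0 - later_risk     # chance of also surviving the next century
    return total

# Halving a 20% risk vs halving a 2% risk, with background risk unchanged later.
for baseline, reduced in [(0.20, 0.10), (0.02, 0.01)]:
    gain = expected_value(reduced, baseline) - expected_value(baseline, baseline)
    print(f"risk {baseline:.0%} -> {reduced:.0%} this century: gain = {gain:.3f} centuries")

# Both interventions add ~0.5 centuries of expected value: the gain depends only
# on the proportion of this century's risk removed and the value per century.
```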
Summary of “The Precipice” (2 of 4): We are a danger to ourselves
Summary of “The Precipice” (3 of 4): Playing Russian roulette with the future
I'd also potentially include the latest version of Carlsmith's chapter on power-seeking AI.
How should applicants think about grant proposals that are rejected? I find that newer members of the community, especially, can be heavily discouraged by rejections. Is there anything you would want to communicate to them?