If a project is partially funded by, e.g., Open Philanthropy, would you take that as a strong signal about the project’s value (e.g. that it’s not worth funding at higher levels)?
More EA success stories:
Pandemics. We have now had the first truly global pandemic in decades, perhaps ever.
Nuclear war. Thanks to recent events, the world is closer than ever to a nuclear catastrophe.
It’s not all good news though. Unfortunately, poverty seems to be trending down, there’s less lead in the paint, and some say AI could solve most problems despite the risks.
Thanks for the post—this seems like a really important contribution!
[Caveat: I am not at all an expert on this and just spent some time googling.] Producing snake antivenom actually requires milking venom from snakes, and I wonder how much this contributes to the high cost of antivenom ($55–$640) [1]. I wonder if R&D would be a better investment, especially given the potentially high storage and transport costs for antivenom (see below). It would be interesting to see someone investigate this more thoroughly.

Storage costs are pretty low in the cost-effectiveness estimate you cite [2], but it seems pretty plausible to me that storage and transportation costs would be much higher if you wanted to administer antivenom at smaller clinics closer to the victims of snake bites. That cost was based on a previous estimate, which says:

“The cost of shipping from abroad where the antivenoms are manufactured, transportation within Nigeria and freezing of antivenom (including use of supplementary diesel power electric generators in addition to national power grid) is estimated at N3,000 ($18.75) from prior experience and expert opinion. But it was assumed that appropriate storage facilities already exist at the local level through immunization/drug services and that no additional capital investment would be required to adequately store the antivenom in the field” [3].
I’m not sure exactly what facilities are required and how expensive they would be, but this seems like it could be an important consideration.
[1] Brown NI (2012) Consequences of neglect: analysis of the sub-Saharan African snake antivenom market and the global context. PLOS Neglected Tropical Diseases 6: e1670.
[2] Hamza M, Idris MA, Maiyaki MB, Lamorde M, Chippaux JP, et al. (2016) Cost-Effectiveness of Antivenoms for Snakebite Envenoming in 16 Countries in West Africa. PLOS Neglected Tropical Diseases 10(3): e0004568. https://doi.org/10.1371/journal.pntd.0004568
[3] Habib AG, Lamorde M, Dalhat MM, Habib ZG, Kuznik A (2015) Cost-effectiveness of Antivenoms for Snakebite Envenoming in Nigeria. PLOS Neglected Tropical Diseases 9(1): e3381. https://doi.org/10.1371/journal.pntd.0003381
This is a fantastic initiative! I’m not personally vegan, but I believe the “default” for catering should be vegan (or at least meat and egg free), with the option for participants to declare special dietary requirements. This would lower consumption of animal products, as most people just go with the default option, and it would push the burden of responsibility onto the people going out of their way to eat meat.
I think Thorstad’s “Against the singularity hypothesis” might complement the week 10 readings.
I feel like these actions and attitudes embody many of the virtues of effective altruism. You really, genuinely wanted to help somebody, and you took personally costly actions to do so. I feel great about having people like you in the EA community. My advice is to hold on to the feeling of how important you were to Tlalok’s life as you go on to do good effectively with other parts of your time and effort, knowing you are perhaps making a profound difference in many lives.
Great to see attempts to measure impact in such difficult areas. I’m wondering if there’s a problem of attribution that looks like this (I’m not up to date on this discussion):
An organisation like the Future Academy or 80,000 Hours says “look, we probably got this person into a career in AI safety, which has a higher impact, and it cost us $x, so our cost-effectiveness is $x per probable career shifted into AI safety”.
The person then does a training program, whose organisers say “we trained this person to do good work in AI safety, which allows them to have an impact, and it only cost us $y to run the program, so our cost-effectiveness is $y per impactful career in AI safety”.
The person then goes on to work at a research organisation, which says “we spent $z including salary and overheads on this researcher, and they produced a crucial-seeming alignment paper, so our cost-effectiveness is $z per crucial-seeming alignment paper”.
When you account for this properly, it’s clear that each of these estimates overstates the organisation’s contribution, because part of the impact (and cost) has to be attributed elsewhere.
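To make the double-counting concrete, here’s a minimal sketch with made-up numbers (the organisations, costs, and the proportional-to-cost split are all illustrative assumptions on my part, not a claim about how anyone actually reports):

```python
# Hypothetical pipeline: three organisations each contribute to one
# impactful AI safety career. The organisations, costs, and the
# proportional-to-cost split below are made-up numbers for illustration.

costs = {
    "career advice org": 10_000,   # got the person interested
    "training program": 50_000,    # taught them the relevant skills
    "research org": 150_000,       # salary and overheads for the research
}
total_cost = sum(costs.values())

# Naive accounting: each organisation attributes the whole career to itself.
for org, cost in costs.items():
    print(f"{org}: claims 1.00 careers for ${cost:,}")

# Summing those claims implies three careers were produced when there was
# one. A simple correction: split the single outcome across the pipeline,
# here in proportion to cost.
for org, cost in costs.items():
    share = cost / total_cost
    print(f"{org}: credited with {share:.2f} careers for ${cost:,}")
```

The right sharing rule is debatable (proportional to cost, Shapley values, pure counterfactual impact), but any of them leaves each organisation with a less flattering number than full credit.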
A few off the cuff thoughts:
It seems there should be a more sophisticated, discounted measure of impact for each organisation that takes into account the costs incurred elsewhere in the pipeline.
It certainly could be the case that at each stage the impact is high enough to justify the program at the discounted rate.
This might be a misunderstanding of what you’re actually doing, in which case I would be excited to learn that you (and similar organisations) already accounted for this!
I don’t mean to pick on any organisation in particular, and perhaps no one is actually doing this; it’s just a thought about how these measures could be improved in general.
Thanks for posting, I have a few quick comments I want to make:
- I recently got into a top program in philosophy despite a clear association with EA (though I didn’t cite “EA sources” in my writing sample, only published papers and OUP books). I agree that you should be careful, especially about relying on “EA sources” that are not widely viewed as credible.
- I totally agree that prospects are very bad outside of the top 10, and I lean towards “even outside of the top 5, seriously consider other options”.
- On the other hand, if you really would be okay with failing to find a job in philosophy, it might be reasonable to do a PhD just because you want to. It is nice to spend a few years thinking hard about philosophy, especially if you have a funded place and your program is outside of the US (and therefore shorter).
My understanding is that, at a high level, this effect is counterbalanced by the fact that a high rate of extinction risk means the expected value of the future is lower. In this example, we only reduce the risk this century to 10%, but next century it will be 20%, and the one after that it will be 20% and so on. So the risk is 10x higher than in the 2% to 1% scenario. And in general, higher risk lowers the expected value of the future.
In this simple model, these two effects perfectly counterbalance each other for proportional reductions of existential risk. In fact, in this simple model the value of reducing risk is determined entirely by the proportion of the risk reduced and the value of future centuries. (This model is very simplified, and Thorstad explores more complex scenarios in the paper).
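As a minimal sketch of that simple model (my notation, assuming a constant per-century risk r and a value v for each century humanity survives to see):

```latex
% Expected value of the future with constant per-century risk r,
% where each century reached contributes value v:
\mathbb{E}[V] = \sum_{n=1}^{\infty} v\,(1-r)^n = \frac{v(1-r)}{r}

% An intervention that lowers only this century's risk from r to (1-f)r
% (a proportional reduction by f) gives:
\mathbb{E}[V'] = \bigl(1-(1-f)r\bigr)\left(v + \frac{v(1-r)}{r}\right)
              = \frac{\bigl(1-(1-f)r\bigr)v}{r}

% So the gain from the intervention is:
\Delta = \mathbb{E}[V'] - \mathbb{E}[V] = \frac{f\,r\,v}{r} = f\,v
```

The baseline risk r cancels out, so the gain depends only on the proportion f of this century’s risk that is removed and the per-century value v, which is exactly the counterbalancing described above.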
I’d also potentially include the latest version of Carlsmith’s chapter on power-seeking AI.
I agree that regulation is enormously important, but I’m not sure about the following claim:
“That means that aligning an AGI, while creating lots of value, would not reduce existential risk”
It seems, naively, that an aligned AGI could help us detect and prevent other power-seeking AGIs. It wouldn’t completely eliminate the risk, but I feel even a single aligned AGI would make the world a lot safer against misaligned AGI.
Thank you! I really appreciate the encouragement!
The schedule looks like it’s all dated for August, is that the right link?
Thanks for pointing this out, the version on the GPI website has been corrected.
Here are some articles I think would make good scripts (I’ll also be submitting one script of my own).
Summaries of the following papers:
Forecasting transformative AI: the “biological anchors” method in a nutshell
Are we living at the hinge of History?[1]
In defence of fanaticism[1]
Longtermist Institutional Reform[1]
Doomsday Rings Twice[1]
Asymmetry, Uncertainty, and the Longterm[1]
Simulation in Expectation[1]
Moral Uncertainty about population axiology[1]
Existential risk pessimism and the time of perils[1]
I’d also suggest the following papers which I haven’t seen a summary of:
The Potato’s Contribution to Population and Urbanization: Evidence from a Historical Experiment
Improving Judgments of Existential Risk: Better Forecasts, Questions, Explanations, Policies
I’d also suggest all of WWOTF’s supplementary materials, especially Significance, Persistence and Contingency.
(Edit: Spacing)
[1] I am writing these 8 summaries; message me if you want to see them early.
Digging into this a bit, I may have gotten the original argument for nuclear wrong—it does seem like some countries would struggle to source their energy from renewables due to space constraints (arguably, less of a problem in Australia).
“I’m not even sure it’s physically possible with 100% renewables… if you were to try and just replace oil in a country like Korea or Japan, so a densely populated country without huge amounts of spare land, you have to take up a significant proportion of the entire nation with solar panels… In the UK… if you want to replace our oil consumption, you’d have to cover over one and a half times the size of Wales with solar just for oil; never mind about decarbonizing the electricity grid and all the rest of it.” —Mark Lynas on the 80,000 Hours podcast

Thanks, I’ve found this helpful (if a little embarrassing)!
I just want to add that even if people treat you differently, ultimately it’s a line on your CV that says “completed this degree, in this year”. I don’t think it makes a material difference to your opportunities at the point of completion if it took you longer to finish.
What is the timeline for announcing the result of this competition?
“There are three main branches of decision theory: descriptive decision theory (how real agents make decisions), prescriptive decision theory (how real agents should make decisions), and normative decision theory (how ideal agents should make outcomes).”
This doesn’t seem right to me. I would say: an interesting way to divide up decision theory is between descriptive decision theory (how people actually make decisions) and normative decision theory (how we should make decisions).
The last line of your description, “how ideal agents should make outcomes”, seems especially troubling; I’m not quite sure what you are trying to say.
I think there are good parts of this post; for example, you touch on some interesting thought experiments. But several aspects are confusing as written. For example, Newcomb’s problem isn’t (I believe) a counterexample to EDT, but that isn’t clear from your post.
How should applicants think about grant proposals that are rejected? I find that newer members of the community especially can be heavily discouraged by rejections; is there anything you would want to communicate to them?