I agree, and that appears to be the likely outcome. I find it a bit disappointing that he went into this topic with his view already formed, and used the prominent contentious points and counterarguments to reinforce his preconceptions without becoming familiar with the detailed refutations already out there. It's great to have good debate and opposing views presented, but his broad-stroke dismissal makes that really difficult.
I agree; I found it surprising as well that he has taken this view. It seems he has read a portion of Bostrom's Global Catastrophic Risks and Superintelligence and become familiar with the general arguments and prominent examples, but has then gone on to dismiss existential threats for reasons specifically addressed in both books.
He is a bit more concerned about nuclear threats than about other existential threats, but I wonder if this is the availability heuristic at work, given the historical precedent, rather than a well-reasoned line of argument.
Great suggestion about Sam Harris. I think he and Steven Pinker had a live chat just the other day (March 14), so we may have missed this opportunity. I'm still waiting for the audio to be uploaded to Sam's podcast, but given Sam's positions, I wonder whether he pressed Pinker on this as well.
This article has many parallels with Greg Lewis’ recent article on the unilateralist’s curse, as it pertains to biotechnology development.
If participation in an SRO is voluntary, even if you have 9 out of 10 organisations on board, how do you stop the tenth from proceeding with AGI development without oversight? I'd imagine that the setup of an SRO may confer disadvantages on participants, indirectly incentivizing non-participation (if the lack of restrictions increases the probability of reaching AGI first).
Do you anticipate that an SRO may be an initial step towards a more obligatory framework for oversight?
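For intuition on why a single hold-out matters, here is a toy simulation of the unilateralist's curse in Python; all the numbers are made up for illustration. Each organisation gets a noisy estimate of a project's true value and proceeds if its own estimate looks positive, and the project goes ahead if any one of them proceeds.

```python
import random

def p_any_proceeds(true_value, n_groups, noise_sd, trials=100_000):
    """Estimate P(at least one of n_groups decides to proceed)."""
    proceeded = 0
    for _ in range(trials):
        # Each group sees true_value + Gaussian noise, proceeds if positive.
        if any(random.gauss(true_value, noise_sd) > 0 for _ in range(n_groups)):
            proceeded += 1
    return proceeded / trials

# A project that is actually harmful (true value -1, estimation noise sd 1):
for n in (1, 5, 10):
    print(f"{n:>2} groups -> P(someone proceeds) ~ {p_any_proceeds(-1.0, n, 1.0):.2f}")
# Roughly 0.16, 0.58, 0.82: the more independent actors, the more likely the
# most optimistic (riskiest) estimate wins, which is why one non-participant
# can undo the restraint of the other nine.
```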
Systematic research would be useful to many areas within EA, and I agree that, from a broader perspective, it is incorporated more here than in other communities.
I think this discussion would benefit from acknowledging the risk of confirmation bias and of systematic errors in inductive and deductive reasoning. Especially with regard to your criteria example, I think there is a danger of finding evidence that confirms existing beliefs about which framework should be used.
All the same though, do you think systematically comparing several criteria options will arrive at a better outcome? How would we assess this?
Great work and I really enjoyed reading this presentation.
On slide 27, where did you get the estimates for “Human-caused X-risks are thousands of times more likely per year than natural X-risks”?
I agree with this generally, but I was wondering if you have a source for the “thousands of times” figure.
For the malaria vaccine, what was the additional $2 cost for? In the citation, it just says it is an assumed constant. Why is it per child and not per dose?
I'm wondering which factor of vaccine production is most associated with lower cost. I'd imagine the R&D timeframe would be a large component, but are there specific cost-related factors that you predict matter more or less? Does making a vaccine that does not require refrigeration substantially lower rollout costs?
Thanks Risto,
This is great! EA Melbourne had its first reading group last weekend, and we did a Peter Singer paper for the first session. I think your question list will come in handy for our next one, and I'll bring it to the group.
Great read and an interesting take on alternative considerations. A discussion of fundamental attribution error, or a closely related concept, would be interesting here. It's not applicable to existence vs. non-existence, but I'd imagine we have poor intuitions about the effects of perturbations in individual human characteristics, and I wonder if something similar is at play when we estimate the effect of our actions, personal choices or character. In a stochastic enough system with a large number of players, perhaps single changes become absorbed into the background chaos.
I think something to consider when deciding on the timing of giving is not only the investment in your own financial future, but also the investment in the cause your donation is contributing towards. A donation today or this decade may have a smaller absolute value than a delayed donation, but its effect also compounds over time.
For example, giving $1000 to a high-impact charity working in extreme poverty cause areas may mean that, 10 years from now, a community that benefited is on a markedly different trajectory. A lower malaria disease burden among its younger members today may produce substantially further reductions in suffering in a decade, more than a larger donation at that later point could achieve.
Another example is giving money towards high-impact research today, which may have positive effects a decade from now by helping to enable a field. The value of starting research earlier, giving it time to produce value, may be much greater than that of investing a larger amount at a later point in time.
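To make the intuition concrete, here is a minimal back-of-the-envelope sketch in Python. The rates are made-up placeholders, not estimates from any source; it simply shows that if the "social" rate of return on a donation exceeds the financial return on invested savings, giving earlier comes out ahead over the same horizon.

```python
# Made-up rates for illustration only, not estimates from any source.
donation = 1_000           # dollars available to give
years = 10
financial_return = 0.05    # annual return if invested and donated later (assumed)
social_return = 0.10       # annual compounding of the charity's impact (assumed)

# Give now: the donation's impact compounds "socially" for ten years.
give_now = donation * (1 + social_return) ** years       # ~2,594 impact-dollars

# Give later: the money grows financially, then is donated with no time
# left to compound socially within the same horizon.
give_later = donation * (1 + financial_return) ** years  # ~1,629 impact-dollars

print(f"give now:   {give_now:,.0f}")
print(f"give later: {give_later:,.0f}")
```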