I've talked about it before several times, but the biggest one is:
The Possibility of an Ongoing Moral Catastrophe by Evan G. Williams, which I summarized here.
Other than that, in philosophy it's mostly stuff by Bostrom:
The Unilateralist’s Curse
Information Hazards
(Also flagging Will’s work on moral uncertainty, though it’s unclear to me that his PhD thesis is the best presentation)
In CS:
Adversarial Examples Are Not Bugs, They Are Features by Ilyas et al. (makes clear something I suspected for a while about that topic)
World Models by Ha and Schmidhuber
(Those two papers are far from the most influential ML papers in the last decade! But I usually learn ML from video lectures/blog posts/talking to people rather than papers)
(Probably also various AI Safety stuff, though no specific paper comes to mind).
Designing Data-Intensive Applications cited a ton of papers (which I did not read).
In Economics:
The academic textbook Compassion by the Pound.
Poor Economics (whose authors won the 2019 Nobel Prize in Economics!)
Meta*:
Comment on ‘The aestivation hypothesis for resolving Fermi’s paradox’
Does suffering dominate enjoyment in the animal kingdom?
*(the research/arguments weren’t directly decision-relevant for me, but the fact that they overturned something a lot of EAs believed to be true was a useful meta-update)
The degree to which EA thought relies on cutting-edge* research in economics, philosophy, etc. from the last 10 years is kinda surprising if you think about it.
It’s kinda weird that not just Superintelligence but also Poor Economics, Compassion by the Pound, information hazards, the unilateralist’s curse, and other things we just kinda assume to be “in the water supply” rely mostly on arguments or research that’s not even a decade old!
*the less polite way to put it is “likely to be overturned” :P