The following list is highly biased towards EA authors. That’s not to say that non-EA authors haven’t done a lot of important work. It just means that I haven’t read it. I’m only including articles that haven’t been mentioned by others so far.
—
In philosophy:
Several of Nick Bostrom’s papers are insightful. I’m not going to discuss them one-by-one because it would take too long. :-) Even though I don’t agree with all of his arguments, they have nonetheless influenced my thinking.
“On the Overwhelming Importance of Shaping the Long-Term Future” (2013) by Nicholas Beckstead
This PhD thesis presents philosophical arguments in favor of working to improve the far future. Beckstead discusses different ways of doing so (existential risk reduction, trajectory changes, ripple effects of short-run altruism) and responds to objections. The thesis includes a section on population ethics that I still use as a reference.
“The Importance of Wild-Animal Suffering” (2015) by Brian Tomasik
(Note: The first version of this article was written in July 2009, but only formally published as a paper in 2015.)
This was the first article I read that made me seriously consider reducing wild-animal suffering as a cause worthy of significant attention. It was highly influential on me personally. Although I admit it was not the first article to make the general argument that humans should reduce wild-animal suffering, Tomasik goes into much more depth and includes many more details and crucial considerations. Previous work was mainly philosophical in nature. In contrast, Tomasik is largely responsible for the existence of actual organizations, like Wild Animal Initiative and Animal Ethics, that are actively trying to address wild-animal suffering.
“Dissolving the Fermi Paradox” (2018) by Anders Sandberg, Eric Drexler, and Toby Ord
(Note: I’m not sure whether this article was ever actually formally published.)
This article shows that previous work on the Drake equation / Fermi paradox relied on a fundamental statistical error. Namely, people would take the Drake equation, plug in a single point estimate for each variable, and multiply them together. Because several of the variables are uncertain over many orders of magnitude, this conflates the expected number of civilizations with the probability that any exist at all. The authors show that when the uncertainty in each parameter is propagated properly, by treating each factor as a distribution rather than a point estimate, the lack of evidence of aliens is no longer such a surprise.
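The distinction can be seen in a toy Monte Carlo sketch. The parameter ranges below are illustrative stand-ins, not the distributions the paper actually uses; the point is only that sampling each Drake-equation factor log-uniformly over a wide range can give a large *expected* number of civilizations while an empty galaxy (N < 1) remains quite likely.

```python
import math
import random

random.seed(0)

def sample_N():
    """Draw one sample of N by sampling each Drake factor log-uniformly.

    The (lo, hi) ranges are hypothetical, chosen only to illustrate
    order-of-magnitude uncertainty; they are not the paper's estimates.
    """
    factors = [
        (1, 100),      # R*: star formation rate (stars/year)
        (0.1, 1),      # f_p: fraction of stars with planets
        (0.1, 10),     # n_e: habitable planets per planet-bearing star
        (1e-6, 1),     # f_l: fraction of habitable planets where life arises
        (1e-3, 1),     # f_i: fraction of those developing intelligence
        (1e-2, 1),     # f_c: fraction producing detectable signals
        (1e2, 1e8),    # L: lifetime of the detectable phase (years)
    ]
    N = 1.0
    for lo, hi in factors:
        # log-uniform sample: uniform in log-space, then exponentiate
        N *= math.exp(random.uniform(math.log(lo), math.log(hi)))
    return N

samples = [sample_N() for _ in range(100_000)]
mean_N = sum(samples) / len(samples)
p_empty = sum(1 for n in samples if n < 1) / len(samples)

# The mean (what naive point-estimate multiplication tracks) is large,
# yet the probability of an effectively empty galaxy is substantial.
print(f"mean N: {mean_N:.1f}")
print(f"P(N < 1): {p_empty:.2f}")
```

The mean is dominated by rare draws where every factor comes out high, which is exactly why multiplying "representative" point estimates overstates how surprised we should be by the silence.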
In economics:
“Are Ideas Getting Harder to Find?” (2017) by Bloom et al.
“In many growth models, economic growth arises from people creating ideas, and the long-run growth rate is the product of two terms: the effective number of researchers and their research productivity. We present a wide range of evidence from various industries, products, and firms showing that research effort is rising substantially while research productivity is declining sharply. A good example is Moore’s Law. The number of researchers required today to achieve the famous doubling every two years of the density of computer chips is more than 18 times larger than the number required in the early 1970s. Across a broad range of case studies at various levels of (dis)aggregation, we find that ideas — and in particular the exponential growth they imply — are getting harder and harder to find. Exponential growth results from the large increases in research effort that offset its declining productivity.”
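The decomposition quoted in the abstract (growth rate = number of researchers × research productivity) makes the Moore's Law figure easy to check with back-of-the-envelope arithmetic. The ~45-year window below is my assumption for "early 1970s to the paper's data", used only to annualize the decline.

```python
import math

# Chip density still doubles roughly every two years, so the growth rate
# itself is unchanged:
growth_rate = math.log(2) / 2  # ~35% per year

# Per the abstract, the effective number of researchers is >18x larger
# than in the early 1970s (head-counts normalized to 1.0 then):
researchers_then, researchers_now = 1.0, 18.0

# With growth = researchers * productivity held fixed, productivity per
# researcher must have fallen by the same factor the head-count rose:
productivity_then = growth_rate / researchers_then
productivity_now = growth_rate / researchers_now
decline_factor = productivity_then / productivity_now

# Annualized over an assumed ~45-year span:
annual_decline = decline_factor ** (1 / 45) - 1

print(f"productivity fell by a factor of {decline_factor:.0f}")
print(f"average decline of about {annual_decline:.1%} per year")
```

In other words, holding the output growth rate constant while inputs rise 18-fold is just another way of saying productivity fell 18-fold, a steady decline of several percent per year.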
In CS:
“Deep Learning: A Critical Appraisal” (2018) by Gary Marcus
“Although deep learning has historical roots going back decades, neither the term “deep learning” nor the approach was popular just over five years ago, when the field was reignited by papers such as Krizhevsky, Sutskever and Hinton’s now classic (2012) deep network model of Imagenet. What has the field discovered in the five subsequent years? Against a background of considerable progress in areas such as speech recognition, image recognition, and game playing, and considerable enthusiasm in the popular press, I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.”