The degree to which EA thought relies on cutting-edge* research in economics, philosophy, etc., from the last 10 years is kinda surprising if you think about it.
It’s kinda weird that not just Superintelligence but also Poor Economics, Compassion by the Pound, information hazards, unilateralist’s curse, and other things we just kinda assume to be “in the water supply” rely mostly on arguments or research that’s not even a decade old!
*the less polite way to put it is “likely to be overturned” :P
I've talked about this before several times, but the biggest one is:
The Possibility of an Ongoing Moral Catastrophe by Evan G. Williams, which I summarized here.
Other than that, in philosophy mostly stuff by Bostrom:
The Unilateralist’s Curse
(Also flagging Will’s work on moral uncertainty, though it’s unclear to me that his PhD thesis is the best presentation)
Adversarial Examples Are Not Bugs, They Are Features by Ilyas et al. (makes clear something I suspected for a while about that topic)
World Models by Ha and Schmidhuber
(Those two papers are far from the most influential ML papers in the last decade! But I usually learn ML from video lectures/blog posts/talking to people rather than papers)
(Probably also various AI Safety stuff, though no specific paper comes to mind).
Designing Data-Intensive Applications quoted a ton of papers (that I did not read).
The academic textbook Compassion by the Pound.
Poor Economics (whose authors won the 2019 Nobel Prize!)
Comment on ‘The aestivation hypothesis for resolving Fermi’s paradox’
Does suffering dominate enjoyment in the animal kingdom?
*(the research/arguments weren’t directly decision-relevant for me, but the fact that they overturned something a lot of EAs believed to be true was a useful meta-update)
This is the only answer here I’m moderately confident is correct. A pity the EV is so low!
cross-posted from Facebook.
Sometimes I hear people who caution humility say something like “this question has stumped the best philosophers for centuries/millennia. How could you possibly hope to make any progress on it?”. While I concur that humility is frequently warranted and that in many specific cases that injunction is reasonable, I think the framing is broadly wrong.

In particular, using geologic time rather than anthropological time hides the fact that there probably weren’t that many people actively thinking about these issues, especially carefully, in a sustained way, and making sure to build on the work of the past. For background, 7% of all humans who have ever lived are alive today, and living people compose 15% of total human experience so far!!! It would not surprise me if there are about as many living philosophers today as there were dead philosophers in all of written history.

For some specific questions that particularly interest me (eg. population ethics, moral uncertainty), the total research work done on these questions is generously less than five philosopher-lifetimes. Even for classical age-old philosophical dilemmas/”grand projects” (like the hard problem of consciousness), total work spent on them is probably less than 500 philosopher-lifetimes, and quite possibly less than 100.

There are also solid outside-view reasons to believe that the best philosophers today are just much more competent than the best philosophers in history, and have access to many more resources. Finally, philosophy can build on progress in the natural and social sciences (eg, computers, game theory).

Speculating further, it would not surprise me if, say, a particularly thorny and deeply important philosophical problem could effectively be solved in 100 more philosopher-lifetimes. Assuming 40 years of work and $200,000/year per philosopher, including overhead, this is ~$800 million, or in the same ballpark as the cost of developing a single drug. Is this worth it?
Hard to say (especially with such made-up numbers), but the feasibility of solving seemingly intractable problems no longer seems crazy to me. For example, intro philosophy classes will often ask students to take a strong position on questions like deontology vs. consequentialism, or determinism vs. compatibilism. Basic epistemic humility says it’s unlikely that college undergrads can get those questions right in such a short time.

Footnotes:

1. https://eukaryotewritesblog.com/2018/10/09/the-funnel-of-human-experience/

2. Flynn effect, education, and education of women, among others. Also, just https://en.wikipedia.org/wiki/Athenian_democracy#Size_and_make-up_of_the_Athenian_population (roughly as many educated people in all of Athens at any given time as a fairly large state university). Modern people (or at least peak performers) being more competent than past ones is blatantly obvious in other fields where priority is less important (eg, marathon runners, chess).

3. Eg, the internet, cheap books, widespread literacy, and the current intellectual world being practically monolingual.

4. https://en.wikipedia.org/wiki/Cost_of_drug_development
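The back-of-envelope estimate above is easy to sanity-check; here's a minimal sketch using only the made-up numbers from the comment (100 philosopher-lifetimes, 40 working years each, $200,000/year including overhead):

```python
# Back-of-envelope cost of "solving" a hard philosophical problem,
# using the (admittedly made-up) assumptions from the comment above.
philosopher_lifetimes = 100   # lifetimes of sustained work assumed needed
years_per_lifetime = 40       # productive years per philosopher
cost_per_year = 200_000       # dollars per philosopher-year, incl. overhead

total_cost = philosopher_lifetimes * years_per_lifetime * cost_per_year
print(f"${total_cost:,}")  # $800,000,000
```

Which lands at ~$800 million, the same ballpark as the commonly cited cost of developing a single drug.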
>> And it gets even more so when you run it in terms of persons or person years (as I believe you should). i.e. measure time with a clock that ticks as each lifetime ends, rather than one that ticks each second. e.g. about 1/20th of all people who have ever lived are alive now, so the next century it is not really 1⁄2,000th of human history but more like 1/20th of it.
And if you use person-years, you get something like 1/7 to 1/14!
>> I doubt I can easily convince you that the prior I’ve chosen is objectively best, or even that it is better than the one you used. Prior-choice is a bit of an art, rather like choice of axioms.
I’m pretty confused about how these dramatically different priors are formed, and would really appreciate it if somebody (maybe somebody less busy than Will or Toby?) could give pointers on how to read up more on forming these sorts of priors. As you allude to, this question seems to map to anthropics, and I’m curious how much the priors here necessarily map to your views on anthropics. Eg, am I reading the post and your comment correctly that Will takes an SIA view and you take an SSA view on anthropic questions?
In general, does anybody have pointers on how best to reason about anthropic and anthropic-adjacent questions?
One of the most impactful purchases I’ve ever made! :P
Ender’s Game by Orson Scott Card really spoke to me as a kid, though hopefully your students are better socialized! :P
The Signal and the Noise by Nate Silver (of 538 fame) is the best and most readable introduction to Bayesian statistics and Bayesian reasoning that I’m aware of.
Diary of a Madman by Lu Xun was helpful for me in cultivating a strong sense of dissatisfaction with the way things are and the implicit or explicit rules that govern social reality.
I don’t know if there are any good translations though.
Re Poor Economics:
I still remember the experiments in (I think) India demonstrating that even for people living in extreme poverty, where most marginal spending goes to food, increased income frequently resulted in people buying better-tasting calories, not just more calories. A+.
I thought Chiang was unusually high in literary merit, but what do you think is the relevance to EA?
Strongly seconded. Both had a large effect on me, especially Famine, Affluence and Morality when I was a teenager.
For #2, Ideological Turing Tests could be cool too.
You may also like our discussion sheets for this topic:
Sure! In general you can assume that anything I write publicly is freely available for academic purposes. I’d also be interested in seeing the syllabus if/when you end up designing it.
Messaged. Will share more widely if/when it’s ready for prime time. :)
We absolutely welcome summaries! People getting more ways to access good material is one of the reasons the Forum exists.
did you consider copying the summary into a Forum post, rather than linking it?
Yes. I did a lot of non-standard formatting tricks in Google Docs when I first wrote it (because I wasn’t expecting to ever need to port it over to a different format). So when I first tried to copy it over, the whole thing looked disastrously unreadable.
Changed the title. :)
In general, if I imagine ‘longtermism’ taking off as a term, I imagine it getting a lot of support if it designates the first concept, and a lot of pushback if it designates the second concept. It’s also more in line with moral ideas and social philosophies that have been successful in the past: environmentalism claims that protecting the environment is important, not that protecting the environment is (always) the most important thing; feminism claims that upholding women’s rights is important, not that doing so is (always) the most important thing. I struggle to think of examples where the philosophy makes claims about something being the most important thing, and insofar as I do (totalitarian Marxism and fascism are examples that leap to mind), they aren’t the sort of philosophies I want to emulate.
Maybe this is the wrong reference class, but I can think of several others: utilitarianism, Christianity, consequentialism, where the “strong” definition is the most natural one that comes to mind.
Ie, a naive interpretation of Christian philosophy is that following the word of God is the most important thing (not just one important thing among many). Similarly, utilitarians would usually consider maximizing utility to be the most important thing, consequentialists would probably consider consequences to be more important than other moral duties, etc.