One of the most impactful purchases I’ve ever made! :P
Ender’s Game by Orson Scott Card really spoke to me as a kid, though hopefully your students are better socialized! :P
The Signal and the Noise by Nate Silver (of 538 fame) is the best and most readable introduction to Bayesian statistics and Bayesian reasoning that I’m aware of.
Diary of a Madman by Lu Xun was helpful for me in cultivating a strong sense of dissatisfaction with the way things are and the implicit or explicit rules that govern social reality.
I don’t know if there are any good translations though.
Re Poor Economics:
I still remember the experiments in (I think) India where they demonstrated that even for people living in extreme poverty, where most marginal spending goes to food, increased income frequently resulted in people buying better-tasting calories, not just more calories. A+.
I thought Chiang was unusually high in literary merit, but what do you think is the relevance to EA?
Strongly seconded. Both had a large effect on me, especially Famine, Affluence, and Morality when I was a teenager.
For #2, Ideological Turing Tests could be cool too.
You may also like our discussion sheets for this topic:
Sure! In general you can assume that anything I write publicly is freely available for academic purposes. I’d also be interested in seeing the syllabus if/when you end up designing it.
Messaged. Will share more widely if/when it’s ready for prime time. :)
We absolutely welcome summaries! People getting more ways to access good material is one of the reasons the Forum exists.
did you consider copying the summary into a Forum post, rather than linking it?
Yes. I did a lot of non-standard formatting tricks in Google Docs when I first wrote it (because I wasn’t expecting to ever need to port it over to a different format). So when I first tried to copy it over, the whole thing looked disastrously unreadable.
Changed the title. :)
In general, if I imagine ‘longtermism’ taking off as a term, I imagine it getting a lot of support if it designates the first concept, and a lot of pushback if it designates the second concept. It’s also more in line with moral ideas and social philosophies that have been successful in the past: environmentalism claims that protecting the environment is important, not that protecting the environment is (always) the most important thing; feminism claims that upholding women’s rights is important, not that doing so is (always) the most important thing. I struggle to think of examples where the philosophy makes claims about something being the most important thing, and insofar as I do (totalitarian Marxism and fascism are examples that leap to mind), they aren’t the sort of philosophies I want to emulate.
Maybe this is the wrong reference class, but I can think of several others (utilitarianism, Christianity, consequentialism) where the “strong” definition is the most natural one that comes to mind.
Ie, a naive interpretation of Christian philosophy is that following the word of God is the most important thing (not just one important thing among many). Similarly, utilitarians would usually consider maximizing utility to be the most important thing, consequentialists would probably consider consequences to be more important than other moral duties, etc.
If you’re interested, I just wrote a draft of an article on this, happy to share and solicit feedback! :)
I was asked to comment here. As you know, I did a data science internship at Impossible Foods in late 2016. I’m mostly jotting down my own experiences, along with some anonymized information from talking to others.
NB: “Tech” below refers to jobs that are considered mainstream tech in Silicon Valley (software, data science, analytics, etc), while “science” refers to the food science/biochemistry/chemistry work that is Impossible’s core product.
Highly mission-driven. Many people were vegetarian or vegan (all the food the company served was vegan by default), and people there seemed fairly dedicated to the cause of replacing farmed animals with plants (though less than I would expect from an EA or AR nonprofit)
Diversity. The gender ratio in the main office was slightly more women than men, and there was a lot of representation from different countries that I usually don’t see in Silicon Valley (though this could just be because biology/biochemistry draws from a different population than CS).
Niceness. People were generally really nice to each other, and there weren’t many of the assholish personalities I sometimes associate with startups.
Interesting problems. My subjective sense is that tech there is usually used to support scientific pursuits rather than, eg, as a product or for business development, and is more interesting in a broad sense than most big-company or startup work.
Lots of opportunities to grow. People who’re up for it often take on quite impressive challenges at low levels of seniority.
Benefits. I didn’t use them much, but my impression is that the company seemed quite progressive about things like vacation days and paternity leave(?).
Reasonable work-life balance. This seemed true of the tech people I knew; however, the scientists seemed a little overworked and the business development people a lot overworked. I don’t know how this compares to other startups.
The CEO (Pat Brown) appeared highly competent and clearly thoughtful. From my relatively brief interactions with him, there’s a reasonable chance he would have been at home in Stanford EA if he were much younger. Eg, he talks about quantitative cause prioritization and had a short rant at one point about selection bias in business advice.
Low pay. I feel like there’s a large mission/salary tradeoff that the company makes because it knows it could hire enough True Believers. My intern pay was substantially below market, and this seemed true of the other interns I talked to, as well as full-timers I talked to in broadly “tech” roles. I don’t know if this has changed by 2019. Another caveat is that I didn’t ask about equity, and Impossible’s valuation ~quadrupled in the last 3 years, so it’s quite possible full-timers were actually well-compensated even if they didn’t perceive it that way at the time. A final caveat is that I’m comparing with other for-profit companies, and maybe a better point of comparison is (EA) nonprofits or academia, and my guess is that Impossible pays better.
Subpar conflict resolution. I was pretty shielded from the politics as an intern, but I hear more bad stories from others than I would expect from a company of its size (caveat: I have a very poor understanding of the actual base rate of bad conflicts at successful companies). Possibly because of the niceness? I feel like people leave on bad terms more often than I would expect.
Technical mentorship. Because tech is not the main product, you’ll get less senior mentorship or guidance than at a primarily tech company. (Obviously, the opposite is true if you’re a food scientist or biochemist).
Incrementalist work. Impossible always had a vision of eventually replacing all animal-based products; however, when I joined in 2016, it was very much at the tail end of experimentation and the beginning of being laser-focused on beef, which seems less intellectually and altruistically interesting. My impression is that this was even more true as of 2018, though they seem to have developed pork and fish replacements recently?
The company seems fairly high-prestige in the public eye. It’s extremely well-known for its size, and people are often excited to talk to me about the work there (in a way that I’ve never experienced before or since). This seems good for career capital and well-being; however, I want to caution against seeing this as a clear positive. It’s easy to fall into prestige traps, and people should introspect about this before they apply. (Also, local prestige matters more than global prestige for most job pivots, so public opinion is a poor proxy for how much future employers care).
Environmentalism. People at Impossible are much more likely to be environmentalists than animal welfare people. Personally I find Deep Ecology views to be philosophically untenable, but obviously other EAs have different philosophical views. I write this so people can make an informed decision about self-selecting in.
On balance, I don’t think I’m informed enough to judge whether working at Impossible is better than a typical reader’s alternatives. My gut instinct is that if you have other altruistic options that can make full use of your skillsets (clean meat seems especially exciting), then it’s more impactful to do more early-stage work than to be at Impossible, but I’m very uncertain about this opinion and it’s confounded by a lot of details on the ground.
Additional Note 2019/7/20: Rereading this, I think people are usually biased against applying, and I think it’s still worthwhile for people who consider farmed animal welfare their top (or close to top) cause area to apply to Impossible.
Facebook post that has a longer list (though the framing’s slightly different: “potentially life-changing” rather than useful):
Obvious point: While EAs are special in some important ways, there are many more ways in which EAs aren’t that special. So if you want to be effective at what you do, then often generally good advice/resources for your field would be helpful.
Eg, if you want to be good at accounting, the best books on accounting continue to be useful as an “EA accountant”, if you want to be good at entrepreneurship/programming/social skills/research, the generally useful resources are still good for those things.
Books I found helpful:
The Productivity Project
Designing Data-Intensive Applications
Books that have the potential to be helpful, but I did not personally find dramatically helpful:
Thinking, Fast and Slow
The Signal and the Noise
The Art of Learning 
Definitely agree on “should,” assuming it’s tractable. As for “can”, one possible approach is to hunt down the references in Hunter and Schmidt, or similar/more recent meta-analyses, disaggregate by career fields that are interesting to EAs, and look at what specific questions are asked in things like “work sample tests” and “structured employment interviews.”
Ideally you want questions that are a) predictive, b) relatively uncorrelated with general mental ability, and c) reasonable to ask earlier on in someone’s studies.
One reason to be cynical of this approach is that personnel selection is well-researched and would be economically really lucrative for for-profit companies to figure out, and yet very good methods do not already exist.
One reason to be optimistic is that if we’re trying to help EAs figure out their own personal skills/comparative advantages, this is less subject to adversarial effects.
Because if the question just tests how smart you are, it says something about absolute advantage but not comparative advantage. Otherwise this would ruin the point of cheap tests.
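To make criteria a) and b) concrete, here’s a minimal sketch of how you might screen candidate test questions once you have validity data. Everything here is hypothetical: the function names, thresholds, and score data are made up for illustration, and real validity estimates would come from the meta-analytic literature rather than six data points.

```python
# Hypothetical screen for candidate test questions: keep a question if its
# scores predict job performance (criterion a) while staying relatively
# uncorrelated with general mental ability (criterion b).

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def screen_question(scores, performance, gma, min_validity=0.3, max_gma_r=0.4):
    """Return True if the question looks worth keeping (thresholds are made up)."""
    validity = pearson(scores, performance)   # criterion a): predictive of performance
    gma_overlap = pearson(scores, gma)        # criterion b): overlap with general mental ability
    return validity >= min_validity and abs(gma_overlap) <= max_gma_r

# Made-up data for 6 candidates: work-sample scores, later job performance,
# and standardized general-mental-ability scores.
work_sample = [55, 70, 62, 80, 90, 58]
performance = [50, 72, 60, 78, 88, 55]
gma = [0.1, 0.2, -0.1, 0.0, 0.1, -0.2]

print(screen_question(work_sample, performance, gma))  # → True
```

In this toy data the work-sample scores track performance closely (r ≈ 0.99) but barely track GMA (r ≈ 0.38), so the question passes both checks; a question that merely re-measured general intelligence would fail criterion b).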
I think this summarizes the core arguments for why focusing on extinction risk prevention is a good idea. https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive/