I find it quite hard to do multiple quote-blocks in the same comment on the forum. For example, this comment took me 5-10 tries to get right.
What editor are you using? The default rich text editor? (EA Forum Docs)
What’s the issue?
The default rich text editor. The issue is that if I want to select one line and quote/unquote it, it either a) quotes (or unquotes) lines before and after it, or b) creates a bunch of newlines before and after it. Deleting newlines in quote blocks also has the issue of quoting (or unquoting) unintended blocks.

Perhaps I should just switch to markdown for comments, and remember to switch back to the rich text editor for copying and pasting top-level posts?
crossposted from LessWrong
There should maybe be an introductory guide for new LessWrong users coming in from the EA Forum, and vice versa.
I feel like my writing style (designed for EAF) is almost the same as that of LW-style rationalists, but not quite identical, and this is enough to be substantially less useful for the average audience member there.
For example, this identical question is a lot less popular on LessWrong than on the EA Forum, despite naively appearing to appeal to both audiences (and indeed, if I were to guess at the purview of LW, to be closer to that site’s mission than to the EA Forum’s).
I do agree that there are notable differences in what writing styles are often used and appreciated on the two sites.
Could this also be simply because of a difference in the extent to which people already know your username and expect to find posts from it interesting on the two sites? Or, relatedly, a difference in how many active users on each site you know personally?
I’m not sure how much those factors affect karma and comment numbers on either site, but it seems plausible that they have a substantial effect (especially given how an early karma/comment boost can set off a positive feedback loop).
Also, have you crossposted many things and noticed this pattern, or was it just a handful? I think there’s a lot of “randomness” in karma and comment numbers on both sites, so if it’s just been a couple crossposts it seems hard to be confident that any patterns would hold in future.
Personally, when I’ve crossposted something to the EA Forum and to LessWrong, those posts have decently often gotten more karma on the Forum and decently often the opposite, and (from memory) I don’t think there’s been a strong tendency in one direction or the other.
Could this also be simply because of a difference in the extent to which people already know your username and expect to find posts from it interesting on the two sites? Or, relatedly, a difference in how many active users on each site you know personally?
Yeah I think this is plausible. Pretty unfortunate though.
Also, have you crossposted many things and noticed this pattern, or was it just a handful? Hmm, I have 31 comments on LW, and maybe half of them are crossposts?
I don’t ever recall having higher karma on LW than on the Forum, though I wouldn’t be surprised if it happened once or twice.
Flipping the Repugnant Conclusion
Imagine a world populated by many, many people (trillions of them). These people’s lives aren’t purely full of joy, and do have a lot of misery as well. But each person thinks that their life is worth living. Their lives might be a bit boring or they might be full of huge ups and downs, but on the whole they are net-positive.
From this view it seems really strange to think that it would be good for every person in this world to die/not exist/never have existed in order to allow a very small number of privileged people to live spectacular lives. It seems bad to stop many people from living a life that they mostly enjoy, in order to allow the flourishing of the few.
I think this hypothetical is a decent intuition pump for why the Repugnant Conclusion isn’t actually repugnant. But I do think it might be a little bit dishonest or manipulative. It frames the situation in terms of fairness and equality; we can sympathize with the many slightly happy people who are maybe being denied the right to exist, and think of the few extremely happy people as the privileged elite. It also takes advantage of status quo bias; by beginning with the many slightly happy people it seems worse to then ‘remove’ them.
Is anyone aware of previous writings by EAs on founding think tanks as a way of having an impact over the long-term?
In the UK, I think the Fabian Society and the Centre for Policy Studies are continuing to influence British politics long after the deaths of their founders.
Slate Star Codex had an interesting review on the Fabian Society and how advocacy can backfire.
Open Philanthropy Project has an interesting review of the Center for Global Development.
There is a well-known argument that rule utilitarianism actually collapses into act utilitarianism. I wonder if rule utilitarians are in fact getting at the notion of dynamic inconsistency. It might be better if utilitarians can pre-commit to following certain rules, because of the effect that has on society, even if, after one has adopted the rules, there are circumstances where a utilitarian would be tempted to make exceptions.
Clubhouse Invite Thread

1) Clubhouse is a new social media platform, but you need an invite to join
2) It allows chat in rooms, and networking
3) It seems some people could deliver value sooner by having a Clubhouse invite
4) People who are on Clubhouse have invites to give
5) If you think an invite would be valuable, or heck, you’d just like one, comment below; then, if anyone has invites to give, they can see EAs who want them.
6) I have some invites to give away.
When Roodman’s awesome piece on modelling the human trajectory came out, I feel like far too little attention was paid to the catastrophic effects of including finite resources in the model. I wonder if part of this is an (understandable) reaction to the various fairly unsophisticated anti-growth arguments which float around in environmentalist and/or anticapitalist circles. It would be a mistake to dismiss this as a concern simply because some related arguments are bad. To sustain increasing growth, our productive output per unit resource has to become arbitrarily large (unless space colonisation). It seems not only possible but somewhat likely that this “efficiency” measure will reach a cap some time before space travel meaningfully increases our available resources.

I’d like to see more sophisticated thought on this. As a (very brief) sketch of one failure mode:

- Sub-AGI but still powerful AI ends up mostly automating the decision making of several large companies, which with their competitive advantage then obtain and use huge amounts of resources.
- They notice each other, and compete to grab those remaining resources as quickly as possible.
- Resources gone, very bad.

(This is along the same lines as “AGI acquires paperclips”; it’s not meant to be a fully fleshed out example, merely an illustrative story.)
And yes, wasting or misusing resources due to competitive pressure in my view is one of the key failure modes to be mindful of in the context of AI alignment and AI strategy. FWIW, my sense is that this belief is held by many people in the field, and that a fair amount of thought has been going into it. (Though as with most issues in this space I think we don’t have a “definite solution” yet.)
Yes, I think it is very likely that growth eventually needs to become polynomial rather than exponential or hyperbolic. The only two defeaters I can think of are (i) we are fundamentally wrong about physics or (ii) some weird theory of value that assigns exponentially growing value to sub-exponential growth of resources.
This post contains some relevant links (though note I disagree with the post in several places, including its bottom line/emphasis).
Just flagging that space doesn’t solve anything—it just pushes back resource constraints a bit. Given speed-of-light constraints, we can only increase resources via space travel ~quadratically with time, which won’t keep up with either exponential or hyperbolic growth.
Why not cubically? Because the Milky Way is flat-ish?
The volume of a sphere whose radius increases at a constant rate has a quadratic rate of change.
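A minimal sketch of the arithmetic, assuming the reachable region is a sphere whose radius grows at some constant speed v:

$$ r(t) = vt, \qquad V(t) = \tfrac{4}{3}\pi (vt)^3, \qquad \frac{dV}{dt} = 4\pi v^3 t^2 . $$

So the total reachable volume grows only cubically with time, while the rate at which new volume (and hence new resources) becomes available grows only quadratically; either way, sustained exponential growth eventually outpaces it.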
Ah yeah. Damn, I could have sworn I did the math before on this (for this exact question) but somehow forgot the result.😅
This is why you should have done physics ;)
Thanks, this is useful to flag. As it happens, I think the “hard cap” will probably be an issue first, but it’s definitely noteworthy that even if we avoid this, there’s still a softer cap which has the same effect on efficiency in the long run.
Today I learned about Simpol, an org working on solving global coordination problems: https://simpol.org

Their approach is to encourage governments to work together to enact “simultaneous policy” across multiple issues, to take action while avoiding first-mover challenges or a race to the bottom. By negotiating across multiple issues, concessions can be made to entities that might lose out in some areas in order to keep the entire negotiation net-positive for all.

So far my learning has been from this podcast episode, although it took a while to really explain the not-very-complex solution: https://www.jimruttshow.com/john-bunzl/
Lots of GiveWell’s modelling assumes that the health burdens of diseases or deficiencies are roughly linear in a harm vs. severity sense. This is a defensible default assumption, but it seems important enough, when you dig into the analysis, that it would be worth investigating whether there’s a more sensible prior.
Do people have thoughts on what the policy should be on upvoting posts by coworkers? Obviously telling coworkers (or worse, employees!) to upvote your posts should be verboten, and having an EA Forum policy that you can’t upvote posts by coworkers is too draconian (and also hard to enforce). But I think there’s a lot of room in between for a situation to form where, on average, posts by people who work at EA orgs have more karma than posts of equivalent semi-objective quality. Concretely, three mechanisms by which this could happen (and almost certainly does happen, at least for me):
1. For various reasons, I’m more likely to read posts by people who are coworkers. Since EAF users have a bias towards upvoting more than downvoting, by default I’d expect this to lead to higher expected karma for coworkers.
2. I’m more likely to give people I know well the benefit of the doubt, and read their posts more charitably. This leads to higher expected karma.
3. I’m at least slightly more likely to respond to comments/posts by coworkers, since I have a stronger belief that they will reply. Since my default forum usage behavior is to upvote replies to my questions (as long as they are even remotely pertinent), this increases karma.

#2 seems like a “me problem”, and is an internal problem/bias I am optimistic about being able to correct for. #3 and especially #1, on the other hand, seem like something that’s fairly hard to correct for unless we have generalized policies or norms. (One example of a potential norm is to say that people should only upvote posts by coworkers if they think they’d have been likely to read the post even if they were working in a different field/org, or should only upvote with some probability proportional to that likelihood.)
I’d prefer that people on the Forum not have to worry too much about norms of this kind. If you see a post or comment you think is good, upvote it. If you’re worried that you and others at your org have unequal exposure to coworkers’ content, make a concerted effort to read other Forum posts as well, or even share those posts within your organization.
That said, if you want to set a norm for yourself or suggest one for others, I have no problem with that — I just don’t see the Forum adopting something officially. Part of the problem is that people often have friends or housemates at other orgs, share an alma mater or cause area with a poster, etc. — there are many ways to be biased by personal connections, and I want to push us toward reading and supporting more things rather than trying to limit the extent to which people express excitement out of concern for these biases.
I joined the SHIPs (https://www.notion.so/SHIPs-Student-led-High-Impact-Projects-035514f7b8594205b16e3e3a9cf6e736) program and came up with a few project ideas that I think are worth pursuing, but I don’t have the background to do them effectively. The following are some of my ideas that I encourage others to consider working on:
Default Advocacy: Advocating (via sending emails and/or calling) to people of influence to set the altruistic option as the default (to take advantage of the default effect, since people are less likely to opt in/out of something if it requires effort). Examples of this could include having all citizens be organ donors by default, having vegetarian/vegan meals be the default option provided, installing energy-efficient equipment as the default in buildings, etc.
Influence Celebrities to Donate Effectively:
The goal of this project is to convince celebrities and other wealthy and/or highly influential people to donate to effective charities. Since they have a vast amount of wealth, the scale of possible donations is quite large. In addition, their influence may inspire their followers to also give more effectively, thereby magnifying the impact of this project. There currently exists High Impact Athletes (https://highimpactathletes.org/athletes), but there is definitely more room to grow this concept to other celebrities such as movie/TV stars, musical artists, business moguls, politicians, etc.
Impact of Switching Protein Subsidies:
Research the pros and cons of the US government switching its subsidies from conventional meat agriculture to alternative protein agriculture. Create a detailed report highlighting these benefits (and drawbacks). Ideally, this report could be given to a politician as the basis for a policy/bill to switch subsidies from conventional meat agriculture to alternative protein agriculture.
Other project ideas from the SHIPs program can be found below:
Default Advocacy: Advocating (via sending emails and/or calling) to people of influence to set the altruistic option as the default (to take advantage of the default effect since people are less likely to opt in/out of something if it requires effort).
These seem like policies that, while they have “defaults” in common, would be handled by entirely different parts of a country’s government (and different state governments, etc.). Any one of these projects could be a reasonable thing for a group of people to try, but I don’t think there are many logistical similarities to match the conceptual similarities.
The Behavioural Insights Team might be among the best people to talk to to understand what “default”-esque policies are currently being worked on. They’ve implemented many similar policies in the UK (though I’m not sure how much Obama’s American version of this got done while he was in office).
A new-to-me take on the Amazon. Claims deforestation would lead to changing rain patterns “from Argentina up to the American midwest”, “which means that Amazon dieback would disrupt/destroy water and food supplies across much of the western hemisphere”. The article talks about the state of journalism, climate strategy, and climate science. https://savingjournalism.substack.com/p/revisiting-the-amazon-fires
Following up with some thoughts I originally had in response to saulius’ List of ways in which cost-effectiveness estimates can be misleading. I’m not sure if there have been other write-ups of this effect.
If we incentivize charities to act as cost-effectively as possible, and if they operate in coordination with other groups working on the same issue, it seems like we might expect, in many cases, what’s best for an individual charity’s cost-effectiveness to be bad for the overall cost-effectiveness of the space. This issue is compounded if multiple EA / highly cost-effective charities are operating in the same space.
The issue is something like: charities have relative strengths and weaknesses, and by coordinating to take advantage of those, individual charities might lose out on measured cost-effectiveness even while the overall work of the space becomes more effective.
I think this occasionally actively happens with animal welfare campaigns, where single donors are giving to several charities doing the same thing.
An example using chicken welfare campaigns in the animal welfare space:
Charity A has 100 good volunteers in City 1, where Company X is headquartered. To run a successful campaign against Company X would cost Charity A $1,000, and Company X uses 10M chickens. Alternatively, Charity A could run a campaign against Company Y in City 2, where Charity A has fewer volunteers, for $1,500 (more expensive because of the fewer volunteers).
Charity B has 5 good volunteers in City 1, but thinks they could secure a commitment from Company Y in City 2, where they have more volunteers, for $1,000. Company Y uses 1M chickens. Or, by spending more money, Charity B could secure a commitment from Company X for $1,500.
Charities A and B are coordinating, and agree that Companies X and Y committing will put pressure on a major target (Company Z), and want to figure out how to effectively campaign.
They consider three strategies:
Strategy 1: They both campaign against both targets, at half the cost it would be for them to campaign on their own, and a charity evaluator views the campaign as split evenly between them, since they put in equal effort. The cost-effectiveness of both charities is (5M + 0.5M chickens) / ($500 + $750) = 4,400 chickens per dollar, and $2,500 total has been spent.
Strategy 2: Charity A targets Company X, and Charity B targets Company Y. Charity A’s cost-effectiveness is 10,000 chickens / dollar, and Charity B’s is 1,000 chickens / dollar, with $2,000 total spent.
Strategy 3: Charity A targets Company Y, Charity B targets Company X. Charity A: 667 chickens per dollar; Charity B: 6,667 chickens per dollar. $3,000 total spent across both charities.
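Here’s a minimal Python sketch of the arithmetic above, using the hypothetical figures from this example (the dictionary keys, helper names, and strategy labels are just illustrative):

```python
# Hypothetical figures from the example above.
CHICKENS = {"X": 10_000_000, "Y": 1_000_000}   # chickens used by each company
SOLO_COST = {                                   # cost for each charity to run a campaign alone
    ("A", "X"): 1_000, ("A", "Y"): 1_500,
    ("B", "X"): 1_500, ("B", "Y"): 1_000,
}

# Strategy 1: both charities split both campaigns; each pays half its solo costs
# and is credited with half the chickens.
cost_a = (SOLO_COST[("A", "X")] + SOLO_COST[("A", "Y")]) / 2   # $1,250
cost_b = (SOLO_COST[("B", "X")] + SOLO_COST[("B", "Y")]) / 2   # $1,250
credit = (CHICKENS["X"] + CHICKENS["Y"]) / 2                   # 5.5M chickens each
print(f"Strategy 1: {credit / cost_a:,.0f} chickens/$ each, ${cost_a + cost_b:,.0f} total")

# Strategy 2: each charity campaigns where it is strong (A -> X, B -> Y).
print(f"Strategy 2: A = {CHICKENS['X'] / SOLO_COST[('A', 'X')]:,.0f} chickens/$, "
      f"B = {CHICKENS['Y'] / SOLO_COST[('B', 'Y')]:,.0f} chickens/$, "
      f"${SOLO_COST[('A', 'X')] + SOLO_COST[('B', 'Y')]:,.0f} total")

# Strategy 3: each charity campaigns where it is weak (A -> Y, B -> X).
print(f"Strategy 3: A = {CHICKENS['Y'] / SOLO_COST[('A', 'Y')]:,.0f} chickens/$, "
      f"B = {CHICKENS['X'] / SOLO_COST[('B', 'X')]:,.0f} chickens/$, "
      f"${SOLO_COST[('A', 'Y')] + SOLO_COST[('B', 'X')]:,.0f} total")
```

Running it reproduces the figures above: Strategy 2 minimizes total spending even though it makes Charity B’s individual cost-effectiveness look worst.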
These charities want to be as effective as possible — clearly, the charities should choose Strategy 2, because the least money will be spent overall (and both charities will spend less for the same outcome).
But if a charity evaluator is fairly influential, and looking at each charity individually, Charity B might push hard for less ideal Strategies 1 or 3, because those make its cost-effectiveness look much better. Strategy 2 is clearly the right choice for Charity B to make, but if they do, an evaluation of their cost-effectiveness will look much worse.
I guess a simple way of putting this is: if multiple charities are working on the same issue, and have different strengths relevant at different times, it seems likely that they will often make decisions that look bad for their own cost-effectiveness ratings but were the best thing to do / the right decision to make.
I can think of a few examples where charities made less effective decisions explicitly due to reasoning about their own cost-effectiveness, and not thinking about coordination, but I’m not sure how prevalent this actually is as an issue. It mainly makes me a little worried about apples-to-apples comparisons of the cost-effectiveness of charities that do the same thing and are known to coordinate with each other.
Some reasons not to primarily argue for veganism on health/climate change grounds
I’ve often heard animal advocates claim that since non-vegans are generally more receptive to arguments from health benefits and reducing climate impact, we should prioritize those arguments, in order to reduce farmed animal suffering most effectively.
On its face, this is pretty reasonable, and I personally don’t care intrinsically about how virtuous people’s motivations for going vegan are. Suffering is suffering, no matter its sociological cause.
But there are some reasons I’m nervous about this approach, at least if it comes at the opportunity cost of moral advocacy. None of these are original to me, but I want to summarize them here since I think this is a somewhat neglected point:
Plausibly many who are persuaded by the health/CC arguments won’t want to make the full change to veganism, so they’ll substitute chicken and fish for beef. Those are evidently less bad for one’s health and for climate change, but because these animals are so small and have fewer welfare protections, this switch causes a lot more suffering per calorie. More speculatively, there could be a switch to insect consumption.
Health/CC arguments don’t apply to reducing wild animal suffering, and indeed emphasizing environmental motivations for going vegan might exacerbate support for conservation for its own sake, independent of individual animals’ welfare. (To be fair, moral arguments can also backfire if the emphasis is on general care for animals, rather than specifically preventing extreme suffering.)
Relatedly, health/CC arguments don’t motivate one to oppose other potential sources of suffering in voiceless sentient beings, like reckless terraforming and panspermia, or unregulated advanced simulations. This isn’t to say all anti-speciesists will make that connection, but caring about animals themselves rather than avoiding exploiting them for human-centric reasons seems likely to increase concern for other minds.
While the evidence re: CC seems quite robust, nutrition science is super uncertain and messy. Based on both this prior about the field and suspicious convergence concerns, I’d be surprised if a scientific consensus established veganism as systematically better for one’s health than alternatives. That said, I’d also be very surprised about a consensus that it’s worse, and clearly even primarily ethics-based arguments for veganism should also clarify that it’s feasible to live (very) healthily on a vegan diet.
Quick comment. With respect to your first point, this has always struck me as one of the better arguments for why non-ethical arguments should generally be avoided when making the case for veganism. However, after reading Tobias Leenaert’s ‘How to Create a Vegan World: A Pragmatic Approach’, I’ve become a bit more agnostic on this notion. He notes a few studies from The Humane League that show that red-meat reducers/avoiders tend to eat less chicken than your standard omnivore. He also referenced a few studies from Nick Cooney’s book, Veganomics, which covers some of this on pp. 107-111. Combined with the overall impact non-ethical vegans could have on supply/demand for other vegan products (and their improvement in quality), I’ve become a bit less worried about this reason.

I think your other reasons are all extremely important and underrated, though, so I still lean overall towards the view that the ethical argument should be relied on when possible :)
Wow, that’s promising news! Thanks for sharing.
In January, I spent ~1 hour trying to brainstorm relatively concrete ideas for projects that might help improve the long-term future. I later spent another ~1 hour editing what I came up with for this shortform. This shortform includes basically everything I came up with, not just a top selection, so not all of these ideas will be great. I’m also sure that my commentary misses some important points. But I thought it was worth sharing this list anyway.
The ideas vary in the extent to which the bottleneck(s) to executing them are the right person/people, buy-in from the right existing organisation, or funding.
I’m not expecting to execute these ideas in the near-term future myself, so if you think one of these ideas sounds promising and relevant to your skills, interests, etc., please feel very free to explore the idea further, to comment here, and/or to reach out to me to discuss it!
Something along the lines of compiling a large set of potentially promising cause areas and interventions; doing rough Fermi estimates, cost-effectiveness analyses, and/or forecasts; thereby narrowing the list down; and then maybe gradually doing more extensive Fermi estimates, cost-effectiveness analyses, and/or forecasts
This is somewhat similar to things that Ozzie Gooen, Nuño Sempere, and Charity Entrepreneurship have done or are doing
Ozzie also discusses some similar ideas here
So it’d probably be worth talking to them about this
Something like a team of part-time paid forecasters, both to forecast on various important questions and to be “on-call” when it looks like a catastrophe or window of opportunity might be looming
I think I got this idea from Linch Zhang, and it might be worth talking to him about it
80,000 Hours-style career reviews on things like diplomacy, arms control, international organisations, becoming a Russia/India/etc specialist
Some discussion here
Could see if 80k would be happy to supervise someone else to do this
Could seek out EAs or EA-aligned people who are working full-time in related areas
Organisations like HIPE, CSET, and EA Russia might have useful connections
I might be open to collaborating with someone on this
Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI
This might allow them to complete additional valuable projects
This also might help the research or writing assistants build career capital and test fit for valuable roles
Maybe BERI can already provide this?
It’s possible it’s not worth being proactive about this, and instead waiting for people to decide they want an assistant and create a job ad for one. But I’d guess that some proactiveness would be useful (i.e., that there are cases where someone would benefit from such an assistant but hasn’t thought of it, or doesn’t think the overhead of a long search for one is worthwhile)
See also this comment from someone who did this sort of role for Toby Ord
Research or writing assistance for certain independent researchers?
Ops assistance for orgs like FHI?
But I think orgs like BERI and the Future of Humanity Foundation are already in this space
Additional “Research Training Programs” like summer research fellowships, “Early Career Conference Programmes”, internships, or similar
Probably best if this is at existing orgs
Could perhaps find an org that isn’t doing this yet but has researchers who would be capable of providing valuable mentorship, suggest the idea to them, and be or find someone who can handle the organisational aspects
Something like the Open Phil AI fellowship, but for another topic
In particular, something that captures the good effects a “fellowship” can have, beyond the provision of funding (since there are already some sources of funding alone, such as the Long-Term Future Fund)
A hub for longtermism-relevant research (or a narrower area, e.g. AI) outside of US and UK
Perhaps ideally a non-Anglophone country? Perhaps ideally in Asia?
Could be a new organisation or a branch/affiliate of an existing one
There’s some relevant discussion here, here, here, and I think here (though I haven’t properly read that post)
Found an organization/community similar to HIPE and/or APPGFG, but in countries other than the UK
I’d guess it’d probably be easiest in countries where there is a substantial EA presence, and perhaps easier in smaller countries like Switzerland rather than in the US
Why this might/might not be good:
I don’t know a huge amount about HIPE or APPGFG, but from my limited info on those orgs they seem valuable
I’d guess that there’s no major reason something similar to HIPE couldn’t be successfully replicated in other countries, if we could find the right person/people
In contrast, I’d guess that there might be more barriers to successfully replicating something like APPGFG
E.g., most countries probably don’t have an institution very similar to APPGs
But I imagine something broadly similar could be replicated elsewhere
Potential next steps:
Talk to people involved in HIPE and APPGFG about whether they think these things could be replicated, how valuable they think that’d be, how they’d suggest it be done, what countries they’d suggest, and who they’d suggest talking to
Talk to other EAs, especially outside of the UK, who are involved in politics, policy, and improving institutional decision-making
Ask them for their thoughts, who they’d suggest reaching out to, and (in some cases) whether they might be interested in collaborating on this
I also had some ideas for specific research or writing projects, but I’m not including them in this list
That’s partly because I might publish something more polished on that later
It’s mostly because people can check out A central directory for open research questions for a broader set of research project ideas
See also Why you (yes, you) should post on the EA Forum
Possible gaps in the EA community—EA Forum
Get Involved—EA Forum
The views I expressed here are my own, and do not necessarily reflect the views of my employers.
“Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI”
As a senior research scholar at FHI, I would find this valuable if the assistant was competent and the arrangement was low cost to me (in terms of time, effort, and money). I haven’t tried to set up anything like this since I expect finding someone competent, working out the details, and managing them would not be low cost, but I could imagine that if someone else (such as BERI) took care of details, it very well may be low cost. I support efforts to try to set something like this up, and I’d like to throw my hat into the ring of “researchers who would plausibly be interested in assistants” if anyone does set this up.
Edit: By figuring out ethics I mean both right and wrong in the abstract but also what the world empirically looks like, so you know what is right and wrong in the particulars of a situation, with an emphasis on the latter.

I think a lot about ethics. Specifically, I think a lot about “how do I take the best action (morally), given the set of resources (including information) and constraints (including motivation) that I have.” I understand that in philosophical terminology this is only a small subsection of applied ethics, and yet I spend a lot of time thinking about it.
One thing I learned from my involvement in EA for some years is that ethics is hard. Specifically, I think ethics is hard in the way that researching a difficult question or maintaining a complicated relationship or raising a child well is hard, rather than hard in the way that regularly going to the gym is hard.
When I first got introduced to EA, I believed almost the opposite (this article presents something close to my past views well): that the hardness of living ethically is a matter of execution and will, rather than that of constantly making tradeoffs in a difficult-to-navigate domain.
I still think the execution/will stuff matters a lot, but now I think it is relatively more important to be making the right macro- and micro- decisions regularly.
I don’t have strong opinions on what this means for EA at the margin, or for individual EAs. For example, I’m not saying that this should push us much towards risk aversion or other conservative biases (there are tremendous costs, too, to inaction!). Perhaps this is an important lesson for us to communicate to new EAs, or to non-EAs we have some influence over. But there are so many useful differences/divergences, and I’m not sure this should really be prioritized all that highly as an explicit introductory message.

But at any rate, I feel like this is an important realization in my own growth journey, and maybe it’d be helpful for others on this forum to know that I made this update.
I am seeking funding so I can work on my collective action project over the next year without worrying about money so much. If this interests you, you can book a call with me here. If you know nothing about me, one legible accomplishment of mine is creating the EA Focusmate group, which has 395 members as of writing.
Note: This shortform is now superseded by a top-level post I adapted it into. There is no longer any reason to read the shortform version.
Here I list all the EA-relevant books I’ve read or listened to as audiobooks since learning about EA, in roughly descending order of how useful I perceive/remember them being to me.
I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and Luke Muehlhauser’s lists very useful.) That said, this isn’t exactly a recommendation list, because:
some of the factors making these books more/less useful to me won’t generalise to most other people
I’m including all relevant books I’ve read (not just the top picks)
Let me know if you want more info on why I found something useful or not so useful.
(See also this list of EA-related podcasts and this list of sources of EA-related videos.)
The Precipice, by Ord, 2020
See here for a list of things I’ve written that summarise, comment on, or take inspiration from parts of The Precipice.
I recommend reading the ebook or physical book rather than audiobook, because the footnotes contain a lot of good content and aren’t included in the audiobook
The book Superintelligence may have influenced me more, but that’s just due to the fact that I read it very soon after getting into EA, whereas I read The Precipice after already learning a lot. I’d now recommend The Precipice first.
Superforecasting, by Tetlock & Gardner, 2015
How to Measure Anything, by Hubbard, 2011
Rationality: From AI to Zombies, by Yudkowsky, 2006-2009
I.e., “the sequences”
Superintelligence, by Bostrom, 2014
Maybe this would’ve been a little further down the list if I’d already read The Precipice
Expert Political Judgment, by Tetlock, 2005
I read this after having already read Superforecasting, yet still found it very useful
Normative Uncertainty, by MacAskill, 2014
This is actually a thesis, rather than a book
I assume it’s now a better idea to read MacAskill, Bykvist, and Ord’s book on the same subject, which is available as a free PDF
Though I haven’t read the book version myself
Secret of Our Success, by Henrich, 2015
See also this interesting Slate Star Codex review
The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous, by Henrich, 2020
See also the Wikipedia page on the book, this review on LessWrong, and my notes on the book.
I rank Secret of Our Success as more useful to me, but that may be partly because I read it first; if I only read either this book or Secret of Our Success, I’m not sure which I’d find more useful.
The Strategy of Conflict, by Schelling, 1960
See here for my notes on this book, and here for some more thoughts on this and other nuclear-risk-related books.
This and other nuclear-war-related books are more useful for me than they would be for most people, since I’m currently doing research related to nuclear war
This is available as an audiobook, but a few Audible reviewers suggest using the physical book due to the book’s use of equations and graphs. So I downloaded this free PDF into my iPad’s Kindle app.
Human-Compatible, by Russell, 2019
The Book of Why, by Pearl, 2018
I found an online PDF rather than listening to the audiobook version, as the book makes substantial use of diagrams
Blueprint, by Plomin, 2018
This is useful primarily in relation to some specific research I was doing, rather than more generically.
Moral Tribes, by Greene, 2013
Algorithms to Live By, by Christian & Griffiths, 2016
The Better Angels of Our Nature, by Pinker, 2011
See here for some thoughts on this and other nuclear-risk-related books.
Command and Control, by Schlosser, 2013
The Doomsday Machine, by Ellsberg, 2017
The Bomb: Presidents, Generals, and the Secret History of Nuclear War, by Kaplan, 2020
The Alignment Problem, by Christian, 2020
This might be better than Superintelligence and Human-Compatible as an introduction to the topic of AI risk. It also seemed to me to be a surprisingly good introduction to the history of AI, how AI works, etc.
But I’m not sure this’ll be very useful for people who’ve already read/listened to a decent amount (e.g., the equivalent of 4 books) about those topics.
That’s why it’s ranked as low as it is for me.
But maybe I’m underestimating how useful it’d be to many other people in a similar position.
Evidence for that is that someone told me that an AI safety researcher friend of theirs found the book helpful.
The Sense of Style, by Pinker, 2019
One thing to note is that I think a lot of chapter 6 (which accounts for roughly a third of the book) can be summed up as “Don’t worry too much about a bunch of alleged ‘rules’ about grammar, word choice, etc. that prescriptivist purists sometimes criticise people for breaking.”
And I already wasn’t worried about most of those alleged rules, and hadn’t even heard of some of them.
And I think one could get the basic point without seeing all the examples and discussion.
So a busy reader might want to skip or skim most of that chapter.
Though I think many people would benefit from the part on commas.
I read an ebook rather than listening to the audiobook, because I thought that might be a better way to absorb the lessons about writing style
The Dead Hand, by Hoffman, 2009
Thinking, Fast and Slow, by Kahneman, 2011
This might be the most useful of all these books for people who have little prior familiarity with the ideas, but I happened to already know a decent portion of what was covered.
Against the Grain, by Scott, 2017
I read this after Sapiens and thought the content would overlap a lot, but in the end I actually thought it provided a lot of independent value.
Sapiens, by Harari, 2015
Destined for War, by Allison, 2017
The Dictator’s Handbook, by de Mesquita & Smith, 2012
Age of Ambition, by Osnos, 2014
Moral Mazes, by Jackall, 1989
The Myth of the Rational Voter, by Caplan, 2007
The Hungry Brain, by Guyenet, 2017
If I recall correctly, I found this surprisingly useful for purposes unrelated to the topics of weight, hunger, etc.
E.g., it gave me a better understanding of the liking-wanting distinction
See also this Slate Star Codex review (which I can’t remember whether I read)
The Quest: Energy, Security, and the Remaking of the Modern World, by Yergin, 2011
Harry Potter and the Methods of Rationality, by Yudkowsky, 2010-2015
I found this both surprisingly useful and very surprisingly enjoyable
To be honest, I was somewhat amused and embarrassed to find what is ultimately Harry Potter fan fiction as enjoyable and thought-provoking as I found this
This overlaps in many ways with Rationality: AI to Zombies, so it would be more valuable to someone who hadn’t already read those sequences
But I’d recommend such a person read those sequences before reading this; I think they’re more useful (though less enjoyable)
Within the 2 hours before I go to sleep, I try not to stimulate my brain too much—e.g., I try to avoid listening to most nonfiction audiobooks during that time. But I found that I could listen to this during that time without it keeping my brain too active. This is a perk, as that period of my day is less crowded with other things to do.
Same goes for the books Steve Jobs, Power Broker, Animal Farm, and Consider the Lobster.
Steve Jobs, by Walter Isaacson, 2011
Surprisingly useful, considering that I don’t plan to emulate Jobs’ life at all and that I don’t work in a relevant industry
Enlightenment Now, by Pinker, 2018
The Undercover Economist Strikes Back, by Harford, 2014
Against Empathy, by Bloom, 2016
Inadequate Equilibria, by Yudkowsky, 2017
Radical Markets, by Posner & Weyl, 2018
How to Be a Dictator: The Cult of Personality in the Twentieth Century, by Dikötter, 2019
On Tyranny: Twenty Lessons from the Twentieth Century, by Snyder, 2017
It seemed to me that most of what Snyder said was either stuff I already knew, stuff that seemed kind-of obvious or platitude-like, or stuff I was skeptical of
This might be partly due to the book being under 2 hours, and thus giving just a quick overview of the “basics” of certain things
So I do think it might be fairly useful per minute for someone who knew quite little about things like Hitler and the Soviet Union
Climate Matters: Ethics in a Warming World, by John Broome, 2012
The Power Broker, by Caro, 1975
Very interesting and engaging, but also very long and probably not super useful.
Science in the Twentieth Century: A Social-Intellectual Survey, by Goldman, 2004
This is actually a series of audio recordings of lectures, rather than a book
Animal Farm, by Orwell, 1945
Brave New World, by Huxley, 1932
Consider the Lobster, by Wallace, 2005
To be honest, I’m not sure why Wiblin recommended this. But I benefitted from many of Wiblin’s other recommendations. And I did find this book somewhat interesting.
Honorable mention: 1984, by Orwell, 1949. I haven’t included that in the above list because I read it before I learned about EA. But I think the book, despite being a novel, is actually the most detailed exploration I’ve seen of how a stable, global totalitarian system could arise and sustain itself. (I think this is a sign that there needs to be more actual research on that topic—a novel published more than 70 years ago shouldn’t be one of the best sources on an important topic!)
(Hat tip to Aaron Gertler for sort-of prompting me to post this list.)
I recommend making this a top-level post. I think it should be one of the most-upvoted posts on the “EA Books” tag, but I can’t tag it as a Shortform post.
I had actually been thinking I should probably do that sometime, so your message inspired me to pull the trigger and do it now. Thanks!
(I also made a few small improvements/additions while I was at it.)
Note: This is a slightly edited excerpt from my 2019 application to the FHI Research Scholars Program. I’m unsure how useful this idea is. But twice this week I felt it’d be slightly useful to share this idea with a particular person, so I figured I may as well make a shortform of it.
Efforts to benefit the long-term future would likely gain from better understanding what we should steer towards, not merely what we should steer away from. This could allow more targeted actions with better chances of securing highly positive futures (not just avoiding existential catastrophes). It could also help us avoid negative futures that may not appear negative when superficially considered in advance. Finally, such positive visions of the future could facilitate cooperation and mitigate potential risks from competition (see Dafoe, 2018 on “AI Ideal Governance”). Researchers have begun outlining particular possible futures, arguing for or against them, and surveying people’s preferences for them. It’d be valuable to conduct similar projects (via online surveys) that address several limitations of prior efforts.
First, these projects should provide relatively detailed portrayals of the potential futures under consideration. This could be done using summaries of scenarios richly imagined in existing sources (e.g., Tegmark’s Life 3.0, Hanson’s Age of Em) or generated during the “world-building” efforts to be conducted at the Augmented Intelligence Summit. This could address people’s apparent tendency to be repelled by descriptions of futures that simplistically maximise things they claim to intrinsically value while stripping away things they don’t. It could also allow for quantitative and qualitative feedback on these scenarios and various elements of them. People may find it easier to critique and build upon presented scenarios than to imagine ideal scenarios from scratch.
Second, these projects should include large, representative, cross-national samples. Existing research has typically included only small samples which often differ greatly from the general population. This doesn’t fully capture the three above-mentioned benefits of efforts to understand what futures we actually want.
Third, experimental manipulations could be embedded within the surveys to explore the impact of different framings, different information, and different arguments, partly to reveal how fragile people’s preferences are.
It would be useful to also similarly survey medium-term-relevant preferences (e.g., regarding institutions for managing adaptations to increasing AI capabilities; Dafoe, 2018).
One concern with this idea is that the long-term future may be so radically unfamiliar and unpredictable that any information regarding people’s present preferences for it would be irrelevant to scenarios that are actually plausible. Another concern is that present preferences may not be worth following anyway, as they may reflect intuitions that make sense in our current environment but wouldn’t in radically different future environments. They may also not be worth following if issues like framing effects and scope neglect become particularly impactful when evaluating such unfamiliar and astronomical options.
 I wrote this application when I was very new to EA and I was somewhat grasping at straws to come up with longtermism-relevant research ideas that would make use of my psychology degree.
Socrates makes the following argument:
Just like we only allow skilled pilots to fly airplanes, licensed doctors to operate on patients, or trained firefighters to use fire engines, we should similarly only allow informed voters to vote in elections.
“The best argument against democracy is a five minute conversation with the average voter”. Half of American adults don’t know that each state gets two senators and two thirds don’t know what the FDA does.
(Whether a voter is informed can be evaluated by a short test on the basics of elections, for example.)
Pros: better quality of candidates elected; would give uninformed voters a strong incentive to learn about elections.
Cons: would be crazy unpopular; possibility of the small group of informed voters acting in self-interest—which would worsen inequality.
(I did a shallow search and couldn’t find something like this on the EA Forum or Center for Election Science.)
What’s the proposed policy change? Making understanding of elections a requirement to vote?
Yep, that’s what comes to my mind at least :P