Thanks for this feedback!
I considered Fuzzies/Utilons and The Unit of Caring, but it was hard to find excerpts that didn't use obfuscating jargon or dive off into tangents; trying to work around those passages hurt the flow of the material I most wanted to use. But both were important for my own EA journey as well, and I'll keep thinking about ways to fit in some of those concepts (maybe through other pieces that reference them with fewer tangents).
As for "Beyond the Reach of God," I'd prefer to avoid pieces with a heavy atheist slant, given that one goal is for the series to feel welcoming to people from a lot of different backgrounds.

Scott's piece was part of the second edition of the Handbook, and I agree that it's a classic; I'd like to try working it into future material (right now, my best guess is that the next set of articles will focus on cause prioritization, and Scott's piece fits in well there). As an addition to this section, I think it makes a lot of the same points as the Singer and Soares pieces, though it might be better than one or the other of those.
I think that if the essay said things like "Religious people are stupid, isn't it obvious" and attempted to socially shame religious people, then I'd be pretty open to suggesting edits to those parts.

But as in my other comment, I would like to respect religious people enough to trust that they can read writing about a godless universe and understand its points well, even if they would use other examples themselves.
I also think many religious people agree that God will not stop the world from becoming sufficiently evil, in which case they'll be perfectly able to appreciate the finer points of the post even though it's written in a way that misunderstands their relationship to their religion.
I think either way, if they're going to engage seriously with intellectual thought in the modern world, they need to take responsibility and learn to engage with writing about the world that doesn't assume there's an interventionist aligned superintelligence (my terminology; I don't mean anything by it). I don't think it's right to walk on eggshells around religious people, and I don't think it makes sense to throw out powerful ideas and strongly emotional/artistic work to make sure such people don't need to learn to engage with art and ideas that don't share their specific assumptions about the world.
Checks out, that makes sense.
If there were no great essays with similar themes aside from Eliezer's, I'd be much more inclined to include it in a series (probably a series explicitly focused on X-risk, as the current material really doesn't get into that, though perhaps it should). But I think that between Ord, Bostrom, and others, I'm likely to find a piece that makes similar compelling points about extinction risk without the surrounding Eliezerisms.

Sometimes, Eliezerisms are great; I enjoy almost everything he's ever written. But I think we'd both agree that his writing style is a miss for a good number of people, including many who have made great contributions to the EA movement. Perhaps the chance of catching people especially well makes his essays the highest-EV option, but there are a lot of other great writers who have tackled these topics.
(There's also the trickiness of having CEA's name attached to this, which means that, however many disclaimers we attach, there will be readers who assume it's an important part of EA to be atheist, or to support cryonics, etc.)

To clarify, I wouldn't expect an essay like this to turn off most religious readers, or even to completely alienate any one person; it's just got a few slings and arrows that I think can be avoided without compromising on quality.
Of course, there are many bits of Eliezer that I'd be glad to excerpt, including from this essay; if the excerpt sections in this series get more material added to them, I might be interested in something like this:

What can a twelfth-century peasant do to save themselves from annihilation? Nothing. Nature's little challenges aren't always fair. When you run into a challenge that's too difficult, you suffer the penalty; when you run into a lethal penalty, you die. That's how it is for people, and it isn't any different for planets. Someone who wants to dance the deadly dance with Nature, does need to understand what they're up against: Absolute, utter, exceptionless neutrality.
I see. As I hear you, it's not that we must go overboard on avoiding atheism, but that it's a small-to-medium-sized feather on the scales, ultimately decision-relevant because there is no appropriately strong feather arguing that this essay deserves the space in this list.

From my vantage point, there aren't essays in this series that deal with giving up hope as directly as this one does. Singer's piece and the Max Roser piece both try to look at awful parts of the world and argue that you should do more, to make progress happen faster. Many essays, like the quote from Holly about being in triage, talk about the current rate of deaths and how to reduce that number. But I think none engage so directly with the possibility of failure, of progress stopping and never starting again. I think existential risk is about this, but you don't even need to get to a discussion of things like maxipok and astronomical waste to bring failure onto the table in a visceral and direct way.
*nods* I'll respond to the specific things you said about the different essays. I split this into two comments for length.
I think there are a few pieces of jargon that you could change (e.g. Unit of Caring talks about "akrasia", which isn't relevant). I imagine it'd be okay to request a few small edits to the essay.

But I think that overall the posts read the way experts talk in an interview: directly and substantively. I don't think you should be afraid to show people a high-level discussion just because they don't already know all of the details being discussed. It's okay for there to be details a reader only has a vague grasp of, if the overall points are simple and clear; I think this is good, as it helps readers see that there are levels above to reach.

It's like how EA student group events would always be "Intro to EA". Instead, I think it's really valuable and exciting to hear how Daniel Kahneman thinks about the human mind, how Richard Feynman thinks about physics, or how Peter Thiel thinks about startups, even if you don't fully understand all the terms they use, like "System 1/System 2" or "conservation law" or "derivatives market". I would give the Feynman Lectures to a young teenager who doesn't know all of physics, because Feynman speaks in a way that gets to the essential life of physics so brilliantly, and I think that giving them to a kid who is destined to become a physicist will leave the kid in wonder and wanting to learn more.

Overall I think the desire to remove challenging or nuanced discussion is a push in the direction of saying boring things, or not saying anything substantive at all because it might be a turn-off to some people. I agree that Paul Graham's essays are always written in simple language, but I don't think that scientists and intellectuals should aim for that all the time when talking to non-specialists. Many of the greatest pieces of writing I know use very technical examples or analogies, and that's necessary to make their points.

See the graph about dating strategies here. The goal is to get strong hits that make a person say "This is one of the most important things I've ever read", not to make sure that there are no difficult sentences that might be confusing. People will get through the hard bits if there are true gems there, and I think the above essays are quite exciting and deeply change the way a lot of people think.
I also think the essays are exciting and have a good track record of convincing people. And my goal with the Handbook isn't to avoid jargon altogether. To some extent, though, I'm trying to pack a lot of points into a smallish space, which isn't how Eliezer's style typically works. If the essay made the same points at half the length, I think it would be a better candidate.

Maybe I'll try to produce an edited version at some point (with fewer digressions, and e.g. a "Fuzzies and Utilons" footnote noting that ego depletion failed to replicate). But the more edits a piece needs, the longer I expect approval to take, especially from an author who doesn't have much time to spare; that's another trade-off I had to consider when selecting pieces (I don't think anything in the current series had more than a paragraph removed, unless it was printed as an excerpt).
I don't want to push you to spend a lot of time on this, but if you're game, would you want to suggest an excerpt from either piece (say 400 words at most) that you think gets the central point across without forcing the reader to read the whole essay? This won't be necessary for all readers, but it's something I've been aiming for.

I do expect that further material for this project will contain a lot more jargon and complexity, because it won't be explicitly pitched as an introduction to the basic concepts of EA (and you really can't get far in e.g. global development without digging into economics, or X-risk without getting into topics like "corrigibility").
A note on the Thiel point: as far as I recall, his thinking on startups became a popular phenomenon only after Blake Masters published notes on his class, though I don't know whether the notes did much to make Thiel's thinking clearer (maybe they were just the first widely available source of that thinking).
Sure thing. The M:UoC post is more of a meditation on a theme: very well written, but less a key insight than an impression of a harsh truth, so it's hard to extract a core argument. I'd suggest the following from the Fuzzies/Utilons post instead. (It has about a paragraph cut in the middle, marked by the ellipsis.)
--
If I had to give advice to some new-minted billionaire entering the realm of charity, my advice would go something like this:
To purchase warm fuzzies, find some hard-working but poverty-stricken woman who's about to drop out of state college after her husband's hours were cut back, and personally, but anonymously, give her a cashier's check for $10,000. Repeat as desired.
To purchase status among your friends, donate $100,000 to the current sexiest X-Prize, or whatever other charity seems to offer the most stylishness for the least price. Make a big deal out of it, show up for their press events, and brag about it for the next five years.
Then – with absolute cold-blooded calculation – without scope insensitivity or ambiguity aversion – without concern for status or warm fuzzies – figuring out some common scheme for converting outcomes to utilons, and trying to express uncertainty in percentage probabilities – find the charity that offers the greatest expected utilons per dollar. Donate up to however much money you wanted to give to charity, until their marginal efficiency drops below that of the next charity on the list.
But the main lesson is that all three of these things – warm fuzzies, status, and expected utilons – can be bought far more efficiently when you buy separately, optimizing for only one thing at a time... Of course, if you're not a millionaire or even a billionaire – then you can't be quite as efficient about things, can't so easily purchase in bulk. But I would still say – for warm fuzzies, find a relatively cheap charity with bright, vivid, ideally in-person and direct beneficiaries. Volunteer at a soup kitchen. Or just get your warm fuzzies from holding open doors for little old ladies. Let that be validated by your other efforts to purchase utilons, but don't confuse it with purchasing utilons. Status is probably cheaper to purchase by buying nice clothes.

And when it comes to purchasing expected utilons – then, of course, shut up and multiply.
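That last rule – give to the charity with the greatest expected utilons per dollar until its marginal efficiency drops below the runner-up's – is simple enough to sketch in code. Below is a minimal Python toy of that greedy procedure, not anything from the essay itself: the charity names, the outcome probabilities, and the decay factor standing in for diminishing returns are all invented for illustration.

```python
# Toy greedy allocation: fund whichever charity currently offers the
# greatest expected utilons per dollar, letting each grant decay that
# charity's marginal rate as a crude stand-in for diminishing returns.
# All names and numbers here are hypothetical.

def expected_utilons_per_dollar(outcomes):
    """Sum of probability * utilons-per-dollar over a charity's outcomes."""
    return sum(p * u for p, u in outcomes)

def allocate(budget, charities, step=1_000, diminish=0.9):
    """Donate `step` dollars at a time to the current best charity;
    each grant multiplies that charity's marginal rate by `diminish`."""
    rates = {name: expected_utilons_per_dollar(o) for name, o in charities.items()}
    grants = {name: 0 for name in charities}
    while budget >= step:
        best = max(rates, key=rates.get)  # greatest expected utilons per dollar
        grants[best] += step
        budget -= step
        rates[best] *= diminish  # marginal efficiency drops as you give more
    return grants

# Hypothetical (probability, utilons-per-dollar) outcomes for two charities.
charities = {
    "bednets": [(0.8, 10.0), (0.2, 0.0)],    # reliable, modest payoff
    "moonshot": [(0.1, 120.0), (0.9, 0.0)],  # long shot, big payoff
}
print(allocate(budget=10_000, charities=charities))
# -> {'bednets': 3000, 'moonshot': 7000}
```

Fixed-size grants keep the sketch simple; the point is just that "shut up and multiply" cashes out as an expected-value calculation followed by a greedy comparison of marginal rates, however you choose to model the returns.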