Thanks for this feedback!

I considered Fuzzies/Utilons and The Unit of Caring, but it was hard to find excerpts that didn’t use obfuscating jargon or dive off into tangents; trying to work around those sections hurt the flow of the passages I most wanted to use. But both were important for my own EA journey as well, and I’ll keep thinking about ways to fit in some of those concepts (maybe through other pieces that reference them with fewer tangents).
As for “Beyond the Reach of God,” I’d prefer to avoid pieces with a heavy atheist slant, given that one goal is for the series to feel welcoming to people from a lot of different backgrounds.
Scott’s piece was part of the second edition of the Handbook, and I agree that it’s a classic; I’d like to try working it into future material (right now, my best guess is that the next set of articles will focus on cause prioritization, and Scott’s piece fits in well there). As an addition to this section, I think it makes a lot of the same points as the Singer and Soares pieces, though it might be better than one or the other of those.
> As for “Beyond the Reach of God,” I’d prefer to avoid pieces with a heavy atheist slant, given that one goal is for the series to feel welcoming to people from a lot of different backgrounds.
I think that if the essay said things like “Religious people are stupid, isn’t it obvious?” and attempted to socially shame religious people, then I’d be pretty open to suggesting edits to those parts.
But as in my other comment, I would like to respect religious people enough to trust that they can handle reading about a godless universe and understand the points well, even if they would use other examples themselves.
I also think many religious people agree that God will not stop the world from becoming sufficiently evil, in which case they’ll be perfectly able to appreciate the finer points of the post even though it’s written in a way that misunderstands their relationship to their religion.
I think either way, if they’re going to engage seriously with intellectual thought in the modern world they need to take responsibility and learn to engage with writing about the world which doesn’t expect that there’s an interventionist aligned superintelligence (my terminology; I don’t mean anything by it). I don’t think it’s right to walk on eggshells around religious people, and I don’t think it makes sense to throw out powerful ideas and pieces of strongly emotional/artistic work to make sure such people don’t need to learn to engage with art and ideas that don’t share their specific assumptions about the world.
> I think either way, if they’re going to engage seriously with intellectual thought in the modern world they need to take responsibility and learn to engage with writing about the world which doesn’t expect that there’s an interventionist aligned superintelligence.
Checks out, that makes sense.

If there were no great essays with similar themes aside from Eliezer’s, I’d be much more inclined to include it in a series (probably a series explicitly focused on X-risk, as the current material really doesn’t get into that, though perhaps it should). But I think that between Ord, Bostrom, and others, I’m likely to find a piece that makes similar compelling points about extinction risk without the surrounding Eliezerisms.
Sometimes, Eliezerisms are great; I enjoy almost everything he’s ever written. But I think we’d both agree that his writing style is a miss for a good number of people, including many who have made great contributions to the EA movement. Perhaps the chance of catching people especially well makes his essays the highest-EV option, but there are a lot of other great writers who have tackled these topics.
(There’s also the trickiness of having CEA’s name attached to this, which means that, however many disclaimers we attach, there will be readers who assume it’s an important part of EA to be atheist, or to support cryonics, etc.)
To clarify, I wouldn’t expect an essay like this to turn off most religious readers, or even to completely alienate any one person; it just has a few slings and arrows that I think can be avoided without compromising on quality.
Of course, there are many bits of Eliezer’s writing that I’d be glad to excerpt, including from this essay; if the excerpt sections in this series get more material added to them, I might be interested in something like this:
What can a twelfth-century peasant do to save themselves from annihilation? Nothing. Nature’s little challenges aren’t always fair. When you run into a challenge that’s too difficult, you suffer the penalty; when you run into a lethal penalty, you die. That’s how it is for people, and it isn’t any different for planets. Someone who wants to dance the deadly dance with Nature, does need to understand what they’re up against: Absolute, utter, exceptionless neutrality.
> If there were no great essays with similar themes aside from Eliezer’s, I’d be much more inclined to include it in a series (probably a series explicitly focused on X-risk, as the current material really doesn’t get into that, though perhaps it should). But I think that between Ord, Bostrom, and others, I’m likely to find a piece that makes similar compelling points about extinction risk without the surrounding Eliezerisms.
I see. As I hear you, it’s not that we must go overboard on avoiding atheism, but that it’s a small-to-medium-sized feather on the scales, ultimately decision-relevant because there is no appropriately strong feather arguing that this essay deserves the space in this list.
From my vantage point, there aren’t essays in this series that deal with giving up hope as directly as this one. I think Singer’s piece and the Max Roser piece both try to look at awful parts of the world and argue that you should do more to make progress happen faster. Many essays, like the quote from Holly about being in triage, talk about the current rate of deaths and how to reduce that number. But I think none engage so directly with the possibility of failure, of progress stopping and never starting again. I think existential risk is about this, but I think that you don’t even need to get to a discussion of things like maxipok and astronomical waste to bring failure onto the table in a visceral and direct way.
*nods* I’ll respond to the specific things you said about the different essays. I split this into two comments for length.
> I considered Fuzzies/Utilons and The Unit of Caring, but it was hard to find excerpts that didn’t use obfuscating jargon or dive off into tangents
I think there are a few pieces of jargon that you could change (e.g. Unit of Caring talks about ‘akrasia’, which isn’t relevant). I imagine it’d be okay to request a few small edits to the essay.
But I think that overall the posts talk the way experts would talk in an interview, directly and substantively. I don’t think you should be afraid to show people a high-level discussion just because they don’t know all of the details being discussed already. It’s okay for there to be details that a reader has only a vague grasp on, if the overall points are simple and clear – I think this is good; it helps them see that there are levels above to reach.
It’s like how EA student group events would always be “Intro to EA”. Instead, I think it’s really valuable and exciting to hear how Daniel Kahneman thinks about the human mind, or how Richard Feynman thinks about physics, or how Peter Thiel thinks about startups, even if you don’t fully understand all the terms they use, like “System 1 / System 2” or “conservation law” or “derivatives market”. I would give the Feynman lectures to a young teenager who doesn’t know all of physics, because he speaks in a way that gets to the essential life of physics so brilliantly, and I think that giving them to a kid who is destined to become a physicist will leave the kid in wonder and wanting to learn more.
Overall I think the desire to remove challenging or nuanced discussion is a push in the direction of saying boring things, or not saying anything substantive at all because it might be a turn-off to some people. I agree that Paul Graham’s essays are always written in simple language, but I don’t think that scientists and intellectuals should aim for that all the time when talking to non-specialists. Many of the greatest pieces of writing I know use very technical examples or analogies, and that’s necessary to make their points.
See the graph about dating strategies here. The goal is to get strong hits that make a person say “This is one of the most important things I’ve ever read”, not to make sure that there are no difficult sentences that might be confusing. People will get through the hard bits if there are true gems there, and I think the above essays are quite exciting and deeply change the way a lot of people think.
I also think the essays are exciting and have a good track record of convincing people. And my goal with the Handbook isn’t to avoid jargon altogether. To some extent, though, I’m trying to pack a lot of points into a smallish space, which isn’t how Eliezer’s style typically works. If the essay made the same points at half the length, I think it would be a better candidate.
Maybe I’ll try to produce an edited version at some point (with fewer digressions, and with a “Fuzzies and Utilons” footnote noting, for example, that ego depletion failed to replicate). But the more edits a piece needs, the longer I expect approval to take, especially from an author who doesn’t have much time to spare — another trade-off I had to consider when selecting pieces (I don’t think anything in the current series had more than a paragraph removed, unless it was printed as an excerpt).
I don’t want to push you to spend a lot of time on this, but if you’re game, would you want to suggest an excerpt from either piece (say 400 words at most) that you think gets the central point across without forcing the reader to read the whole essay? This won’t be necessary for all readers, but it’s something I’ve been aiming for.
I do expect that further material for this project will contain a lot more jargon and complexity, because it won’t be explicitly pitched as an introduction to the basic concepts of EA (and you really can’t get far in e.g. global development without digging into economics, or X-risk without getting into topics like “corrigibility”).
A note on the Thiel point: As far as I recall, his thinking on startups became a popular phenomenon only after Blake Masters published notes on his class, though I don’t know whether the notes did much to make Thiel’s thinking more clear (maybe they were just the first widely-available source of that thinking).
> suggest an excerpt from either piece (say 400 words at most) that you think gets the central point across without forcing the reader to read the whole essay?
Sure thing. The M:UoC post is more like a meditation on a theme, very well written, but less a key insight than an impression of a harsh truth, so it’s hard to extract a core argument. I’d suggest the following from the Fuzzies/Utilons post instead. (It has about a paragraph cut in the middle, symbolised by the ellipsis.)
--
If I had to give advice to some new-minted billionaire entering the realm of charity, my advice would go something like this:
To purchase warm fuzzies, find some hard-working but poverty-stricken woman who’s about to drop out of state college after her husband’s hours were cut back, and personally, but anonymously, give her a cashier’s check for $10,000. Repeat as desired.
To purchase status among your friends, donate $100,000 to the current sexiest X-Prize, or whatever other charity seems to offer the most stylishness for the least price. Make a big deal out of it, show up for their press events, and brag about it for the next five years.
Then—with absolute cold-blooded calculation—without scope insensitivity or ambiguity aversion—without concern for status or warm fuzzies—figuring out some common scheme for converting outcomes to utilons, and trying to express uncertainty in percentage probabilities—find the charity that offers the greatest expected utilons per dollar. Donate up to however much money you wanted to give to charity, until their marginal efficiency drops below that of the next charity on the list.
But the main lesson is that all three of these things—warm fuzzies, status, and expected utilons—can be bought far more efficiently when you buy separately, optimizing for only one thing at a time… Of course, if you’re not a millionaire or even a billionaire—then you can’t be quite as efficient about things, can’t so easily purchase in bulk. But I would still say—for warm fuzzies, find a relatively cheap charity with bright, vivid, ideally in-person and direct beneficiaries. Volunteer at a soup kitchen. Or just get your warm fuzzies from holding open doors for little old ladies. Let that be validated by your other efforts to purchase utilons, but don’t confuse it with purchasing utilons. Status is probably cheaper to purchase by buying nice clothes.
And when it comes to purchasing expected utilons—then, of course, shut up and multiply.
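(To make that last rule concrete, here is a minimal toy sketch of the greedy “expected utilons per dollar” allocation the excerpt describes. The charities, numbers, and diminishing-returns curves below are entirely hypothetical; this is my own illustration, not anything from the essay.)

```python
# Toy sketch of the excerpt's rule: keep giving to whichever charity
# currently offers the most expected utilons per marginal dollar, until
# its marginal efficiency drops below the next-best option.
# All charities and numbers below are hypothetical.

def allocate(budget: float, marginal_utility: dict, step: float = 100.0) -> dict:
    """Greedily allocate `budget` in `step`-sized donations.

    `marginal_utility` maps a charity name to a function that returns
    expected utilons per dollar, given how much that charity has
    already received (diminishing returns).
    """
    given = {name: 0.0 for name in marginal_utility}
    while budget >= step:
        # Charity whose next dollar buys the most expected utilons.
        best = max(given, key=lambda name: marginal_utility[name](given[name]))
        if marginal_utility[best](given[best]) <= 0:
            break  # no remaining donation does any expected good
        given[best] += step
        budget -= step
    return given

# Hypothetical diminishing-returns curves (expected utilons per dollar).
charities = {
    "bednets": lambda funded: max(0.0, 10.0 - funded / 1_000),
    "deworming": lambda funded: max(0.0, 8.0 - funded / 500),
}

print(allocate(10_000.0, charities))
# The first ~$2,000 all goes to bednets (the higher marginal efficiency);
# after that, donations flow to whichever charity's marginal efficiency
# is currently higher, keeping the two roughly matched.
```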