Editorial/Speculation/Personal comments:

This article might be good and satisfying to many people because it gives a plausible sense of what happened in EA related to SBF, and of what EA leaders might have known. The article goes beyond the “press releases” we have seen, does not come from an EA source, and is somewhat authoritative.
Rob Wiblin appears quite a few times and is quoted. In my opinion he is right, and most EAs and regular people would agree with him. New Yorker articles include details meant to suggest intimacy and understanding, but the narrative attached to those details is not always substantive or true. This style is what Wiblin is reacting to.
Gideon Lewis-Kraus makes some characterizations that don’t seem especially insightful. As in his last piece, he maintains an odd sense of surprise that a large movement with billions of dollars, and a history of dealing with bad actors, has an “inner circle”.
Lewis-Kraus has had great access to senior EAs and to inside documents. Yet after weeks of work, there is not much he shows he has uncovered that isn’t available after a few conversations, or even publicly on the EA Forum.
My intuitions differ some here. I don’t know about Will MacAskill’s notion of moral pluralism. But my notion of moral pluralism involves assigning some weight to views even if they’re informed by less data or less reflection, and also doing some upweighting on outsider views simply because they’re likely to be different (similar to the idea of extremization in the context of forecast aggregation).
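(For concreteness, here is a minimal sketch in Python of the kind of extremization I have in mind: pool forecasts in log-odds space, then push the pooled forecast away from 0.5. The factor of 1.5 is purely illustrative, not a calibrated choice.)

```python
import math

def extremize(probs, factor=1.5):
    """Pool probability forecasts by averaging them in log-odds space,
    then multiply by a factor > 1 so the pooled forecast is pushed
    away from 0.5 ("extremized")."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    pooled = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-factor * pooled))

# Three forecasters lean the same way; the extremized pool leans further.
print(extremize([0.6, 0.7, 0.65]))  # ~0.72, vs. a simple mean of 0.65
```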
If a regular person thinks “our great virtue is being right” sounds like hubris, that’s evidence of actual hubris. You don’t just replace “our great virtue is being right” with “our focus is being right” because it sounds better. You make that replacement because the second statement harmonizes with a wider variety of moral and epistemic views.
PR is more corrosive than reputation because “reputation” allows for the possibility of observers outside your group who can form justified opinions about your character and give you useful critical feedback on your thinking and behavior.
(One of the FTX Future Fund researchers piped up to make a countervailing point, referring, presumably, to donations that Thiel made to the campaigns of J. D. Vance and Blake Masters: “Might be a useful ally at some point given he is trying to buy a couple Senate seats.”)
There’s a sense in which reputational harm from a vignette like this is justified. People who read it can reasonably guess that the speaker has few instinctive misgivings about allying with a “semi-fascist” who’s buying political power and violating widely held “common sense” American morality.
One would certainly hope that deontological considerations (beyond just PR) would come up at some point, were EA considering an alliance with Thiel. But it concerns me that Lewis-Kraus quotes so much “PR” discussion, and so little discussion of deontological safeguards. I don’t see anything here that reassures me ethical injunctions would actually come up.
And instinctive misgivings actually matter, because it’s best to nip your own immoral behavior in the bud. You don’t want to be in a situation where each individual decision seems fine and you don’t realize how big the sum of those decisions has become until the end, as SBF put it (paraphrased) in this interview. That’s where Lewis-Kraus’ references to Schelling fences and “momentum” come in.
The best time to get a “hey this could be immoral” mental alert is as soon as you have the idea of doing the thing. Maybe you do the thing despite the alert. I’m in favor of redirecting the trolley despite the “you will be responsible for the death of a person” alert. But an alert is generally a valuable opportunity to reflect.
Finally, some meta notes:
I doubt I’m the only person thinking along these lines. Matt Yglesias also seems concerned, for example. The paragraphs above are an attempt to steelman what seems to be a common reaction on e.g. Twitter in a way that senior EAs will understand.
The above paragraphs largely reflect updates in my thinking about EA over the past few years, and especially the past few weeks. My thinking used to be a lot closer to yours and the thinking of the quoted Slack participants.
I’ve noticed it’s easy for me to get into a mode of wondering if I’m a good person and trying to defend my past actions. Generally speaking it has felt more useful to reflect on how I can improve. Growth mindset over fixed mindset, essentially.
That said, I think it is a major credit to EAs that they work so hard to do good. Lack of interest in identifying and solving the world’s biggest problems strikes me as a major problem with common-sense morality. So I don’t think of EAs in the Slack channel as bad people. I think of them as people working hard to do good, but the notion of “good” they were optimizing for was a bit off. (I used to be optimizing for that thing myself!)
I’ve also noticed that when experiencing an identity threat (e.g. sunk cost fallacy), it’s useful for me to write a specific alternative plan, without committing to it during the process of writing it. This could look like: Make a big list of things I could’ve done differently… then circle the ones I think I should’ve done, in hindsight, with the benefit of reflection. Or, if I’m feeling doubtful about current plans, avoid letting those doubts consume me and instead outline one or more specific alternative plans to consider.
I’m unsure what part of my comment you are replying to. I’m happy to own up to valuing “being right over optics/politics”. I’m OK with you being aggressive or even hostile, if that comes from making good inferences about me.
However, many of the things you said are confusing to me. I don’t know how the blog post on “PR”/”reputation” is relevant. Also, I agree with Matt Yglesias (I’m in touch with him!).
Importantly, I think it would be good for you to be aware of how the writing in your comment might come across to some people.
Your comment begins with “my intuitions differ some here…”, which implies I share the views you go on to oppose. This impression seems confirmed throughout, e.g. “My thinking used to be a lot closer to yours and the thinking of the quoted Slack participants”.
If I tried to reply, I think I would be obligated to refute or deal with the associations you imply for me, which include “ally with a semi-fascist”, “violating American morality”, “little discussion of deontological safeguards”, and “nip [my] own immoral behavior in the bud”.
I don’t think the following idea is in the article, much less my comment: “You don’t just replace ‘our great virtue is being right’ with ‘our focus is being right’ because it sounds better.”
I don’t think you intended this, but I think some people would find this strange and somewhat offensive.
I actually found your reply interesting and substantive. I think you have interesting opinions to share.
My comment was a sloppy attempt at simultaneously replying to (a) attitudes I’ve personally observed in EA, (b) the PR Slack channel as described in the article, and (c) your comment. I apologize if I misunderstood your comment or mischaracterized your position.
My reply was meant as a vague gesture at how I would like EA leadership to change relative to what came through in the New Yorker article. I wouldn’t read too much into what I wrote. It’s tricky to make a directional recommendation, because there’s always the possibility that the reader has already made the update you want them to make, and your directional recommendation causes them to over-update.
Lewis-Kraus has had great access to senior EAs and to inside documents. Yet after weeks of work, there is not much he shows he has uncovered that isn’t available after a few conversations, or even publicly on the EA Forum.
This is true, but it’s about as good as can be expected since it’s an online New Yorker piece. Their online pieces are much closer to blog posts. The MacAskill profile that ran in the magazine was the result of months of reporting, writing, editing, and fact-checking, all with expenses for things like travel.