I agree with the central thrust of this post, and I’m really grateful that you made it. This might be the single biggest thing I want to change about EA leaders’ behavior. And relatedly, I think “be more candid, and less nervous about PR risks” is probably the biggest thing I want to change about rank-and-file EAs’ behavior. Not because the risks are nonexistent, but because trying hard to avoid the risks via not-super-honest tactics tends to cause more harm than benefit. It’s the wrong general policy and mindset.
Q: Is your approach utilitarian? A: It’s utilitarian flavoured.
This seems like an unusually good answer to me! I’m impressed, and this updates me positively about Ben Todd’s honesty and precision in answering questions like these.
I think a good description of EA is “the approach that behaves sort of like utilitarianism, when decisions are sufficiently high-stakes and there aren’t ethical injunctions in play”. I don’t think utilitarianism is true, and it’s obvious that many EAs aren’t utilitarians, and obvious that utilitarianism isn’t required for working on EA cause areas, or for being quantitative, systematic, and rigorous in your moral reasoning, etc. Yet it’s remarkable how often our prescriptions look like the prescriptions of utilitarianism anyway.
I don’t know of any better compact way of describing EA’s moral perspective than “we endorse role-playing utilitarianism (at least when the stakes are high and there aren’t relevant deontology-ish prohibitions)”. And I think it’s good and wholesome when EAs don’t try to distance themselves from utilitarianism (given how useful it is as a way of summarizing a ton of different moral views we tend to endorse), but also don’t oversimplify our relationship to utilitarianism.
it raises questions about Will’s unadvertised relationship with a controversial public figure, and one who founded a wildly successful AI Capabilities Research Lab.
I agree that it was a terrible idea to found OpenAI, and reflects very poorly on Musk (especially given the stated reasoning at the time).
I think it’s an awful idea to require every EA who’s ever sent text messages to someone in Musk’s reference class (or talked to him at a party, etc.) to publicly disclose the fact that they chatted. I don’t see the point—is the idea that talking to Elon Musk somehow taints you as a person?
Various MIRI staff have had conversations with Elon Musk in the past, and the idea that this fact is scandalous just sounds silly to me. I’d be more scandalized if EAs didn’t talk to people like Musk, given the opportunity. (Or Bill Gates, or Demis Hassabis, or Barack Obama, etc.)
On some level I just think your whole framing here—‘oh no, an EA talked to a Controversial Public Figure!’—is misguided. “Controversial” is a statement about what’s popular, not about what’s true or good. I think that the impulse to avoid interacting in any way with people who seem Controversial is the same impulse that’s behind the misguided behavior the rest of your post is talking about. It’s the mindset of cancel culture, of guilt-by-association, of ‘there’s something unwholesome about talking to the Other Side at all, even to try to convince them to come around to doing the right thing’.
If we think that someone is doing the Wrong Thing, then by default we should talk to them and try to convince them to do things differently. EAs should primarily just be advocating for what they think is true and good, in a clear and honest voice, not playing the Six Degrees of PR Contagion game.
Which part implied that to you? I don’t see Will lying about this, and I don’t see how it matters for the thread whether Will and Elon ever send each other text messages, or whether Will tries to get Elon’s buy-in on a project.
AFAIK lots of EAs have tried to get Elon to help with (or ditch!) various projects over the years, though I’m unimpressed with the results.
I was pretty alarmed by this thread from Kerry Vaughan, which touches on Ben Delo, a major EA donor (prior to SBF) who has a fraud conviction: https://twitter.com/KerryLVaughan/status/1591508697372663810. The implication here is that Ben Delo’s involvement with EA just quietly stopped being talked about, without any kind of public reflection on what could be done better moving forwards.
I’d still like more detail about what actually happened there before I assume Kerry’s account is correct. Various other recent Kerry-claims have turned out to be false or exaggerated, though I wouldn’t be surprised if EAs responded super weirdly to the Ben Delo thing.
Presumably the part where Will says “So, er, it seems that Elon Musk just tweeted about What We Owe The Future. Crazy times!” I agree that this is not something you’d say about someone you knew as well as the Signal messages show Will knew Elon, unless you were trying to obscure this fact.
Elon was a keynote speaker at EA Global 2015, so he and Will would have known each other since then.
Ah, I guess you’re saying that the “Crazy times!” part sounds starstruck and has a vibe of “this just occurred out of the blue”, and it would be weird to sound starstruck and astounded if Elon’s someone you talk to all the time and are good friends with?
I agree that would be a bit weird, though the Will-text-messages I saw didn’t cause me to think Will and Elon are that close, just that they’ve exchanged words and contact info at all. (Maybe I missed some text messages that do suggest a closer relationship?)
Upvoted. I think these are all fair points.

I agree that ‘utilitarian-flavoured’ isn’t an inherently bad answer from Ben. My internal reaction at the time, perhaps due to how the night had been marketed, was something like ‘ah, he doesn’t want to scare me off if I’m a Kantian or something’, and this probably wasn’t a charitable interpretation.

On the Elon stuff, I agree that talking to Elon is not something that should require reporting. I think the shock for me was that I saw Will’s tweet in August, which (as wock agreed) implied to me that they didn’t know each other, so when I saw the Signal conversation I felt misled and started wondering how close they actually were. That said, I had no idea Elon was an EAG keynote speaker, which is obviously public knowledge and makes the whole thing a lot less suspicious. If I were writing this again, I would also remove the word ‘controversial’, along with the claim that Elon has done harm re: AI, as I agree neither is relevant to the point I’m trying to make.
EAs and Musk have lots of connections/interactions—e.g., Musk is thanked in the acknowledgments of Bostrom’s 2014 book Superintelligence for providing feedback on the draft of the book. Musk attended FLI’s Jan 2015 Puerto Rico conference. Tegmark apparently argues with Musk about AI a bunch at parties. Various Open Phil staff were on the board of OpenAI at the same time as Musk, before Musk’s departure. Etc.