Being an early adopter is not the same as being a founder, let alone the same as your influencers being founders.
You said “The rationalist community also wasn’t involved from the start”. I think this is false almost no matter how you slice it. “OK, the rationalist community was involved but not all of the founders were rationalists” is a different claim, and I agree with that claim.
You challenged my claim that EA was founded on naive consequentialism. To clarify, by that I mean the idea, associated with Toby Ord and Brian Tomasik, of combining greater generosity with greater effectiveness for a multiplicative effect, and, to a slightly lesser extent, GiveWell's idea of simply being more effective in one's giving.
If by “The EA movement was founded on people doing ‘naive’ consequentialism extremely well” you just meant “Toby Ord and Brian Tomasik did a lot of good by helping propagate the idea that generosity and effectiveness are good things, and GiveWell did a lot of good by encouraging people to try to donate to charity in a more discerning way”, then I don’t disagree.
I initially misunderstood you as making a claim that early EAs were philosophically committed to “naive consequentialism” in the sense of “willingness to lie, steal, cheat, murder, etc. whenever the first-order effects of this seem to outweigh the costs”. I’d want to see more evidence before I believed that, and I’d also want to be very clear that I don’t consider the same set of people (Ord, Tomasik, GiveWell) to be a majority of “early EA”, whether we’re measuring in head count, idea influence, idea quality and novelty, etc.
(Some examples of early-EA ideas that strike me as pushing against naive consequentialism: Eliezer’s entire oeuvre, especially the discussion of TDT, ethical injunctions, and “ends don’t always justify the means”; moral uncertainty, e.g. across different moral theories; general belief in moral progress and in the complexity of value, both of which increase the risk that we’re morally mistaken today; approaches to moral uncertainty such as the parliamentary model; “excited altruism” and other framings that emphasized personal values and interests over obligation; the unilateralist’s curse; ideas like Chesterton’s Fence and “your confidence level inside the argument is different from your confidence level outside the argument”, which push toward skepticism of first-order utility calculations.)
I do think that early EA was too CDT-ish, and (relatedly) too quick to play down the costs of things like lying and manipulating others. I think it’s good that EA grew out of that to some degree, and I hope we continue to grow out of it more.