An unfortunate dynamic has emerged around discussions of longtermism outside EA. Within EA, we have a debate about whether it’s better to donate to nearterm vs longterm charities. A lot of critical outsider discussion on longtermism ends up taking the nearterm side of our internal debate: “Those terrible longtermists want you to fund speculative Silicon Valley projects instead of giving to the world’s poorest!”
But for people outside EA, nearterm charity vs longterm charity is generally the wrong counterfactual. Most people outside EA don’t give 10% of their earnings to any effective charity. Most AI work outside EA is focused on making money or producing “cool” results, not mitigating disaster or planning for the long-term benefit of humanity.
Practically all EAs agree people should give 10% of their earnings to effective developing-world charities instead of 1% to ineffective developed-world ones. And practically all EAs agree that AI development should be done with significantly more thought and care. (I think even Émile Torres may agree on that! Could someone ask?)
It’s unfortunate that the internal nearterm vs longterm debate gets so much coverage, given that what we agree on is way more action-relevant to outsiders.
In any case, I mention this because it could play into your “ideologically diverse group of public figures” point somehow. Your idea seems interesting, but I also don’t like the idea of amplifying internal debates further. I would love to see public statements like “Even though I have cause prioritization disagreements with Person X, y’all should really do as they suggest!” And establishing a norm of using the media to gain leverage in internal debates seems pretty bad.
Yeah, it’s the narcissism of small differences. If we’re gonna emphasize our diversity more, we should also emphasize our unity. The narrative could be “EA is a framework for how to apply morality, and it’s compatible with several moral systems.”
Great points.