I came across this article from the Carnegie Council’s Artificial Intelligence and Equality Initiative, and I can’t help but feel that it misunderstands longtermism and EA. The article notes the popularity of William MacAskill’s new book “What We Owe the Future” and its case for considering future generations and the long-run trajectory of civilization. I would recommend reading the article before my take below, but in short, the Carnegie Council makes the common fallacious arguments against longtermism.
They make it seem as though addressing longtermism requires completely ignoring the present. I have never heard an EA argue for disregarding contemporary issues.
They suggest that longtermism requires you to “put all your eggs in one basket,” with the basket being the far future rather than today’s problems.
Regulating AI will slow production. Yes, this is true, but leaving an accelerating, hard-to-control technology like AI unchecked risks the end of humanity and mass suffering. The trade-off is therefore justified, much as it is for regulating carbon emissions.