I agree somewhat, but I think this points to a real difference between rationalist communities like LessWrong and the EA community. Rationalists focus on truth; Effective Altruism focuses on goodness. Those are quite different goals when we get down to it.
While Effective Altruism relies on empirical evidence far more than most moral communities do, it is still a community focused on morality, and its lens is essentially “weak utilitarianism.” EAs don’t accept the strongest conclusions of utilitarianism, but unlike deontologists they recognize no absolute dos or don’ts.
The best example: suppose P=NP were proven true. It remains unproven either way, and most experts believe it is false, but I’ll use it to illustrate the difference between rationalists and EAs. Rationalists would publish the proof for the world, focusing on the truth. EAs would not, because one of the problems we could then solve efficiently is breaking encryption: a constructive, practical proof would deal a death blow to most of the security on computers today. It would be a hacker’s paradise. EAs would focus on how severe an information hazard that would be, and so, for the good of the world, they wouldn’t publish it.
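To make the encryption point a bit more concrete, here is a minimal Python sketch of my own (the toy cipher and function names are invented for illustration, not part of the original argument): recovering a key is easy to *verify* given a known plaintext/ciphertext pair, which is what places key recovery in NP, and a practical polynomial-time algorithm for NP problems would replace the exponential search shown below.

```python
# Hypothetical illustration with a made-up toy cipher; real ciphers differ,
# but the structure is the same: checking a guessed key is cheap,
# finding one is the hard part.

def toy_encrypt(key: int, plaintext: int) -> int:
    """Stand-in for a real block cipher with a 16-bit key: fast to compute forward."""
    return ((plaintext * (2 * key + 1)) % (1 << 16)) ^ key

def key_works(key: int, plaintext: int, ciphertext: int) -> bool:
    # Polynomial-time verification: this is what puts key recovery in NP.
    return toy_encrypt(key, plaintext) == ciphertext

def brute_force_key(plaintext: int, ciphertext: int) -> int | None:
    # Exponential in key length. A constructive proof of P=NP would let an
    # attacker replace this loop with a polynomial-time search.
    for key in range(1 << 16):
        if key_works(key, plaintext, ciphertext):
            return key
    return None

if __name__ == "__main__":
    secret_key, plaintext = 12345, 4242
    ciphertext = toy_encrypt(secret_key, plaintext)
    print(brute_force_key(plaintext, ciphertext))  # prints a working key
```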
So what are all those words for? To illustrate the difference in point of view between rationalists like LessWrong and EAs on how to prioritize truth versus goodness.
I disagree. Rationalists (well, wherever you want to put Bostrom) coined the term “infohazard”; see Scott Alexander’s “The Virtue of Silence.” They take the risks of information as power very seriously, and if knowledge that P=NP posed a threat to many beings and they thought the best course was to suppress it, they would do so. In my experience, both EAs and rationalists are very respectful of the need for discretion.
I think I see the distinction you’re making and I think the general idea is correct, but this specific example is wrong.