Greater knowledge of psychology would be powerful, but why should we expect the sign to be positive instead of, say, making the world worse by improving propaganda and marketing?
Hi Casebash,
Thank you for the question; this is an important topic.
We believe that advances in psychology could improve many people’s lives by helping with depression, increasing happiness, improving relationships, and helping people think more clearly and rationally. As a result, we’re optimistic that the sign can be positive. Our past work was primarily focused on these kinds of upsides, especially self-improvement: developing skills, improving rationality, and helping people solve problems in their lives.
That said, advancing knowledge in many areas carries potential downsides, and these are important to think through in advance. I know the EA community has considered some of the relevant questions, such as flow-through effects and how to weigh them (e.g. the impact of AMF on population size and the meat-eater problem) and cases where extra effort might be harmful (e.g. possible risks to AI safety from increases in hardware capability, and whether work on AI safety might itself contribute to AI capabilities).
Leverage 1.0 thought a lot about the impact of psychology research and came to the view that sharing the research would be positive. This is an area where it’s hard to build detailed models, though, so I’d be keen to learn more about EA research on these kinds of questions.