Announcing Insights for Impact
Hey all!
Do you want to follow EA research, but find papers and longform forum posts too dry?
Late last year, Jenna Ong and I noticed a lack of research-focused EA video content and decided to do something about it. Today, we are excited to introduce Insights for Impact, a YouTube channel that’s all about communicating the key insights of EA-aligned research papers.
In our first video, How Science Misunderstands Power, we explore why well-meaning scientists failed to prevent nuclear proliferation in the 20th century. Perhaps by examining the history of nuclear weapon development, we may be able to better manage other powerful technologies, like AI and genetic engineering.
A 2018 paper by Samo Burja and Zachary Lerangis, The Scientists, the Statesman, and the Bomb, served as the basis for this video. We also drew inspiration from HaydnBelfield’s post, especially the idea that the current headspace of the AI safety community closely resembles the “this is the most important thing” mindset of scientists in the mid-20th century. These case studies suggest that both social and technical factors are crucial to ensuring powerful technologies have a positive impact.
In future videos, we want to explore a range of EA-relevant cause areas, and we’d love to collaborate with researchers to ensure we portray their work accurately. So if you’re a researcher who wants to give your work a voice outside the forum, please get in touch!