How does OpenPhil decide what x-risk-related research to do itself, such as its "worldview investigations"? For example, why did OpenPhil decide to research "How much computational power it takes to match the human brain" (Joseph Carlsmith) and "Forecasting transformative AI with biological anchors" (Ajeya Cotra)?