Research into increasing the “surface area” of important problems
Artificial Intelligence, Biorisk and Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes, Economic Growth, Great Power Relations, Space Governance, Effective Altruism
The idea here is that 80,000 Hours seems to follow an approach along the lines of (1) What are the biggest problems? (2) What are the obvious ways to make progress on these problems? (3) How can we get people to implement these obvious ways?
If we hold the first question constant, we can instead ask: (2) What are the skillsets of the people interested in solving these problems? (3) How can people with those skillsets make progress on these problems?
This way we might find that (say) there are many cultural anthropologists who want to avert risks from AI. So how can a cultural anthropologist specialize or make an easy career change to contribute to AI safety? That is the hard question that will take a lot of research to answer. But if, as in this made-up example, there are enough such cultural anthropologists, the research may be worth it even if each cultural anthropologist’s work is less impactful than that of a machine learning specialist.
This example is about increasing the surface area that can be used by people, but one might also increase the surface area that can be used by entrepreneurs or funders. For example, one could find creative ways in which foundations bound by restrictive by-laws can still contribute to AI safety: maybe they can’t donate to MIRI, but they can fund a conference on automated theorem provers in Haskell that is useful to MIRI for recruiting.