I really like this. In fact, I would take it a step further: I believe we should expand the multi-armed bandit model to cover exploring areas like the ones below (a toy code sketch of the bandit framing follows the list):
Philosophy, particularly ethics. It would be nice to know whether hedonic or preference utilitarianism is correct without having to compute all of our Coherent Extrapolated Volition. Perhaps a few people doing such narrow, targeted research could make headway in our lifetimes with university funding rather than EA money. This seems likely to have a large impact on what EAs fund for generations to come.
Neuroscience, especially regarding qualia and which types of thought warrant moral concern. Principia Qualia is an EA attempt to solve this. Progress here could resolve the questions that divide EA between animal and human charities.
Finding new big ideas. There are already people working on big projects like x-risks, s-risks, space colonization/industrialization, curing aging, cryogenic freezing, simulating neurons digitally, brain-computer interfaces, AGI, nanotechnology, self-replicating machines, etc. Most of these will likely fail, but perhaps a few will succeed if we're not just deluding ourselves on all counts. Are there entirely new fields that no one has thought of yet? I suspect the answer is yes. Having a better understanding of our own utility function would narrow the search space of valuable ideas significantly, but I think we can likely make headway based on existing philosophy. Robin Hanson made some suggestions just a few days ago.
Improving EA thought, mental tools, physical tools, methodologies, and other ways of exploring/exploiting more efficiently. (This was mentioned in the OP, but I wanted to highlight it.)
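To make the framing concrete, here is a minimal Thompson-sampling sketch of the multi-armed bandit idea. The arm names and payoff probabilities are purely made up for illustration; the point is only that posterior updates gradually shift effort toward whichever research areas keep paying off, while still occasionally sampling the long shots.

```python
import random

# Illustrative only: hypothetical research "arms" with made-up true hit rates.
# In reality these rates are unknown; the bandit has to learn them by funding projects.
ARMS = {
    "ethics_research": 0.05,
    "qualia_neuroscience": 0.03,
    "new_big_ideas": 0.01,
    "better_ea_tools": 0.10,
}

# Track successes/failures per arm; Beta(successes+1, failures+1) is the posterior.
counts = {arm: {"success": 0, "failure": 0} for arm in ARMS}

def pull(arm):
    """Simulate funding one project in this area; returns 1 on a 'hit'."""
    return 1 if random.random() < ARMS[arm] else 0

for _ in range(10_000):
    # Thompson sampling: draw a plausible payoff rate for each arm from its posterior,
    # then fund the arm whose sampled rate is highest.
    sampled = {
        arm: random.betavariate(c["success"] + 1, c["failure"] + 1)
        for arm, c in counts.items()
    }
    choice = max(sampled, key=sampled.get)
    if pull(choice):
        counts[choice]["success"] += 1
    else:
        counts[choice]["failure"] += 1

for arm, c in counts.items():
    total = c["success"] + c["failure"]
    print(f"{arm}: pulled {total} times, estimated rate "
          f"{c['success'] / max(total, 1):.3f}")
```

Expanding the model, as suggested above, just means adding more arms (and perhaps correlated priors between related areas) rather than changing the explore/exploit machinery itself.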
Poor matching of employees/employers in impoverished countries seems particularly neglected, tractable, and scalable.