I agree history generally augurs poorly for those who claim to know (and shape) the future. Although there are contrasting positive examples one can give (e.g. the moral judgements of the early Utilitarians were often ahead of their time re. the moral status of women, sexual minorities, and animals), I’m not aware of a good macrohistorical dataset that could answer this question—reality in any case may prove underpowered.
Yet whether or not things would in fact change with more democratised decision-making/intelligence gathering/etc., it remains an open question whether this would be a better approach. Intellectual progress in many areas is no longer an amateur sport (see academia, cf. the ongoing professionalisation of many ‘bits’ of EA, and note that many important intellectual breakthroughs have historically been made by lone figures or small groups rather than by more swarm-intelligence-esque methods), and there’s a ‘clownside’ risk of a lot of enthusiastic, well-meaning, but inexperienced people making attempts that add epistemic heat rather than light (inter alia). The bar for appreciating ‘X is an important issue’ may be much lower than that for ‘can contribute usefully to X’.
A lot seems to turn on whether the relevant problems are more high serial depth (favouring intensive effort), high threshold (favouring potentially-rare ability), or broader and relatively shallower (favouring parallelisation). I’d guess relevant ‘EA open problems’ are a mix, but this makes me hesitant about a general shove in this direction.
I have mixed impressions about the items you give below (which I appreciate were meant more as a quick illustration than as some ‘research agenda for the most important open problems in EA’). For some, I hold resilient confidence that the underlying claim is false; for more, I am uncertain, yet I doubt much progress can be made on answering these questions now (/feel we could punt on these for our descendants to figure out in the long reflection). In essence, my forecast is that this work would, in expectation, tilt the portfolios, but not so much as to constitute (what I would call) a ‘cause X’ (e.g. I can imagine getting evidence which suggests we should push more of a global health portfolio towards mental health, or non-communicable disease, but not something so decisive that we think we should sink the entire portfolio there and withdraw from AMF/SCI/etc.).