First off, I was ambiguous in that paragraph about the level at which I actually thought decisions should be revised or radically altered: in, say, the next 20 years, did I think OpenPhil should revise most of the charities they fund, most of the specific problems they fund, or their broad focus areas? I think I ended up just expressing a vague sense of ‘they should change their decisions a lot if they put much more of the community’s brainpower into analysing data from a granular level upwards’.
So I appreciate that you actually gave specific reasons why you’d be surprised to see a new focus area taken up by people in the EA community in the next 10 years! Your arguments make sense to me, and I’m just going to adopt your view here.
Interestingly, your interpretation that this is evidence that there shouldn’t be a radical alteration in what causes we focus on can be seen as both an outside view and an inside view. It’s an outside view in the sense that it weights the views of people who’ve decided to move in the direction of working on the long-term future. It’s also an inside view in that it doesn’t consider roughly what percentage of past cosmopolitan movements whose members converged on working on a particular set of problems were seen as wrong by their successors decades later (and perhaps judged to have been blinded by some of the social dynamics you mentioned: groupthink, information cascades and selection effects).
A historical example where this went wrong is how, in the 1920s, Bertrand Russell and other contemporary intelligentsia had positive views on communism and eugenics, which later failed in practice under Stalin’s authoritarian regime and Nazi Germany, respectively. Although I haven’t done a survey of other historical movements (has anyone compiled such a list?), I think I still feel slightly more confident than you that we would radically alter what we work on after 20 years if we made a concerted effort now to structure the community around enabling a significant portion of our ‘members’ (say 30%) to work together to gather, analyse and integrate data at each level (whatever that means).
It does seem that we share some intuitions (e.g. the arguments for valuing future generations similarly to current generations seem solid to me). I’ve made a quick list of research that could lead to fundamental changes in what we prioritise at various levels. I’d be curious to hear whether any of these points causes you to update any of your other intuitions:
Worldviews
- more neuroscience and qualia research, possibly causing fundamental shifts in our views on how we feel and register experiences
- research into how different humans trade off suffering and eudaimonia differently
- a much more nuanced understanding of what psychological needs and cognitive processes lead to moral judgements (e.g. the effect of psychological distance on deontological vs. consequentialist judgements and on scope sensitivity)

Focus areas:

Global poverty
- use of better metrics for wellbeing – e.g. life satisfaction scores and, in future, real-time tracking of experiential wellbeing – that would result in certain interventions (e.g. in mental health) being ranked higher than others (e.g. malaria)
- use of better approaches to estimating environmental interactions and indirect effects, like complexity science tools, which could result in more work being done on changing larger systems through leverage points

Existential risk
- more research on how to avoid evolutionary/game-theoretic “Moloch” dynamics, instead of the current “Maxipok” focus on ensuring that future generations will live and hoping that they will have more information to assess and deal with problems from there
- for AI safety specifically, I could see a shift in focus from a single agent (produced by, say, one lab) that presumably becomes powerful enough to outflank all other agents, towards analysing systems of more similarly capable agents owned by wealthy individuals and coalitions that interact with each other (e.g. like Robin Hanson’s work on Ems), or perhaps more research on how a single agent could be composed of specialised sub-agents representing the interests of various beings. I could also see a shift in focus towards assessing and ensuring the welfare of sentient algorithms themselves.

Animal welfare
- more research on assessing sentience, including that of certain insects, plants and colonial ciliates that do more complex information processing, leading to changed views on which species to target
- a shift to working on wild animal welfare and ecosystem design, with more focus on marine ecosystems

Community building
- some concepts, like high-fidelity spreading of ideas and strongly valuing honesty and considerateness, seem robust
- however, you could see changes like emphasising the integration of local data, the use of (shared) decision-making algorithms, and a shift away from local events and coffee chats towards interactions on online (virtual) platforms
I agree history generally augurs poorly for those who claim to know (and shape) the future. Although there are contrasting positive examples one can give (e.g. the moral judgements of the early Utilitarians were often ahead of their time re. the moral status of women, sexual minorities, and animals), I’m not aware of a good macrohistorical dataset that could answer this question—reality in any case may prove underpowered.
Yet whether or not things would in fact change with more democratised decision-making/intelligence gathering/etc., it remains an open question whether this would be a better approach. Intellectual progress in many areas is no longer an amateur sport (see academia, cf. the ongoing professionalisation of many ‘bits’ of EA, and see generally that many important intellectual breakthroughs have historically been made by lone figures or small groups rather than by more swarm-intelligence-esque methods), and there’s a ‘clownside’ risk of a lot of enthusiastic, well-meaning, but inexperienced people making attempts that add epistemic heat rather than light (inter alia). The bar to appreciate ‘X is an important issue’ may be much lower than the bar to ‘contribute usefully to X’.
A lot seems to turn on whether the relevant problems are more high serial depth (favouring intensive effort), high threshold (favouring potentially rare ability), or broader and relatively shallower (favouring parallelisation). I’d guess the relevant ‘EA open problems’ are a mix, but this makes me hesitant about a general shove in this direction.
I have mixed impressions about the items you give below (which I appreciate were meant more as a quick illustration than as some ‘research agenda for the most important open problems in EA’). For some, I hold resilient confidence that the underlying claim is false; for more, I am uncertain, yet suspect we could defer progress on answering these questions (/punt on them for our descendants to figure out in the long reflection). In essence, my forecast is that this work would expectedly tilt the portfolios, but not so much as to amount to (what I would call) a ‘cause X’ (e.g. I can imagine getting evidence which suggests we should push more of a global health portfolio towards mental health—or non-communicable disease—but not something so decisive that we think we should sink the entire portfolio there and withdraw from AMF/SCI/etc.).