Excellent work. I hope you'll forgive me taking issue with a smaller point:
Given the uncertainty they are facing, most of OpenPhil's charity recommendations and CEA's community-building policies should be overturned or radically altered in the next few decades. That is, if they actually discover their mistakes. This means it's crucial for them to encourage more people to do local, contained experiments and then integrate their results into more accurate models. (my emphasis)
I'm not so sure that this is true, although it depends on how big an area you imagine will / should be 'overturned'. This also somewhat ties into the discussion about how likely we should expect to be missing a 'cause X'.
If cause X is another entire cause area, I'd be pretty surprised to see a new one in (say) 10 years which is similar to animals or global health, and even more surprised to see one that supplants the long term future. My rationale is that I see a broad funnel where EAs tend to move into the long term future/x-risk/AI, and once there they tend not to leave (I can think of a fair number of people who made the move from (e.g.) global health --> far future, but I'm not aware of anyone who moved from far future --> anything else). There are also people who have been toiling in the long term future vineyard for a long time (e.g. MIRI), and the fact we do not see many people moving elsewhere suggests this is a pretty stable attractor.
There are other reasons a cause area could be a stable attractor besides all reasonable roads leading to it. That said, I'd suggest one can point to general principles which would somewhat favour this (e.g. the scope of the long term future, that the light cone commons, stewarded well, permits mature moral action in the universe towards whatever in fact has most value, etc.). I'd say similar points apply, to a lesser degree, to the broad landscape of 'on reflection moral commitments', and so the existing cause areas mostly exhaust this moral landscape.
Naturally, I wouldn't want to bet the farm on what might prove overconfidence, but insofar as it goes it supplies less impetus for lots of exploratory work of this type. At a finer level of granularity (and so a bit further down your diagram), I see less resilience (e.g. maybe we should tilt the existing global poverty portfolio more one way or the other depending on how the cash transfer literature turns out, maybe we should add more 'avoid great power conflict' to the long term future cause area, etc.). Yet I still struggle to see this adding up to radical alteration.
First off, I was ambiguous in that paragraph about the level at which I actually thought decisions should be revised or radically altered. That is, in say the next 20 years, did I think OpenPhil should revise most of the charities they fund, most of the specific problems they fund, or their broad focus areas? I think I ended up just expressing a vague sense of 'they should change their decisions a lot if they put much more of the community's brainpower into analysing data from a granular level upwards'.
So I appreciate that you actually gave specific reasons for why you'd be surprised to see a new focus area being taken up by people in the EA community in the next 10 years! Your arguments make sense to me and I'm just going to take up your opinion here.
Interestingly, your interpretation of this as evidence that there shouldn't be a radical alteration in what causes we focus on can be seen as both an outside view and an inside view. It's an outside view in the sense that it weights the views of people who've decided to move in the direction of working on the long term future. It's also an inside view in that it doesn't consider roughly what percentage of past cosmopolitan movements whose members converged on working on a particular set of problems were seen as wrong by their successors decades later (and perhaps judged to have been blinded by some of the social dynamics you mentioned: groupthink, information cascades and selection effects).
A historical example where this went wrong is how in the 1920s Bertrand Russell and other contemporary intelligentsia had positive views on communism and eugenics, which later failed in practice under Stalin's authoritarian regime and Nazi Germany, respectively. Although I haven't done a survey of other historical movements (has anyone compiled such a list?), I think I still feel slightly more confident than you that we'll radically alter what we work on after 20 years if we made a concerted effort now to structure the community around enabling a significant portion of our 'members' (say 30%) to work together to gather, analyse and integrate data at each level (whatever that means).
It does seem that we share some intuitions (e.g. the arguments for valuing future generations similarly to current generations seem solid to me). I've made a quick list of research that could lead to fundamental changes in what we prioritise at various levels. I'd be curious to hear if any of these points have caused you to update any of your other intuitions:
Worldviews
more neuroscience and qualia research, possibly causing fundamental shifts in our views on how we feel and register experiences
research into how different humans trade off suffering and eudaimonia differently
a much more nuanced understanding of what psychological needs and cognitive processes lead to moral judgements (e.g. the effect of psychological distance on deontological vs. consequentialist judgements and on scope sensitivity)
Focus areas:
Global poverty
use of better metrics for wellbeing (e.g. life satisfaction scores and, in the future, real-time tracking of experiential wellbeing) that would result in certain interventions (e.g. in mental health) being ranked higher than others (e.g. malaria)
use of better approaches to estimate environmental interactions and indirect effects, like complexity science tools, which could result in more work being done on changing larger systems through leverage points
Existential risk
more research on how to avoid evolutionary/game-theoretic 'Moloch' dynamics, instead of the current 'Maxipok' focus on ensuring that future generations will live and hoping that they will have more information to assess and deal with problems from there
for AI safety specifically, I could see a shift in focus from a single agent, produced out of (say) a lab, that presumably becomes powerful enough to outflank all other agents, towards analysing systems of more similarly capable agents owned by wealthy individuals and coalitions that interact with each other (e.g. like Robin Hanson's work on Ems), or perhaps more research on how a single agent could be made out of specialised sub-agents representing the interests of various beings. I could also see a shift in focus towards assessing and ensuring the welfare of sentient algorithms themselves.
Animal welfare
more research on assessing sentience, including that of certain insects, plants and colonial ciliates that do more complex information processing, leading to changed views on what species to target
shift to working on wild animal welfare and ecosystem design, with more focus on marine ecosystems
Community building
Some concepts like high-fidelity spreading of ideas and strongly valuing honesty and considerateness seem robust
However, you could see changes like emphasising the integration of local data, the use of (shared) decision-making algorithms and a shift away from local events and coffee chats to interactions on online (virtual) platforms
I agree history generally augurs poorly for those who claim to know (and shape) the future. Although there are contrasting positive examples one can give (e.g. the moral judgements of the early Utilitarians were often ahead of their time re. the moral status of women, sexual minorities, and animals), I'm not aware of a good macrohistorical dataset that could answer this question; reality in any case may prove underpowered.
Yet whether or not things would in fact change with more democratised decision-making/intelligence gathering/etc., it remains an open question whether this would be a better approach. Intellectual progress in many areas is no longer an amateur sport (see academia, cf. the ongoing professionalisation of many 'bits' of EA, and see generally that many important intellectual breakthroughs have historically been made by lone figures or small groups rather than by more swarm-intelligence-esque methods), and there's a 'clownside' risk of a lot of enthusiastic, well-meaning, but inexperienced people making attempts that add epistemic heat rather than light (inter alia). The bar to appreciate 'X is an important issue' may be much lower than 'can contribute usefully to X'.
A lot seems to turn on whether the relevant problems are more high serial depth (favouring intensive effort), high threshold (favouring potentially-rare ability), or broader and relatively shallower (favouring parallelisation). I'd guess the relevant 'EA open problems' are a mix, but this makes me hesitant to endorse a general shove in this direction.
I have mixed impressions about the items you give below (which I appreciate were meant more as a quick illustration than as a 'research agenda for the most important open problems in EA'). For some I hold resilient confidence that the underlying claim is false; for more I am uncertain, yet I suspect progress on answering these questions can wait (/feel we could punt on these for our descendants to figure out in the long reflection). In essence, my forecast is that this work would expectedly tilt the portfolios, but not so much as to amount to (what I would call) a 'cause X' (e.g. I can imagine getting evidence which suggests we should push more of a global health portfolio towards mental health, or non-communicable disease, but not something so decisive that we think we should sink the entire portfolio there and withdraw from AMF/SCI/etc.).
I appreciate you mentioning this! It's probably not a minor point, because if taken seriously it should make me a lot less worried about people in the community getting stuck in ideologies.
I admit I haven't thought this through systematically. Let me mull over your arguments and come back to you here.
BTW, could you perhaps explain what you meant with the 'There are other causes of an area...' sentence? I'm having trouble understanding that bit.
And with 'on-reflection moral commitments' do you mean considerations like population ethics and trade-offs between eudaimonia and suffering?
Sorry for being unclear. I've changed the sentence to (hopefully) make it clearer. The idea was that there could be other explanations for why people tend to gravitate towards far-future work (groupthink, information cascades, selection effects) besides the balance of reason weighing in its favour.
I do mean considerations like population ethics etc. for the second thing. :)