Toby has articulated what I was thinking quite well.
I also think this diagram is useful in highlighting the core disagreement:
However I would see it a bit more like this:
[Diagram: a spectrum of decision levels – Ethics, Worldview, Cause, Intervention, Charity, Plan – with bars marking the ranges covered by 'Trad. longtermism' and 'Long-term planning']
Personally I’m surprised to see long-term planning stretching so far to the left. Can you expand on how long-term planning helps with worldview and cause choices?
Worldview: I presume this is something like the contention that reducing existential risk should be our overriding concern? If so, I don't really understand how long-term planning tools help us get there. Longtermists essentially got here through academic papers like this one, which rely on EV reasoning and on the contention that existential risk reduction is neglected and tractable.
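(For anyone unfamiliar with what I mean by "EV reasoning", here is a minimal sketch of the style of calculation those papers rely on. Every number in it is a purely illustrative assumption of mine, not a figure taken from any paper.)

```python
# Minimal sketch of the expected-value (EV) reasoning behind the longtermist worldview.
# All numbers are purely illustrative assumptions, not figures from any paper.

future_lives = 1e15       # assumed number of future lives if humanity survives long term
risk_reduction = 1e-6     # assumed reduction in extinction probability from an intervention
cost_usd = 1e9            # assumed cost of that intervention

expected_lives_saved = future_lives * risk_reduction          # 1e9 expected lives
cost_per_expected_life = cost_usd / expected_lives_saved      # $1 per expected life

print(f"Expected lives saved: {expected_lives_saved:,.0f}")
print(f"Cost per expected life: ${cost_per_expected_life:,.2f}")
```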
Cause: I presume this would be identifying the most pressing existential risks? Maybe your tools (e.g. vulnerability assessments) would help here, but I don't really know enough to comment. Longtermists essentially got to the importance of misaligned AI, for example, through writings like Bostrom's Superintelligence, which I would say relies to some extent on thought experiments. Remember that existential risks are different to global catastrophic risks, with the former being unprecedented, so we may have to think outside the box a bit to identify them. I'm still unsure whether established tools are genuinely up to the task (although they may be). Do you think we might get radically different answers on the most pressing existential risks if we use established tools as opposed to traditional longtermist thinking?
EDIT: long-term planning extending into worldview may be a rendering glitch, as it appears that way on my laptop but not on my phone...
Hi Jack, some semi-unstructured thoughts for you on this topic. And as the old quote goes “sorry, if I had more time, I would have written you a shorter letter”:
What are we aiming to do here with this discussion? It seems like we are trying to work out which thinking tools are best for the various kinds of questions an altruist might want answered. And we are deciding between two somewhat distinct, somewhat overlapping, and not especially well-defined 'ways of thinking', whilst acknowledging that both are going to be at least somewhat useful across the spectrum and that everyone will have a preference for the tools they are used to using (and, speaking for myself at least, I have limited expertise in judging the value of academic-type work)!! ... Well sure, why not. Let's do that. Happy to throw some uninformed opinions out into the void and see where it goes ...
How should or could we even make such a judgement call? I believe we have evidence from domains such as global health and policy design that if you are not evaluating and testing, you are likely not having the impact you expect. I don't see why this would not apply to research. In my humble view, if you wanted to know what research was most valuable, you would want monitoring, evaluation, and iteration: clear theories of change and justification for research topics. Research could be judged by evidence of real-world impact, feedback from users of the research, prediction markets, expert views, etc. Space for more exploratory research would be carved out. This would all be done transparently, and maybe there would be competing research organisations* that faced independent evaluation. And so forth. And then you wouldn't need to guess at which tools were useful; you'd find out.
But easier said than done? I think people would push back: they'd say they don't need evaluating, that creativity would be stifled, that evaluation is difficult, that we wouldn't agree on how to evaluate, that that's not how academia works, that we already know we are great, etc. They might be right. I have mostly looked at EA research orgs as a potential donor, so this is at least the direction I would want things to move in – but I am sure there are other perspectives. And either way I don't know if anyone is making a push to move EA research organisations in that direction.
So what can be done? I guess one thing would be for the people who use research to make decisions to give feedback to researchers about which research they find more and less useful – that could be a step in the right direction, and it is somewhat what I have done here. My feedback from the policy-change side of the fence is that the research that looks more like the long-term planning tools (the ones I am used to using) is more useful. I have found it more useful to date, expect it would be more useful going forward, and would like to see more of it. I don't think feedback from me is sufficient to answer the question – it is a partial answer at best! There are (I am sure) at least some other folk in the world who use FHI/GPI/EA research and who will hopefully have different views.
So with that lengthy caveat about partial answers / plea for more monitoring and evaluation out of the way, you asked for specific examples:
Long-term planning and worldview. I think the basic idea that the long-term future matters is probably true. How tractable it is to act on, and how much it should affect our decisions, is less clear. How do we make decisions about influencing the world given uncertainty/cluelessness? How tractable is it to have that influence? Are attractor states the only way to have that long-run influence? Are stopping risks the only easy-to-reach attractor states? I think the long-term planning tools give us at least one way to approach these kinds of questions (as I set out above in Section B. 4.-5.): designing plans to influence the future, ensuring their robustness, then putting them into action and seeing how they work, and so on. Honestly I doubt you would get radically different answers (although I also doubt other ways of approaching this question would lead to radically different answers; I am just quite uncertain how tractable more worldview research is).
Long-term planning and cause choice. This seems obvious to me. If you want to know, as per your example, which risks to care about, then mapping out the future paths and scenarios the world might take, estimating risks on 5, 10, 25, 50 and 100 year timescales, explicitly evaluating your assumptions, doing forecasting, using risk management tools, identifying the signs to watch out for that would warrant a change in priorities, and so on, all seems super useful.

Also, I think there might be a misunderstanding here: the whole point of all the tools listed above is that they are for use in situations where you are dealing with the "unprecedented" and with unknown black swans. If something is easy to predict then you can just predict it, do a simple expected value calculation, and you are done (and EA folk are already good at that).

Overall I doubt you would get drastically different answers on which risks matter, although I would expect a greater focus on building a world that is robust to other "unforeseen anthropogenic risks", not just AI and bio. I also think that in the specific circumstances people might be in, say a meeting with a politician or writing a strategy, they would hopefully have a better sense of which risks to focus on.
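As a crude illustration of what I mean by "estimating risks on 5, 10, 25, 50 and 100 year timescales" above, the sketch below converts assumed constant annual risk rates into cumulative probabilities over those horizons. The rates, the risk names, and the independence assumption behind the formula are purely illustrative, not actual estimates:

```python
# Toy sketch: cumulative risk over standard planning horizons, assuming a constant
# annual probability for each risk. All rates below are illustrative placeholders.

annual_risk = {
    "engineered pandemic": 0.002,
    "misaligned AI": 0.003,
    "unforeseen anthropogenic risk": 0.001,
}

horizons = [5, 10, 25, 50, 100]  # years

for name, p in annual_risk.items():
    # Probability of at least one occurrence in n years: 1 - (1 - p)**n,
    # assuming the annual rate is constant and years are independent.
    line = ", ".join(f"{n}y: {1 - (1 - p) ** n:.1%}" for n in horizons)
    print(f"{name}: {line}")
```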
* P.S. Not sure anyone will have read this far, but if anyone is reading this and actually thinks it could be a good (or very bad) idea to start a research org with a focus on demonstrating impact, policy research, and planning-type tools – then do get in touch.