Hi Toby,
Thank you for the thoughts and for engaging. I think this is a really good point. I mostly agree.
To check I understand what you are saying: it seems like the idea here is that there are different aims of altruistic research. In fact we could imagine a spectrum, something like:
Ethics → Worldview → Cause → Intervention → Charity → Plan
At the top end, for people thinking about ethics etc., traditional longtermist ways of thinking are best; at the lower end, for someone thinking about plans etc., long-term planning tools are best.
I think this is roughly correct.
My hope is to provide a set of long-term planning tools that people might find useful, not to rubbish the old tools.
That said, reading between the lines a bit, it feels like there is still some disconnect about the usefulness and importance of different kinds of research. I go into each of these a little bit below.
On the usefulness of different ways of thinking
You said:
A useful analogy would be the difference between using cost-effectiveness as a tool for selecting a top cause or intervention to work on, vs using it to work out the most cost-effective way to do what you are already doing.
I am speculating a bit (so correct me if I am wrong), but I get the impression from that analogy that you would see the best tools to use a bit like this:
Ethics → Worldview → Cause → Intervention → Charity → Plan
[ Traditional longtermist ways of thinking | Long-term planning ]
(The diagram is an over-simplification, as both ways of thinking will be good across the spectrum, so the cut-off would be vague; but this chart is the best I can do on this forum.)
However I would see it a bit more like this:
Ethics → Worldview → Cause → Intervention → Charity → Plan
[ Trad. longtermism | Long-term planning ]
(with long-term planning stretching further to the left than in the chart above)
And the analogy I would use would be something like:
A useful analogy would be the difference between using philosophical “would you rather?” thought experiments as a tool for selecting an ethical view, vs using thought experiments to work out the most important causes to work on.
Deciding which ways of thinking are best suited to different intellectual challenges is a huge debate. I could give views, but I am not sure we are going to solve it here. And it makes sense that we each prefer to rely on the ways of thinking that we are experienced in using.
One of my aims in writing this post is to give feedback to researchers, as a practitioner, about what kind of work I find useful. Basically I am trying to shorten the feedback loop as much as I can, to help guide future research.
So what I can do is offer my own experience. I do have to make decisions, on a semi-regular basis, about which causes and interventions to focus on (do we engage politicians about AI, or bio, or improving institutions, etc.?). In making these high-level decisions there is some good research and some less useful research (of the type I discuss in my post), and my general view is that more work like scenarios, short-term estimates of x-risk, vulnerability assessments, etc. would be particularly useful to me in making even very high-level cause decisions.
Maybe that is useful for you or others (I hope so).
On the value of different research
Perhaps we also have different views on what work is valuable. I guess I already think that the future matters, so I see less value in further work on whether longtermism is true, and more value in work on what risks we face now and how we can address them.
You said:
[Long term planning] is not always required in order to get large gains
Let’s flesh out what we mean by “gains”.
If “gains” means gains at philosophy / ethics / deciding whether longtermism is true, then yes, this would apply.
If it means gains at actually reducing the chance of an existential catastrophe (other than in cases where the solution is predominantly technical), then I don’t think this would be true.
I expect we agree on that. So maybe the question is less about the best way of thinking about the world and more about what the aims of additional research should be: should we be pushing resources towards more philosophy or towards more actionable plans to affect the long term and/or reduce risks?
(Also worth considering the extent to which demonstrating practical, actionable plans is useful for the philosophy, either for learning how to deal with uncertainty or for making the case that the long term is something people can act on.)
Toby has articulated what I was thinking quite well.
I also think this diagram is useful in highlighting the core disagreement:
However I would see it a bit more like this:
Ethics → Worldview → Cause → Intervention → Charity → Plan
[ Trad. longtermism | Long-term planning ]
(with long-term planning stretching further to the left than in the chart above)
Personally I’m surprised to see long-term planning stretching so far to the left. Can you expand on how long-term planning helps with worldview and cause choices?
Worldview: I presume this is something like the contention that reducing existential risk should be our overriding concern? If so I don’t really understand how long-term planning tools help us get there. Longtermists got here essentially through academic papers like this one that relies on EV reasoning, and the contention that existential risk reduction is neglected and tractable.
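(As a side note for readers less used to this style of argument, here is a minimal sketch of the shape of the EV reasoning being referred to. Every number is an invented placeholder for illustration only, not an estimate taken from the paper or from anyone in this thread.)

```python
# Minimal sketch of the expected-value argument for existential risk reduction.
# All figures are made-up placeholders, chosen only to show the structure.
value_of_future = 1e16   # stylised value assigned to the long-term future
risk_reduction = 1e-4    # assumed absolute reduction in extinction probability
cost = 1e9               # assumed cost of achieving that reduction

expected_value_gained = risk_reduction * value_of_future  # 1e12
value_per_unit_cost = expected_value_gained / cost        # 1,000x return in this toy example

print(expected_value_gained, value_per_unit_cost)
```

The point of the toy numbers is just that, once the future is assigned an enormous value, even tiny probability shifts dominate the calculation.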
Cause: I presume this would be identifying the most pressing existential risks? Maybe your tools (e.g. vulnerability assessments) would help here, but I don’t really know enough to comment. Longtermists essentially got to the importance of misaligned AI, for example, through writings like Bostrom’s Superintelligence, which I would say relies to some extent on thought experiments. Remember that existential risks are different from global catastrophic risks, with the former being unprecedented, so we may have to think outside the box a bit to identify them. I’m still unsure whether established tools are genuinely up to the task (although they may be). Do you think we might get radically different answers on the most pressing existential risks if we use established tools as opposed to traditional longtermist thinking?
EDIT: long-term planning extending into worldview may be a glitch as it does on my laptop but not on my phone...
Hi Jack, some semi-unstructured thoughts for you on this topic. And as the old quote goes “sorry, if I had more time, I would have written you a shorter letter”:
What are we aiming to do here with this discussion? It seems like we are trying to work out what the best thinking tools are for the various kinds of questions an altruist might want answered. And we are deciding between two somewhat distinct, somewhat overlapping, and not especially well defined ‘ways of thinking’, whilst still acknowledging that both are going to be at least somewhat useful across the spectrum and that everyone will have a preference for the tools they are used to using (and, speaking at least for myself, I have limited expertise in judging the value of academic-type work)! Well, sure, why not. Let’s do that. Happy to throw some uninformed opinions out into the void and see where it goes.
How should or could we even make such a judgement call? I believe we have evidence from domains such as global health and policy design that if you are not evaluating and testing, you are likely not having the impact you expect. I don’t see why this would not apply to research. In my humble view, if you wanted to know what research was most valuable you would want monitoring, evaluation and iteration; clear theories of change and justification for research topics. Research could be judged by evidence of real-world impact, feedback from users of the research, prediction markets, expert views, etc. Space for more exploratory research would be carved out. This would all be done transparently, and maybe there would be competing research organisations* that faced independent evaluation. And so forth. And then you wouldn’t need to guess at what tools were useful, you’d find out.
But easier said than done? I think people would push back: they would say they know they don’t need evaluating, creativity will be stifled, evaluation is difficult, we wouldn’t agree on how to evaluate, that’s not how academia works, we already know we are great, etc. They might be right. I have mostly looked at EA research orgs as a potential donor, so this is at least the direction I would want things to move in, but I am sure there are other perspectives. And either way, I don’t know if anyone is making a push to move EA research organisations in that direction.
So what can be done? I guess one thing that could be a step in the right direction is if the people who use research to make decisions gave feedback to researchers about which research they find more useful and less useful, as I have done somewhat here. And my feedback, from the policy-change side of the fence, is that the research that looks more like the long-term planning tools (that I am used to using) is more useful. I have found it more useful to date and expect it will be more useful going forward, and I would like to see more of it. I don’t think that feedback from me is sufficient to answer the question; it is a partial answer at best!! There are (I am sure) at least some other folk in the world who use FHI/GPI/EA research and who will hopefully have different views.
So with that lengthy caveat about partial answers / plea for more monitoring and evaluation out of the way, you asked for specific examples:
Long-term planning and worldview. I think the basic idea that the long-term future matters is probably true. How tractable this is, and how much it should affect our decisions, is less clear. How do we make decisions about influencing the world given uncertainty/cluelessness? How tractable is it to have that influence? Are attractor states the only way to have that long-run influence? Are stopping risks the only easy-to-reach attractor states? I think the long-term planning tools give us at least one way to approach these kinds of questions (as I set out above in Section B. 4.-5.): design plans to influence the future, ensure their robustness, then put them into action and see how they work, and so on. Honestly I doubt you would get radically different answers (although I also doubt other ways of approaching this question would lead to radically different answers either; I am just quite uncertain how tractable more worldview research is).
Long-term planning and cause choice. This seems obvious to me. If you want to know, as per your example, which risks to care about, then mapping out the future paths and scenarios the world might take, doing estimates of risks on 5, 10, 25, 50 and 100 year timescales, explicitly evaluating your assumptions, doing forecasting, using risk management tools, identifying the signs to watch out for that would warrant a change in priorities, and so on, all seems super useful.
Also, I think there might be a misunderstanding here: the whole point of all the tools listed above is that they are for use in situations where you are dealing with the “unprecedented” and with unknown black swans. If something is easy to predict then you can just predict it, do a simple expected value calculation and you are done (and EA folk are already good at that).
Overall I doubt you would get drastically different answers on which risks matter, although I would expect there may be a greater focus on building a world that is robust to other “unforeseen anthropogenic risks”, not just AI and bio. I also think that in specific circumstances people might be in, say a meeting with a politician or writing a strategy, they would hopefully have a better sense of which risks to focus on.
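(To make the “estimates of risks on 5, 10, 25, 50 and 100 year timescales” point concrete, here is a minimal sketch assuming, purely for illustration, a constant annual probability of catastrophe; the 0.1% figure is an arbitrary placeholder, not anyone’s actual estimate.)

```python
# Convert an assumed constant annual catastrophe probability into cumulative
# probabilities over the timescales mentioned above. Placeholder figure only.
annual_risk = 0.001  # 0.1% per year, purely illustrative

for years in (5, 10, 25, 50, 100):
    cumulative = 1 - (1 - annual_risk) ** years
    print(f"{years:>3} years: {cumulative:.1%}")
```

In practice the annual figure would itself come out of the scenario mapping, forecasting and assumption-checking described above, and would change over time, rather than being a fixed constant.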
* P.S. Not sure anyone will have read this far, but if anyone is reading this and actually thinks it could be a good (or very bad) idea to start a research org with a focus on demonstrating impact, policy research, and planning-type tools, then do get in touch.