What are some possible efforts within prioritization research that are outside your scope and that you'd like to see more of?
I'm not confident that this is fully outside the scope of RP, but I think backchaining-in-practice is plausibly underrated by EA/longtermism, despite a lot of chatter about it in theory.
By backchaining in practice I mean tracing backwards, in full, from the world we want (e.g., a just, kind, safe world capable of long reflection) to the specific efforts and actions that individuals and small groups can take in AI safety, biosecurity, animal welfare, movement building, etc.
Specific things that I think would be difficult to fit under RP's purview include things that require detailed AI safety or biosecurity stories. Those stories plausibly carry information hazards, so I'd encourage people who are making these extensive diagrams to a) be somewhat careful about information security and b) talk to the relevant people within EA (e.g., FHI) before creating them, and certainly before publishing them.
An obvious caveat here is that it's possible many such backchaining documents exist and I am unaware of them. Another caveat is that maybe backchaining is just dumb, for various epistemic reasons.
I'm not really sure what is included in the scope of "prioritization research". One thing we definitely do not do, and very likely will never do, and that I am glad others do, is technical AI safety research.
Other than that, I think pretty much anything in longtermism could be fair game for Rethink Priorities at some point.
I am surprised that you mention technical AI safety as something you don't do under what I consider "prioritization research" (which, I realize now that I've posted my question, was apparently a concept I used mostly internally). Linch's mention of it below was in the context of understanding its importance rather than trying to solve it, which I guess is how I'd carve up "prioritization research".
I guess that for similar reasons I'd expect RP to focus less on solving (longtermist or other) problems. Just to make sure, could examples like the following be in RP's scope if you had the right people/situation?
Suggesting safe ways to use certain geoengineering mechanisms.
Developing methods for increased empathy toward future people.
Proposing and defining a governmental institute for future generations.
Developing economic models of the incentives for great power war under futuristic scenarios like space expansion, and proposing mechanisms to manage the risk of war.
I think what counts as prioritization vs object-level research of the form "trying to solve X" does not obviously have clean boundaries. For example, a scoping paper like Concrete Problems in AI Safety is something that a) should arguably be considered prioritization research and b) is arguably better done by somebody who's familiar with (and connected in) AI.
Yes, I think all the things you mentioned are projects that are "within the scope" of RP (not that we would necessarily do them). We see our scope as being very broad so that we can always do the highest impact projects.
Thanks, that's interesting to hear. I guess that the mission statement is broad enough to allow it :)
I have some concerns about this approach, mostly as it relates to developing research and organizational expertise, and to possibly discouraging the creation of new research organizations. However, I'm sure that these kinds of considerations go into your case-by-case decision-making process, and I imagine that these problems would only become crucial as EA and RP scale up and mature further.
Hi Edo,
Could you expand a bit on what you mean by prioritization research? Do you mean something like "efforts to find the most important causes to work on and compare interventions across different areas, so that we can do as much good as possible with the resources available to us"?
If so, how narrowly do you intend "causes" to be interpreted? E.g., would you count research that informs how much to prioritise technical AI safety work vs AI governance work? Or only research that informs decisions like how much to prioritise AI risk vs biorisk? Or only research that informs decisions like how much to prioritise longtermism vs near-termist animal welfare?
(I think this is a good question, btw! I just feel like it could go in a few different directions depending on how it's intended/operationalised.)
Thanks for asking for clarification. I intended something broad that includes everything from, say, ranking interventions through cause prioritization to global priorities research and basic research that aims at improving prioritization in practice.