Quick thoughts on the question: "Is it better to try to stop the development of a technology, or to try to get there first and shape how it is used?"
(This is related to the general topic of differential progress.)
(Someone asked that question in a Slack workspace I'm part of, and I spent 10 mins writing a response. I've copied and pasted that below with slight modifications. This is only scratching the surface and probably makes silly errors, but maybe this'll be a little useful to some people.)
I think the ultimate answer to that question is really something like "Whichever option has better outcomes, given the specifics of the situation."
I don't think it's just almost always best to stop the development, nor just almost always best to shape how it's used.
And I think we should view it in terms of consequences, not in terms of something like deontology or a doing vs allowing harm distinction.
It might be the case that one approach or the other is better (say) 55-90% of the time. But I don't know which way around that would be, and I think it's better to focus on the details of the specific case.
For this reason, I think it's sort of understandable and appropriate that the EA/longtermist community doesn't have a principled overall stance on this sort of thing.
On the other hand, it'd be nice to have something like a collection of considerations, heuristics, etc. that could then be applied, perhaps in a checklist-like manner, to the case at hand. I'm not aware of such a thing, and that does seem like a failing of the EA/longtermist community.
[Person] is writing a paper on differential technological development, and it probably takes a step in this direction, but it mostly doesn't aim to do this (if I recall correctly from the draft).
Some quick thoughts on things that could be included in that collection of considerations, heuristics, etc.:
How much (if at all) will your action actually make it more likely that the tech is developed?
(Or "developed before society is radically transformed for some other reason", to account for Bostrom's technological completion conjecture.)
How much (if at all) will your action actually bring forward when the tech is developed?
How (if at all) will your action change the exact shape/nature of the resulting tech?
E.g., maybe the same basic thing is developed, but with more safety features or in a way more conducive to guiding welfare interventions
E.g., maybe your action highlights the potential military benefits of an AI thing and so leads to more development of militarily relevant features
How (if at all) will your action change important aspects of the process by which the tech is developed?
This can be relevant to e.g. AI safety
E.g., we don't only care what the AI system itself is like; we also care whether the development process has a race-like dynamic, or whether the process is such that powerful and dangerous AI could be accidentally released upon the world along the way
E.g., is a biotech thing being developed in a way that makes lab leaks more likely?
How (if at all) will your action change how the tech is deployed?
How (if at all) will your action let you influence all of the above for the better by giving you "a seat at the table", or something like that, rather than via the action directly?
Small case study:
Let's say an EA-aligned funder donates to an AI lab, and thereby gets some level of influence over the lab, or an advisory role, or something like that.
And let's say it seems about equally likely that this lab's existence/work increases x-risk as that it decreases it.
It might still be good for the world that the funder funds that lab, if:
the funding doesn't really do much to change the lab's likelihood of existing, the speed of its work, or whatever
but it does give a very thoughtful EA a position of notable influence over the lab (which could then lead to more safety-conscious development, deployment, messaging, etc.)
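To make the case study's logic concrete, here is a minimal, purely illustrative expected-value sketch in Python. The numbers, the `expected_value` helper, and the split into a "direct effect" and an "influence effect" are assumptions for illustration, not anything from the original discussion; the point is just that even if the lab's direct effect on x-risk is roughly zero in expectation, the influence term can still make funding positive in expectation.

```python
# Toy expected-value sketch of the case study above.
# All numbers are made up for illustration; units are arbitrary
# "x-risk reduction" units (positive = good for the world).

def expected_value(outcomes):
    """Sum of probability * value over (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Direct effect of the lab's existence/work on x-risk: about equally
# likely to increase or decrease it, so roughly zero in expectation.
direct_effect = expected_value([(0.5, +1.0), (0.5, -1.0)])

# The funding barely changes whether the lab exists or how fast it moves,
# so it captures only a small share of that (already ~zero) direct effect.
change_in_direct_effect = 0.05 * direct_effect

# But the funding buys a thoughtful advisor a "seat at the table", which
# (on these made-up numbers) sometimes nudges development, deployment,
# and messaging in a safer direction.
influence_effect = expected_value([(0.3, +0.5), (0.7, 0.0)])

total = change_in_direct_effect + influence_effect
print(f"Expected value of funding the lab (toy units): {total:+.2f}")
```

On these made-up numbers the total comes out slightly positive, driven entirely by the "seat at the table" term; the exercise in a real case would be plugging in your own estimates for each piece.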