“Firstly, under the standard ITN (Importance, Tractability, Neglectedness) framework, you only focus on importance. If there are orders of magnitude differences in, let’s say, tractability (seems most important here), then longtermists maybe shouldn’t work on AI.”
I think this makes sense when we’re in the domain of non-existential areas. In practice, though, when you’re confident about existential outcomes and don’t yet know how to solve them, you probably should still focus on them.
”which probably leads to an overly narrow interpretation of what might pose X-Risk. I also think the dismissal of climate change and nuclear war seems to imply that human extinction = X-Risk. This isn’t true (definitionally),”
Not sure what you mean by “this isn’t true (definitionally)”. Do you mean irrecoverable collapse, or do you mean for animals?
“although you may make an argument that nuclear war and climate change aren’t X-Risks, that argument is not made here.”
The posts I linked to were meant to serve that purpose.
”I am not here claiming you are wrong, but rather that you need stronger evidence to support your conclusions.”
An intuition for why it’s hard to kill everyone until only 1,000 people survive:
- For humanity to die, you need an agent: humans are very adaptive in general, and you might expect that at least the richest people on this planet have plans and will try to survive at all costs.
So for instance, even if viruses infect 100% of the people (almost impossible if people are aware that there are viruses) and literally kill 99% of the people (again, almost impossible), you still have 70 million people alive. And no agent on earth has ever killed 70 million people. So even if you had a malevolent state that wanted to do that (very unlikely), it would have a hard time getting down to fewer than 1,000 people left.
The same goes for nuclear war. It’s not too hard to kill 90% of people with a nuclear winter, but it’s very hard to kill the remaining 10%, 1%, 0.1%, etc.
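To make the arithmetic behind that intuition explicit, here is a minimal sketch in Python (the ~7 billion population figure is an assumption, and the rates are just the illustrative numbers above, not forecasts):

```python
# Rough arithmetic behind the "hard to get below 1,000 survivors" intuition.
# The population figure is an assumption (~7 billion); the infection and
# fatality rates are just the illustrative numbers from above, not forecasts.

population = 7_000_000_000

def survivors(infection_rate: float, fatality_rate: float) -> int:
    """People left alive if a pathogen infects and kills at the given rates."""
    return int(population * (1 - infection_rate * fatality_rate))

# Even the extreme case above (100% infected, 99% of people killed)
# leaves roughly 70 million people.
print(survivors(1.00, 0.99))      # -> 70000000

# Getting from 70 million down to 1,000 survivors would then require
# killing a further ~99.9986% of those who remain.
print(1 - 1_000 / 70_000_000)     # -> ~0.999986
```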
“I think this makes sense when we’re in the domain of non-existential areas. In practice, though, when you’re confident about existential outcomes and don’t yet know how to solve them, you probably should still focus on them.”
-I think this somewhat misinterprets what I said. This is only the case if you are CERTAIN that biorisk, climate, nuclear, etc. aren’t X-Risks. Otherwise it matters. If (toy numbers here) AI risk is two orders of magnitude more likely to occur than biorisk, but four orders of magnitude less tractable, then it doesn’t seem that AI risk is the thing to work on.
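As a minimal sketch of the multiplicative comparison I have in mind, using those toy numbers (everything here is made up for illustration, not a real estimate of either risk):

```python
# Toy ITN-style comparison: the value of marginal work scales roughly with
# importance x tractability x neglectedness. All numbers are made up and
# chosen only to mirror the toy example above.

def cause_score(importance: float, tractability: float, neglectedness: float) -> float:
    return importance * tractability * neglectedness

# AI risk: two OOMs more likely/important, but four OOMs less tractable than biorisk.
ai_risk = cause_score(importance=100.0, tractability=0.0001, neglectedness=1.0)
biorisk = cause_score(importance=1.0, tractability=1.0, neglectedness=1.0)

print(ai_risk, biorisk)   # ~0.01 vs 1.0: on these toy numbers biorisk wins by ~100x
```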
“Not sure what you mean by “this isn’t true (definitionally)”. Do you mean irrecoverable collapse, or do you mean for animals?”
-Sorry, I worded this badly. What I meant is that the argument assumes that X-Risk and human extinction are identical. They are of course not, as irrecoverable collapse, s-risks and the permanent curtailing of human potential (which I think is a somewhat problematic concept) are all X-Risks as well. Apologies for the lack of clarity.
“The posts I linked to were meant to serve that purpose.”
-I think my problem is that I don’t think the articles necessarily do a great job of evidencing the claims they make. Take the 80K one. It seems to ignore the concept of vulnerabilities and exposures, instead going for a purely hazard-centric approach. Secondly, it ignores a lot of important work in the climate discussion, for example what is discussed here (https://www.pnas.org/doi/10.1073/pnas.2108146119) and here (https://www.cser.ac.uk/resources/assessing-climate-changes-contribution-global-catastrophic-risk/). Basically, I think it fails to adequately address systemic risk, cascading risk and latent risk. It also seems to (mostly) equate X-Risk with human extinction, without seriously exploring whether, if civilisation collapses, we WILL recover, not just whether we could. The Luisa Rodriguez piece doesn’t do this either (that isn’t a critique of her piece; as far as I can tell it didn’t intend to).
An intuition for why it’s hard to kill everyone until only 1,000 people survive:
- For humanity to die, you need an agent: humans are very adaptive in general, and you might expect that at least the richest people on this planet have plans and will try to survive at all costs.
So for instance, even if viruses infect 100% of the people (almost impossible if people are aware that there are viruses) and literally kill 99% of the people (again, almost impossible), you still have 70 million people alive. And no agent on earth has ever killed 70 million people. So even if you had a malevolent state that wanted to do that (very unlikely), it would have a hard time getting down to fewer than 1,000 people left.
The same goes for nuclear war. It’s not too hard to kill 90% of people with a nuclear winter, but it’s very hard to kill the remaining 10%, 1%, 0.1%, etc.
-Again, this comes back to the idea that for something to be an X-Risk it needs to wipe out humanity, or most of it, in one single event. But an X-Risk may also be a collapse we don’t recover from. Note this isn’t the same as a collapse we can’t recover from: because “progress” (itself a very problematic term) seems highly contingent, even if we COULD recover, that doesn’t mean there is a high probability that we WILL.
Moreover, if we retain this loss of complexity for a long time, ethical drift (making s-risks far more likely even given recovery) becomes more likely. So does being wiped out by other catastrophes, even ones we could recover from on their own, whether in concert, through cascades, or through discontinuous local catastrophes. It seems to require a lot more justification to put a very high probability on a civilisation we think is valuable recovering from a collapse, even one that leaves hundreds of millions of people alive.
This discussion of how likely a collapse or GCR is to be converted into an X-Risk is still very much open, as is the discussion of contingency vs convergence. But for your position to hold, you need very high certainty on this point, which I think is highly debatable and perhaps at this point premature and unjustified. Sorry I can’t link the papers I need right now, as I am on my phone, but I will link them later.
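To make concrete why that recovery term carries so much weight, here is a minimal sketch with placeholder probabilities (none of these numbers are estimates, from me or anyone else):

```python
# Rough decomposition of the x-risk contribution of a collapse-prone hazard.
# All probabilities below are placeholders, used only to show the sensitivity.

def x_risk(p_catastrophe: float, p_collapse: float, p_no_recovery: float) -> float:
    """P(catastrophe) * P(collapse | catastrophe) * P(no recovery | collapse)."""
    return p_catastrophe * p_collapse * p_no_recovery

# Hold the catastrophe and collapse terms fixed; vary only the recovery term.
for p_no_recovery in (0.001, 0.01, 0.1):
    print(p_no_recovery, x_risk(0.1, 0.3, p_no_recovery))

# The bottom line moves by two orders of magnitude purely on the (currently
# very uncertain) probability that we never recover from the collapse.
```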
“If (toy numbers here) AI risk is two orders of magnitude more likely to occur than biorisk, but four orders of magnitude less tractable”. I think that at least 2 or 3 OOMs of difference in tractability would indeed be needed to compensate (especially given that positively shaping biorisk is not extremely positive), and as I argued above I think that’s unlikely.
“They are of course not, as irrecoverable collapse, s-risks and permanent curtailing of human potential”. I think that irrecoverable collapse is the biggest crux. What likelihood do you put on it? For the other types of risk, it once again favors working on AI.
Your point below is also about irrecoverable collapse. Personally I put a small weight on this, but I could update pretty quickly because I haven’t thought that hard about it. I just have these few arguments:
- Asymptotically, it would be surprising if you couldn’t find other ways to recover: the worlds in which the path our species took is the ONLY way to make progress are a tiny fraction of all possible worlds.
- There are arguments about the (huge) stocks of existing materials which could be used to recover.
- Humans are very adaptable.
I think that biorisks causing >90% of deaths are not for tomorrow and will most likely appear in the second half of the century, which means they don’t compete with AGI in terms of timelines. The reasons I think that are:
- Building viruses is still quite hard: doing gain-of-function research to a degree sufficient to reach very high levels of lethality and contagiousness is really not trivial.
- The world is still not connected enough for a virus to spread stealthily enough to contaminate everyone.
I actually think our big crux here is the amount of uncertainty. Each of the points I raise and each new assumption you are putting in should raise your uncertainty. Given you claim 95% of longtermists should work on AI, high uncertainty does not seem to weigh in favour of your argument.
Note I am not saying, and haven’t said, either that AI isn’t the most important X-Risk or that we shouldn’t work on it. I am just arguing against the certainty in your post.
I think you would make a good point if the options were close in terms of EV, but what matters primarily is the EV, and I expect it to dominate the uncertainty here. I didn’t do the computations, but I feel like if you have something which is OOMs more important than the others, even with very large bars of uncertainty you’d probably put >19/20 of your resources on the highest-EV thing. In the same way, we don’t give to another, less cost-effective org to hedge against AMF, even though it might have some tail chance of having a very significant positive impact on society, just because the error bars on the estimates are very large.
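As a minimal Monte Carlo sketch of that intuition (the central estimates and uncertainty widths below are made-up placeholders, not real estimates of any cause):

```python
# Minimal sketch of "orders of magnitude of difference dominate wide error bars".
# Central estimates and uncertainty widths are made-up placeholders.
import random

random.seed(0)
N = 100_000

def draw(log10_median: float, log10_sigma: float) -> float:
    """Sample one lognormal impact estimate, parameterised in orders of magnitude."""
    return 10 ** random.gauss(log10_median, log10_sigma)

# Option A's central estimate sits 2 OOMs above option B's; both carry roughly
# one OOM of uncertainty (standard deviation in log10 space).
a = [draw(2.0, 1.0) for _ in range(N)]
b = [draw(0.0, 1.0) for _ in range(N)]

print(sum(a) / sum(b))                        # A's average impact is far higher
print(sum(x > y for x, y in zip(a, b)) / N)   # A beats B in roughly 92% of paired draws
```

The first number is the one that matters for an EV-maximiser; the second just shows how rarely the nominally weaker option comes out ahead under these assumed error bars.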