Clarifying “Extinction”
I expect this debate week to get tripped up a lot by the term “extinction”. So here I’m going to distinguish:
Human extinction — the population of Homo sapiens, or members of the human lineage (including descendant species, post-humans, and human uploads), goes to 0.
Total extinction — the population of Earth-originating intelligent life goes to 0.
Human extinction doesn’t entail total extinction. Human extinction is compatible with: (i) AI taking over and creating a civilisation for as long as it can; (ii) non-human biological life evolving higher intelligence and building a (say) Gorilla sapiens civilisation.
The debate week prompt refers to total extinction. I think this is conceptually cleanest, but it'll trip people up: it means that most work on AI safety and alignment counts as "increasing the value of futures where we survive" rather than "reducing the chance of our extinction", which is very different from how AI takeover risk has traditionally been presented. That is, you could be strongly in favour of "increasing the value of futures in which we survive" and mean by that that the most important thing is to prevent the extinction of Homo sapiens at the hands of superintelligence. In fact, because most work on AI safety and alignment is about "increasing the value of futures where we survive", I expect there won't be that many people who properly understand the prompt and vote "yes".
So I think we might want to make things more fine-grained. Here are four different activities you could do (not exhaustive):
1. Ensure there’s a future for Earth-originating intelligent life at all.
2. Make human-controlled futures better.
3. Make AI-controlled futures better.
4. Make human-controlled futures more likely.
For short, I’ll call these activities:
1. Future at all.
2. Better human futures.
3. Better AI futures.
4. More human futures.
I expect a lot more interesting disagreement over which of (1)-(4) is highest-priority than about whether (1) is higher-priority than (2)-(4). So, when we get into debates, it might be worth saying which of (1)-(4) you think is highest-priority, rather than just “better futures vs extinction”.
Fairly strong agree—I’m personally higher on all of (2), (3), (4) than I am on (1).
The main complication is that, among the realistic activities we can pursue, many won't correspond to just one of these; instead they'll have beneficial effects on several. But I still think it's worth asking "which is it highest-priority to make plans targeting?", even if many of the best plans end up being ones that aren't so narrow as to target one to the exclusion of the others.