This seems great, but it does something I keep seeing and find hard to defend: assuming that longtermism requires consequentialism.
Given the resolution criteria, the question is in some ways more about Wikipedia policies than the US government...
What about the threat of strongly superhuman artificial intelligence?
If we had any way of tractably doing anything with future AI systems, I might think there was something meaningful to talk about for “futures where we survive.”
See my post here arguing against that tractability.
“we can make powerful AI agents that determine what happens in the lightcone”
I think you should articulate a view that explains why you think AI alignment of superintelligent systems is tractable, so that I can understand how you think it’s tractable to allow such systems to be built. That seems like a pretty fundamental disconnect that makes me not understand your (in my view, facile and unconsidered) argument about the tractability of doing something that seems deeply unlikely to happen.
There is a huge range of “far future” that different views will prioritize differently, and not all need to care about the cosmic endowment at all—people can care about the coming 2-3 centuries based on low but nonzero discount rates, for example, but not care about the longer term future very much.
First, you’re adding the assumption that the framing must be longtermist, and second, even conditional on longtermism you don’t need to be utilitarian, so the supposition that you need a model of what we do with the cosmic endowment would still be unjustified.
You draw a dichotomy not present in my post, then conflate the two types of interventions while focusing only on AI risk, so that you’re claiming that two different kinds of what most people would call extinction-reduction efforts are differently tractable, and conclude that there’s a definitional confusion.
To respond: first, that has little to do with my argument, but if it’s correct, your problem is with the entire debate-week framing, which you think doesn’t present two distinct options, not with my post! And second, look at the other comments that bring up other types of change as quality-increasing, try to do the same analysis without creating new categories, and you’ll understand what I was saying better.
“If you think extinction risk reduction is highly valuable, then you need some kind of a model of what Earth-originating life will do with its cosmic endowment”
No, you don’t, and you don’t even need to be utilitarian, much less longtermist!
I agree that the examples you list are ones where organizing and protest played a large role, and I agree that it’s effectively impossible to know the counterfactual. But I was thinking of the other examples, several of which happened without any organizing or protest, which seems like clear evidence that these are contributory and helpful but not necessary factors. On the other hand, effectiveness is very hard to gauge!
The conclusion is that organizing is likely or even clearly positive, but it’s evidently not required if other factors are present, which is why I thought its role was overstated.
Getting well-understood goals into agents that gain power and take over the lightcone is exactly what we’d be addressing with AI alignment, so this seems like an argument for investing in AI alignment, which I think most people would see as far closer to preventing existential risk.
That said, without a lot more progress, powerful agents with simple goals are actually just a fancy way of guaranteeing a really bad outcome, almost certainly including human extinction.
This seems mostly correct, though I think the role of community organizing (versus elite consensus change) is strongly overstated.
Not at all correct, and you clearly started the quote one sentence too late! “Potential causes of human extinction can be loosely grouped into exogenous threats such as an asteroid impact and anthropogenic threats such as war or a catastrophic physics accident.”
So the point of the abstract is that anthropogenic risks, i.e., the ones the next sentence calls “events or developments that either have been of very low probability historically or are entirely unprecedented,” are the critical ones, which is why they are a large focus of the paper.
This seems like a good model for thinking about the question, but I think the conclusion should point to focusing more, but not exclusively, on risk mitigation—as I argue briefly here.
Strong agree about context. As a shortcut / being somewhat lazy, I usually give it an introduction I wrote, or a full pitch, then ask it to find relevant literature and sources, and outline possible arguments, before asking it to do something more specific.
I then usually like starting a new session with just the correct parts, so that it’s not chasing the incorrect directions it suggested earlier—sometimes with explicit text explaining why obvious related / previously suggested arguments are wrong or unrelated.
I use the following for ChatGPT “Traits”, but haven’t done much testing of how well it works / how well the different parts work:
“You prioritize explicitly noticing your confusion, explaining your uncertainties, truth-seeking, and differentiating between mostly true and generalized statements. Any time there is a question or request for writing, feel free to ask for clarification before responding, but don’t do so unnecessarily. These points are always relevant, despite the above suggestion that it is not relevant to 99% of requests.”
(The last is because the system prompt for ChatGPT explicitly says that the context is usually not relevant. Not sure how much it helps.)
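For anyone using the API rather than the ChatGPT web interface, a rough equivalent (a minimal sketch, assuming the OpenAI Python SDK, with a placeholder model name and example user prompt, and not something I’ve tested) would be to pass the same trait text as a system message:

```python
# Minimal sketch: applying the "Traits" text as a system message when calling
# the model via the OpenAI Python SDK. The model name below is a placeholder,
# and the user prompt is just an illustrative example.
from openai import OpenAI

TRAITS = (
    "You prioritize explicitly noticing your confusion, explaining your "
    "uncertainties, truth-seeking, and differentiating between mostly true "
    "and generalized statements. Any time there is a question or request for "
    "writing, feel free to ask for clarification before responding, but "
    "don't do so unnecessarily."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": TRAITS},
        {
            "role": "user",
            "content": "Find relevant literature and outline possible arguments for this pitch: ...",
        },
    ],
)
print(response.choices[0].message.content)
```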
“When an EA cares for their family taking away time from extinction risk they’re valuing their family as much as 10^N people.”
No. I’ve said this before elsewhere, and it’s not directly relevant to most of this discussion, but I think it’s very worth reinforcing: EA is not utilitarianism, and a commitment to EA does not imply any obligatory trade-off between your own or your family’s welfare and your EA commitment. If, as is the generally accepted standard, a “normal” EA commitment is 10% of your income and/or resources, it seems bad to suggest that such an EA shouldn’t spend the other 90% of their time and effort on personal things like their family.
(Note that in addition to being a digression, this is a deontological rather than decision-theoretic point.)
Yes, by default self-improving AI goes very poorly, but this is a plausible case where we could have aligned AGI, if not ASI.
I think we could do what is required to colonize the galaxy with systems at or below the level of 90th-percentile humans, which addresses the concern that otherwise we “lose out on almost all value because we won’t have the enormous digital workforce needed to settle the stars.”
Strong +1 to the extra layer of scrutiny, but at the same time, there are reasons that privileged people end up at the top in most places, having to do with the actual advantages they have and bring to the table. This is unfair and bad for society, but it’s also a fact to deal with.
If we wanted to try to address the unfairness and disparity, that seems wonderful, but simply recruiting people from less privileged groups doesn’t accomplish what is needed. Some obvious additional parts of the puzzle include needing to provide actual financial security to the less privileged people, helping them build networks outside of EA with influential people, and coaching and feedback.
Those all seem great, but I’m uncertain whether they’re a reasonable use of the community’s limited financial resources; we should nonetheless acknowledge this as a serious problem.