I agree that a better understanding of progress, and of which problems are more or less challenging, is valuable, but it seems clear that timelines get far more attention than warranted in places where they aren't decision relevant.
You very much do not know what you are talking about, as that linked “explanation” makes clear.
I’m not sure if you’re honestly confused or intentionally wasting people’s time, but either way, you should spend an hour or two asking 4o to explain in detail why an expert in AI would object to this, and think about the answers.
Yeah, you should talk to someone who knows more about security than I do, but as a couple of starting points:
math-proven safe AIs
This is not a thing, and likely cannot be a thing. You can’t prove an AI system isn’t malign, and work that sounds like it says this is actually doing something very different.
You can do everything you do now, even buy or rent GPUs, all of them just will be cloud math-proven safe GPUs
You can’t know that a given matrix multiplication won’t be for an AI system. It’s the same operation, so if you can buy or rent GPU time, how would the provider know what you are doing?
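To illustrate that point, here is a minimal sketch (assuming NumPy; the workload names are hypothetical) of why the operation itself carries no signal about intent: a neural-network layer and an unrelated numerical job issue the same dense matrix multiplication.

```python
# Minimal sketch: the hardware-level operation for an "AI" workload and a
# non-AI workload is the same dense matrix multiply (GEMM); nothing in the
# operation itself says what it is for. Workload names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# "AI" workload: a transformer-style feed-forward layer, activations times weights.
activations = rng.standard_normal((512, 4096))
weights = rng.standard_normal((4096, 4096))
layer_output = activations @ weights

# Non-AI workload: e.g. a dense step in a physics or finance simulation.
inputs = rng.standard_normal((512, 4096))
system_matrix = rng.standard_normal((4096, 4096))
simulation_step = inputs @ system_matrix

# Both calls dispatch to the identical matrix-multiplication kernel; a cloud
# provider inspecting the instruction stream sees the same operation with
# different numbers, so it cannot tell which one is "for an AI system."
```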
I think it’s better to play 5d chess—so I’m EA-adjacent-adjacent-adjacent-adjacent-adjacent.
This would benefit greatly from more in-depth technical discussion with people familiar with the technical, regulatory, and economic issues involved. It discusses several things that aren’t actually viable as described, and makes a number of assertions that are implausible or false.
That said, I think it’s directionally correct about a lot of things.
You seem to have ignored a central part of what Daniela Amodei said: “I’m not the expert on effective altruism,” which seems hard to defend.
Edit to add: the above proof that signing the GWWC pledge doesn’t mean you are an EA is correct, but the person you link to is using having signed as proof that he understands what EA is.
As always, and as I’ve said in other cases, I don’t think it makes sense to ask a disparate movement to make pronouncements like this.
You should add an edit to clarify the claim, not just reply.
In addition to the fundamental problem that we don’t know how to tell if models are safe after release, much less in advance, blacklists for software, web sites, etc. historically have been easy to circumvent, for a variety of reasons, effectively all of which seem likely to apply here.
Strong +1 to the extra layer of scrutiny, but at the same time, there are reasons that the privileged people are at the top in most places, having to do with the actual benefits they have and bring to the table. This is unfair and a bad thing for society, but also a fact to deal with.
If we wanted to try to address the unfairness and disparity, that seems wonderful, but simply recruiting people from less privileged groups doesn’t accomplish what is needed. Some obvious additional parts of the puzzle include needing to provide actual financial security to the less privileged people, helping them build networks outside of EA with influential people, and coaching and feedback.
Those all seem great, but I’m uncertain it’s a reasonable use of the community’s limited financial resources—and we should nonetheless acknowledge this as a serious problem.
This seems great, but it does something I keep seeing that is kind of indefensible: assuming that longtermism requires consequentialism.
Given the resolution criteria, the question is in some ways more about Wikipedia policies than the US government...
What about the threat of strongly superhuman artificial intelligence?
If we had any way of tractably doing anything with future AI systems, I might think there was something meaningful to talk about for “futures where we survive.”
See my post here arguing against that tractability.
we can make powerful AI agents that determine what happens in the lightcone
I think that you should articulate a view that explains why you think AI alignment of superintelligent systems is tractable, so that I can understand how you think it’s tractable to allow such systems to be built. That seems like a pretty fundamental disconnect that makes me not understand your (in my view, facile and unconsidered) argument about the tractability of doing something that seems deeply unlikely to happen.
There is a huge range of “far future” that different views will prioritize differently, and not all need to care about the cosmic endowment at all—people can care about the coming 2-3 centuries based on low but nonzero discount rates, for example, but not care about the longer term future very much.
First, you’re adding the assumption that the framing must be longtermist, and second, even conditional on longtermism you don’t need to be utilitarian, so the supposition that you need a model of what we do with the cosmic endowment would still be unjustified.
You set up a dichotomy that isn’t present in my post, then conflate the two types of interventions while focusing only on AI risk, so that you end up saying that two different kinds of what most people would call extinction-reduction efforts are differently tractable, and conclude that there’s a definitional confusion.
To respond: first, that has little to do with my argument, but if it’s correct, your problem is with the entire debate-week framing, which you think doesn’t present two distinct options, not with my post! And second, look at the other comments that bring up other types of change as quality-increasing, try to do the same analysis without creating new categories, and you’ll understand what I was saying better.
If you think extinction risk reduction is highly valuable, then you need some kind of a model of what Earth-originating life will do with its cosmic endowment
No, you don’t, and you don’t even need to be utilitarian, much less longtermist!
Yeah, I was mostly thinking about policy—if we’re facing 90% unemployment, or existential risk, and need policy solutions, the difference between 5 and 7 years is immaterial. (There are important political differences, but the needed policies are identical.)