I’ve only just Ctrl-F’d the report, so I could have missed something, but the key question for me is: what does a multilateral project mean in terms of security and diffusion of the technology?
My intuition is that preventing diffusion of the tech in a multilateral project would be hard, if not impossible, and I see this consideration as something that could kill the desirability of such a project by itself, even if there are several other strong arguments in favour.
I know you mention this in the potential future work section, but I think it would be worth editing in a paragraph or two on why we might want to consider this model anyway (it’s impossible to address everyone’s pet objection, but my guess is that this will prove to be one of the major objections people raise).
I expect that some of the older EAs are more senior and therefore have more responsibilities competing with attending EA Global.
I have neither upvoted nor downvoted this post.
I suspect that the downvoting is because the post assumes this is a good donation target rather than making the argument for it (even a paragraph or two would likely make a difference). Some folks may feel that it’s bad for the community for posts like this to sit at +100, even if they agree with the concrete message, because it undermines the norm that EA Forum posts contain high-quality reasoning rather than relying on other kinds of appeal.
I think it’s worth bringing in the idea of an “endgame” here, defined as “a state in which existential risk from AI is negligible either indefinitely or for long enough that humanity can carefully plan its future”.
Some waypoints are endgames, some aren’t, and some may be treated as an endgame by one strategy but not by another.
It’s quite unclear that attempts to “boost scientific and technological progress” are net-positive at this point in time. I’d much rather see an effort to shift science towards differential technological development.
It’s very hard to say since it wasn’t tried.
I think incremental progress in this direction would still be better than the alternative.
The section “Most discussions about AGI fall into one of three categories” is rather weak, so I wouldn’t place too much confidence in what the AI says yet.
I agree that the role that capitalism plays in pushing us towards doom is an under-discussed angle.
I personally believe that, given the constraints of capitalism, a wisdom explosion would have made more sense for our society to pursue than an intelligence explosion.
Well, there’s also direct work on AI safety and governance.
One challenge here is that many systemic changes take time, so some desirable changes might take long enough that we’d only be able to implement them past the point where they would be useful.
Things in AI have been moving fast; most economists seem to have expected slower progress. Sorry, I don’t really want to go into more detail, as writing a proper response would take more time than I want to spend defending this “Quick take”.
It has some relevance to strategy as well, such as how fast we develop the tech and how broadly distributed we expect it to be; however, there’s a limit to how much additional clarity we can expect to gain over a short time period.
As an example, I expect political science and international relations to be better than economics for examining issues related to power distribution (though the economic frame adds some value as well). Historical studies of coups seem pretty relevant too.
When it comes to predicting future progress, I’d be much more interested in hearing the opinions of folks who combine knowledge of economics with knowledge of ML or computer hardware, rather than those who are solely economists. Forecasting seems like another relevant discipline, as do futures studies and the history of science.
For the record, I see the new field of “economics of transformative AI” as overrated.
Economics has some useful frames, but it also tilts people towards being too “normy” on the impacts of AI and it doesn’t have a very good track record on advanced AI so far.
I’d much rather see multidisciplinary programs/conferences/research projects, with economics as just one of the perspectives represented, than economics of transformative AI qua economics of transformative AI. (I’d be more enthusiastic about building economics of transformative AI as a field if we had started five years ago, but these things take time and it’s pretty late in the game now, so I’m less enthusiastic about investing field-building effort here and more enthusiastic about pragmatic projects combining a variety of frames.)
I just created a new Discord server for generated AI safety reports (i.e. those produced using Deep Research or other tools). I’d be excited to see you join. (PS: OpenAI now gives users on the Plus plan 10 Deep Research queries per month.)
Yeah, it provides advice and the agency comes from the humans.
Here’s a short-form with my Wise AI advisors research direction: https://www.lesswrong.com/posts/SbAofYCgKkaXReDy4/chris_leong-s-shortform?view=postCommentsNew&postId=SbAofYCgKkaXReDy4&commentId=Zcg9idTyY5rKMtYwo
I agree that for journalism it’s important to be very careful about introducing biases into the field.
On the other hand, I suspect the issue they are highlighting is more that some people are so skeptical that they don’t bother engaging with this possibility or the arguments for it at all.
I think it’ll still take me a while to produce this, so I’ll just link you to my notes for now:
• Some Preliminary Notes on the Promise of a Wisdom Explosion
• Why the focus on wise AI advisors?
In case anyone is interested, I’ve now written up a short-form post arguing for the importance of Wise AI Advisors, which is one of the ideas listed here[1].
[1] Well, slightly broader, as my argument doesn’t focus specifically on wise AI advisors for government.
Feels like Anthropic has been putting out a lot of good papers recently that help build the case for various AI threats. Given this, “no meaningful path to impact” seems a bit strong.