Currently doing local AI safety Movement Building in Australia and NZ.
Chris Leong
One challenge here is that many systemic changes take time, so some desirable changes might take long enough that we’d only be able to implement them past the point where they would be useful.
Things in AI have been moving fast; most economists seem to have expected them to move more slowly. Sorry, I don’t really want to get into more detail, as writing a proper response would end up taking more time than I want to spend defending this “Quick take”.
It has some relevance to strategy as well, such as how fast we develop the tech and how broadly distributed we expect it to be. However, there’s a limit to how much additional clarity we can expect to gain over a short time period.
As an example, I expect political science and international relations to be better than economics for looking at issues related to power distribution (though the economic frame adds some value as well). Historical studies of coups seem pretty relevant too.
When it comes to predicting future progress, I’d be much more interested in hearing the opinions of folks who combine knowledge of economics with knowledge of ML or computer hardware, rather than those who are solely economists. Forecasting seems like another relevant discipline, as are futures studies and the history of science.
For the record, I see the new field of “economics of transformative AI” as overrated.
Economics has some useful frames, but it also tilts people towards being too “normie” about the impacts of AI, and it doesn’t have a very good track record on advanced AI so far.
I’d much rather see multidisciplinary programs/conferences/research projects, with economics as just one of the perspectives represented, than economics of transformative AI qua economics of transformative AI. (I’d be more enthusiastic about building economics of transformative AI as a field if we were starting five years ago, but these things take time and it’s pretty late in the game now, so I’m less enthusiastic about investing field-building effort here and more enthusiastic about pragmatic projects combining a variety of frames.)
I just created a new Discord server for AI-generated safety reports (i.e. those produced using Deep Research or other tools). I’d be excited to see you join. (PS: OpenAI now provides users on the Plus plan with 10 Deep Research queries per month.)
Yeah, it provides advice and the agency comes from the humans.
Here’s a short-form with my Wise AI advisors research direction: https://www.lesswrong.com/posts/SbAofYCgKkaXReDy4/chris_leong-s-shortform?view=postCommentsNew&postId=SbAofYCgKkaXReDy4&commentId=Zcg9idTyY5rKMtYwo
I agree that for journalism it’s important to be very careful about introducing biases into the field.
On the other hand, I suspect the issue they are highlighting is more that some people are so skeptical that they don’t bother engaging with this possibility or the arguments for it at all.
I think it’ll still take me a while to produce this, so I’ll just link you to my notes for now:
• Some Preliminary Notes on the Promise of a Wisdom Explosion
• Why the focus on wise AI advisors?
In case anyone is interested, I’ve now written up a short-form post arguing for the importance of Wise AI Advisors, which is one of the ideas listed here[1].
[1] Well, slightly broader, as my argument doesn’t focus specifically on wise AI advisors for government.
This is a great distinction to highlight, though I find it surprising that you haven’t addressed any of the ways that providing AIs with rights could go horribly wrong (maybe you’ve written on this in the past; if so, you could just drop a link).
“Member of Technical Staff”—That’s surprising. I assumed he was more interested in the policy angle.
I gave this a strong upvote because, regardless of whether you agree with these timelines or Tobias’ conclusion, this is a discussion that the community needs to be having. It’s hard to argue these days that the possibility is remote enough to justify ignoring it.
I would love to see someone running a course focusing on this (something broader than the AI Safety Fundamentals course). Obviously this is speculative, but I wouldn’t be surprised if the EA Infrastructure Fund were interested in funding a high-quality proposal to create such a course.
Thanks for this post. I think it makes some great suggestions about how AI Safety Camp could become a more favorable funding target. One thing I’ll add, I think it would be valuable for AI Safety Camp to refresh its website in order to make it look more professional and polished. The easiest way to accomplish this would be to make it a project in the next round.
Regarding research leads, I don’t think AI Safety Camp should focus too much on prestige, as it wouldn’t be able to compete on that front; a core part of its value proposition is providing the infrastructure to host “wild and ambitious projects”. That said, I’m not suggesting it should only host projects along these lines. I think it’s valuable for AI Safety Camp to also host a number of solid, less speculative projects for various reasons (not excessively distorting the ecosystem towards wild ideas, reducing the chance of people bouncing off AI safety completely, and giving folks with the potential to be talented research leads the opportunity to build the credibility to lead a more prestigious program), but more for balance, rather than this being the core value it aims to deliver.
Regarding the funding, I suspect that setting the funding goal to $300,000 likely depresses fundraising as it primes people towards thinking their donation wouldn’t make a difference. It’s very easy for people to overlook that the minimum funding required is only $15,000.
One last point: you can only write “this may be the last AI Safety Camp” so many times. Donors want to know that if they donate to keep it alive, you’re going to restructure the program towards something more financially viable. So I’d encourage the organizers to take on board some of the suggestions in this post.
I guess a key question is how much we should expect AI development to recurse back in on itself. The stronger this effect is, the shorter the time period in which it might be optimal to deploy AI in this fashion. In fact, it’s possible that the transition time could be so short that this paradigm becomes hot, then almost immediately becomes outdated.
“Management” consists of many different tasks, and I don’t expect all of them to be automatable at exactly the same time. As an example, there’s likely a big difference between an AI that manages project timelines or decides how to allocate tasks and one that defines high-level strategy, both in terms of the availability of training data and how well it needs to perform in order to be deployed (many smaller companies that can’t afford a project manager would be keen to make do with an automated one, even if it isn’t as good as a professional would be).
Unfortunately, a single organisation can’t do everything. There are a lot of advantages to picking a particular niche and targeting it, so I think it makes sense for 80,000 Hours to leave serving other groups of people to other organisations.
Have you heard of Probably Good? Some of the career paths they suggest might be more accessible to you. You might also want to consider running iterations of the intro course locally. Facilitating can be challenging at times, and not everyone will necessarily be good at it, but I suspect that most people would become pretty good given enough practice and dedication.
Earning to Give is another option that is more accessible as it just requires a career that pays decently (and there are a lot of different options here).
Whilst it does provide evidence in favour of it being possible to make EA enormous, I actually think it reduces the case for this overall, since it means that this “gap in the market” is now mostly being addressed[1]. Attempting to compete with the School of Moral Ambition for broad appeal would likely involve watering down elements of EA, and I’d much prefer a world where we have these two different movements pursuing two different theories of impact, rather than two movements pursuing mostly the same theory of impact.
[1] Not completely, since there are differences, but the gap in the market is now much smaller.
The biggest issue here is that Bregman downplays AI.
Well, there’s also direct work on AI safety and governance.