Thanks for this post!
I agree there’s lots of room for non-technical work to help improve the impacts of AI. Still, I’m not sure funder interest and experience in non-technical AI work is what’s missing. As some examples of how this interest and experience are already present:
OpenPhil has a program officer who specializes in AI governance and policy, and has given out over $80 million in AI governance grants.
The current fund chair of the Long-Term Future Fund has a background in non-technical AI research.
FTX Future Fund staff have backgrounds in philosophy, economics, and law.
(So why isn’t more AI governance work happening? In case you haven’t seen it yet, you might find discussion of bottlenecks in this post interesting.)
That’s a very good point. I still feel there could be more contests, grants, orgs, etc. in this area, but you’re right that the resources are there and there’s some serious knowledge at those orgs. Perhaps talent, not funding, is the main bottleneck we need to address. The two may be interrelated to an extent.
It’s really frustrating to see so much governance talent at law conferences but so little of it within EA working on Longtermist issues. I think it’s a mixture of a lack of outreach in those industries and the fact that EA’s reputation has taken a couple of dings in the public media lately. People in those industries are very sensitive to PR risk, understandably.
I’ve been seriously considering writing a longtermist book for the legal and governance sector lately, just to get the conversation on the table, but it’s something one can’t rush into.
Thanks for pointing out that post to me. It’s a great read :) Appreciate it!
Are you talking about things like recidivism scores? If so, it’s a bit of a stretch to describe logistic regression as AI.
A whole range of things, from elements on the ‘fancy spreadsheet’ side, such as recidivism scoring and PredPol, to the more complex elements surrounding evidential aspects. I’m aware none of these are close to AGI (no current AI is, given its hyper-specialism), but the point of that paragraph isn’t the AI itself but how humans and organisations have been shown to use AI (or automation software, if you’re more comfortable with that phrase). When the first actual AGI is developed, it is likely to be in a very well-funded lab, one likely under the control of an organisation that is no exception to the usual weaknesses of capitalistic behaviour. After all, for many people the entire point of developing general intelligence isn’t scientific endeavour but commercial gain.
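To illustrate the ‘fancy spreadsheet’ end of that range, here’s a minimal sketch of the kind of logistic-regression scoring model that sits behind many recidivism tools. The feature names and data are entirely made up for illustration; real tools use more features, but the underlying mechanics are roughly this simple:

```python
# Minimal sketch of a "risk score" model: logistic regression over a
# couple of hand-picked features. Data and features are made up purely
# for illustration, not taken from any real recidivism tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [age, number_of_prior_offences]
X = np.array([[19, 4], [45, 0], [23, 2], [52, 1], [31, 3], [60, 0]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = re-offended within two years

model = LogisticRegression().fit(X, y)

# The "risk score" for a new individual is just the model's predicted
# probability of the positive class.
new_person = np.array([[27, 1]])
print(model.predict_proba(new_person)[0, 1])
```

The point isn’t the sophistication of the model but that organisations already hand consequential decisions to this kind of scoring, which is the pattern of use I was getting at.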
Maybe humanity ends up with one AGI or one ASI, but if we end up with dozens or hundreds of systems, we can’t rely on people to do the ‘right thing’ to prevent misalignment problems. There needs to be an actual system of governance ready to keep them in check, so it’s not just a technical problem.
Let me know if that clarified at all.
I still don’t know where I stand on governance. Plausibly there will be laws and policies we need passed; but it’s also plausible that we will mainly need the government to just stay out of the way and not make things worse, such as by adding a bunch of useless regulations that don’t advance safety[1]. But I suppose even if it’s the latter, that’s exactly the kind of thing you would need policy people for.
Ideally, we would be able to slow down AI, but we are unlikely to be able to do this in every country, so this could easily just make things worse.
Great first post!
Do we have statistics on the number of people and organizations working in technical AI safety versus AI governance?
Thanks! I don’t have concrete data; this has just been from my experience interacting with others within EA, at EAGs, and from looking through resource lists and taking the courses.
There’s one list here, for example, that shows current AI Safety resources and the spread of major AI works within EA, such as books. In addition, there is a tool here, though its CompSci focus might reflect not the field itself but the creator’s sense that CompSci was most relevant, which I acknowledge.
That said, I was pleasantly surprised that in The Alignment Problem, Brian Christian spends the first third of the book discussing law and policy in some detail as a foundation for AI alignment issues, which was useful. I’m hopeful that indicates a widening view of the field.
I have the impression that one of the reasons for the focus on technical AI safety is that once you succeed in aligning an AI, you expect it to perform a pivotal act, e.g. burning out all the GPUs on Earth. To achieve such a pivotal act, it seems that going through AI governance is not really necessary?
But yes, it does seem to be a bit of a stretch.
You’re right that those situations aren’t impossible, but governance doesn’t have to be an end goal; it can be a process. Even helping to govern current AI efforts will shape the field, much as regulation has shaped the nuclear field.