ML knowledge is good and important; I generally wish I had more of it and use many of my Learning Days to improve it. That link also shows some of the other, non-law subjects I’ve been studying.
In law school, I studied a lot of different subjects that have been useful, like:
Administrative law
National security law
Constitutional law
Corporate law
Compliance
Contract law
Property law
Patent law
International law
Negotiations
Antitrust law
I am pretty bullish on most of the specific stuff you mentioned. I think macrohistory, history of technology, general public policy, forecasting, and economics are pretty useful. Unfortunately, it’s such a weird and idiosyncratic field that there’s not really a one-size-fits-all curriculum for getting into it, though this also means there are many productive ways to spend one’s time preparing for it.
How much ML/CS knowledge is too much? For someone working in AI Policy, do you see diminishing returns to becoming a real expert in ML/CS, such that you could work directly as a technical person? Or is that level of expertise very useful for policy work?
Hard to imagine it ever being too much TBH. I and most of my colleagues continue to invest in AI upskilling. However, lots of other skills are worth having too. Basically, I view it as a process of continual improvement: I will probably never have “enough” ML skill because the field moves faster than I can keep up with it, and there are approximately linear returns on it (and a bunch of other skills that I’ve mentioned in these comments).
How useful is general CS knowledge vs ML knowledge specifically?
I would lean pretty heavily towards ML. Taking an intro to CS class is good background, but beyond that, specialize in ML. Some adjacent areas, like cybersecurity, are good too.
(You could help AI development without specializing in AI, but this is specifically for AI Policy careers.)