Loved reading this post. As a person considering working in AI Safety, I found it a great resource that answers many questions, including some I hadn’t thought to ask. Thanks so much for writing this!
One question: I am curious to hear anyone’s perspective on the following “conflict”:
Point 1: “There is a specific skill of getting things done inside large organizations that most EAs lack (due to lack of corporate experience, plus lack of people-orientedness), but which is particularly useful when pushing for lab governance proposals. If you have it, lab governance work may be a good fit for you.”
Point 2: “You need to get hands on” and, related: “Coding skill is a much more important prerequisite, though.”
There may be exceptions, but I would guess (partly based on my own experience) that the kind of people who have a lot of experience getting things done in large organisations typically do not spend much time coding ML models.
And yet, as I say, I believe both of these are necessary. If I want to influence a major AI / ML company, I will lack credibility in their eyes if I have no experience working with and in large organisations. But I will also lack credibility if I don’t have an in-depth understanding of the models and an ability to discuss them specifically rather than just abstractly.
Specific question: What might the typical learning curve be for the second aspect, to get to the point where I could get hands-on with models? My starting point would be having studied FORTRAN in college (!! - yes, that long ago!) and having taken just one online course in Python. There may be others with different starting points.
I suppose, ultimately, it still seems likely that it would be quicker for even a total coding novice to reach some meaningful level of competence than for someone with no experience of organisations to become expert in how decisions are made, how plans are approved or rejected, and how to influence this.
Also are there good online courses anyone would recommend?
One question: I am curious to hear anyone’s perspective on the following “conflict”:
The former is more important for influencing labs, the latter is more important for doing alignment research.
And yet, as I say, I believe both of these are necessary.
FWIW when I talk about the “specific skill”, I’m not talking about having legible experience doing this, I’m talking about actually just being able to do it. In general I think it’s less important to optimize for having credibility, and more important to optimize for the skills needed. Same for ML skill—less important for gaining credibility, more important for actually just figuring out what the best plans are.
Also are there good online courses anyone would recommend?
See the resources listed here.
Thanks Richard, this is clear now.
And thank you (and others) for sharing the resources link—this indeed looks like a fantastic resource.
Denis