Thanks, really appreciated this (strong upvoted for the granularity of data).
To be very explicit: I mostly trust your judgement about these tradeoffs for yourself. I do think you probably get a good amount from social osmosis (such that if I knew you didn’t talk socially a bunch with people doing direct work, I’d be more worried that the 5-10% figure was too low); I almost want to include some conversion factor from social time to deliberate time.
If you were going to get worthwhile benefits from more investment in understanding object-level things, I think the ways this would seem most plausible to me are:
- Understanding not just “who is needed to join AI safety teams?”, but “what’s needed in people who can start (great) new AI safety teams?”
- Understanding the network of different kinds of direct work we want to see, and how the value propositions relate to each other, so as to prioritize finding people to go after currently under-invested areas
- Something about long-term model-building which doesn’t pay off in the short term but which you’d find helpful in five years’ time
Overall I’m not sure whether I should alter my “20%” claim to add more nuance about degree of seniority (more senior means the investment matters more) and career stage (earlier means more investment is worthwhile). I think something like that is probably more correct, but “20%” still feels like a good gesture at a default.
(I also think that you just have access to particularly good direct-work people, which means you probably get some of the benefits of syncing about what they need in more time-efficient ways than may be available to many people, so I’m a little suspicious of holding up the Claire Zabel model as one that will generalize broadly.)