I’ll only briefly reply because I feel like I’ve said most of what I wanted to say.
1) Mostly agree, but that feels like part of the point I’m trying to make. Doing good research is really hard, so when you don’t have a decade of past experience, how you react to early failures seems more important than whether you make them.
2) My understanding is that only about 8 people were involved with the public research outputs, and not all of them were working on these outputs all the time. So the 1 OOM difference in contrast to ARC feels more like 2x-4x.
3) Can’t share.
4) Thank you. Hope my comments helped.
5) I just asked a bunch of people who work(ed) at Conjecture, and they said they expect the skill-building there to be better for a career in alignment than, e.g., working on a non-alignment team at Google.
We’ve updated the recommendation about working at Conjecture.