I can’t speak to the “AI as a normal technology” people in particular, but a shortlist I created of people I’d be very excited about includes someone who just doesn’t buy at all that AI will drive an intelligence explosion or explosive growth.
I think there are lots of types of people where it wouldn’t be a great fit, though. E.g. continental philosophers; at least some of the “sociotechnical” AI folks; more mainstream academics who are focused on academic publishing. And if you’re just focused on AI alignment, probably you’ll get more at a different org than you would at Forethought.
More generally, I’m particularly keen on situations where V(X, Forethought team) is much greater than V(X) + V(Forethought team) — either because there are synergies between X and the team, or because X is currently unable to do the most valuable work they could do in any of the other jobs available to them.