Do you have any sense of how many may be working predominantly on near-term risks vs. long-term risks? I’d be interested to know the latter number, and I have reason to believe it would be lower than this estimate (i.e. I highly doubt OpenAI has 10 people working fully on long-term risks rather than the near-term stuff they generally seem more interested in).
At OpenAI, I’m pretty sure there are far more people working on near-term problems than long-term risks. Though the Superalignment team now has over 20 people from what I’ve heard.
Oh really? I thought it was far smaller, like in the range of 5-10.
The Superalignment team currently has about 20 people according to Jan Leike. Previously, I think the scalable alignment team was much smaller, probably only 5-10 people.
Good update, thanks for sourcing!