I must say I strongly agree with Steven.
If you are saying academia has a good track record, I'd respond: (1) that's wrong for fields like ML, where in recent years much (arguably most) of the relevant progress has been made outside academia, and (2) academia may have a good track record over the long history of science, and sure, it might solve alignment in 100 years, but we need a solution in 10, and academia is slow. (If you don't think academia is slow, read Yudkowsky's sequence on science.)
Do you have a reason to think a person can make more progress in academia than elsewhere? I agree that academia has talented people, and it's good to recruit them, but academia has badly shaped incentives, as I wrote in my other comment: “Academia doesn’t have good incentives to make that kind of important progress: You are supposed to publish papers, so you (1) focus on what you can do with current ML systems, instead of focusing on more uncertain longer-term work, and (2) goodhart on some subproblems that don’t take that long to solve, instead of actually focusing on understanding the core difficulties and how one might address them.” So I expect a person can make more progress outside academia. Much more, in fact.
Some important parts of the AI safety problem don't seem to me to fit well into academic work. There are of course exceptions, people in academia who can make useful progress here, but they are rare. I'm not very confident in this, since my understanding of AI safety isn't that deep, but I'm not just making it up. (EDIT: This mostly overlaps with my first two points, that academia is slow and has bad incentives, plus perhaps some minor considerations about why excellent researchers (e.g. John Wentworth) may choose not to work in academia. My claim is that AI safety is a problem where these obstacles loom large, whereas in other fields they may not be as bad.)