All of this suggests to me that the number one priority for solving AI Safety is making it concrete enough that researchers can easily get absorbed by small subproblems. For example, we could define a few concrete approaches that allow people to make progress at a concrete level, even if we don’t solve AI Safety once and for all, as perhaps Yudkowsky would hope.
I’m very sympathetic to the general idea that building the AI safety field is currently more important than making direct progress (though direct progress of course helps with field building). Have you considered doing this for a while, given that you think it’s possibly the most important problem, e.g. by trying to develop concrete problems that can then be presented to the ML and AI research communities?
Sidenote 1: Another option that I’m even more excited about is getting promising CS students engaged with AI safety. This would avoid things like “senior researchers are kinda stuck in their particular interests” and “senior people don’t care so much about money”. Michael Chen’s comment about his experience with AI Safety university groups made it sound quite tractable and possibly highly underrated.
Sidenote 2: I would be quite surprised if AI safety orgs did not allow you to work remotely at least a significant fraction of your time. E.g. even if some aspects of the work need to be done in person, I know quite a few researchers who manage this by travelling there a few times per year for a couple of weeks.
Have you considered doing this for a while, given that you think it’s possibly the most important problem, e.g. by trying to develop concrete problems that can then be presented to the ML and AI research communities?
Indeed, I think that would be a good objective for the postdoc. It’s also true that I think this is the kind of thing we need to do to make progress in the field, and my intuition is that aiming for academic papers is necessary to keep the quality high.
Cool, I’d personally be very glad if you would contribute to this. Hmm, I wonder whether a plausible next step could be to work on this independently for a couple of months to see how much you like doing the work. Maybe you could do this part-time while staying at your current job?
Unfortunately, this is not feasible: I am finishing my Ph.D. and have to decide what I am doing next within the next couple of weeks.
In any case, my impression is that posing good questions requires a couple of years of understanding a field, so that the problems one poses are tractable, state of the art, and concretely defined...
Ah, dang. And how difficult would it be to reject the startup offer, independently and remotely work on concretizing AI safety problems full-time for a couple of months to test your fit, and then, if you don’t feel like this is clearly the best use of your time, (I imagine) very easily get another job offer in the quantum computing field?
(Btw, I’m still somewhat confused about why AI safety research is supposed to be in much tension with working remotely at least most of the time.)
Ah, dang. And how difficult would it be to reject the startup offer, independently and remotely work on concretizing AI safety problems full-time for a couple of months to test your fit, and then, if you don’t feel like this is clearly the best use of your time, (I imagine) very easily get another job offer in the quantum computing field?
The thing that worries me is working on some specific technical problem, not being able to make sufficient progress, and feeling stuck. But I think that would only happen after more than two months, perhaps after a year. I’m thinking of it more in academic terms; I would like to target academic-quality papers.
But perhaps if that happens I could come back to quantum computing or some other boring computer science job.
(Btw, I’m still somewhat confused about why AI safety research is supposed to be in much tension with working remotely at least most of the time.)
The main reason is that if I go to a place where people are working on technical AI Safety, I will get up to speed with the AI/ML side faster. So it’d be for learning purposes.