I think of myself as making a lot of gambles with my career choices. And I suspect that regardless of which way the propositions turn out, I’ll have an inclination to think that I was an idiot for not realizing sooner which way they’d go. For example, I often have both the following thoughts:
“I have a bunch of comparative advantage at helping MIRI with their stuff, and I’m not going to be able to quickly reduce my confidence in their research directions. So I should stop worrying about it and just do as much as I can.”
“I am not sure whether the MIRI research directions are good. Maybe I should spend more time evaluating whether I should do a different thing instead.”
Whichever of these is right might feel obvious in hindsight, but it sure doesn’t feel obvious now.
So I have big gambles that I’m making, which might turn out to be wrong, but which feel now like they will have been reasonable-in-hindsight gambles either way. The two main such gambles are thinking that AI alignment might be really important in the next couple of decades, and working on MIRI’s approaches to AI alignment rather than some other approach.
When I ask myself “what things have I not really considered as much as I should have”, I get answers that change over time (because I ask myself that question pretty often and then try to consider the things that are important). At the moment, my answers are:
Maybe I should think about/work on s-risks much more.
Maybe I spend too much time inventing my own ways of solving design problems in Haskell and I should study other people’s more.
Maybe I am much more productive working on outreach stuff and I should do that full time.
(This one is only on my mind this week and will probably go away pretty soon) Maybe I’m not engaging seriously enough with questions about whether the world will look really different in a hundred years from how it looks today; perhaps I’m subject to some bias towards sensationalism and actually the world will look similar in 100 years.