I’m starting a PhD soon. That gives me the freedom to research a wide range of topics, some more valuable than others, ranging from information theory and the nature of information to the mathematics of altruism and the formation of singletons. In between, all sorts of questions about genetic determination and behavioral genomics are allowed, as well as primatology.
My current plan is to research potential paths for altruism in the future: where it can lead in naïve evolutionary models and in less naïve ones. I will also do research with other EAs on how to impart moral concepts to artificial general intelligences.
Are there better counterfactual alternatives?
What field are you studying? It seems like biology, but I’m not sure. I’m planning on applying to programs in economics; I’m interested in cause prioritization and in the interplay of social networks with economic decisions. Interested in seeing what decision you make.
Biological Anthropology, with an adviser whose latest book is in philosophy of mind, whose next book is on information theory, whose previous book was on, of all things, biological anthropology, and who spent most of his career as a semiotician and neuroscientist.
My previous adviser was a physicist working in the philosophy of physics who turned into a philosopher of mind. My main sources of inspiration are Bostrom and Russell, who defy field borders. So I’m basically studying whatever you convince me makes sense at the intersection of interestingly complex and useful for the world, except for math, code, and decision theory, which are not my comparative advantage, especially not among EAs.
Thanks for asking for suggestions. :)
I would tend to focus on AGI-related topics, though you may have specific alternate ideas that are compelling for reasons I can’t see from a distance. In addition to AGI safety, studying the political dynamics of an AGI takeoff (including de novo AGI, emulations, etc.) could be valuable. I suggested a few very general AGI research topics here and here. Some broader though perhaps less important topics are here.