I have a bunch of thoughts, but find it hard to express them without any specific prompt. In general, I find a lot of AI Alignment research valuable, since it helps me evaluate other AI Alignment research, but I guess that's kind of circular. I haven't found most broad cause-prioritization research particularly useful for me, but would probably find research into better decision-making, as well as the history of science, useful for helping me make better decisions (i.e. rationality research).
I've found Larks' recent AI Alignment literature and organization review quite useful, so more of that seems great. I've also found some of Shahar Avin's thoughts on scientific funding interesting, but don't really know whether they're useful. I generally think a lot of Bostrom's writing has been very useful to me, so more of that type seems good, though I am not sure how well others can do the same.
Not sure how useful this is or how much it answers your question. Happy to give concrete comments on any specific research direction you'd like my thoughts on.