Lifelong recursive self-improver, on his way to exploding really intelligently :D
More seriously: my posts are mostly about AI alignment, with an eye towards moral progress and creating a better future. If there were a public machine ethics forum, I would write there as well.
An idea:
- We have a notion of what good is and how to do good.
- We could be wrong about it.
- It would be nice if we could use technology not only to do good, but also to improve our understanding of what good is.
The idea above, together with my wish to avoid producing technology that can be used for bad purposes, is what motivates my research. Feel free to reach out if you relate!
At the moment I am doing research on agents whose behaviour is driven by a reflective process analogous to human moral reasoning, rather than by a metric specified by the designer. See Free agents.
Here are some other suggested readings from what I’ve written so far:
- Naturalism and AI alignment
- From language to ethics by automated reasoning
- Criticism of the main framework in AI alignment
Last time I checked, improving the lives of animals was much cheaper than improving human lives, and I don’t think arguments that humans have greater moral weight are enough to compensate for the difference.