An appraisal of the Future of Life Institute AI existential risk program
Summary: This is a small post to thank the Future of Life Institute for setting up their AI existential risk community, as well as the Vitalik Buterin PhD and postdoc fellowships.
A couple of years ago the Future of Life Institute set up a community of academic researchers interested in working on AI Safety and Alignment. I think this is a really useful contribution because it addresses the AI alignment problem at multiple levels:
It publicly lists which academic researchers are interested in working on AI Safety, and in which techniques they specialize. Before it, there were only some hard-to-find Google Sheets listing a few of them, which made it much less clear what problems they were interested in or whether they really wanted to be known for working on this.
It lends credibility to the field of AI Safety and signals that this is a problem academics consider important and tractable enough to work on.
It clarifies the academic path to becoming an AI Safety researcher, especially via the Vitalik Buterin fellowships.
It helps AI Safety researchers get to know each other and each other's work, which especially reduces the disadvantage of not physically living in an AI Safety hub.
For example, if not for the FLI I would probably not have met Victor Veitch, with whom I applied for a couple of postdoc grants, even though in the end I postponed that plan.
This year I am helping review applications for the FLI PhD fellowship, and my two main takeaways are that: a) most of the applications I reviewed are of outstanding quality, and b) they mostly come from just a handful of universities. To me, this indicates that it should be possible to scale the program up without sacrificing quality, and that it may represent a good donation opportunity. Thus, I want to thank the FLI for setting it up.