I think this is a good question. I am currently on a team retreat, so I likely won’t get to this until next week (and maybe not even then, since I’ll probably be busy catching up on other things). If I haven’t responded in 10 days, please feel free to ping me.
Thanks! Ping on this.
I also realize that there are other fanfictions, e.g. Friendship is Optimal, that, in theory at least, seem well-placed to introduce concerns about AI alignment to the public. To the extent you can explain why these were less successful than HP:MoR (or offer any general theory of what success looks like here), I would be interested in hearing it!
Thanks for pinging me!
I am still pretty swamped (being in the middle of both another LTFF grant round and the SFF grant round), and since I think a proper response to the above requires writing quite a bit of text, it will probably be another two weeks or so.