On a meta note: Different people who work on AI alignment have radically different pictures of what the development of AI will look like, what the alignment problem is, and what solutions might look like.
+1, this is the thing that surprised me most when I got into the field. I think helping increase common knowledge and agreement on the big picture of safety should be a major priority for people in the field (and it’s something I’m putting a lot of effort into, so send me an email at richardcngo@gmail.com if you want to discuss this).
Also +1 on this.