As best I can tell so far (willing to keep learning), the entire AI professional community doesn't seem to grasp that, as of yet, there's not really a compelling case that their well-intentioned efforts will be meaningful.
Frustrating, but fascinating.
Is there any intellectual within the AI realm who is willing and able to sail against this groupthink?
If yes, is there any chance we could get them in here so I don’t have to keep typing this?
Stuart Russell is probably the most prominent example.
I think Dan Hendrycks is doing good work in this area too, as are a number of people on the AI alignment team at DeepMind.
But yeah, it'd be great if a lot more ML researchers/engineers engaged with the AI x-risk arguments and alignment research.