Hi! What a comprehensive review, thanks for writing it up!
One quibble is that the OP is very dismissive of the issue of biases, discrimination, and AI.
While I don’t necessarily think this issue should fall under the category of AI alignment that people in the EA community are normally concerned with, I also believe it is inappropriate to dismiss it completely. So, I just wanted to add a comment saying that some of us in the community are concerned about bias and AI, and I hope the EA community will begin having a healthy discussion about it.
Cheers!
Have you seen any study/analysis (even a solid Fermi estimate) showing that AI bias, whether similar to the variants identified so far or hypothetical future variants, could plausibly be large and tractable enough to be worth further investigation?
I’ve always grouped this issue in the large category of “issues that are bad and should be worked on by someone, but that get plenty of coverage in the non-EA world and don’t seem especially compelling for our tiny community to look at”. AI bias gets a lot of attention from large tech firms and large media companies relative to long-term concerns about safety/alignment.
Hey Aaron!
So, I think we agree, and I may have been unclear in my comment. I didn’t mean to imply that the problem of AI bias is necessarily large, neglected, and tractable enough that the EA community should be very preoccupied with it.
The reason I commented was that I read the OP’s paragraph as saying not only ‘bias isn’t the kind of thing the EA community should focus on’ but something much bolder, i.e. ‘bias isn’t a problem at all’.
And I quite confidently and strongly disagree with the latter claim.
-Joshua from YEA.