Made the front page of Hacker News. Here are the comments.
The most common criticism (and the first two comments, as of now) comes from people who think this is an attempt at regulatory capture by the AI labs, though there's a good deal of pushback against that view and (I thought) some surprisingly high-quality discussion.
It seems relevant that most of the signatories are academics, for whom this criticism wouldn't make sense. @HaydnBelfield created a nice graphic here demonstrating this point.
I've also been making this point to people alleging financial interests. On the other hand, the tweet Haydn replied to makes another good point that does apply to professors: diverting attention from societal risks that they are contributing to and could help solve, toward x-risk, where they can mostly sign such statements and then go "🤷🏼‍♂️", shields them from having to change anything in practice.
In the vein of "another good point" made in public reactions to the statement, here's a passage from an article I read in The Telegraph:
“Big tech’s faux warnings should be taken with a pinch of salt, for incumbent players have a vested interest in barriers to entry. Oppressive levels of regulation make for some of the biggest. For large companies with dominant market positions, regulatory overkill is manageable; costly compliance comes with the territory. But for new entrants it can be a killer.”
In hindsight this seems like an obvious factor at play, but I hadn't considered it before reading it here. It doesn't address Daniel's / Haydn's point, though, of course.
https://www.telegraph.co.uk/business/2023/06/04/worry-climate-change-not-artificial-intelligence/
The same criticism dominates the comments on this FT article (paywalled, I think), which I guess indicates how less techy people tend to see it.