I don’t like this post and I don’t think it should be pinned to the forum front page.
A few reasons:
The general message of “go and spread this message, this is the way to do it” is too self-assured and unquestioning. It appears cultish. It’s off-putting to have this as the first thing that forum visitors see.
The thesis of the post is that a useful thing for everyone to do is to spread a message about AI safety, but it’s not clear what messages you think should be spread. The only two I could see are “relate it to Skynet” and “even if AI looks safe it might not be”.
Too many prerequisites: this post refers to five or ten other posts as a “this concept is properly explained here” thing. Many of these posts reference further posts. This is a red flag to me of poor writing and/or poor ideas. Either a) your ideas are so complex that they do indeed require many thousands of words to explain (in which case, fine), b) they’re not that complex, just aren’t being communicated well, or c) bad ideas are being obscured in a tower of readings that gatekeep the critics away.
I’d like to see the actual ideas you’re referring to expressed clearly, instead of referring to other posts.
Having this pinned to the front page further reinforces the disproportionate focus that AI safety gets on the forum.
Personally, an argument I would find more compelling is that the OP doesn’t answer comments, which makes discussion less valuable and the post less interesting for a public forum. Also, there is already a newsletter for Cold Takes that people can subscribe to.
Noting that I’m now going back through posts responding to comments, after putting off doing so for months—I generally find it easier to do this in bulk to avoid being distracted from my core priorities, though this time I think I put it off longer than I should’ve.
It is generally true that my participation in comments is extremely sporadic/sparse, and folks should factor that into curation decisions.
These don’t seem very compelling to me.
This argument proves too much. The same could be said of “go and donate your money, this (list of charities we think are most effective) is the way to do it”.
My takeaway was that messages which could be spread include: “we should worry about conflict between misaligned AI and all humans”, “AIs could behave deceptively, so evidence of safety might be misleading”, “AI projects should establish and demonstrate safety (and potentially comply with safety standards) before deploying powerful systems”, “alignment research is prosocial and great”, and “we’re not ready for this”. (I excluded “it might be important for companies and other institutions to act in unusual ways”, because I agree this doesn’t seem like a straightforward message to spread.)
The answer is probably (a).
“Disproportionate” seems like it boils down to an object-level disagreement about relative cause prioritisation between AI safety and other causes.
I like the framing “bad ideas are being obscured in a tower of readings that gatekeep the critics away” and I think EA is guilty of this sometimes in other areas too.
Just noting that many of the “this concept is properly explained elsewhere” links are also accompanied by expandable boxes that you can click to expand for the gist. I do think that understanding where I’m coming from in this piece requires a bunch of background, but I’ve tried to make it as easy on readers as I could, e.g. explaining each concept in brief and providing a link if the brief explanation isn’t clear enough or doesn’t address particular objections.
I agree. I’m curious what the process is for deciding what gets pinned to the front page. Does anyone know?
Hi! The process for curation is outlined here. In short, some people can suggest curation, and I currently make the final calls.
You can also see a list of other posts that have been curated (you can get to the list by clicking on the star next to a curated post’s title).
Oh, I see! Thanks, that’s helpful.