I don't like this post and I don't think it should be pinned to the forum front page.
A few reasons:
The general message of "go and spread this message, this is the way to do it" is too self-assured and unquestioning. It appears cultish. It's off-putting to have this as the first thing that forum visitors will see.
The thesis of the post is that a useful thing for everyone to do is to spread a message about AI safety, but it's not clear which messages you think should be spread. The only two I could see are "relate it to Skynet" and "even if AI looks safe it might not be".
Too many prerequisites: this post refers to five or ten other posts as a "this concept is properly explained here" thing. Many of these posts reference further posts. This is a red flag to me of poor writing and/or poor ideas. Either a) your ideas are so complex that they do indeed require many thousands of words to explain (in which case, fine), b) they're not that complex but just aren't being communicated well, or c) bad ideas are being obscured in a tower of readings that gatekeep the critics away. I'd like to see the actual ideas you're referring to expressed clearly, instead of referring to other posts.
Having this pinned to the front page further reinforces the disproportionate focus that AI Safety gets on the forum.
Personally, an argument I would find more compelling is that the OP doesn't answer comments, which lowers the value of discussion and makes it less interesting for a public forum. Also, there is already a newsletter for Cold Takes that people can subscribe to.
Noting that I'm now going back through posts responding to comments, after putting off doing so for months. I generally find it easier to do this in bulk to avoid being distracted from my core priorities, though this time I think I put it off longer than I should've.
It is generally true that my participation in comments is extremely sporadic/sparse, and folks should factor that into curation decisions.
These don't seem very compelling to me.
This argument proves too much. The same could be said of "go and donate your money, this (list of charities we think are most effective) is the way to do it".
My takeaway was that messages which could be spread include: "we should worry about conflict between misaligned AI and all humans", "AIs could behave deceptively, so evidence of safety might be misleading", "AI projects should establish and demonstrate safety (and potentially comply with safety standards) before deploying powerful systems", "alignment research is prosocial and great", and "we're not ready for this". (I excluded "it might be important for companies and other institutions to act in unusual ways", because I agree this doesn't seem like a straightforward message to spread.)
The answer is probably (a).
"Disproportionate" seems like it boils down to an object-level disagreement about relative cause prioritisation between AI safety and other causes.
I like the framing "bad ideas are being obscured in a tower of readings that gatekeep the critics away", and I think EA is guilty of this sometimes in other areas too.
Just noting that many of the "this concept is properly explained elsewhere" links are also accompanied by expandable boxes that you can click for the gist. I do think that understanding where I'm coming from in this piece requires a bunch of background, but I've tried to make it as easy on readers as I could, e.g. explaining each concept in brief and providing a link if the brief explanation isn't clear enough or doesn't address particular objections.
I agree. I'm curious what the process is for deciding what gets pinned to the front page. Does anyone know?
Hi! The process for curation is outlined here. In short, some people can suggest curation, and I currently make the final calls.
You can also see a list of other posts that have been curated (you can get to the list by clicking on the star next to a curated post's title).
Oh, I see! Thanks, that's helpful.