Two criticisms of this post.
First, calling GPT-4 “the first very weak AGI” is highly off-putting to me. Either that’s just not true or “very weak AGI” isn’t a meaningful term. This makes the whole post feel less credible to me.
Second, I find the praise of LessWrong in this post disturbing. There is no credible evidence that LessWrong promotes exceptionally rational thought. To quote Eliezer Yudkowsky, in some sense the test of rationality must be whether the supposedly exceptionally rational people are “currently smiling from on top of a giant heap of utility”, which is not remotely true of LessWrong. Meanwhile, there is a lot of evidence of irrationality and outright delusion associated with LessWrong. LessWrong also has a profoundly troubling moral track record, to such an alarming extent that I would encourage people involved in effective altruism to dissociate from it.