To give an example of where rationalists produced a useful tool here, I found microcovid useful. For example, I used it to convince my father that it was very low-risk for him to resume outdoor social activities.
That looks like an interesting project, but I’ll just state the obvious: August 29, 2020 was far too late to call covid early. (The horse is out of the barn, across the street, and eating the neighbour’s oats.)
Also, interesting projects like this by bright, nerdy people aren’t meaningful evidence for the underlying thesis that “LessWrong called covid early” is invoked to support, namely, that Eliezer Yudkowsky’s Sequences or the rest of the LessWrong “canon” or LessWrong’s overall philosophy or epistemology or culture produces superior rationality in people. If you compared the LessWrong community against some demographically similar cohort like, I don’t know, undergraduate students in computer science at UC Berkeley, I imagine you would find all kinds of interesting projects created by the comparison cohort.
If LessWrong is the “intervention” and the LessWrong community is the “experimental group”, then we also need a “control group”. And we need to look out for confounding variables. For example, if we were to compare LessWrong against the average U.S. population, I would worry there might be differences that could be explained just by education. But if you take some cohort of bright, nerdy, educated people who have never read the Sequences and have never heard of LessWrong, I imagine you would see the same sort of things, like this project, that you see in the LessWrong community.
It’s worth noting that the most organized, concerted effort of the LessWrong community to teach rationality, the Center for Applied Rationality (CFAR), turned out to be a complete disaster. Worst of all, CFAR also ran a summer camp for kids, meaning it may have harmed or wronged children in the same way it harmed or wronged adults.
I think Holden Karnofsky nailed it way back in 2012 when he wrote this:
The Sequences (which I have read the vast majority of) do not seem to me to be a demonstration or evidence of general rationality. They are about rationality; I find them very enjoyable to read; and there is very little they say that I disagree with (or would have disagreed with before I read them). However, they do not seem to demonstrate rationality on the part of the writer, any more than a series of enjoyable, not-obviously-inaccurate essays on the qualities of a good basketball player would demonstrate basketball prowess. I sometimes get the impression that fans of the Sequences are willing to ascribe superior rationality to the writer simply because the content seems smart and insightful to them, without making a critical effort to determine the extent to which the content is novel, actionable and important.
This point is especially correct and important:
I endorse Eliezer Yudkowsky’s statement, “Be careful … any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility.” To me, the best evidence of superior general rationality (or of insight into it) would be objectively impressive achievements (successful commercial ventures, highly prestigious awards, clear innovations, etc.) and/or accumulation of wealth and power. As mentioned above, SI [Singularity Institute, now MIRI] staff/supporters/advocates do not seem particularly impressive on these fronts, at least not as much as I would expect for people who have the sort of insight into rationality that makes it sensible for them to train others in it. I am open to other evidence that SI staff/supporters/advocates have superior general rationality, but I have not seen it.
And this identification of a problem with overconfidence:
Insufficient self-skepticism given how strong its claims are and how little support its claims have won. Rather than endorsing “Others have not accepted our arguments, so we will sharpen and/or reexamine our arguments,” SI seems often to endorse something more like “Others have not accepted their arguments because they have inferior general rationality,” a stance less likely to lead to improvement on SI’s part.
I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it.
I’m saying microcovid was a useful contribution on what to do about covid that came out of the rationality community.
Fair enough, but it seems more like a cool, fun coding project in the realm of science communication, rather than a prediction or some sort of original scientific research or analysis that generated new insights.
The infectious disease doctor interviewed for the Smithsonian Magazine article about microCovid said that microCovid is a user-friendly, clearly explained version of tools that already existed within the medical profession. So, that’s great, that’s useful, but it’s not a prediction or an original insight. It’s just good science communication and good coding.
The article also mentions two other similar risk calculators designed for use by the public. One of the calculators mentioned, Mathematica’s 19 and Me calculator, was released on or around May 11, 2020, more than 3 months before microCovid. I was able to find a few other risk calculators that were released no later than mid-May 2020. So, microCovid wasn’t even a wholly original idea, although it may have been differentiated from those previous efforts in some important ways.
When people say that LessWrong called covid early or was right about covid, what they mean is that LessWrong made correct predictions or had correct opinions about the pandemic (not by luck or chance, but by superior rationality) that other people didn’t make or didn’t have. And they say this in the context of providing reasons why the LessWrong community’s views or predictions on other topics should be trusted or taken seriously.
microCovid, as nice a thing as it may be, does not support either of those ideas.
I think when you look at the LessWrong community’s track record on covid-19, there is just no evidence to support this flattering story that the community tells about itself.
Late 2021 is the date of the article, not the website: “They started the project in May of 2020 for their own use, and within a few months, created a version for the public.”
Plus ça change! How pitiful is our enforced return to those things that were set down in writing many, many years ago.