Yeah, I thought of it from the perspective of “not being told what to think, but being told what to think about.” Like, you could say “the most profitable (in karma on that website) strategy is to disagree with a ‘founder’-like figure of that very website,” of course, but indeed if you’ve accepted his frame of the debate, then didn’t he “win” in a sense? This seems technically true often (not always!), but I find it uncompelling.
I’ve spotted several issues with the sequences that the rationalists seemingly haven’t.
where did you write these down?
I did an in-depth write-up debunking one claim here (that of a superintelligence inventing general relativity from a blade of grass).
I haven’t gotten around to in-depth write-ups for other things, but here are some brief descriptions of other issues I’ve encountered:
The description of Aumann’s agreement theorem in “Defy the Data” is false: it leaves out important caveats that render his use of the theorem incorrect (a minimal statement of the theorem is given after this list).
Yud implies that “Einstein’s Arrogance” is some sort of mystery (and people have cited that article as a reason to be as arrogant as Einstein about speculative forecasts). In fact, Einstein’s arrogance was completely justified by the evidence available to him and is not surprising at all, in a way that is not comparable to speculative forecasts.
The implications of the “AI box experiment” have been severely overstated. It does not at all prove that an AGI cannot be boxed; “rationalists are gullible” fits the evidence provided just as well.
Yudkowsky treats his case for the “many worlds hypothesis” as a slam-dunk that proves the triumph of Bayes, but in fact it is only half-done. He presents good arguments against “collapse is real”, but fails to argue that this means many worlds is the truth, rather than one of the many other interpretations that do not involve a real collapse.
The use of Bayesianism in Rationalism is highly simplified, and often doesn’t actually involve using Bayes’ rule at all. It rarely resembles Bayes as actually applied in science, and is likely to lead to errors in certain situations, like forecasting low-probability events (see the toy calculation after this list).
Yud’s track record of predictions is fairly bad, but he has a habit of pretending it isn’t by being vague and refusing to make predictions that can actually be checked. In general, he displays an embarrassing lack of intellectual humility.
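On the Aumann point: for reference, here is a minimal statement of the theorem in my own wording (not Yudkowsky’s). Both hypotheses are load-bearing, and they are exactly the caveats that tend to get dropped.

```latex
% Aumann (1976), "Agreeing to Disagree" -- minimal statement, my paraphrase.
% Setup: agents 1 and 2 share the *same* prior $P$ on $(\Omega, \mathcal{F})$
% (the common-prior assumption) and have private information partitions
% $\mathcal{P}_1, \mathcal{P}_2$. For an event $A$ and true state $\omega$,
% agent $i$'s posterior is
\[
  q_i \;=\; P\bigl(A \mid \mathcal{P}_i(\omega)\bigr), \qquad i = 1, 2.
\]
% Theorem: if the values $q_1$ and $q_2$ are common knowledge at $\omega$
% (each agent knows them, knows that the other knows them, and so on), then
\[
  q_1 \;=\; q_2.
\]
```

Note that the conclusion requires the shared prior and full common knowledge of the posteriors, not merely that two honest Bayesians have talked to each other.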
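And to make the low-probability-forecasting point concrete, here is a toy calculation with my own made-up numbers (nothing from the Sequences): applying Bayes’ rule to a rare hypothesis, the posterior swings by roughly a factor of five depending on a seemingly minor disagreement about the false-positive rate, which is exactly the regime where informal “updating” by feel goes wrong.

```python
# Toy illustration (invented numbers): Bayes' rule on a rare hypothesis,
# showing how sensitive the posterior is to a small disagreement about
# the false-positive rate. Not a model of any real forecast.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) computed via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

prior = 1e-4  # a genuinely rare hypothesis

# Analyst A thinks the evidence has a 1% false-positive rate;
# Analyst B thinks it is 5%. Both agree it is 90% likely under H.
post_a = posterior(prior, 0.9, 0.01)
post_b = posterior(prior, 0.9, 0.05)

print(f"Posterior with 1% false-positive rate: {post_a:.4%}")
print(f"Posterior with 5% false-positive rate: {post_b:.4%}")
print(f"Ratio between the two posteriors: {post_a / post_b:.1f}x")
```

Rerun it with `prior = 0.5` and the two posteriors differ by only a few percent; the sensitivity is specific to low-prior events.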