Most things that look crankish are crankish.
I think that MIRI looks kind of crankish from the outside, and this should indeed make people initially more skeptical of us. I think that we have a few other external markers of legitimacy now, such as the fact that MIRI people were thinking and writing about AI safety from the early 2000s and many smart people have now been persuaded that this is indeed an issue to be concerned with. (It’s not totally obvious to me that these markers of legitimacy mean that anyone should take us seriously on the question “what AI safety research is promising”.) When I first ran across MIRI, I was kind of skeptical because of the signs of crankery; I updated towards them substantially because I found their arguments and ideas compelling, and people whose judgement I respected also found them compelling.
I think that the signs of crankery in QRI are somewhat worse than 2008 MIRI’s signs of crankery.
I also think that I’m somewhat qualified to assess QRI’s work (as someone who’s spent ~100 paid hours thinking about philosophy of mind in the last few years), and when I look at it, I think it looks pretty crankish and wrong.
QRI is tackling a very difficult problem, as is MIRI. It took many, many years for MIRI to gather external markers of legitimacy. My inside view is that QRI is on the path to gaining those markers; for people paying attention to what we’re doing, I think there’s already enough of a trajectory to judge us positively. I think these markers will be obvious from the ‘outside view’ within a few years.
But even without these markers, I’d poke at your position from a couple angles:
I. Object-level criticism is best
First, I don’t see evidence that you’ve engaged with our work beyond very simple pattern-matching. You note that “I also think that I’m somewhat qualified to assess QRI’s work (as someone who’s spent ~100 paid hours thinking about philosophy of mind in the last few years), and when I look at it, I think it looks pretty crankish and wrong.” But *what* looks wrong? Anything genuinely new will pattern-match to crankish whether or not it actually is crankish, so given your rationale as stated, I don’t put much stock in your pattern detection (and perhaps you shouldn’t either). If we want to avoid falling into (1) ‘negative-sum status attack’ interactions and/or (2) hypercriticism of anything fundamentally new, neither of which is good for QRI, for MIRI, or for community epistemology, then object-level criticism (and a calibrated distaste for low-information criticism) seems necessary.
Also, we do a lot more than just philosophy, and we try to keep our assumptions about the Symmetry Theory of Valence (STV) separate from our neuroscience: STV can be wrong and our neuroscience can still be correct and useful. That said, empirically the neuroscience often does ‘lead back to’ STV.
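(For readers who want the claim pinned down: in rough schematic form, with notation that is illustrative shorthand rather than the canonical statement, STV says the valence of an experience tracks the symmetry of the mathematical object representing that experience:

$$V(e) \;=\; f\big(\mathrm{Sym}(\mathcal{M}(e))\big)$$

where $\mathcal{M}(e)$ is the mathematical object taken to be isomorphic to experience $e$, $\mathrm{Sym}(\cdot)$ is some measure of that object’s symmetry, and $f$ is monotonically increasing. It’s this identification, not the neuroscience itself, that would be at stake if STV turned out to be wrong.)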
Some things I’d offer for critique:
https://opentheory.net/2018/08/a-future-for-neuroscience/#
https://opentheory.net/2018/12/the-neuroscience-of-meditation/
https://www.qualiaresearchinstitute.org/research-lineages
(You can also watch our introductory video for context, and perhaps as a ‘marker of legitimacy’, although it makes very few claims: https://www.youtube.com/watch?v=HetKzjOJoy8)
I’d also suggest that the current state of philosophy, and especially philosophy of mind and ethics, is very dismal. I give my causal reasons for this here: https://opentheory.net/2017/10/rescuing-philosophy/. I’m not sure whether you’re anchored to the assumption that existing theories in philosophy of mind are reasonable.
II. What’s the alternative?
If there’s one piece I would suggest engaging with, it’s my post arguing against functionalism. I think your comments presuppose that functionalism is reasonable and/or the only possible approach, and that the efforts QRI is putting into building an alternative are therefore wasted. I strongly disagree with this; as I noted in my Facebook reply:
>Philosophically speaking, people put forth analytic functionalism as a theory of consciousness (and implicitly a theory of valence?), but I don’t think it works *qua* a theory of consciousness (or ethics or value or valence), as I lay out here: https://forum.effectivealtruism.org/.../why-i-think-the... -- This is more-or-less an answer to some of Brian Tomasik’s (very courageous) work, and to sum up my understanding I don’t think anyone has made or seems likely to make ‘near mode’ progress, e.g. especially of the sort that would be helpful for AI safety, under the assumption of analytic functionalism.
https://forum.effectivealtruism.org/posts/FfJ4rMTJAB3tnY5De/why-i-think-the-foundational-research-institute-should#6Lrwqcdx86DJ9sXmw
----------
I always find in-person interactions more amicable and higher-bandwidth; I’ll be back in the Bay in early December, so if you want to give this piece a careful read and sit down to discuss it, I’d be glad to join you. I think it could have significant implications for some of MIRI’s work.
cf. Jeff Kaufman on MIRI circa 2003: https://www.jefftk.com/p/yudkowsky-and-miri