So, I've read through this post at least twice today, and even passed chunks of it through GPT for human->machine->human translation. But I've got to be honest, Joe: I don't think I understand what you're saying in this post. Now, this might be a clash between your writing style and my comprehension abilities,[1] but I've really tried here and failed.
But there were some bits of this post I really liked! I think it's at its best when it's a recounting of your lived experience: of having believed, in some sense, that you understood an analytical case, and then seeing your mental states shift radically once you got hands-on with the phenomenon in question. I had a similar "gut" experience working with ChatGPT and GPT-4, and these sections (some extracted by Lizka) really spoke to me.
To Joe directly: while drafting this comment I am unconscious of intentional error; I am nevertheless too sensible of my defects not to think it probable that I may have committed many errors.[2] If you think I have, then please point them out and I will happily correct them. It's entirely possible I'm making the error of trying to extract a robust thesis from what's meant to be more of a personal reflection. I also struggled to state clearly what I didn't understand without appearing rude (which definitely isn't my intention), and I apologise if what follows comes across that way.
Some thoughts on what I'm confused about:
Gut vs Head: A lot of this essay focuses on the dichotomy between knowing something in the abstract vs knowing something in your gut. Which is fine, but doesn't seem like a new insight? In 4.3 you question whether your gut's change of mind is "Bayesian" or not, but isn't the whole point of the Gut vs Head distinction in the first place that the gut doesn't operate as a Bayesian anyway? Speaking of...
Being Bayesian: I think, if anything, this essay persuaded me that being a Bayesian in the LW Sequences/Bayesian Mindset sense is just… not for me. In section 5 you mention the danger of being Dutch Booked, but one always has the option of not accepting bets from Dutchmen offering suspect odds. In section 6 you say "the Bayesian has to live, ahead of time, in all the futures at once", which seems like a pretty good reason to think that epistemology is unworkable. I just don't believe that Bayesians are actually walking around with well-defined distributions over all their beliefs in all futures. I got to the "You/Them" discussion in section 8 and thought that "You" is easily correct here. Through a lot of your essay you seem to be saying that the gut is wrong and Bayes is right, but then in sections 8 and 9 you seem to be saying that the Bayesian perspective is wrong? I agree with that, but I feel there's a version of this essay, about the gut reacting to recent AI advancements, where you could just Ctrl+F the word "Bayes" and delete it.
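(For concreteness, and with my own toy numbers rather than anything from the essay: the Dutch Book worry, as I understand it, is that incoherent credences let a bookie guarantee a profit off you. Say my credences are P(rain) = 0.6 and P(no rain) = 0.6, which sum to 1.2. A bookie sells me a ticket paying £1 if it rains for £0.60, and a ticket paying £1 if it doesn't for £0.60. I've paid £1.20 and will collect exactly £1 whichever way it goes:

$$0.60 + 0.60 - 1.00 = 0.20 \text{ of guaranteed loss.}$$

The threat only bites, though, if I'm obliged to take the bets, which is the option I'm pointing at above.)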
The Future is Now: There's another undercurrent in this essay, which as I understand it is that if you believe you will feel a certain way, or believe something, in the future, you should just "update all the way" and feel/believe it now, which I don't particularly disagree with. But in section 5.1 you talk about your "future gut", and I just lost the thread. You can't know what your future gut will be thinking or feeling. Your present head is reasoning from what it's seen of the world so far, and is using that to update its present gut. The future isn't involved at all: future Joe isn't passing anything back to present Joe. Present Joe is doing all the work. To be specific, take this belief from the introduction: "I think we're in a position to predict, now, that AI is going to get a lot better in the coming years." What seems to matter to me are the reasons/arguments for this prediction now; trying to update now based on your expected future updates just seems unwieldy and unnecessary to me.
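(To put my reading of "update all the way" in standard Bayesian terms, which may not be exactly Joe's: by the law of total expectation, your current credence already equals the weighted average of your possible future credences,

$$P(H) \;=\; \sum_e P(E=e)\,P(H \mid E=e) \;=\; \mathbb{E}\big[P(H \mid E)\big],$$

so if you can already see that your future credence will predictably be higher, the coherent move is to raise it now. But everything on the right-hand side is computed from present Joe's probabilities, which is why it looks to me like present Joe is doing all the work.)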
Legibility: It may be that you caught me in a bad mood, but I really resonated with JohnStuartChill's recent rant[3] on Twitter about failing to understand LessWrong. At some points during this essay I found myself thinking "wait, what's happening in this section?". I think sometimes the language really got in the way of my understanding, such as:
Turns of phrase like "number-noises" instead of "probabilities", which is what I think you mean? Why suddenly introduce this new term in section 8?
In section 2 you mention the "gut's Bayesian virtue" and in 4.2 you say "My gut lost points". I don't understand what either of these means, and they're not explained.
Passages like "well, hmm, according to your previous views, you're saying that you're in a much-more-worrying-than-average not-seeing-the-sign scenario. Whence such above-average-worrying?" were ones I found very hard to parse, especially at the "conclusion" of a section. I think that this could definitely be written more clearly.
As I finish, I worry that this all seems like tone-policing, or overly harsh; if so, I sincerely apologise. But perhaps there is an explainability-authenticity tradeoff at play here? As it stands, this post is currently beyond my comprehension, and so I can't engage in the meaningful discussion about it with you and other Forum commenters that I'd ideally like to have.
[1] I admit to being somewhat similarly confused with Seeing More Whole and Grokking Illusionism.
[2] I'm not sure at what part of section 10 it dawned on me what you were doing ;)
[3] See https://twitter.com/mealreplacer/status/1655206833643036674, which applies to this essay most, but the other surrounding tweets are funny/on-point too.
For what it's worth, I found this quite hard to follow/read also. In fact, surprisingly so for something written by a philosopher. (Not that philosophers are easy to read; it's just that I was one, so I'm used to reading them.)