So, I’ve read through this post at least twice today, and even passed chunks of it through GPT for human->machine->human translation. But I’ve got to be honest, Joe: I don’t think I understand what you’re saying in this post. Now, this might be partly a mismatch between your writing style and my comprehension abilities[1], but I’ve really tried here and failed.
But there were some bits of this post I really liked! I think it’s at its best when it’s a recounting of your lived experience: of having believed, in some sense, that you understood an analytical case, and then seeing your mental states radically shift once you got hands-on with the phenomenon in question. I had a similar ‘gut’ experience working with ChatGPT and GPT-4, and these sections (some extracted by Lizka) really spoke to me.
To Joe directly: while drafting this comment, I am unconscious of intentional error; I am nevertheless too sensible of my defects not to think it probable that I may have committed many errors.[2] If you think I have, then please point them out and I will happily correct them. It’s entirely possible I’m making the error of trying to extract a robust thesis from what’s meant to be more of a personal reflection. I also struggled to state clearly what I didn’t understand without appearing rude (which definitely isn’t my intention), and I apologise if what follows comes across that way.
Some thoughts on what I’m confused about:
Gut vs Head: A lot of this essay focuses on the dichotomy between knowing something in the abstract and knowing something in your gut. Which is fine, but it doesn’t seem like a new insight? In 4.3 you question whether your gut’s change of mind is ‘Bayesian’ or not, but isn’t the whole point of the gut vs head distinction that the gut doesn’t operate as a Bayesian anyway? Speaking of...
Being Bayesian: I think, if anything, this essay persuaded me that being a Bayesian in the LW Sequences/Bayesian Mindset sense is just… not for me. In section 5 you mention the danger of being Dutch Booked, but one always has the option of not accepting suspect bets from Dutchmen (there’s a toy sketch of what I mean below, after this list). In section 6 you say “the Bayesian has to live, ahead of time, in all the futures at once”, which seems like a pretty good reason to think that such an epistemology is unworkable. I just don’t believe that Bayesians are actually walking around with well-defined distributions over all their beliefs in all futures. I got to the ‘You/Them’ discussion in section 8 and thought that ‘You’ is easily correct here. Through a lot of the essay you seem to be saying that the gut is wrong and Bayes is right, but then in sections 8 and 9 you seem to be saying that the Bayesian perspective is wrong? I agree with that, but I feel there’s a version of this essay about the gut reacting to recent AI advances where you could just Ctrl+F the word ‘Bayes’ and delete it.
The Future is Now: There’s another undercurrent in this essay, which as I understand it is that if you believe you will feel or believe something in the future, you should just ‘update all the way’ and feel/believe it now—which I don’t particularly disagree with. But in section 5.1 you talk about your ‘future gut’, and I just lost the thread. You can’t know what your future gut will be thinking or feeling. Your present head is reasoning from what it has seen of the world so far, and is using that to update your present gut. The future isn’t involved at all—future Joe isn’t passing anything back to present Joe; present Joe is doing all the work. To be specific, take this belief from the introduction: “I think we’re in a position to predict, now, that AI is going to get a lot better in the coming years.” What seems to matter to me are the reasons/arguments for this prediction now; trying to update now based on your expected future updates just seems unwieldy and unnecessary to me (I’ve sketched the identity I take to be at stake below, after this list).
Legibility: It may be that you caught me in a bad mood, but I really resonated with JohnStuartChill’s recent rant[3] on Twitter about failing to understand LessWrong. At points during this essay I found myself thinking “wait, what’s happening in this section?”. I think the language sometimes really got in the way of my understanding, for example:
Turns of phrase like “number-noises” instead of “probabilities”—which is what I think you mean? Why suddenly introduce this new term in section 8?
In section 2 you mention “gut’s Bayesian virtue” and in 4.2 you say “My gut lost points”. I don’t understand what either of these means, and neither is explained.
Passages like “well, hmm, according to your previous views, you’re saying that you’re in a much-more-worrying-than-average not-seeing-the-sign scenario. Whence such above-average-worrying?” I found very hard to parse—especially at the ‘conclusion’ of a section. I think this could definitely be written more clearly.
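On the Dutch Book point above: here’s a toy sketch of the standard worry as I understand it, just to make concrete what I mean by the bets being refusable. This is my own illustration, not anything from Joe’s essay; the dutch_book_loss helper and the numbers are purely made up.

```python
# Toy Dutch Book: if your credences in an event and its negation sum to more than 1,
# a bookie pricing bets at exactly those credences can sell you a pair of bets that
# loses you money whatever happens -- but only if you actually accept the bets.

def dutch_book_loss(p_event: float, p_not_event: float, payout: float = 1.0) -> float:
    """Guaranteed loss from buying both tickets at prices equal to your credences."""
    total_price = (p_event + p_not_event) * payout  # what you pay for the two tickets
    total_payout = payout                           # exactly one ticket pays out, whatever happens
    return total_price - total_payout               # positive => sure loss

print(round(dutch_book_loss(0.6, 0.6), 2))  # incoherent credences: sure loss of 0.2 per unit of payout
print(round(dutch_book_loss(0.6, 0.4), 2))  # coherent credences: 0.0, no guaranteed loss
```

The sure loss only bites if you take both bets, which is why “just don’t bet with the Dutchman” feels like a live option to me.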
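And on the ‘update all the way’ point: the identity I have in mind (my notation, not Joe’s) is just the law of total probability, which says that your current credence is already the probability-weighted average of your possible future credences, so any update you can fully predict is one you should already have made:

$$P(H) \;=\; \sum_{e} P(e)\,P(H \mid e) \;=\; \mathbb{E}\big[P(H \mid E)\big]$$

If that’s the right reading, then all the work is being done by present-Joe’s reasons for P(H), which is what I was trying to get at above.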
As I finish, I worry that this all seems like tone-policing, or overly harsh, and if so I sincerely apologise. But perhaps there is an explainability-authenticity tradeoff at play here? As it stands, this post is currently beyond my comprehension, so I can’t engage in a meaningful discussion about it with you and other Forum commenters, which is what I’d ideally like.
I admit to being somewhat similarly confused by Seeing More Whole and Grokking Illusionism.
I’m not sure at what point in section 10 it dawned on me what you were doing ;)
See https://twitter.com/mealreplacer/status/1655206833643036674, which applies most directly to this essay, but the surrounding tweets are funny/on-point too.
For what it’s worth, I also found this quite hard to follow/read. In fact, surprisingly so for something written by a philosopher. (Not that philosophers are easy to read; it’s just that I was one, so I’m used to reading them.)