[From the LW version of this post]
Me:
This comment appears transparently intended to increase the costs associated with having written this post, and to be a continuation of the same strategy of attempting to suppress true information.
Martín Soto:
This post literally strongly misrepresents my position in three important ways¹. And these points were purposefully made central in my answers to the author, who kindly asked for my clarifications but then didn’t include them in her summary and interpretation. This can be checked by contrasting her summary of my position with the actual text linked to, in which I clarified how my position wasn’t the simplistic one here presented.
Are you telling me I shouldn’t flag that my position has been importantly misrepresented? On LessWrong? And furthermore on a post that will be seen by way more people than my original text?
¹ I mean the latter three points in my above comment, since the first (the hyperbolic presentation) is worrisome but not central.
Me:
You say that the quoted bits are misrepresentations, but I checked your writing and they seem like accurate summaries. You should flag that your position has been misrepresented iff that is true. But you haven’t been misrepresented, and I don’t think that you think you’ve been misrepresented.
I think you are muddying the waters on purpose, and making spurious demands on Elizabeth’s time, because you think clarity about what’s going on will make people more likely to eat meat. I believe this because you’ve written things like:
One thing that might be happening here, is that we’re speaking at different simulacra levels
Source comment. I’m not sure how familiar you are with local usage of the simulacrum levels phrase/framework, but in my understanding of the term, all but one of the simulacrum levels are flavors of lying. You go on to say:
Now, I understand the benefits of adopting the general adoption of the policy “state transparently the true facts you know, and that other people seem not to know”. Unfortunately, my impression is this community is not yet in a position in which implementing this policy will be viable or generally beneficial for many topics.
The front-page moderation guidelines on LessWrong say “aim to explain, not persuade”. This is already the norm. The norms of LessWrong can be debated, but not in a subthread on someone else’s post on a different topic.
Martín Soto:
Yes, your quotes show that I believe (and have stated explicitly) that publishing posts like this one is net-negative. That was the topic of our whole conversation. That doesn’t imply that I’m commenting to increase the costs of these publications. I tried to convince Elizabeth that this was net-negative, and she completely ignored those qualms, and that’s epistemically respectable. I am commenting mainly to prevent my name from being associated with some positions that I literally do not hold.
I believe that her summaries are a strong misrepresentation of my views, and explained why in the above comment through object-level references comparing my text to her summaries. If you don’t provide object-level reasons why the things I pointed out in my above comment are wrong, then I can do nothing with this information. (To be clear, I do think the screenshots are fairly central parts of my clarifications, but her summaries misrepresent and directly contradict other parts of them which I had also presented as central and important.)
I do observe that providing these arguments is a time cost for you, or fixing the misrepresentations is a time cost for Elizabeth, etc. So the argument “you are just increasing the costs” will always be available for you to make. And to that the only thing I can say is… I’m not trying to get the post taken down, I’m not talking about any other parts of the post, just the ones that summarize my position.
Me:
I believe that her summaries are a strong misrepresentation of my views, and explained why in the above comment through object-level references comparing my text to her summaries.
I’m looking at those quote-response pairs, and just not seeing the mismatch you claim there to be. Consider this one:
The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem.
Of course, my position is not as hyperbolic as this.
This only asserts that there’s a mismatch; it provides no actual evidence of one. Next up:
his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread
In my original answers I address why this is not the case (private communication serves this purpose more naturally).
Pretty straightforwardly, if the pilot study results had only been sent through private communications, then there would have been no public discussion of them (i.e., public discussion would have been suppressed). I myself wouldn’t know about the results. The probability of a larger follow-up study would be greatly reduced. And I personally would have less information about how widespread the problems are.
(There are other subthreads on the LW version; I quoted this one because I was a participant, and I do not believe the other subthreads substantially change the interpretation.)
Interesting. I think I can tell an intuitive story for why this would be the case, but I’m unsure whether that intuitive story would predict all the details of which models recognize and prefer which other models.
As an intuition pump, consider asking an LLM a subjective multiple-choice question, then taking that answer and asking a second LLM to evaluate it. The evaluation task implicitly asks the evaluator to answer the same question, then cross-check the results. If the two LLMs are instances of the same model, their answers will be more strongly correlated than if they’re different models; so they’re more likely to mark the answer correct if they’re the same model. This would also happen if you substitute two humans, or two sittings of the same human, in place of the LLMs.
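The intuition above can be made concrete with a toy simulation (not real LLMs): treat each "model" as a fixed preference distribution over the answer options, and say the evaluator marks an answer correct when its own independently sampled answer matches. The model names and probability values below are illustrative assumptions, not measurements of any actual system.

```python
import random

def make_model(probs):
    """A toy 'model': a fixed preference distribution over answer options."""
    options = range(len(probs))
    def answer(rng):
        # Sample one multiple-choice answer according to the model's preferences.
        return rng.choices(options, weights=probs)[0]
    return answer

def agreement_rate(answerer, evaluator, trials=10_000, seed=0):
    """Fraction of trials where the evaluator's own answer matches the
    answerer's, i.e. where the evaluator would mark the answer 'correct'."""
    rng = random.Random(seed)
    return sum(answerer(rng) == evaluator(rng) for _ in range(trials)) / trials

# Two hypothetical models with different preferences on one subjective question.
model_a = make_model([0.7, 0.1, 0.1, 0.1])  # prefers option 0
model_b = make_model([0.1, 0.7, 0.1, 0.1])  # prefers option 1

same_model = agreement_rate(model_a, model_a)   # expectation: sum of p_i^2 = 0.52
cross_model = agreement_rate(model_a, model_b)  # expectation: sum of p_i*q_i = 0.16
```

Because same-model agreement is the sum of squared preference probabilities while cross-model agreement is the dot product of two different preference vectors, the same-model rate comes out substantially higher here, matching the intuition that a model grading its own kind is a correlated, not independent, check.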