When Gregory Lewis said to you that “If the objective is to persuade this community to pay attention to your work, then even if in some platonic sense their bar is ‘too high’ is neither here nor there: you still have to meet it else they will keep ignoring you,” he was arguing an ultimatum: “if we’re dysfunctional, then you still have to bow to our dysfunction, or we get to ignore you.” That has no standing in epistemics, and it is a bad-faith argument. If he entertained the possibility of his own community’s dysfunction with the same probability he asks you to assign to doubting your own work, he would realize that “you gotta toe the line, even if our ‘bar’ is nonsense” is just nonsense! And if they are in fact dysfunctional, Gregory Lewis is lounging in it!
The worst part is that, once their fallacies and off-hand dismissals are pointed out to them and they have no real refutation, they just go silent. It’s bizarre that they think they are behaving in a healthy, rational way. I suspect that many of them aren’t as competent as they hope, and they need to hide that fact by avoiding real analysis. I’d be glad to talk to any AI Safety folks in the Bay myself; I’ve been asking them since December of last year. When I presented my arguments, they waved them away without refutation, just as they have done to you.
Yes, agreed with the substance of your points (I try to be more diplomatic about this, but it roughly lines up with my impressions).
If the objective is to persuade this community to pay attention to your work, then even if in some platonic sense their bar is ‘too high’ is neither here nor there: you still have to meet it else they will keep ignoring you.
Rather than helping encourage reasonable evaluations in the community (i.e. no isolated demands for rigour when judging formal reasoning that long-term safe AGI is impossible, compared with intuitions that AGI safety is possible in principle), this is saying that a possibly unreasonable status quo is not going to change, and that people should therefore just adjust to the status quo if they want to make any headway.
The issue here is that the inferential distance is already large enough as it is, and in most one-on-ones I don’t get further than discussing basic premises before my interlocutor side-tracks the conversation or cuts it off. I was naive 11 months ago to believe that many people would actually dig into the reasoning steps with us, if we found a way to translate them into something nearer to Alignment Forum speak so they would be easier to comprehend and follow step by step.
In practice, I do think it’s correct that we need to work with the community as it is. It’s on us to find ways to encourage people to reflect on their premises and to detail and discuss the formal reasoning from there.
Continuing my response:
Also thinking of doing an explanatory talk about this!
Yesterday, I roughly sketched out the “stepping stones” I could talk about to explain the arguments: