My poor epistemics led me astray, but weirdly enough, they also gained me some social points in EA circles. At the retreat, and at EA events afterwards, I was socially rewarded for telling people that I was a longtermist who cared about AI safety.
This is odd to me, because I have a couple of memories of feeling like sr EAs were not taking me seriously because I was being sloppy in my justification for agreeing with them. Though admittedly one such anecdote was pre-pandemic, and I have a few longstanding reasons to expect the post-pandemic community-builder industrial complex would not have performed as well as the individuals I'm thinking of.
Looking back on my early days interacting with EAs, I generally couldn't present well-justified arguments, and I did feel pressure at the time to agree on shaky epistemic grounds. Because I sometimes disagreed nevertheless, I suspect that some parts of the community were less accessible to me back then.
I'm not sure what hurdles would need to be overcome if you want EA communities to treat 'Agreement sloppily justified' and 'Disagreement sloppily justified' similarly.
I think both things happen, in different contexts: being socially rewarded just for saying you care about AI safety, and not being taken seriously because (it seems like) you have not thought it through carefully.
I dunno, I think it can be the case that sloppy reasoning and disagreeing with your conversational partner are penalized independently, or that there's an interaction effect between the two.
Especially in quick conversations, I can definitely see times where I'm more attuned to bad (by my lights) arguments for wrong (by my lights) conclusions than to bad arguments for what I consider to be right conclusions. This is especially true if "bad arguments for right conclusions" really just means people who don't actually understand the deeper arguments paraphrasing better arguments they've heard.
My experience is that it’s more that group leaders & other students in EA groups might reward poor epistemics in this way.
And that when people are being more casual, it 'fits in' to say AI risk & people won't press for reasons as much in those contexts, but they would push back if you said something unusual.
Agreed, my experience with senior EAs in the SF Bay was often the opposite: I was pressed to explain why I'm concerned about AI risk & to respond to various counterarguments.
Can confirm that:
“sr EAs [not taking someone seriously if they were] sloppy in their justification for agreeing with them”
sounds right based on my experience being on both sides of the “meeting senior EAs” equation at various times.
(I don’t think I’ve met Quinn, so this isn’t a comment on anyone’s impression of them or their reasoning)
I think that a very simplified ordering for how to impress/gain status within EA is: