But if you wanted to rank influences, I think EAs are influenced by media and popular thought just like everybody else. EA is not necessarily a refuge for unpopular beliefs. Focusing on it as a community that can resolve issues around motivated thinking or bias could be a mistake. EAs are as vulnerable as any other community of people to instrumental rationality, motivated thinking, and bias.
Isn’t applying rationality (and evidence, science, math, etc.) to charity EA’s basic mission? And therefore, if you’re correct about EA, wouldn’t it be failing at its mission? Shouldn’t EA be trying to do much better at this stuff instead of being about as bad as many other communities at it? (The status quo or average in our society, for rationality, is pretty bad.)
You don’t think that would address problems of updating in EA to some extent?
Do I think those techniques would address problems of updating in EA adequately? No.
Do I think those techniques would address problems of updating in EA to some extent? Yes.
The change in qualifier is an example of something I find difficult to make decisions about in discussions. It's meaningful enough to invert my answer, but I don't know that it matters to you, and I doubt it matters to anyone else reading. I could reply with topical, productive comments that ignore this detail. Is it better to risk getting caught up in details by addressing it, or better to try to keep the discussion making forward progress? Ignoring it risks making you feel ignored (without explanation), and the detail may have been important to your thinking. Speaking about it risks coming off as picky, pedantic, derailing, etc.
In general, I find there’s a pretty short maximum number of back-and-forths before people stop discussing (pretty much regardless of how well the discussion is going), which is a reason to focus replies only on the most important and interesting things. It’s also a reason I find those discussion techniques inadequate: they don’t address stopping conditions in discussions and therefore always allow anyone to quit any discussion at any time, due to bias or irrationality, with no transparency or accountability.
In this case, the original topic I was trying to raise is discussion methodology, so replying in a meta way actually fits my interests and that topic, which is why I’ve tried it. This is an example of a decision that people face in discussions which a good methodology could help with.
I think what sets the EA forum apart is the folks who choose to participate in it. A lot of posts go up here, and I like their focus (ethics, charity, AI, meta stuff about thinking).
I doubt there's enough interest to persuade folks to create and maintain a system of accountability for all arguments posted on the forum or added to its pool of literature. But there is a tendency here to quote others' work, which enables peer review and building on earlier work, so there's some continuity of knowledge development that you don't always find elsewhere. Also, some posts show academic rigor, which has its pluses. And while relying on expert opinion on controversial topics isn't going to lead to consensus, it at least positions an author within a larger field of perspectives well enough that debates have a well-known starting point.
My contest entry wasn't about that sort of continuity or any system of building consensus. Fwiw, here is my contest entry. Like I said, it was about updating, but it makes some other points about unweighted beliefs vs. subjective probabilities, prediction, and EA guilt. Most of it was written in a short two-day stretch just before the entry was due, and there was a lot I wanted to improve during the month I waited for results. I've still got some changes to make, then I'll be done with it.