However, if EA were currently doing all of them, I’d still have asked the same questions. I don’t see them as adequate to address the problem I’m trying to raise.
Really? All of them?
QA models in use by people posting comments
informal argument outlining
post content specialized by purpose
automated and expert content tagging
bots patrolling post content
(in future) AI analyzing research quality
You don’t think that would address problems of updating in EA to some extent? You could add a few more:
automated post tagging (the forum suggests tags, with expert judgement added when a post is untagged) NOTE: this is not content tagging, which marks up individual words and sentences
suggested sources or bibliographies (the forum now provides lists of posts tagged with a specific tag; this would go further, guiding post authors to existing content before a post is published; see the sketch just below)
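To make those two additions concrete, here’s a minimal sketch in Python. Everything in it (the names, the keyword heuristic) is my invention, standing in for whatever models the forum might actually use:

```python
# Hypothetical sketch of automated post tagging plus suggested bibliographies.
# The keyword heuristic is a stand-in for a real tagging model.
TAG_KEYWORDS = {
    "ai-safety": {"agi", "alignment", "ai"},
    "global-health": {"malaria", "bednets", "deworming"},
    "epistemics": {"updating", "credence", "bayesian"},
}

def suggest_tags(post_text: str) -> list[str]:
    """Suggest tags for an untagged post; an expert confirms or rejects them."""
    words = set(post_text.lower().split())
    return [tag for tag, keywords in TAG_KEYWORDS.items() if words & keywords]

def suggest_sources(draft_text: str, posts_by_tag: dict[str, list[str]]) -> list[str]:
    """Before publication, point the author at existing posts under each suggested tag."""
    sources: list[str] = []
    for tag in suggest_tags(draft_text):
        sources.extend(posts_by_tag.get(tag, []))
    return sources
```

The point is just the workflow: tags get suggested automatically, an expert signs off, and the same tags drive the pre-publication reading list.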
Suppose there were some really big premise (uh, crux) that a bunch of people were focused on. They could have their high-tech and grueling argument. Then I suppose they could record the argument’s conclusion and add it to some collection of arguments for/against some EA norm or principle or risk estimate or cause evaluation. Then I guess EA folks could heed the arguments and follow some process to canonicalize the conclusions as a set of beliefs. They would end up with a “bible” of EA principles, norms, etc., maybe with a history feature to show updates over time.
There might be some kind of vote or something for some types of updates; that would be very EA. Voters could come from the larger pool of EA: anyone committed to reviewing arguments (probably reading summaries) would get to vote on some canon updates. It would be political, and there’d be plenty of leading by status and charisma, and plenty of motivated thinking, but it would be cool.
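A rough sketch of the data shape I’m imagining for that canon, with invented names and an oversimplified majority vote:

```python
# Hypothetical sketch of a canon entry with a history feature and vote-gated updates.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CanonEntry:
    claim: str                  # an EA norm, principle, risk estimate, etc.
    supporting_args: list[str]  # links to the recorded arguments for/against
    history: list[tuple[date, str]] = field(default_factory=list)  # superseded versions

    def update(self, new_claim: str, votes_for: int, votes_against: int) -> None:
        """Canonicalize a new version only if the reviewers' vote carries."""
        if votes_for > votes_against:
            self.history.append((date.today(), self.claim))
            self.claim = new_claim
```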
There are various people (Singer, Ord, Galef, and MacAskill are just a few) with varying degrees of influence over EA thought. But as far as top-down decisions go, my bottom line on EA is that EA the movement is not under anyone’s control, while EA the career might involve some conflicts of interest. In that sense, there’s top-down (money-driven) influence.
But if you wanted to rank influence, I think EAs are influenced by media and popular thought just like everybody else. EA is not necessarily a refuge for unpopular beliefs. Focusing on it as a community that can resolve issues around motivated thinking or bias could be a mistake. EAs are as vulnerable as any other community of people to instrumental rationality, motivated thinking, and bias.
Current EA trends include:
framing beliefs inside a presumption of mathematical uncertainty
Bayesianism has not added clarity or rigor to the updating of beliefs in EA.
suffering technological determinism more than some
EAs unrealistically rely on technology to solve climate change, wealth inequality, etc.
harboring a strange techno-utopian faith in the future
Longtermism offers implausible visions of trillions of happy AGI or AGI-managed people.
ignoring relevant political frames or working bottom-up with their charitable efforts
Neglectedness in the ITN framework doesn’t account for political causes or the feedback effects of aid work.
ignoring plausible near-term (2020-2050) climate change impacts on their work
Charitable impact could be lost in developing countries as climate pressures rise.
accepting politicized scientific, medical, and economic models without argument
“Because science” is fine, except when the science is politicized or distorted.
believing in the altruistic consequences of spending money on charities
EAs offset personal behavior with donations because they believe in donation impact.
ignoring ocean health, resource depletion, the sixth great extinction, etc.
EAs are not alone in this; terrestrial climate change gets most of the environmental news coverage.
ignoring the presence of, and proper context for, self-interest in decision-making
AFAIK, EAs haven’t really addressed what role selfishness has, and should have, in life.
Julia Galef’s work on instrumental versus epistemic rationality, and her scout vs. soldier mindset model, is good for navigating the terrain you want to survey, Elliot. I recommend it. She is part of the EA community at large. I keep a list of independent thinkers whose work I should follow, and she’s on it.
In addition, a friend once told me that I should be sure to enjoy the journey as well as set a destination, since there’s no guarantee of ever reaching my destination. I joined the forum to make a submission to one of their contests. My contest submission was basically about EA updating. My main observation of EA updating is that changes in belief do not reflect increasing constraints on an original belief as new evidence appears. Rather, an EA belief is just evaluated for credence as is, with confidence in it waxing or waning. EA does not appear to set out systematic methods for constraining beliefs, which is too bad.
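A toy contrast between the two styles of updating, oversimplified and with invented examples:

```python
# (1) Credence-style updating: the belief itself never changes; only a
# confidence number attached to it waxes and wanes.
credence = 0.6
credence = min(1.0, credence + 0.1)   # favorable evidence arrives
credence = max(0.0, credence - 0.2)   # unfavorable evidence arrives

# (2) Constraint-style updating: evidence rules out versions of the belief,
# so what remains becomes more specific over time.
candidates = {"aid always works", "aid works with local oversight", "aid never works"}

def constrain(candidates: set[str], ruled_out: set[str]) -> set[str]:
    """New evidence removes candidates instead of nudging a single number."""
    return candidates - ruled_out

candidates = constrain(candidates, {"aid never works"})  # the belief narrows
```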
Isn’t applying rationality (and evidence, science, math, etc.) to charity EA’s basic mission? And therefore, if you’re correct about EA, wouldn’t it be failing at its mission? Shouldn’t EA be trying to do much better at this stuff instead of being about as bad as many other communities at it? (The status quo or average in our society, for rationality, is pretty bad.)
Do I think those techniques would address problems of updating in EA adequately? No.
Do I think those techniques would address problems of updating in EA to some extent? Yes.
The change in qualifier is an example of something I find difficult to make decisions about in discussions. It’s meaningful enough to invert my answer, but I don’t know that it matters to you, and I doubt it matters to anyone else reading. I could reply with topical, productive comments that ignore this detail. Is it better to risk getting caught up in details to address it, or better to try to keep the discussion making forward progress? Ignoring it risks leaving you feeling ignored (without explanation), or overlooking a detail that was important to your thinking. Speaking about it risks coming off as picky, pedantic, derailing, etc.
In general, I find there’s a pretty short maximum number of back-and-forths before people stop discussing (pretty much regardless of how well the discussion is going), which is a reason to focus replies only on the most important and interesting things. It’s also a reason I find those discussion techniques inadequate: they don’t address stopping conditions in discussions and therefore always allow anyone to quit any discussion at any time, due to bias or irrationality, with no transparency or accountability.
In this case, the original topic I was trying to raise is discussion methodology, so replying in a meta way actually fits my interests and that topic, which is why I’ve tried it. This is an example of a decision that people face in discussions which a good methodology could help with.
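For what it’s worth, here’s a minimal sketch of the kind of transparency I mean, where quitting a discussion is still allowed but the stopping condition becomes part of the record. All the names and the list of reasons are my inventions:

```python
# Hypothetical sketch: a discussion may only end via a declared, logged reason.
from dataclasses import dataclass, field

ALLOWED_REASONS = {"resolved", "agreed to disagree", "out of time", "impasse declared"}

@dataclass
class Discussion:
    topic: str
    log: list[str] = field(default_factory=list)
    ended_by: str | None = None

    def quit(self, who: str, reason: str) -> None:
        """Anyone can quit, but only for a declared reason, and it gets recorded."""
        if reason not in ALLOWED_REASONS:
            raise ValueError(f"undeclared stopping condition: {reason!r}")
        self.ended_by = f"{who} ({reason})"
        self.log.append(self.ended_by)
```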
Sounds interesting. Link please.
I think what sets the EA forum apart is the folks who choose to participate in it. A lot of posts go up here, and I like their focus (ethics, charity, AI, meta stuff about thinking).
I doubt there’s enough interest to persuade folks to create and maintain a system of accountability for all arguments put on the forum or into their pool of literature. But there is a tendency here to quote others’ work, which lets people do peer review and build on earlier work, so there’s some continuity of knowledge development that you don’t always find. Also, sometimes posts show academic rigor, which can have its pluses. And while relying on expert opinion on controversial topics isn’t going to lead to consensus, it at least positions an author in a larger field of perspectives, enough for debates to have a well-known starting point.
My contest entry wasn’t about that sort of continuity or any system of building consensus. Fwiw, here is my contest entry. Like I said, it was about updating, but it makes some other points about unweighted beliefs vs subjective probabilities, prediction, and EA guilt. Most of it was written in a short two-day stretch just before the entry was due, and there was a lot I wanted to improve over the next month as I waited for results to come back. I’ve still got some changes to make; then I’ll be done with it.