Concerns with Intentional Insights
A recent Facebook post by Jeff Kaufman raised concerns about the behavior of Intentional Insights (InIn), an EA-aligned organization headed by Gleb Tsipursky. In the discussion arising from this, a number of further concerns were raised.
This post summarizes the concerns found with InIn. It also notes some accusations that turned out to be mistaken or unfounded, and facts that arose which reflect well on InIn.
This post was contributed to by Jeff Kaufman, Gregory Lewis, Oliver Habryka, Carl Shulman, and Claire Zabel. They disclose relevant conflicts of interest below.
Outline
1. Exaggerated claims of affiliation or endorsement
1.1. Kerry Vaughan of CEA
1.2. Giving What We Can (GWWC)
1.3. Animal Charity Evaluators (ACE)
2. Astroturfing
2.1. The Intentional Insights blog
2.2. The Effective Altruism forum
2.3. LessWrong
2.4. Facebook
2.4.1. Soliciting upvotes and denying it
2.4.2. Not disclosing paid support
2.5. Amazon
3. Misleading figures
4. Dubious practices
4.1.1. Paid contractors' expected "volunteering"
4.1.2. Further details regarding contractor "volunteering"
4.2. "Best-selling author"
5. Inflated social media impact
5.1. Facebook
5.2. The Life You Can Save donations
5.3. Twitter
5.4. Pinterest
5.5. Presentations of media article traffic and reach
5.5.1. TIME article
5.5.2. Huffington Post
6. Mistaken/Unfair accusations
6.1. Supposed linearity of Twitter follower increase
6.2. Objections to Intentional Insights staff "liking" Intentional Insights content
6.3. "Paid likes" from clickfarms
7. Positives
7.1. Jon Behar
7.2. Additional donations
7.3. Placement of articles in TIME and the Huffington Post
8. Policy responses from InIn
8.1. Post-criticism conflict-of-interest policy
8.2. Post-criticism Facebook boosting
9. Disclosures
10. Response comments from Gleb Tsipursky
1. Exaggerated claims of affiliation or endorsement
Intentional Insights claims "active collaborations" with a number of Effective Altruist groups in its Theory of Change document, which was on its "About" page (August 21, 2016).
In a number of cases InIn makes use of the name of an effective altruist organization without asking for that organization's consent, based on minor interactions such as the organization answering questions about web traffic. From the "Effective Altruism impact of Intentional Insights" document (August 19, 2016):
As detailed below, after learning of such claims and uses of their names, some of these groups asked InIn to stop. Yet even in some of these cases InIn had not altered the mentions in its promotional materials months later. Tsipursky also does not appear to have adopted a practice of checking with organizations before using their names in InIn promotional materials.
1.1. Kerry Vaughan of CEA
Tsipursky previously posted notes from a Skype conversation with Kerry Vaughan without his consent, and suggested he had endorsed Intentional Insights where he had not:
Tsipursky later apologized, edited the post, and said he had updated. Yet he later engaged in similar behavior (see sections 1.2 and 1.3 below).
1.2. Giving What We Can (GWWC)
Gleb has taken the Giving What We Can pledge, and contributed an article to the Giving What We Can blog on December 23, 2015. He also mentioned and linked to GWWC in his articles elsewhere.
Michelle Hutchinson, Executive Director of Giving What We Can, wrote to Tsipursky in May 2016 asking him to cease "claiming to be supported by Giving What We Can." However, the use of Giving What We Can's name as an "active collaboration" was not removed from Intentional Insights' website, and remained in both of the above InIn documents as of October 15, 2016.
1.3. Animal Charity Evaluators (ACE)
In the InIn impact document Tsipursky quotes Leah Edgerton of ACE:
Erika Alonso of ACE subsequently made the following statement:
2. Astroturfing
Astroturfing is giving the misleading impression of unaffiliated ("grassroots") support. In GiveWell's first year its cofounders engaged in astroturfing, and this was taken very seriously by its board. Among other responses, the GiveWell board demoted one of the co-founders and fined both $5,000 each. Tsipursky expressly claimed not to engage in astroturfing:
However, astroturfing is widespread across the Intentional Insights social media presence (documented in the sections below). Tsipursky did qualify his statement with "we are not asking people to do these sorts of activities in their paid time", but lack of payment isn't enough to prevent misleading people about the nature of the support. In any case, the distinction between contractors' paid and unpaid time is blurry (see section 4.1.1).
2.1. The Intentional Insights blog
Paid contractors for Intentional Insights leave complimentary remarks on the Intentional Insights blog, and the Intentional Insights account replies with gratitude, as if the comments were by strangers. At no stage do they disclose the financial relationship that exists between them. In the screenshot below (source), Candice, John, Beatrice, Jojo, and Shyam are all Intentional Insights contractors.
The most recent examples of this happened in late August 2016, after the initial post and discussion with Tsipursky on Jeff's Facebook wall, and during the drafting of this document.
2.2. The Effective Altruism forum
Tsipursky has done the same thing on the Effective Altruism forum. Here is one instance (note that "Nyor" also goes by "Jojo"):
Here is another example (note that "Anthonyemuobo" is a professional handle used by one of Tsipursky's acknowledged contractors, "Sargin"):
2.3. LessWrong
Tsipursky posted a link to some writing by his wife and InIn co-founder in February 2016, without noting this connection:
This is a minor lapse, and one Gleb claimed to have learned and updated from. Yet similar behavior continued:
In March 2016, Intentional Insights' contractors created accounts and started posting non-specific praise on Tsipursky's LessWrong posts:
These are all people Tsipursky pays, but none of them acknowledged it in their comments or their posts in the welcome thread. Additionally, Tsipursky did not acknowledge this relationship when he thanked them for their remarks.
LessWrong user gjm pointed out that this was misleading, and Tsipursky acknowledged this was a problem and commented on Sargin's welcome post:
While Tsipursky knew both Beatrice Sargin and Alex Wenceslao had posted similar comments, since he had replied to them, he waited for these to be discovered and pointed out before acting:
This happened a third time, with JohnC2015:
2.4. Facebook
2.4.1. Soliciting upvotes and denying it
Tsipursky claimed "when I make a post on the EA Forum and LW I will let people who are involved with InIn know about it, for their consideration, and explicitly don't ask them to upvote":
In the comment Tsipursky denies soliciting upvotes, and demands that accusations that he did be substantiated or withdrawn. Six hours later someone responded with a screenshot of a post Tsipursky had made to the Intentional Insights Insiders group showing Tsipursky soliciting upvotes:
Tsipursky's response, a couple of hours later in the same thread:
Tsipursky either genuinely believed posts like the above do not ask for upvotes, or he believed statements that are misleading on common-sense interpretation are acceptable providing they are arguably "true" on some tendentious reading. Neither is reassuring. [He subsequently conceded this was "less than fully forthcoming".]
2.4.2. Not disclosing paid support
Intentional Insights proposed producing EA T-Shirts, and received multiple criticisms. Tsipursky claimed he had run the design by multiple people. Again, Tsipursky did not disclose that at least five of them were people he pays:
2.5. Amazon
One of Tsipursky's contractors posted a 5-star review for his self-help book on Amazon without disclosing the affiliation:
Tsipursky emailed copies of his self-help book to Intentional Insights volunteers, including contractors, who responded by posting 5-star reviews on Amazon:
He later followed up with:
This is true but incomplete: the 8th review is by Asraful Islam, a volunteer affiliated to Intentional Insights.
Another Intentional Insights affiliate, Elle Acquino (unpaid at that time but now a paid virtual assistant), posted a further 5-star review, which is not in the top 10. In that review, however, the connection to Tsipursky and his nonprofit institute was disclosed.
3. Misleading figures
In December 2015 and January 2016, Tsipursky repeatedly claimed that his articles were shared thousands of times as evidence of the effectiveness of his approach. In fact, he had been reporting Facebook "likes" and all views on StumbleUpon as shares, greatly exaggerating the extent of social media engagement.
The initial point reflected a common issue with the interpretation of social media activity counters on websites. After this was explained to him, Tsipursky claimed to have updated on the correction. However, a June 2016 document on Intentional Insights' Effective Altruism impact again reported views as shares, exaggerating sharing by many times.
4. Dubious practices
4.1.1. Paid contractors' expected "volunteering"
Tsipursky only takes on contractors who spend at least two hours "volunteering" for Intentional Insights for each paid hour:
In a follow-up discussion, Tsipursky suggested that contractors could temporarily reduce their volunteer hours in special circumstances, but he would not affirm that contractors would be allowed to simply say no to "volunteering":
Depending on the nature of the volunteer work, this requirement seems potentially unethical, effectively requiring that contractors do three times as much work for a fixed amount of money. We also suggest this relationship undermines the distinction Tsipursky offers between "paid" and "volunteer time", and the defence that the promotion his contractors undertake on his behalf is innocuous because it occurs in their "volunteer time".
4.1.2. Further details regarding contractor "volunteering"
Subsequent to the preparation of the above section, Tsipursky provided additional information about how he came into contact with contractors, their donations, prior unpaid volunteering, wages, and other matters, as evidence of genuine support. The additional information does provide such evidence, but it also supports concerns regarding the linkage of paid and unpaid work and contractors' financial interests.
Tsipursky states the following regarding initial meetings and hiring:
Tsipursky stated the following regarding the length of unpaid volunteering prior to the first paid work:
He also notes donations by contractors, implemented by reducing their paid hours or paid hour wage rate, as evidence of genuine support:
I have pointed out many times that there is plenty of evidence showing that those folks who do contracting are passionate enthusiasts for InIn. Let's take the example of John Chavez, who the document brought up. He chose to respond to a fundraising email to our supporter listserve in June 2016, long before Jeff Kaufman's original post, by donating $50 per month to InIn out of his $300 monthly salary: This is bigger than a typical GWWC member, at over 15% of his annual income. Let me repeat: he voluntarily, out of his own volition in response to a fundraising that went out to all of our supporters, chose to make this donation. Just to be clear, we send out fundraising letters regularly, so it's not like this was some special occasion. It was just that, as he said in the letter, it happened to fall on the 1-year anniversary of him joining InIn and he felt inspired and moved by the mission and work of the organization to give.
Before you go saying John is unique, here is another screenshot of a donation from another contractor who in October 2015, in response to a fundraising email, made a $10/month donation:
Again, voluntarily, out of her own volition, she chose to make this donation.
Tsipursky also indicates that paid and unpaid hours by contractors constitute only a minority of work hours at InIn, with most hours contributed by volunteers without financial compensation:
Regarding wages and requirements/expectations of unpaid volunteering, Tsipursky wrote the following:
The Upwork (formerly oDesk) freelancer marketplace on which contractors are hired has a minimum wage of $3.00 per hour. Combined with the expected unpaid volunteering, the typical wage would be $1.00 per combined hour, one third of the platform's minimum.
John is given as an example of a higher-paid contractor at $7.50 per hour. However, this is combined with 3 hours of unpaid volunteering for each paid hour, rather than 2, for a combined wage of $1.875 per hour, prior to his donation of one third of that wage.
In effect, the expectation of volunteering systematically circumvents the Upwork minimum wage for contractors. However, it should be noted that the Upwork minimum wage is a corporate policy, not a national or local labor law. Contractors in low-income countries may be earning substantially more than the local minimum wages or average incomes. For example, according to Wikipedia the hourly minimum wage in US dollars at nominal exchange rates is $0.54 in Nigeria. In the Philippines minimum wages vary by location and sector, but Wikipedia lists a range of roughly $0.60-$1.20 per hour for non-agricultural workers, with the higher figure applying in the capital, Manila. So the wage per combined (paid + volunteer) hour of work would not appear to conflict with legal minimum wages in contractors' jurisdictions. Furthermore, in a number of these jurisdictions the minimum wage is closer to the median wage, and unemployment is high.
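For readers who want to check the arithmetic above, here is a minimal illustrative sketch in Python. The figures are the ones quoted in this section; the function name and structure are ours, purely for illustration, and not anything InIn or Upwork provides.

```python
def wage_per_combined_hour(paid_rate, unpaid_hours_per_paid_hour, donated_fraction=0.0):
    """Effective wage per (paid + unpaid) hour of work, optionally after donations."""
    combined = paid_rate / (1 + unpaid_hours_per_paid_hour)
    return combined * (1 - donated_fraction)

print(wage_per_combined_hour(3.00, 2))       # Upwork minimum with 2 unpaid hours per paid hour -> 1.0
print(wage_per_combined_hour(7.50, 3))       # John's quoted rate with 3 unpaid hours per paid hour -> 1.875
print(wage_per_combined_hour(7.50, 3, 1/3))  # after donating a third of that wage -> ~1.25
```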
Regarding the link between paid and unpaid hours, Tsipursky describes it as an informal understanding:
In aggregate, the additional statements provide evidence of pre-existing support for InIn from new contractors. However, they also confirm a linkage of paid and unpaid labor, and contractor financial interests in promotional activity occurring during "volunteer" hours.
4.2. "Best-selling author"
Tsipursky includes being a "best-selling author" in his standard bio. For example, on his Patreon:
And:
And on his Amazon author page:
Normally, a reader would take "best-selling author" to mean hitting a major best-seller list like the New York Times, which indicates that very many people have decided to buy the book, and is a hard signal to fake. In Tsipursky's case, "best-selling author" means that his book was very briefly the top seller in a sub-sub-category of Amazon. Further, he reports offering his book for free and encouraging friends and contractors to download and review it. In its first two weeks the book sold 50 copies at $3 each. Cumulatively it has sold 500 copies at $3 each, and been downloaded 3,500+ times free. In contrast, NYT bestseller status requires thousands of sales over the first week. Amazon bestseller status is calculated hourly by category: in small categories three purchases in an hour can win the #1 bestselling author label.
Many of those giving the book 5-star reviews are social contacts of Tsipursky, some of them paid or volunteer Intentional Insights staff, but they do not disclose this association (see section 2.5).
As of August 22, 2016 the book is ranked as follows:
In light of this, calling oneself a "bestselling author" on the basis of this sort of performance is potentially misleading.
We note that the practice of claiming bestselling author status using bestseller lists that involve very small actual sales may be widespread. This does not, however, prevent it from being misleading or controversial. For example, when Brent Underwood attained Amazon best-seller status by spending a few dollars in less than an hour on a book that was simply a picture of his foot, media coverage generally suggested that this highlighted a problematic practice.
5. Inflated social media impact
5.1. Facebook
Tsipursky has cited social media engagement as evidence of impact. However, in many cases this engagement appears to be illusory. In the case of Facebook, it appears to have resulted from paid Facebook post boosting, which led to hundreds of likes on posts from clickfarms, in a process described by Veritasium: clickfarm accounts "like" enormous numbers of things they have not been directly paid to like in order to manipulate Facebook's algorithms. Facebook boosting systematically attracts these clickfarm accounts, a risk which is exacerbated by boosting to regions where clickfarms are located (although clickfarms also have fake accounts purporting to be from all around the world).
In the case of InIn posts, InIn paid for that boosting. In February 2016, Tsipursky argued that this was resulting in genuine engagement and reach:
For a number of InIn blog posts with large numbers of likes (for example, 318 for this recent one), these likes appear to be primarily the result of clickfarms. Accounts liking this post like vast numbers of disparate things. Here are some random selections from the middle of that post's list of likes:
There is further circumstantial evidence: the likes are often from accounts in low-income countries with substantial clickfarm operations. Tsipursky defended this as coincidental overlap caused by Intentional Insights' targeting of low-income countries; however, countries with similar demographics but without large clickfarm operations are not well represented.
In arguing for the impact of his writing, Tsipursky cited a post on the TLYCS blog that got 500 likes in its first day, while typical posts got 100-200 likes:
However, this also appears to be a case of Facebook ad boosting eliciting engagement from clickfarms, this time paid for by a former TLYCS employee (subsequently asked to stop by TLYCS) rather than by InIn, according to this statement from TLYCS' Jon Behar:
The profiles contributing the likes show no other engagement with TLYCS, or with EA ideas:
After Jeff Kaufman raised concerns about the pattern of Facebook likes in February 2016, Tsipursky does not seem to have looked into the issue further prior to the August 2016 discussion, when outside observers provided indisputable evidence and explained the role of boosting in generating clickfarm likes. While the boosting-clickfarm link is counterintuitive, the lack of any other engagement by the clickfarmers was apparent both before and after the raising of concerns in February. Failure to examine the ineffectiveness of these social media channels, even after concerns were raised, raises questions about InIn's practices as an outreach and content marketing organization.
5.2. The Life You Can Save donations
In his "Effective Altruism impact of Intentional Insights" document (archived copy), Tsipursky claims that content he has published with The Life You Can Save is able to "regularly reach an audience of over 5,000, at least 12% of whom make a donation", suggesting over 600 donations per article, based on a reference letter from a former TLYCS employee. However, these figures were incorrect, and TLYCS estimates that the total number of visitors who landed on Tsipursky's blog posts at the TLYCS blog was ~3,000 (rather than tens of thousands), with donations directly from those pages likely totalling 2-3 (rather than hundreds).
While the reference letter Tsipursky cites could easily give that false impression, it is implausible in light of other information available to him about the impact of his pieces. For example, Tsipursky also cites an article in a major news outlet as producing two donations to GiveDirectly totalling $500:
Since two donations is far less than ~600, this "12% of 5,000 views" figure was clearly not sanity-checked before being used to argue the case for Intentional Insights to EAs and in a fundraising document aimed at EAs. It's possible that Tsipursky simply took a surprisingly good estimate from a partner organization at face value, but one might expect an expert in marketing to investigate why this channel was performing so much better than his other channels.
5.3. Twitter
Tsipursky implied that his 10k Twitter followers represent organic interest:
The InIn account is following approximately as many accounts as follow it, 11.7k to 11.4k. Oliver observed that many of these accounts have "100% follow-back" in their descriptions. It seems they are offering an exchange: InIn adds these accounts as followers and they follow back in return, or vice versa. This is not an indication of actual interest from fans, and these accounts have almost no organic engagement with InIn such as retweets:
5.4. Pinterest
InIn follows over 20,000 people on Pinterest, far more people than follow it. As on the InIn Facebook page and Twitter, follower engagement is extremely low, and dominated by persons affiliated with InIn, suggesting the vast majority of followers are not genuine.
Examining the profiles of followers, there appears to be a very high rate of clickfarm/advertising accounts. Here are 10 randomly selected InIn Pinterest follower accounts. 10 out of 10 appear to be spam/advertising/clickfarm accounts:
5.5. Presentations of media article traffic and reach
5.5.1. TIME article
In the InIn EA impact document we see this:
The document does not make clear that the article did not appear in the print magazine, so print readers would not have been exposed to it there. Online, we are left to anchor on a figure of 65 million views, without any reference to the actual views of the article (which were far lower).
Somewhat later in the document we see this:
As another example, here are numbers in a spreadsheet we set up recently to track clicks to EA nonprofit websites from the Time piece we published.
However, while the article made the case for GiveWell-recommended charities and EA charity evaluators, only 132 clicks reached those organizations through the article, 70 of which did not immediately bounce, according to InIn's traffic figures. Specifically, in the original InIn spreadsheet the "signed up to newsletter or converted in other ways" column had a value of 13 for ACE, and 1 "clicked on donate button".
The corrected spreadsheet shows a value of 2 rather than 13 for "signed up to newsletter."
Thus InIn knew that the product of traffic and click-through was very low, suggesting some combination of low traffic for a piece on TIME's website and low click-through rates. However, this negative information was removed from the main text of the document while the 65 million figure (for all articles on the TIME website, including dubious traffic) was made prominent.
5.5.2. Huffington Post
The InIn EA impact document also included this discussion of a Huffington Post article:
However, he provided no evidence of reaching new audiences via the placement in the Huffington Post. Instead, he provided an example of an already supportive Facebook friend, who apparently encountered the article from Tsipursky's Facebook page, not the Huffington Post.
6. Mistaken/Unfair accusations
6.1. Supposed linearity of Twitter follower increase
It was suggested that Tsipursky's Twitter page shows surprisingly linear increases in followers over time (e.g. +8 followers a day for 10 days in a row), which may be indicative of click-farming. This piece of evidence is likely mistaken, as the tool used (sharecounter) probably linearly interpolates days on which it does not record a user's Twitter followers, and thus the apparent linearity is an artifact, as illustrated in the sketch below.
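A minimal sketch of how interpolation alone can produce this pattern (illustrative only: the numbers mirror the +8-per-day example above, and we do not know sharecounter's actual method):

```python
import numpy as np

# If a tracker records follower counts only occasionally and linearly interpolates
# the gaps, the filled-in days show a perfectly constant daily increase even though
# the underlying growth may have been lumpy.
recorded_days = np.array([0, 10])          # days with actual recorded counts
recorded_counts = np.array([1000, 1080])   # +80 followers over 10 days, however distributed

all_days = np.arange(0, 11)
interpolated = np.interp(all_days, recorded_days, recorded_counts)
print(np.diff(interpolated))               # -> [8. 8. 8. 8. 8. 8. 8. 8. 8. 8.]
```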
6.2. Objections to Intentional Insights staff "liking" Intentional Insights content
In the course of the original discussion of Jeff's post on Facebook, numerous people took exception to staff or volunteers "liking" or supporting InIn content. This criticism is misguided: this is common practice both for nonprofits generally and within the EA community, where many EAs affiliated with a given group "like" or share content without disclosing their affiliation. Although issues around appropriate disclosure can be subtle, acts like this on social media do not, on reflection, seem to the authors of this document significant enough to warrant disclosure of interests.
6.3. "Paid likes" from clickfarms
In the February 2016 discussion it was suggested that Tsipursky might be directly paying for likes from clickfarms. However, as discussed in section 5.1, while the likes in question appear to have resulted from paid Facebook boosting, and to be from clickfarms, they were not directly paid for. Instead, the boosting attracted clickfarm likes through an accidental process explained well in the linked Veritasium video.
7. Positives
In the course of research into and discussion around InIn, some facts that reflect well on InIn were discovered. These are listed below. We don't think this comprises all evidence favourable to InIn: the impact document, Tsipursky's post on the EA forum, and the Intentional Insights website offer further evidence. (We have not looked at these closely enough to have a view on them.)
7.1. Jon Behar
One TLYCS employee who has worked with Tsipursky on Giving Games says Tsipursky has made helpful introductions:
Behar is also quoted in the InIn EA impact doc as saying:
7.2. Additional donations
TLYCS has information indicating that Tsipursky's posts combined drove about two or three donations, and the Huffington Post article resulted in two donations to GiveDirectly totaling $500. Tracking donations is hard, so this is definitely an underestimate.
7.3. Placement of articles in TIME and the Huffington Post
Tsipurskyâs articles in TIME and the Huffington Post got lots of exposure for EA ideas. Additionally, being able to get articles placed there is impressive.
8. Policy responses from InIn
During discussions with Tsipursky regarding drafts of this document he mentioned some InIn policy changes made in response to the criticisms. This section does not reflect any other changes InIn may have made, primarily because we haven't been able to put in the time to follow up on each practice and see whether it has continued. We also note that Tsipursky provided additional information regarding Amazon sales, contractor names, and payment practices upon request for this document.
8.1. Post-criticism conflict-of-interest policy
Following the discussion under Jeff Kaufman's post in August 2016, InIn created a conflicts of interest policy document:
8.2. Post-criticism Facebook boosting
Tsipursky now states:
Regarding InIn social media policy, we are making sure to avoid boosting any more posts to clickfarm countries. We're generally not boosting posts right now to anyone but fans of the page who live in the US and other rich countries. We found we couldn't ban identifiable clickfarm accounts from the FB page, unfortunately, so we're being really cautious about boosting posts.
9. Disclosures
Many people contributed to this document, some of them anonymously. Below are disclosures from people who contributed substantially and want to be clear about any potential conflicts of interest. None of the individuals below contributed on behalf of an employer or organization, and their contributions should not be taken to imply any stance on the part of any organization with which they are affiliated.
- Jeff Kaufman has donated to the Centre for Effective Altruism (CEA), 80,000 Hours, and Giving What We Can. He has volunteered for Animal Charity Evaluators in a very minor capacity. His wife, Julia Wise, works for CEA and serves on the board of GiveWell.
- Gregory Lewis has previously worked as a volunteer for Giving What We Can and 80,000 Hours. He has donated to Giving What We Can and the Global Priorities Project.
- Oliver Habryka currently works for CEA, and has been active in EA community organizing in a variety of roles.
- Carl Shulman currently works for the Future of Humanity Institute, and consults for the Open Philanthropy Project. He previously worked for the Machine Intelligence Research Institute (MIRI). He has previously done some consulting and volunteering for the Center for Effective Altruism, especially 80,000 Hours. His wife is executive director of the Center for Applied Rationality and a board member of MIRI.
- Claire Zabel works at the Open Philanthropy Project, and serves on the board of Animal Charity Evaluators. She has donated to a variety of EA organizations and has close ties with other people in the EA community.
10. Response comments from Gleb Tsipursky
Tsipursky has responded in the comments below: part one, part two, part three.
My fellow contributors and I aimed in this document to have as little of an "editorial line" as possible: we were not all in complete agreement on what this should be, so thought it better to discuss the appropriate interpretation of the data we provide in the comments. I offer mine below: in addition to the disclaimers and disclosures above, I stress I am speaking for myself, and not on behalf of any other contributor.
I believe InIn and Tsipursky are toxic to the EA community. I strongly recommend that EAs do not spend time or money on InIn going forward, nor on any future projects Tsipursky may initiate. Insofar as there may be ways for EA organisations to insulate themselves from InIn, I urge them to avail themselves of these opportunities.
A key factor in this extremely adverse judgement is my extremely adverse view of InIn's product. InIn's material is woeful: a mess of misguided messaging (superdonor, the t-shirts, "effective giving" versus "effective altruism", etc. etc.), crowbarred-in aspirational pop-psychology "insights", tacky design and graphics, and oleaginous self-promotion seeping through wherever it can (see, for example, the free sample of Gleb's erstwhile "Amazon bestseller"). Although mercifully little of InIn's content has anything to do with EA, whatever does reflects poorly on it (cf. prior remarks about people collaborating with Tsipursky as a damage-limitation exercise). I have yet to meet an EA with a view of InIn's content better than mediocre-to-poor.
Due to this, the fact that InIn's social "reach" is mostly illusory may be a blessing in disguise: I am genuinely uncertain whether low-quality promotion of sort-of EA is better than nothing, given it may add noise to a higher-quality signal, notwithstanding the (likely fairly scant) counterfactual donations it may elicit. In any case, that it is illusory is a black mark against the instrumental competencies InIn needs to be an effective outreach organisation.
What I find especially shocking is that this meagre output is the result of gargantuan amounts of time. Tsipursky states that across assistants, volunteers, and staff, about 1,000 hours are spent on InIn each week: if so, InIn is likely the leader among all EA orgs for hours spent, yet by any measure of outputs it is comfortably among the worst.
Would that it were just a problem of InIn being ineffective. The document above illustrates not only a wide-ranging pattern of at-best-shady practices, but a meta-pattern of Tsipursky persisting with these practices despite either being told not to, or saying himself he wasn't doing them or wouldn't do them again. This record is challenging to reconcile with Tsipursky acting in good faith, although I can fathom the possibility given the breadth and depth of his incompetence. Regardless of intention, I am confident the pattern of dodgy behaviour will continue with at most cosmetic modification, and that it will continue to prove recalcitrant to any attempts to explain or persuade Tsipursky of his errors.
These issues incur further costs to Effective Altruism. There are obvious risks that donors "fall for" InIn's self-promotion and donate to it instead of something better. There are similar reputational risks of InIn's behaviour damaging the EA brand, independent of any risks from its content. Internally, acts like this may burn important commons in how individuals and organisations interact in the EA community. Finally, although in part self-inflicted, monitoring and reporting these things sucks up time and energy from other activities: although my time is basically worthless, the same cannot be said for the other contributors.
In sum: InIn's message is at best a cargo-cult version of EA with dubious value. Despite being an outreach organisation, it is incompetent at the fundamental competencies for its mission. A shocking number of volunteer hours are being squandered. Tsipursky is incapable of conducting himself to commonsense standards of probity, let alone the higher ones that should apply to the leader of an EA organisation. This behaviour incurs further external and internal costs to the EA movement. I see essentially no prospect of these problems being substantially remediated such that InIn's benefit to the community outweighs its costs, still less such that it would be competitive with other EA groups or initiatives. Stay away.
[Edit: I previously said "[InIn] is comfortably the worst [in terms of outputs]"; it has been pointed out there may be other groups with similarly poor performance, so I've (much belatedly) changed the wording.]
I suspect the reason InIn's quality is low is that, given its reputation disadvantage, it cannot attract and motivate the best writers and volunteers. I strongly relate to your concerns about the damage that could be done if InIn does not improve. I have severely limited my own involvement with InIn because of the same things you describe. My largest time contribution by far has been in giving InIn feedback about reputation problems and general quality. A while back, I felt demoralized by the problems myself, and decided to focus more on other things instead. That Gleb is getting so much attention for these problems right now has potential to be constructive.
Gleb can't improve InIn until he really understands the problem that's going on. I think this is why Intentional Insights has been resistant to change. I hope I provided enough insight in my comment about social status instincts for it to be possible for us all to overcome the inferential distance.
I'm glad to see that so many people have come together to give Gleb feedback on this. It's not just me trying to get through to him by myself anymore. I think it's possible for InIn to improve up to standards with enough feedback and a lot of work on Gleb's part. I mean, that is a lot of work for Gleb, but given what I've seen of his interest in self-improvement and his level of dedication to InIn, I believe Gleb is willing to go through all of that and do whatever it takes.
Really understanding what has gone wrong with Intentional Insights is hard, and it will probably take him months. After he understands the problems better, he will need a new plan for the organization. All of that is a lot of work. It will take a lot of time.
I think Gleb is probably willing to do it. This is a man who has a tattoo of Intentional Insights on his forearm. Because I believe Gleb would probably do just about anything to make it work, I would like to suggest an intervention.
In other words, perhaps we should ask him to take a break from promoting Intentional Insights for a while in order to do a bunch of self-improvement, make his major updates and plan out a major version upgrade for Intentional Insights.
Perhaps I didn't get the memo, but I don't think we've tried organizing in order to demand specific constructive actions first, before talking about shutting down Intentional Insights and/or driving Gleb out of the EA movement.
The world does need an org that promotes rationality to a broader audience... and rationalists aren't exactly known for having super people skills... Since Gleb is so dedicated and is willing to work really hard, and since we've all finally organized in public to do something about this, maybe we ought to try using this new source of leverage to heave him onto the right track.
Hello Kathy,
I have read your replies on various comment threads on this post. If you'll forgive the summary, your view is that Tsipursky's behaviour may arise from some non-malicious shortcomings he has, and that, with some help, these can be mitigated, thus leading InIn to behave better and do more good. In medicalese, I'm uncertain of the diagnosis, strongly doubt the efficacy of the proposed management plan, and I anticipate a bleak prognosis. As I recommend generally, I think your time and laudable energy are better spent elsewhere.
A lot of the subsequent discussion has looked at whether Tsipursky's behaviour is malicious or not. I'd guess in large part it is not: deep incompetence, combined with being self-serving and biased towards wanting one's org to succeed, probably explains most of it. Regrettably, Tsipursky's responses to this post (e.g. trumped-up accusations against Jeff and Michelle, pre-emptive threats if his replies are downvoted, veiled hints at "wouldn't it be bad if someone in my position started railing against EA", etc.) seem to fit well with malice.
Yet this is fairly irrelevant. Tsipursky is multiply incompetent: at creating good content, at generating interest in his org (i.e. almost all of its social media reach is illusory), at understanding the appropriate ambit for promotional efforts, at not making misleading statements, and at changing bad behaviour. I am confident that any EA I know in a similar position would not have performed as badly. I highly doubt this can all be traced back to a single easy-to-fix flaw. Furthermore, I understand multiple people approached Tsipursky multiple times about these issues; the post documents problems occurring over a number of months. The outside view is not favourable to yet further efforts.
In any case, InIn's trajectory in the EA community is probably fairly set at this point. As I write this, InIn is banned from the FB group, CEA has officially disavowed it, InIn seems to have lost donors and prospective donations from EAs, and my barometer of "EA public opinion" is that almost all EAs who know of InIn and Tsipursky have very adverse attitudes towards both. Given the understandable reticence of EAs towards corporate action like this, one can anticipate these decisions have considerable inertia. A nigh-Damascene conversion of Tsipursky and InIn would be required for these things to begin to move favourably to InIn again.
In light of all this, attempting to "reform InIn" now seems almost as ill-starred as trying to reform a mismanaged version of homeopaths without borders: so thorough a transformation would be required that it would surely be better to start afresh. The opportunity cost is also substantial, as there are other, better-performing EA outreach orgs (i.e. all of them), which promise far greater returns on the margin for basically any outcome one might be interested in. Please help them out instead.
I'm not completely sure what's going on with Gleb, but I feel a great deal of concern for people with Asperger's, and I think it made me overly sympathetic in this case. Thank you for this.
One thing to consider is that too much charity for Gleb is actively harmful for people with ASDs in the community.
If I am at a party held by a trusted friend and know they've only invited people they trust, and someone hurts my feelings, I'm likely to ascribe it to a misunderstanding and talk it out with them. If I'm at a party where lots of people have been jerks to me before, and someone hurts my feelings, I'm likely to assume this person is a jerk too and withdraw.
By saying "I'm updating" and then repeating the same behavior, Gleb is lessening the value of those words. He is teaching people it's not worth correcting others, because they won't change. This is most harmful to the people who need the most direct feedback and the longest lead time to incorporate it.
Wow. More excellent arguments. More updates on my side. You're on fire. I almost never meet people who can change my mind this much. I would like to add you as a friend.
[This was originally a comment calling for Gleb to leave the EA community with various supporting arguments, but I've decided I don't endorse online discussions as a mechanism for asking people to leave EA. See this comment of mine for more.]
He wrote that he is a "monthly donor" to CFAR.
On the other hand, a cynic might note that he has used his interactions with CFAR to promote himself and his organization, e.g. his linked favorable review of CFAR comes with a few plugs for Intentional Insights, and CFAR (or rather the erroneous acronym-unpacking "Center for Advanced Rationality") appeared as a collaboration in InIn promotional documents. My understanding is that the impression that he was aligned with CFAR (and EA) had also made some CFAR donors more open to InIn fundraising pitches.
He has also taken the Giving What We Can pledge, but I don't know what that means. He has said he and his wife fund most of InIn's budget (which would presumably be more than 10% of his income) and claims that it is highly effective, so he might take that to satisfy his pledge.
[Disclosure: my wife is the executive director of CFAR, but I am speaking only for myself.]
Note: I am socially peripheral to EA-the-community and philosophically distant from EA-the-intellectual-movement; salt according to taste.
While I understand the motivation behind it, and applaud this sort of approach in general, I think this post and much of the public discussion I've seen around Gleb are charitable and systematic in excess of reasonable caution.
My first introduction to Gleb was Jeff's August post, read before there were any comments up, and it seemed very clear that he was acting in bad faith and trying to use community norms of particular communication styles, owning up to mistakes, openness to feedback, etc. to disarm those engaging honestly and enable the con to go on longer. I don't think I'm an especially untrusting person (quite the opposite, really), but even if that's the case, nearly every subsequent revealed detail and interaction confirmed this. Gleb responds to criticism he can't successfully evade by addressing it in only the most literal and superficial manner, and continues on as before. It has reached the point that if I were Gleb, and had somehow honestly stumbled this many times and fallen into this pattern over and over, I would feel I had to withdraw on the grounds that no one external to my own thought processes could possibly reasonably take me seriously, and that I clearly had a lot of self-improvement to do before engaging in a community like this in the future.
The responses to this behavior that I've seen are overwhelmingly of the form of taking Gleb seriously, giving him the benefit of the doubt where none should exist, providing feedback in good faith, and responding positively to the superficial signs Gleb gives of understanding. This is true even for people who I know have engaged with him before. I'm not completely confident of this, but the pattern looks like people are applying the standards of charity and forgiveness that would be appropriate for any one of these incidents in isolation, not taking into account that the overall pattern of behavior makes such charitable interpretations increasingly implausible. On top of that, some seem to have formed clear final opinions that Gleb is not acting in good faith, yet still use very cautious language and are hesitant to take a single step beyond what they can incontrovertibly demonstrate to third parties.
A few examples from this post, not trying to be comprehensive:
Using the word "concerns" in the title and introductory matter
Noting that Gleb doesn't "appear" to have altered his practices around name-dropping
Saying "Tsipursky either genuinely believed posts like the above do not ask for upvotes, or he believed statements that are misleading on common-sense interpretation are acceptable providing they are arguably 'true' on some tendentious reading." without bringing up the possibility of him knowing exactly what he's doing and just lying
Calling Gleb's self-proclaimed bestselling author status only "potentially" misleading.
Moreover, the fully comprehensive nature of the post and the painstaking lengths it goes to in separating out definitely valid issues from potentially invalid ones seem to be part of the same pattern. No one, not even Gleb, is claiming that these instances didn't happen or that he is being set up, yet this post seems to be holding itself to a standard appropriate for an adversarial court of law.
And this is a problem, because in addition to wasting people's time it causes people less aware of these issues to take Gleb more seriously, encourages him to continue behaving as he has been, and, I suspect, in some cases inclines even the more knowledgeable people involved to trust Gleb too much in the future, despite whatever private opinions they may have of his reliability. At some point there needs to be a way for people to say "no, this is enough, we are done with you" in the face of bad behavior; in this case, if that is happening at all, it is being communicated behind the scenes or by people silently failing to engage. That makes it much harder for the community as a whole to respond appropriately.
I take your point as "aren't we being too nice to this guy?" but I actually really like the approach taken here, which seems extremely fair-minded and diligent. My suspicion is this sort of stuff is long-term really valuable because it establishes good norms for something that will likely recur in future. I'd be much more inclined to act with honesty if I believed people would do an extremely thorough public investigation into everything I'd said, rather than just calling me names and walking away.
I don't understand what you're claiming here. Are you saying you'd be honest in a community if you thought it would investigate you a lot to determine your honesty, but dishonest otherwise? Why not just be honest in all communities, and leave the ones you don't like?
I think he means that it is human behaviour to do that, not that he does it himself.
I literally still don't understand. I can understand the motivation to be an asshole in communities you think won't treat you fairly, but why be a lying asshole? I think the OP wrote "honesty" and meant something else.
I think the common point of intervention for people telling mis-truths is not holding themselves back when they don't really have enough evidence. A person might be about to write a quick reply and, in most communities, know that they're not going to be held accountable for any mischaracterisations of others' opinions, or for referring inaccurately to studies and data. In those communities, the comments are awful. In communities where you know that, if you do this over a sustained period, Carl Shulman, Jeff Kaufman, Oliver Habryka, Gregory Lewis and more are gonna write tens of thousands of words documenting your errors, you'll be more likely to note when you haven't quite substantiated the comment you're about to hit "send" on.
There's an important difference between repeatedly making errors, jumping to conclusions, or being attached to a preconceived notion (all of which I've personally done in front of Carl plenty of times), and the sort of behavior described in the OP, which seems more like intentional misrepresentation for the sake of climbing a social status gradient.
I'd like to agree partially with MichaelPlant and Paul_Crowley, insofar as I'm glad that I'm part of a community that responds to problems in such a charitable and diligent manner. However, I feel they missed the most important point of shlevy's comment. Without arguing for a less fair-minded and thoughtful response, we can still ask the following: Gleb started InIn back in 2014; why did it take us two years to get to the point where we were able to call him out on his bad behaviour? This could've been called out much earlier.
I think the answer looks like this:
Firstly, Gleb has learned the in-group signals of communicating in good faith (for example, at every criticism he says he has "updated", and he says "thank you" for the criticism). This alone is not a problem: it would merely take a few people to realise this, call it out, and then he could be asked to leave the community.
There's a second part, however, which is that once a person has learned (from experience) that Gleb is acting in bad faith, the next time that person comes to the discussion, everybody else sees the standard signals of good-faith communication, and as such the person may be hesitant to treat Gleb as they would treat someone else who was clearly acting in bad faith. This is because they would be seen as unnecessarily harsh by people without the background experiences, as was seen multiple times in the original Facebook thread, when people (who did not have the past experience with Gleb) were confused by the harshness of the criticism, and criticised the tone of the conversation. My guess for the fundamental reason that we are having this conversation now is that Jeff Kaufman bravely made his beliefs about Gleb common knowledge: he made a blog post about InIn, after which everyone else realised "Oh, everyone else believes this too. I'm not worried any more that everyone will think negatively of me for acting as though Gleb is acting in bad faith. I will now let out the piled-up problems I have with Gleb's behaviour."
To re-iterate, it's delightful to be part of a community that responds to this sort of situation by spending hundreds of hours (collectively) and ~100k words (I'm counting the original Facebook thread as well as the post here) analysing the situation and producing a considered, charitable yet damning report. However, it's important to realise that there are communities out there in which Gleb would've been outed in months rather than years, and without the time of many top researchers in the community being wasted.
I'm not sure what the correct norms to have are. I'd suggest that we should be more trusting that when someone in the community criticises someone else not in the community, they're doing it for good reasons. However, writing that out is almost self-refuting: that's what all insular communities are doing. Perhaps we could appoint a small group of moderators for the community whom we trust. That's how good online communities often work; perhaps the model can be extended to the EA community (which is significantly more than just an online community). I certainly want to sustain the excellent norms of charity, diligence and respect that we currently have, something necessary to any successful intellectual project.
I just want to highlight that I feel like part of this post is based on a false premise; you mention InIn was started in 2014. While that may be true, all of the incidents in EA (and LessWrong) circles cited above date to November 2015 or later. Gleb's very first submission on the EA forum is from October 2015. By saying "it took two years" and then talking about "months rather than years" you give the impression that Gleb could have been excluded sometime back in 2015 and would have been elsewhere, which I think is pretty misleading (though presumably unintentionally so).
The truth is that it took a little over 9 months from Gleb's first post to Jeff's major public criticism. 9 months and a decent amount of time is not trivial. But let's not overstate the problem.
"There's a second part, however, which is that once a person has learned (from experience) that Gleb is acting in bad faith, the next time that person comes to the discussion, everybody else sees the standard signals of good-faith communication, and as such the person may be hesitant to treat Gleb as they would treat someone else who was clearly acting in bad faith. This is because they would be seen as unnecessarily harsh by people without the background experiences, as was seen multiple times in the original Facebook thread, when people (who did not have the past experience with Gleb) were confused by the harshness of the criticism, and criticised the tone of the conversation."
I do strongly agree with this. I had some very frustrating conversations around that thread.
Pretty much agree with you and shlevy here, except that spending hundreds of collective hours carefully checking that Gleb is acting in bad faith seems more like a waste to me.
If the EA community were primarily a community that functioned in person, it would be easier and more natural to deal with bad actors like Gleb; people could privately (in small conversations, then bigger ones, none of which involve Gleb) discuss and come to a consensus about his badness, that consensus could spread in other private smallish then bigger conversations none of which involve Gleb, and people could either ignore Gleb until he goes away, or just not invite him to stuff, or explicitly kick him out in some way.
But in a community that primarily functions online, where by default conversations are public and involve everyone, including Gleb, the above dynamic is a lot harder to sustain, and instead the default approach to ostracism is public ostracism, which people interested in charitable conversational norms understandably want to avoid. But just not having ostracism at all isn't a workable alternative; sometimes bad actors creep into your community and you need an immune system capable of rejecting them. In many online communities this takes the form of a process for banning people; I don't know how workable this would be for the EA community, since my impression is that it's spread out across several platforms.
Seems worth establishing the fact that bad actors exist, will try to join our community, and engage in this pattern of almost-plausibly-deniable, shamelessly bad behavior. I think EAs often have a mental block around admitting that in most of the world, lying is a cheap and effective strategy for personal gain; I think we make wrong judgments because we're missing this key fact about how the world works. I think we should generalize from this incident, and having a clear record is helpful for doing so.
Yes! But... you said your opening line as though it disagreed somehow? I said:
I may be misinterpreting you here; you wrote
and while I think this behavior is in some sense admirable, I think it is not delightful on net, and the huge waste of time it represents is bad except to the extent that it leads to better community norms around policing bad actors.
Yup, we are in agreement.
(I was just noting how sweet it was that we do this much more kindly than most other communities. It's totally not optimal though.)
Yes, insofar as communities do that, but typically in emotive and highly biased ways. EA at least has more constructive norms for how these things are discussed. It's not perfect, and it's not fast, but here I see people taking pains to be as fair-minded as they can be. (We achieve that to different degrees, but the effort is expected.)
My System 1 doesn't like this. Giving this power to a group of people and suggesting that we accept their guidance... that feels cultish, and not very compatible with a community of critical thinkers.
Scientific departments have ethics boards. Good online communities (e.g. Hacker News) have moderators. Society as a whole has a justice system as part of its governance, and other groups that check on the decisions made by the courts. Suggesting that it feels cult-y to outsource some of our community norm-enforcement (so as to save the community as a whole significant time input, and make the process more efficient and effective) is... I'm just confused every time someone calls something totally normal "cult-y".
I deliberately said "My System 1 doesn't like this" and "that feels cultish" - on an intuitive level, I feel uncomfortable, and I'm trying to work out why. I do see value in having effective gatekeepers.
I'm not even sure what it means to be "banned" from a movement consisting of multiple organisations and many individuals. It may be that if the process is clearly defined, and we know who is making the decision and on whose behalf, I'd be more comfortable with it.
Thanks for clarifying!
Just in case you're interested: I think the word "cultish" is massively overloaded (with negative connotations) and mis-used. I'd also point out that saying that a statement is one's gut feeling isn't equivalent to saying one doesn't endorse the feeling, and so I felt pretty defensive when you suggested my idea was cultish and not compatible with our community.
I wrote this because I thought you might prefer to know the impacts of your comments rather than not hearing negative feedback. My apologies in advance if that was a false assumption.
Thanks - helpful feedback (and from Owen also). In hindsight I would probably have kept the word "cultish" while being much more explicit about not completely endorsing the feeling.
Something went wrong with the communication channel if you ended up feeling defensive.
However, despite generally agreeing with you about problems with the word "cultish", I actually think this is a reasonable use case. It has a lot of connotations, and it was being reported that the description was triggering some of those connotations in the reader. That's useful information, and it may be worth some effort to avoid the idea being perceived that way if it is pursued (your stack of examples makes it pretty clear that this is avoidable).
I think being too nice is a failure mode worth worrying about, and your points are well taken. On the other hand, it seems plausible to me that it does a more effective job of convincing the reader that Gleb is bad news precisely by demonstrating that this is the picture you get when all reasonable charity is extended.
Shlevy, I think I might actually agree with everything you said here with the exception of the characterization of Intentional Insights as a "con".
I can see the behavior on the outside very clearly. On the outside Gleb has said a list full of incorrect things.
On the inside, the picture is not so clear. What's going on inside his head?
If this is a con, what in the world does he want? He can't seem to make money off of this. Con artists have a tendency to do very, very quick things, with a very, very low amount of effort, hoping to gain some disproportionate reward. Gleb is doing the opposite. He has invested an enormous amount of time (not to mention a permanent Intentional Insights tattoo!) and (as far as I know) has been concerned about finances the whole time. He's not making a disproportionate amount of money off of this... and spreading rationality doesn't even look like one of those things which a con artist could quickly do for a disproportionate reward... so I am confused.
If I thought Intentional Insights was a con, I'd be right with you trying to make that more obvious to everyone... but I launched my con detector and that test was negative.
Maybe you use a different con detector. Maybe, to you, it is irrelevant whether Gleb is intentionally malicious or merely incompetent. Perhaps you would use the word "con" either way, just as people use the word "troll" either way.
For the same reasons that we should face the fact that there's a major problem with the inaccuracies Intentional Insights outputs, I think we ought to label the problem we're seeing with Intentional Insights as accurately as possible.
Whether Gleb is incompetent or malicious is really important to me. If Gleb is doing this because of a learning disorder, I would really like to see more mercy. According to Wikipedia's page on psychological trauma, there are a lot of things about this post which Gleb may be experiencing as traumatic events. For instance: humiliation, rejection, and major loss. (https://en.wikipedia.org/wiki/Psychological_trauma)
As some kind of weird hybrid between a bleeding heart and a shrewd person, I can't justify anything but minimizing the brutality of a traumatic event for someone with a learning disorder, no matter how destructive it is. At the same time, I agree that ousting destructive people is a necessity if they won't or can't change, but I think in the case of an incompetent person, there are a lot of ways in which the community has been too brutal. In the event of a malicious con, we've been too charitable, and I'm guilty of this as well. If Gleb really is a con artist, we should be removing him as fast as possible. I just don't see strong evidence that the problem he has is intentional, nor does it even seem to be clearly differentiated from terrible social skills and general ignorance about marketing.
Our response is too brutal for someone with a learning disorder or other form of incompetence, and it's too charitable for a con artist. In order to move forward, I think perhaps we ought to stop and resolve this disagreement.
Here's what's at stake: currently, I intend to advocate for an intervention*. If you convince me that he is a con artist, I will abandon this intent and instead do what you are doing. I'll help people see the con.
* (By intervention, I mean: encouraging everyone to tell Gleb we require him to shape up or ship out, and to negotiate things like what we mean by "shape up" and how we would like him to minimize risk while he is improving. If he has a learning disorder, a bit of extra support could go a long way if the specific problems are identified so the support can target them accurately. I suspect that Gleb needs to see a professional for a learning disorder assessment, especially for Asperger's.)
I'm open to being convinced that Intentional Insights actually does qualify as some type of con or intends net-negative, destructive behavior. I don't see it, but I'd like to synchronize perspectives, whether I "win" or "lose" the disagreement.
I don't think incompetent and malicious are the only two options (I wouldn't bet on either as the primary driver of Gleb's behavior), and I don't think they're mutually exclusive or binary.
Also, the main job of the EA community is not to assess Gleb maximally accurately at all costs. Regardless of his motives, he seems less impactful and more destructive than the average EA, and he improves less per unit feedback than the average EA. Improving Gleb is low on tractability, low on neglectedness, and low on importance. Spending more of our resources on him unfairly privileges him and betrays the world and forsakes the good we can do in it.
Views my own, not my employer's.
That was a truly excellent argument. Thank you.
Thanks Kathy!
Witch hunting and attacks do nothing for anyone.
Which is fine.
People can look at clear and concise summaries like the one above and come to their own conclusion. They don't need to be told what to believe and they don't need to be led into a groupthink.
Attacking people who are bad protects other people in the community from having their time wasted or being hurt in other ways by bad people. Try putting yourself in the shoes of the sort of people who engage in witch hunts because they're genuinely afraid of witches, who if they existed would be capable of and willing to do great harm.
To be clear, it's admirable to want to avoid witch hunts against people who aren't witches and won't actually harm anyone. But sometimes there really are witches, and hunting them is less bad than not.
This approach doesn't scale. Suppose the EA community eventually identifies 100 people at least as bad as Gleb in it, and so generates 100 separate posts like this (costing, what, 10k hours collectively?) that others have to read and come to their own conclusions about before they know who the bad actors in the EA community are. That's a lot to ask of every person who wants to join the EA community, not to mention everyone who's already in it, and the alternative is that newcomers don't know who not to trust.
The simplest approach that scales (both with the size of the community and with the size of the pool of bad actors in it) is to kick out the worst actors, so nobody has to spend any additional time and/or effort wondering/figuring out how bad they are.
Yes, but Gleb isn't actively hurting anyone. You need an ironclad rationale before deciding to just build a wall in front of people who you think are unhelpful.
Even if you could really have 100 people starting their own organizations related to EA... it's not relevant. Just because it won't scale doesn't mean it's not the right approach with one person. We might think that the time and investment now is worthwhile, whereas if there were enough questionable characters that we didn't have the time to do this with all of them, then (and only then) we'd be compelled to scale back.
The problem is that Gleb is manufacturing false affiliations in the eyes of outsiders, and outsiders who only briefly glance at lengthy, polite documents like this one are unlikely to realize that's what's happening.
Gleb did lots of things and the post describes them, so it's about more than just manufacturing false affiliations. The issue is not that the post is too long or contains too many details; that's a silly thing to complain about. The issue is whether the post should be adversarial and whether it should manufacture a dominant point of view. The answer to that is no.
In the original Facebook thread I was highly critical of Intentional Insights. I have not read all the follow-up here yet, but I would like to note that after that thread, the next "thing" I saw from Intentional Insights was this post about EA marketing. I thought that was a highly competent and interesting contribution to the EA community. All of the ongoing concerns about II may stand - but there are clearly a few people associated with the org who have valuable contributions to make to the future of the community.
The most embarrassing aspect of the exclusionary, witch-hunt, no-due-diligence point of view which some people are advocating in the comments here is that it probably would have merited the early and permanent exclusion of the Singularity Institute/MIRI from the EA community. Holden wrote a blog post on LessWrong saying that he didn't like their organization and didn't think they were worth funding. Some assorted complaints have been floating around the web for a long time complaining about them associating with neoreactionaries and about LessWrong being cultists, as well as complaints about the way they communicate and write. There have been a few odd "incidents" (if you can call them that) over the years between MIRI, LessWrong, and the rationalist sphere. It would be easy to jumble all of that together into some kind of meta-post documenting concerns, and there is certainly no shortage of people who are willing and able to write long impassioned posts expressing their feelings, saying that they want nothing to do with SIAI/MIRI and recommending others to adhere to that. We could have done that, lots of people would have come out of the woodwork to add their own complaints, the conversation would have reached critical mass, and boom - all of a sudden, half the steam behind AI safety goes down the tubes.
It's easy to find online communities today where people are mind-numbingly dismissive of anything AI-related due to a poorly argued, critical-mass groupthink against everything LessWrong. Good thing that we're not one of them.
I agree that it's important that EA stay open to weird things and not exclude people solely for being low status. I see several key distinctions between early SI/early MIRI and Intentional Insights:
SI was cause-focused, II a fundraising org. Causes can be argued on their merits. For fundraising, "people dislike you for no reason" is in and of itself evidence you are bad at fundraising and should stop.
I think this is an important general lesson. Right now "fundraising org" seems to be the default thing for people to start, but it's actually one of the hardest things to do right and has the worst consequences if it goes poorly. With the exception of local groups, I'd like to see the community norms shift to discourage inexperienced people from starting fundraising groups.
AFAIK, SI wasn't trying to use the credibility of the EA movement to bolster itself. Gleb is, both explicitly (by repeatedly and persistently listing endorsements he did not receive) and implicitly. As long as he is doing that the proportionate response is criticizing him/distancing him from EA enough to cancel out the benefits.
The effective altruism name wasn't worth as much when MIRI was getting started. There was no point in faking an endorsement because no one had heard of us. Now that EA has some cachet with people outside the movement there exists the possibility of trying to exploit that cachet, and it makes sense for us to raise the bar on who gets to claim endorsement.
Chronological nitpick: SingInst (which later split into MIRI and CFAR) is significantly older than the EA name and the EA movement, and the EA movement's birth and growth are attributable in significant part to SingInst and CFAR projects.
My experience (as someone connected to both the rationalist and Oxford/Giving What We Can clusters as EA came into being) is that its birth came out of Giving What We Can, and the communities you mentioned contributed to growth (by aligning with EA) but not so much to birth.
You can equally draw a list of distinctions which point in the other direction: distinctions that would have made it more worthwhile to exclude MIRI than to exclude InIn. I've listed some already.
I don't think this comparison holds water. Briefly, I think SI/MIRI would have mostly attracted criticism for being weird in various ways. As far as I can tell, Gleb is not acting weird; he is acting normal in the sense that he's making normal moves in a game (called Promote-Your-Organization-At-All-Costs) that other people in the community don't want him playing, especially not in a way that implicates other EA orgs by association.
Whatever you think of that object-level point, an independent meta-level point: it's also possible that the EA movement excluding SI/MIRI at some point would have been a reasonable move in expectation. Any policy for deciding who to kick out necessarily runs the risk of both false positives and false negatives, and pointing out that a particular policy would have caused some false positive or false negative in the past is not a strong argument against it in isolation.
They've attracted criticism for more substantial reasons; many academics didn't and still don't take them seriously because they have an unusual point of view. And other people believe that they are horrible people who are in between neoreactionary racists and a Silicon Valley conspiracy to take people's money. It's easy to pick up on something being a little off-putting and then get carried down the spiral of looking for and finding other problems. The original and underlying reason people have been pissed about InIn this entire time is that they are aesthetically displeased by their content. "It comes across as spammy and promotional". An obvious typical mind fallacy. If you can fall for that then you can fall for "Eliezer's writing style is winding and confusing."
Highly implausible.
AI safety is a large issue. MIRI has done great work and has itself benefited tremendously from its involvement. Besides that, there have been many benefits to EA for aligning with rationalists more generally.
Yes, but people are taking this case to be a true positive that proves the rule, which is no better.
Some of the criticisms I've read of MIRI are so nasty that I hesitate to rehash them all here for fear of changing the subject and side-tracking the conversation. I'll just say this:
MIRI has been accused of much worse stuff than this post is accusing Gleb of right now. Compared to that weird MIRI stuff, Gleb looks like a normal guy who is fumbling his way through marketing a startup. The weird stuff MIRI/Eliezer did is really bizarre. For just one example, there are places in The Sequences where Eliezer presented his particular beliefs as The Correct Beliefs. In the context of a marketing piece, that would be bad (albeit in a mundane way that we see often), but in the context of a document on how to think rationally, that's more like... egregious blasphemy. It's a good thing the guy counter-balanced whatever that behavior was with articles like "Screening Off Authority" and "Guardians of the Truth".
Do some searches for web marketing advice sometime, and you'll see that Gleb might have actually been following some kind of instructions in some of the cases listed above. Not the best instructions, mind you... but somebody's serious attempt to persuade you that some pretty weird stuff is the right thing to do. This is not exactly a science... it's not even psychology. We're talking about marketing. For instance, paying Facebook to promote things can result in problems... yet this is recommended by a really big company, Facebook. :/
There are a few complaints against him that stand out as a WTF... (Then again, if you're really scouring for problems, you're probably going to find the sorts of super embarrassing mistakes people only make when they're really exhausted or whatever. I don't know what to make of every single one of these examples yet.)
Anyway, MIRI/Eliezer can't claim stuff like "I was following some marketing instructions I read on the Internet somewhere", which, IMO, would explain a lot of this stuff that Gleb did - which is not to say I think copying him is an effective or ethical way of promoting things! The Eliezer stuff was, like, self-contradictory enough that it was weird to the point of being original. It took me forever to figure that guy out. There were several years where I simply had no cogent opinion on him.
The stuff Gleb is doing is just so commonly bad. It's not an excuse. I still want to see InIn shape up or ship out. I think EA can and should have higher standards than this. I have read and experienced a lot in the area of promoting things, and I know there are ways of persuading through making people think that don't bias them or mislead them, but by getting them more in touch with reality. I think it takes a really well thought out person to accomplish that, because seeing reality is only the first step... then you need to know how to deal with it, and you need to encourage the person to do something constructive with the knowledge as well. Sometimes bare information can leave people feeling pretty cynical, and it's not like we were all taught how to be creative and resourceful and lead ourselves in situations that are unexpectedly different from what we believed.
I really believe there are better ways to be memorable than making claims about how much attention you're getting. Providing questionable info of this type is certainly bad. The way I'm seeing it, wasting time on such uninspired attempts involves such a large quantity of lost potential that the questionable info is almost silly by comparison. I feel like we're worried about a guy who says he has the best lemonade stand ever, but what we should be worried about is why he hasn't managed to move up to selling at the grocery store yet.
I can very clearly envision the difference between what Gleb has been doing and specific awesome ways in which it is possible to promote rationality. I can't condemn Gleb as some sort of bad guy when what he's doing wrong betrays such deep ignorance about marketing. I feel like: surely, a true villain would have taken over the beverage aisle at the grocery store by now.
I see insight in what Qiaochu wrote here:
Right now we don't have a procedure set up for formally deciding whether a particular person is a bad actor. If someone feels that another person is a bad actor, the only way to deal with the situation is informally. Since the community largely functions online, the discussion has a "witch hunt" character to it.
I think most people agree that bad actors exist, and we should have the capability to kick them out in principle (even if we don't want to use it in Gleb's particular case). But I agree that online discussions are not the best way to make these decisions. I've spent some time thinking about better alternatives, and I'll make a top-level post outlining my proposal if this comment gets at least +4.
Edit: Alternatively, for people who feel it should be possible to oust a person like Gleb with less effort, a formal procedure could streamline this kind of thing in the future.
[ETA: a number of these comments are addressed to possible versions of this that John is not advocating, see his comment replying to mine.]
My attitude on this is rather negative, for several reasons:
The movement is diverse and there is no one to speak for all of it with authority, which is normal for intellectual and social movements
Individual fora have their moderation policies, individual organizations can choose who to affiliate with or how to authorize use of their trademarks, individuals can decide who to work with or donate to
There was no agreed-on course of action among the contributors to this document, let alone the wider EA community
Public discussion (including criticism) allows individual actors to make their own decisions
There are EAs collaborating with InIn on projects like secular Giving Games who report reaping significant benefits from that interaction, such as Jon Behar in the OP document; I don't think others are in a position to ask that they cut off such interactions if they find them valuable
I think the time costs of careful discussion and communication are important ones to pay for procedural justice and trust: I would be very uncomfortable with (and not willing to give blind trust to) a non-transparent condemnation from such a process, and I think it would reflect badly on those involved and the movement as a whole
If one wants to avoid heated online discussions, flame wars, and whatnot, they would still be elicited by the outputs of the formal process (more so, I think, if it were less transparent and careful)
But controversial decisions will still need to be made - about who to ban from the forum, say. As EA gets bigger, I see advantages to setting up some sort of due process (if only so the process can be improved over time) vs doing things in an ad hoc way.
Well, perhaps an official body would choose some kind of compromise action, such as what you did (making knowledge about Gleb's behavior public without doing anything else). I don't see why this is a compelling argument for an ad hoc approach.
Without official means for dealing with bad actors, the only way to deal with them is by being a vigilante. The person who chooses to act as a vigilante will be the one who is the angriest about the actions of the original bad actor, and their response may not be proportionate. Anyone who sees someone else being a vigilante may respond with vigilante action of their own if they feel the first vigilante action was an overreach. The scenario I'm most concerned about is a spiral of vigilante action based on differing interpretations of events. A respected official body could prevent the commons from being burned in this way.
I don't (currently) think it would be a good idea for an official body to make this kind of request. Actually, I think an official committee would be a good idea even if it technically had no authority at all. Just formalizing a role for respected EAs whose job it is to look into these things seems to me like it could go a long way.
OK, let's make it transparent then :) The question here is formal vs ad hoc, not transparent vs opaque.
If I see a long post on the EA forum that explains why someone I know is bad for the movement, I need to read the entire post to determine whether it was constructed in a careful & transparent way. If the person is a good friend, I might be tempted to skip reading the post and just make a negative judgement about its authors. If the post is written by people whose job is to do things carefully and transparently (people who will be fired if they do this badly), it's easier to accept the post's conclusions at face value.
This is a very good point. One reason I got involved in the OP was to offset some of this selection effect. On the other hand, I was also reluctant to involve EA institutions to avoid dragging them into it (I was not expecting Will MacAskill's post or the announcement by the EA Facebook group moderators, and was mainly aiming at a summary of the findings for individuals). A respected institution may have an easier time in an individual case, but it may also lose some of its luster by getting involved in disputes.
Regarding your other points, I agree many of the things I worry about above (transparency, nonbinding recommendations, avoiding boycotts and overreach) can potentially be separated from official vs private/ad hoc. However, a more official body could have more power to do the things I mention, so I don't think the issues are orthogonal.
True, but I suspect the worst case scenario for an official body is still less bad than the worst case scenario for vigilantism. Let's say we set up an Effective Altruism Association to be the governing body for effective altruism. Let's say it becomes apparent over time that the board of the Effective Altruism Association is abusing its powers. And let's say members of the board ignore pressure to step down, and there's nothing in the Association's charter that would allow us to fix this problem. Well, at that point, someone can set up a rival League of Effective Altruists, and people can vote with their feet & start attending League-sponsored events instead of Association-sponsored events. This sounds to me like an outcome that would be bad, but not catastrophic in the way spiraling vigilantism has been for communities demographically similar to ours devoted to programming, atheism, video games, science fiction, etc. If anything, I am more worried about the case where the Association's board is unable to do anything about vigilantism, or itself becomes the target of a hostile takeover by vigilantes.
I suspect a big cause of disagreement here is that in America, at least, we've lost cultural memories about how best to organize ourselves.
From the essay Bowling Alone: America's Declining Social Capital (15K citations on Google Scholar). You can read the essay for info on big drops in participation for churches, unions, PTAs, and civic/fraternal organizations.
I don't think formal procedures are likely to be followed, and I don't think it's generally sensible to go to all the trouble of building an explicit policy to kick people out of EA. It's a terrible idea that contributes to the construction of a flawed social movement which obsessively cares about weird drama that, to those on the outside, looks silly. Outside view sanity check: which other social movements have a formal process for excluding people? None of them. Except maybe Scientology.
I'm not against online discussions on a structural level. I think they're fine. I'm against the policy of banding together, starting faction warfare, and demanding that other people refrain from associating with somebody.
The impression I get from Jeff's post is that the people involved took great pains to be as reasonable as possible. They don't even issue recommendations for what to do in the body of the post - they just present observations. This after ~2000 edits over the course of more than two months. This makes me think they'd have been willing to go to the trouble of following a formal procedure, especially if the procedure was streamlined enough that it took less time than what they actually did.
My recommendations are about how to formally resolve divisive disputes in general. If divisive disputes constitute existential threats to the movement, it might make sense to have a formal policy for resolving them, in the same way buildings have fire extinguishers despite the low rate of fires. Also, I took into account that my policy might be used rarely or never, and kept its maintenance cost as low as possible.
Drama seems pretty universal - I don't think it can be wished away.
There are a lot of other analogies a person could make: Organizations fire people. States imprison people. Online communities ban people. Everyone needs to deal with bad actors. If nothing else, it'd be nice to know when it's acceptable to ban a user from the EA forum, Facebook group, etc.
I'm not especially impressed with the reference class of social movements when it comes to doing good, and I'm not sure we should do a particular thing just because it's what other social movements do.
I keep seeing other communities implode due to divisive internet drama, and I'd rather this not happen to mine. I would at least like my community to find a new way to implode. I'd rather be an interesting case study for future generations than an uninteresting one.
So what's the right way to take action, if you and your friends think someone is a bad actor who's harming your movement?
I mean for the community as a whole to say, "oh, look, our thought leaders decided to reject someone - ok, let's all shut them out."
There's the normal kind of drama which is discussed and moved past, and the weird kind of drama like Roko's Basilisk which only becomes notable through obsessive overattention and collective self-consciousness. You can choose which one you want to have.
Those groups can make their own decisions. EA has no central authority. I moderate a group like that and there is no chance I'd ban someone just because of the sort of thing which is going on here, and certainly not merely because the high chancellor of the effective altruists told me to.
We're not following their lead on how to change the world. We're following their lead on how to treat other members of the community. That's something which is universal to social movements.
Is this serious? EA is way more important than yet another obscure annal in Internet history.
Tell it to them. Talk about it to other people. Run my organizations the way I see fit.
I think the second kind of drama is more likely in the absence of a governing body. See the vigilante action paragraph in this comment of mine.
If the limiting factor for a movement like Effective Altruism is being able to coordinate people via the Internet, then coordinating people via the Internet ought to be a problem of EA interest.
I see your objections to my proposal as being fundamentally aesthetic. You don't like the idea of central authority, but not because of some particular reason why it would lead to bad consequences - it just doesn't appeal to you intuitively. Does that sound accurate?
The second kind of drama was literally caused by the actions of a governing body. Specifically, one that was so self-absorbed in its own constellation of ideas that it forgot about everything that outsiders considered normal.
So you're trying to say that the worst case scenario of setting up an official EA panel is not as bad as the worst case scenario of vigilantism. That's a very limited argument. Merely comparing the worst case scenarios is a very limited approach: first because, by definition, these are events at the extreme tail ends of our expectations, which implies that we are particularly incapable of understanding and predicting them; second because we also need to take probabilities into account; and third because we need to take average, median, best case, etc. expectations into account. Furthermore, it's not clear to me that the level of witch hunting and vigilantism currently present in programming, atheist, etc. communities is actually worse than having a veritable political rift between EA organizations. Moreover, you're jumping from Roko's Basilisk-type weird drama and controversy to vigilantism, when the two are fairly different things. And finally, you're shifting the subject of discussion from a panel that excommunicates people to some kind of big organization that runs all the events.
Besides that, the fact that there has been essentially no vigilantism in EA except for a small number of people in this thread suggests that you're jumping far too quickly to enormous solutions for vague problems.
That's way too simplistic. Communities don't hit a ceiling and then fail when they run into a universal limiting factor. Their actions and evolution are complicated and chaotic and always affected by many things. And hardly any social movements are led by people who look at other social movements and then pattern their own behavior based on others'.
I prefer the term "common sense".
It rings lots and lots of alarm bells.
If selection of leadership is an explicit process, we can be careful to select people we trust to represent the EA movement to the world at large. If the process isn't explicit, forum moderators may be selected in an incidental way, e.g. on the basis of being popular bloggers.
Governance in general seems like it's mainly about mitigation of worst case scenarios. Anyway, the evidence I presented doesn't just apply to the tail ends of the distribution.
This is an empirical question. I don't get the impression that competition between organizations is usually very destructive. It might be interesting for someone to research e.g. the history of the NBA and the ABA (competing professional basketball leagues in the 1970s) or the history of AYSO and USYSA (competing youth soccer leagues in the US that still both exist - contrast with youth baseball, where I don't believe Little League has any serious rivals). I haven't heard much about destructive competition between rival organizations of this type. Even rival businesses are often remarkably civil towards one another.
I suspect the reason competition between organizations is rarely destructive is because organizations are fighting over mindshare, and acting like a jerk is a good way to lose mindshare. When Google released its Dropbox competitor Google Drive, the CEO of Dropbox could have started saying nasty things about Google's CEO in order to try & discredit Drive. Instead, he cracked a joke. The second response makes me much more favorably inclined toward Dropbox's product.
Vigilantes don't typically think like this. They're not people who were chosen by others to represent an organization. They're people who self-select on the basis of anger. They want revenge. And they often do things that end up discrediting their cause.
The biggest example I can think of re: organizations competing in a nasty way is rival political parties, and I think there are incentives that account for that. Based on what I've read about the details of how Australia's system operates, it seems like Australian politicians face a slightly better set of incentives than American ones. I'd be interested to hear from Australians about whether they think their politicians are less nasty to each other.
Was there a particular case of destructive competition between organizations that you had in mind?
Part of the reason this hasn't been much of a problem is because the EA movement is sufficiently "elitist" to filter out troublemakers during the recruitment stage. (Gleb got through, which is arguably my fault - I'm the person who introduced him to other EAs and told them his organization seemed interesting. Sorry about that.) Better mechanisms for mitigating bad actors who get through mean we can be less paranoid about growth.
Also, it makes sense to set something like this up well before it's needed. If it's formed in response to an existing crisis, it won't have much accumulated moral authority, and it might look like a play on the part of one party or another to create a "neutral" arbiter that favors them.
People in EA have done this a fair amount. I've heard of at least two EAs besides Jeff who have spent significant time looking at the history of social movements, and here is OpenPhil's research into the history of philanthropy. I assume a smart EA-type movement of the future would also do this stuff.
I also think that contributing to society's stock of knowledge about how to organize people is valuable, because groups are rarely set up for the purpose of doing harm and often end up incidentally doing good (e.g. charitable activities of fraternal organizations).
Doesn't seem like that to me. And just because "governance in general" does something doesn't mean we should.
Yeah, and it's unclear. I don't see why it is relevant anyway. I never claimed that creating an EA panel would lead to a political divide between organizations.
We're not paranoid about growth and we're not being deliberately elitist. People won't change their recruiting efforts just because a few people got officially kicked out. When the rubber hits the road on spreading EA, people just busy themselves with their activities, rather than optimizing some complicated function.
Yeah, EA, which is not a typical social movement. I've not heard of others doing this. Hardly any.
Saying that you want to experiment with EA, risking the stability of a(n unusually important) social movement, just because it might benefit random people with unknown intentions who may or may not study our history, is taking it a little far.
Well, most of them are relatively ineffective and most of them don't study histories of social movements. As for the ones that do, they don't look up obscure things such as this. When people spend significant time looking at the history of social movements, they look at large, notable, well documented cases. They will not look at a few people's online actions. There is no shortage of stories of people doing online things at this low level of notability and size.
That's fair.
That's what we did for a year+. The problem didn't go away.
Not much of a problem except the time you wasted going after it. Few people in the outside world knew about InIn; fewer still could have associated it with effective altruism. Even the people on Reddit who dug into his past and harassed him on his fake accounts thought he was just a self-promoting fraud and appeared to pick up nothing about altruism or charity.
I'm done arguing about this, but if you still want an ex post facto solution just to ward off imagined future Glebs, take a moment to go to people in the actual outside world, i.e. people who have experience with social movements outside of this circlejerk, and ask them "hey, I'm a member of a social movement based on charity and altruism. We had someone who associated with our community and did some shady things. So we'd like to create an official review board where Trusted Community Moderators can investigate the actions of people who take part in our community, and then decide whether or not to officially excommunicate them. Could you be so kind as to tell us if this is the awful idea that it sounds like? Thanks."
So here's your proposal for dealing with bad actors in a different comment:
You've found ways to characterize other proposals negatively without explaining how they would concretely lead to bad consequences. I'll note that I can do the same for this proposal - talking to them directly is "rude" and "confrontational", while talking about it to other people is "gossip" if not "backstabbing".
Dealing with bad actors is necessarily going to involve some kind of hostile action, and it's easy to characterize almost any hostile action negatively.
I think the way to approach this topic is to figure out the best way of doing things, then find the framing that will allow us to spend as few weirdness points as possible. I doubt this will be hard, as I don't think this is very weird. I lived in a large student co-op with just a 3-digit number of people, and we had formal meetings with motions and elections and yes, formal expulsions. The Society for Creative Anachronism is about dressing up and pretending you're living in medieval times. Here's their organizational handbook with bylaws. Check out section X, subsection C, subsection 3 where "Expulsion from the SCA" is discussed:
Sure I did. I said it would create unnecessary bureaucracy taking up people's time and it would make judgements and arguments that would start big new controversies where its opinions wouldn't be universally followed. Also, it would look ridiculous to anyone on the outside.
Is it not apparent that other things besides "weirdness points" should be factored into decisionmaking?
You found an organization that excludes people from itself. So what? The question here is about a broad social movement trying to kick people out. If all the roleplayers of the world decided to make a Roleplaying Committee whose job was to ban people from participating in roleplaying, you'd have a point.
That's fair. Here are my responses:
Specialization of labor has a track record of saving people time that goes back millennia. The fact that we have police, whose job it is to deal with crime, means I have to spend a lot less time worrying about crime personally. If we got rid of the police, I predict the amount of crime-related drama would rise. See Steven Pinker on why he's no longer an anarchist.
A respected neutral panel whose job is resolving controversies has a better chance of its opinions being universally followed than people whose participation in a discussion is selected on the basis of anger - especially if the panel is able to get better at mediation over time, through education and experience.
With regard to ridiculousness, I don't think what I'm suggesting is very different than the way lots of groups govern themselves. Right now you're thinking of effective altruism as part of the "movement" reference class, but I suspect in many cases a movement or hobby will have one or more "associations" which form de facto governing bodies. Scouting is a movement. The World Organization of the Scout Movement is an umbrella organization of national Scouting organizations, governed by the World Scout Committee. Chess is a hobby. FIDE is an international organization that governs competitive chess and consists of 185 member federations. One can imagine the creation of an umbrella organization for all the existing EA organizations that served a role similar to these.
I'm feeling frustrated, because it seems like you keep interpreting my statements in a very uncharitable way. In this case, what I meant to communicate was that we should factor in everything besides weirdness points, then factor in weirdness points. Please be assured that I want to do whatever the best thing is, I consider what the best thing is to be an empirical question, and I appreciate quality critical feedback - but not feedback that just drains my energy.
Implementation of my proposal might involve the creation of an "Effective Altruism Association", analogous to the SCA, as I describe here.
Sounds great, but it's only valuable when people can actually specialize. You can't specialize in determining whether somebody's a true EA or not. Being on a committee that does this won't make you wiser or fairer about it. It's a job that's equally doable by people already in the community with their existing skills and their existing job titles.
It's trivially true that the majority opinion is most likely to be followed.
Sure it is. You're suggesting that the FIDE start deciding who's not allowed to play chess.
I don't think the order in which you factor things will make a difference in how the options are eventually ranked, assuming you're being rational. In any case, there are large differences. For one thing, the SCA does not care about how it is perceived by outsiders. The SCA is often rewarded for being weird. The SCA is also not necessarily rational.
Then you're suggesting something far larger and far more comprehensive than anything that I've heard about, which I have no interest in discussing.
I actually think being on a committee helps some on its own, because you know you'll be held accountable for how you do your job. But I expect most of the advantages of a committee to be in (a) identifying people who are wise and fair to serve on it (and yes, I do think some people are wiser and fairer than others), (b) having those people spend a lot of time thinking about the relevant considerations, and (c) overcoming bystander effects and ensuring that there exists some neutral third party to help adjudicate conflicts.
If there's no skill to this sort of thing, why not make decisions by flipping coins?
Well naturally, the committee would be staffed by people who are already in the community, and it would probably not be their full-time job.
Do you really think chess federations will let you continue to play at their events if you cheat or if you're rude/aggressive?
Looking at the links you shared, it looks like these accounts weren't so much "fake" as just new accounts from Gleb that were used for broadcasting/spamming Gleb's book on Reddit. That attracted criticism for the aggressive self-promotion (both for sending it to so many subreddits, and for the self-promotional spin in the message).
The commenters call out angela_theresa for creating a Reddit account just to promote the book. She references an Amazon review, and there is an Amazon review from the same time period by an Angela Hodge (not an InIn contractor). My judgment is that this is a case of genuine appreciation of the book, perhaps encouraged by Gleb's requests for various actions to advance the book. In one of the reviews she mentions that she knows Gleb personally, but says she got a lot out of the book.
At least one other account was created to promote the book, but I haven't been able to determine whether it was an InIn affiliate. Gleb says he
OK, my goal was not to launch accusations; I just wanted to point out that even when people were saying this (they thought they were fake accounts) and looking into his personal info, they didn't say anything about altruism or charity, so the themes behind the content weren't apparent, meaning that there was little or no damage to EA. Because most of the content on the site and book isn't about charity or altruism, it's not clear how well this promotes people to actually donate and stuff, but it can't be very harmful.
Right, I just wanted to diminish uncertainty about the topic and reduce speculation, since it had not been previously mentioned.
Kbog, I think your general mistake on this thread as a whole is assuming a binary between "either we act charitably to people or we ostracise people whenever members of the community feel like outgrouping them". Thus your straw-man characterisation of an
Which was exactly what I disavowed at the bottom of my long comment here.
Examples of why your dichotomy is false: we could have very explicit and contained rules, such as "If you do X, Y or Z then you're out", and this would be different from the generic approach of "if anyone tries to outgroup them then support that effort". Or if we feel that it is too hard to put into a clear list, perhaps we could outsource our decision-making to a small group of trusted "community moderators" who were asked to make decisions about this sort of thing. In any case, these are two I just came up with; the landscape is more nuanced than you're accounting for.
To be more clear, I'm against both (a) witch hunts and (b) formal procedures of evicting people. The fact that one of these things can happen without the other does not eliminate the fact that both of them are still stupid on their own.
As a counterexample to the dichotomy, sure. As something to be implemented... haha, no. The more rules you make up, the more argument there will be over what does or doesn't fall under those rules, what to do with bad actions outside the rules, etc.
Maybe you shouldn't outsource my decision about who is kosher to "trusted community moderators". Why are people not smart enough to figure it out on their own?
And is this supposed to save time, the hundreds of hours that people are bemoaning here? A formal group with formal procedures processing random complaints and documenting them every week takes up at least as much time.
The system of everyone keeping track of everything works OK in small communities, but we're so far above Dunbar's number that I don't think it's viable anymore for us. As you point out, a more formal process wouldn't have time for "processing random complaints and documenting them every week", so they'd need a process for screening out everything but the most serious problems.
Not everyone has to keep track of everything. Everyone just needs to do what they can with their contacts and resources. Political parties are vastly larger than Dunbar's number and they (usually) don't have formal committees designed to purge them of unwanted people. Same goes for just about every social movement that I can think of. Except for churches excommunicating people, of course.
This is the only time that there's been a problem like this where people started calling for a formal process. You have no idea if it actually represents a frequent phenomenon.
Make bureaucracy more efficient by adding more bureaucracy...
The Democrats have the Democratic National Committee, and the Republicans have the Republican National Committee.
Do they kick people out of the party?
More specifically, do they kick people out of "conservatism" and "liberalism"?
In the US, and elsewhere, they use incentives to keep people in line, such as withholding endorsements or party funds, which can lead to people losing their seat, thus effectively kicking them out of the party. See whips for what this looks like in practice. Also, in parliamentary systems, you can often kick people out of the party directly, or at the very least take away their power and position.
Yes, if you're in charge of an organization or resources, you can allocate them and withhold them how you wish. Nothing I said is against that.
In parties and parliaments you can remove people from power. You can't remove people from associating with your movement.
The question here is whether a social movement and philosophy can have a bunch of representatives whose job it is to tell other people's organizations and other people's communities to exclude certain people.
Your party leadership can publicly denounce a person and disinvite them from your party's convention. That amounts to about the same thing.
Quoting myself:
Good question - not really sure; I just meant to directly answer that one question. That being said, social movements have, to varying degrees of success, managed to distance themselves from fringe subsets and problematic actors. How, exactly, one goes about doing this is unknown to me, but I'm sure that it's something that we could (and should) learn from leaders of other movements. Off the top of my head, the example that is most similar to our situation is the expulsion of Ralph Nader from the various movements and groups he was a part of after the Bush election.
The issue in this case is not that he's in the EA community, but that he's trying to act as the EA community's representative to people outside the community who are not well placed to make that judgment themselves.
That's an important distinction, and acting against that (trying to act as the EA community's representative) doesn't automatically mean banning from the movement.
Here are some details on how this post came together: jefftk.com/p/details-behind-the-inin-document
Thank you - this represents a very conscientious follow-up to serious concerns and a very complicated discussion. I appreciate the presentation of considered evidence and the opportunity given for a) members of the community to pool their concerns and b) InIn to give their response.
Gleb, Intentional Insights board meeting, 9/21/16 at 22:05:
"We certainly are an EA meta-charity. We promote effective giving, broadly. We will just do less activities that will try to influence the EA movement itself. This would include things like writing articles for the EA forum about how to do more effective marketing. We will still do some of that, but to a lesser extent because people are right now triggered about Intentional Insights. There's a personalization of hostility associated with Intentional Insights, so we want to decrease some of our visibility in central EA forums, while still doing effective altruism. We are still an effective altruist meta-charity. So focusing more on promoting effective giving to a broad audience."
(https://www.youtube.com/watch?v=WbBqQzM7Rto)
See 53:10-57:30 for discussion of social media.
A questioner asks about the concerns raised about InIn's social media presence. Tsipursky gives the raw numbers for social media including Facebook, Twitter, and Pinterest. He admits to the presence of clickfarms in Facebook likes (although not the massive scale), but denies problems for Twitter and Pinterest while presenting them as good news about social media impact.
He conveys this by saying that the precise mechanism in Facebook is not known to apply to the other channels, failing to mention the evidence regarding them. There is even an exchange with Agnes Vishnevkin about how great it is to have so many Pinterest followers, since there are more women on Pinterest.
This meeting took place Sept 21st, but Tsipursky had been informed about the Twitter and Pinterest problems (lack of engagement, InIn following thousands of people, etc) discussed in the doc in August. He only addressed the Facebook problem mentioned by the questioner, while sweeping problems with the other channels under the rug and strongly implying they were fine.
23:50-25:40 A questioner asks about the controversy with InIn and the EA movement. It is said a few existing and potential donors/pledges withdrew from supporting InIn after the controversy. Also Tsipursky and Vishnevkin say that 2 or 3 people at EA Global had considered 4-figure donations to InIn, and these may have fallen through in light of the subsequent revelations and discussion.
Gleb's problems seem due to important differences in social status instincts. For example, Eliezer once wrote that he doesn't experience the "status smackdown emotions" that other people experience, but he didn't realize it until a lot of people complained that his Harry Potter character comes across as insufferably arrogant to them. Readers wanted to smack down his Harry Potter character, but this possibility did not occur to Eliezer at the time. So Eliezer could not have written a Harry Potter character that people did not want to smack down.
I suspect that, for similar reasons, Gleb did not expect to see a large number of complaints of this nature. He might be having difficulty modeling other people's minds regarding status, so he might find it difficult to relate to the people who have complained.
Some people with social status instinct differences might be described as "status blind". They might not notice status messages at all, they might not make clear distinctions between different statuses, or they might make such detailed distinctions that it becomes impossible to organize the statuses into a hierarchy. This very detailed approach has effects that are totally unlike social status as most people seem to experience it.
Additionally, someone who is status blind might have a very blurry emotional experience of statuses, or they might feel nothing at all. That is to say, status may not feel important to someone who is status blind. Richard Feynman wrote that he "never knows who he is talking to," and this resulted in him starting arguments with geniuses and famous people. Fortunately for Feynman, he was bright enough that he was able to hold his own, and maybe it didn't seem too out of place to others for him to behave that way. I don't know if this example from Feynman is some form of status blindness, but I hope it makes it easier to imagine what status blindness might feel like. For some, I think status blindness feels like always being of equal status no matter who you're talking to.
On many occasions, I have noticed that Gleb didn't seem to mind public feedback. This is very unusual. That can certainly be a strength, but it is part of a double-edged reputation sword. Most people who want feedback get an anonymous form so they can receive it in private. This prevents other people from reading things that make them look bad. Things like this cause me to suspect that, for Gleb, status messages do not have an emotional impact.
For the same reasons, when Gleb makes a status claim, he may not realize it will feel very important to others.
If I am correct that Gleb has a very different experience of social status, this would make promotion very hard for Gleb. It could lead to an outward appearance a little similar to Eliezer's "Arrogance Problem" as described by Luke Muehlhauser. When chatting, Gleb doesn't come across as an arrogant person, but some of his promotional materials do have an element of that. It's mainly when he is trying to promote InIn that I see things really standing out that seem due to differences in status instincts.
I'm sure that nobody here intends to shame Gleb for inherent differences that he may have, and I'm sure nobody intends to behave like an ableist. It seems like what's going on with these group discussions is mainly due to inferential distance. People didn't understand Gleb and Gleb didn't really understand others, because it's complicated and nobody had insight into what the difference is.
I hypothesize that what Gleb needs most is a few good, detailed explanations about how other people perceive statuses. He also needs to know what specifically he can do to "speak the language of status" to communicate effectively, given the way others are going to interpret him. This would help him communicate promotional messages in a way that a broad audience will find both accurate and persuasive, despite the differences in social status experiences. I believe it is very important to Gleb to be able to present Intentional Insights accurately and effectively. To succeed at that, I think Gleb needs to become much more aware of everything having to do with social statuses and how they are perceived by others.
Fortunately, Gleb does take feedback. I think he will improve if he gets explanations that help him really understand the problem and what the solution looks like. I can't be sure what's going on inside of Gleb, of course. I'm not in his head, but I would like to suggest that we all try to be careful and make good distinctions between ignorance and malice.
I see a lot of examples of people investing a lot of energy giving Gleb feedback to no result. What do you think should be done differently that would lead to a different result?
I don't want to shame anyone for things they can't control, but if Gleb does not have the abilities that are necessary for outreach and fundraising, it is correct for him not to do outreach and fundraising. This is in some sense discrimination based on ability, but calling it "behaving like an ableist" seems like a really bad framing to me. First, it frames it as an issue of identity rather than individual actions. It would be more helpful to say "expecting Gleb to X unfairly discriminates on ability" than "expecting X is behaving like an ableist".
Second, "ableist" is a vague word that covers "judging moral worth based on ability", "discrimination based on lack of abilities that have nothing to do with the question at hand", and "different abilities lead to different outcomes". If Gleb doesn't have the abilities to succeed in his chosen field, that is very sad. I mourn for the things I would like to do but lack the ability for. But that does not change the outcome of his actions.
You have a great point that I agree with: if a person is incompetent at a particular task, they should not be doing that particular task (or should learn first rather than making a mess). IMO, Gleb should not write his promotional materials himself and should not be the decision maker regarding methods of promotion (or he should invest the time to learn to do it well first). However, in my view, what Gleb does at Intentional Insights is not merely promotion. That is just the most visible thing that Gleb does. What Gleb actually does at InIn includes a lot of uncommon and valuable abilities, like:
Gleb has a really intense level of dedication to the cause of spreading rationality. Gleb is brave enough to stick his neck out and take a risk while most people are terrified just to speak in front of an audience (though I believe someone else ought to write his speeches; delegating speech writing is common anyway). He is also taking large financial risks in order to make InIn happen, and not everyone can do that. Gleb cares a lot about helping the world and being kind to others and is very dedicated to that. He is educated and knowledgeable as a professor and as a rationalist, though I realize this doesn't show very well in the articles written by some of his writers. In his own articles, the quality is much higher. So, I believe his main quality problem is not that he doesn't understand quality but that his awkward promotion behaviors are repelling the good writers and/or attracting poor ones, so that he is left trying to make the best of it. I've actually seen this repelling effect happening firsthand. I believe that if he proved that Intentional Insights can do promotion well, good writers would want the benefit of being promoted by InIn.
Most importantly, Gleb actually wants the truth, while some "rationalists" are motivated by other things (ego, status, loving to argue, wanting to hang out with smart people, etc.), so they cannot actually practice rationality, nor do such people have any hope of ever spreading rationality. Spreading rationality is ridiculously hard, and it's not something that most dedicated and reality-minded rationalists would do well right away. Someone like Gleb at least has a chance because his motives are in the right place. That is both mission critical for the cause of spreading rationality and not common enough.
I think Gleb could pretty easily upgrade his leadership style to play to his strengths, and then learn enough about things like promotion to delegate what he is weak at effectively. All the successful leaders I've gotten to know are ignorant about a variety of things their organizations do, but delegate those things well. This works surprisingly well. I've seen delegation compensate for some truly hideous areas of incompetence, so I regard delegation as a very powerful strategy. I believe Gleb can learn to use delegation as a sort of reasonable accommodation for the issues that result from social status instinct differences.
Why hasn't Gleb seemed to update on this yet? He is an updater; I've seen it. Maybe you didn't know this, but Gleb has already begun delegating some of the promotional decisions.
I think what he needs to make delegation successful is a better understanding of promotion. Part of the problem may be that "the apple doesn't fall far from the tree", so some of the people that Gleb has attracted and chosen to delegate the promotional decisions to aren't much better at promotion than Gleb is.
The size of the inferential distance in this area is very large, and it wasn't obvious to anyone how to explain across the distance before. I believe that what I wrote in the comment we're responding to is an insightful enough foundation of an explanation that Gleb, myself, and others can build upon to help Gleb become informed enough to succeed at delegating promotional tasks to skilled people.
It's not our responsibility to educate him, of course, but I think there are enough people who are willing to do that, even though it takes time. I think Gleb is willing to spend the time learning. I think that this approach of crossing the inferential distance is worth testing to see whether it succeeds.
Additionally, I'm happy to document my own attempts at explaining to Gleb, and explaining Gleb to others, by placing these explanations here on the forum. Because I am documenting all of this, others in the EA movement with social status instinct differences will have an opportunity to find information which will assist them with self-improvement. Therefore, my efforts, so long as I document them here, are much more valuable than just helping Gleb.
Even if I test my belief that we can cross the distance with Gleb, and my attempt fails, that test result is still valuable information!
I think you're doing the thing shlevy described about being way too charitable to Gleb here. Outside view, the simplest hypothesis that explains essentially everything observed in the original post is that Gleb is an aggressive self-promoter who takes advantage of EA conversational norms to milk the EA community for money and attention.
It might be useful to reflect a little on what being manipulated feels like from the inside. An analogous dynamic in a relationship might be Alice trying very hard to understand why Bob sometimes behaves in ways that make her uncomfortable, hypothesizing that maybe it's because Bob had a difficult childhood and finds it hard to get close to people... all the while ignoring that, on the outside view, the simplest hypothesis that explains all of Bob's behavior is that he is manipulating her into giving him sex and affection. It's in some sense admirable for Alice to try to be charitable about Bob's behavior, but at some point 1) Alice is incentivizing terrible behavior on Bob's part and 2) the personal cost to Alice of putting up with Bob's shit is terrible and she shouldn't have to pay it.
I think Kathy's perspective is probably overly optimistic, and yours is probably overly pessimistic, Qiaochu. There are a lot of grey-area options in between being a scrupulously honest and responsive-to-criticism altruist who just has a poor model of status dynamics, and being an "aggressive self-promoter" who just wants "money and attention". If I were forced to guess, I'd guess what's probably going on is some thought process like:
"I'm convinced that EA outreach has massive potential upside if done well enough, and minimal downside even if done poorly."
"I think I have a lot of good outreach skills and know-how, and while I'm not perfect, I'm sufficiently good at 'updating' and accepting criticism that I'm likely to improve a lot over time."
"Therefore InIn's long-run value is huge no matter how many small hiccups there are at the moment."
"The upside is so large and the need so great that some amount of dishonesty is justified for the greater good. Or, if not dishonesty: emphasizing the good over the bad; not always being fully forthcoming; etc. Not being too stringent about which exact means you use, as long as you aren't literally injuring anyone and as long as the ends are sufficiently good."
All of these claims are questionable in this case: the upside of EA outreach may depend a lot on who we're reaching out to and how; the downside may be substantial (e.g., at least some people have reported thinking EA was terrible because they thought InIn represented it); outreach and updating skills are both lacking; and playing fast and loose with the facts "for the greater good" is a terrible long-run heuristic to follow even if it really is sometimes a good idea from a myopic utility-maximizing perspective. The problem is compounded if not being fully forthcoming with others makes it progressively harder to see the whole truth oneself.
I agree with nearly all of this, and I'm glad to see that you described these things so clearly! The behavior I keep observing in people with social status instinct differences actually matches the four thought patterns you described pretty well (written out below). My more specific explanation is that Gleb models minds differently when status is involved, so he does not guess the same consequences that we do, and because he fails to see the consequences, he cannot total up the potential damage. So, he ends up underestimating the risk and makes different decisions from people who estimate the risk as being much higher. I explained why I chose this explanation over the others using Occam's razor (some of the others are in my written-out response to your numbered thoughts), described what I think would solve this problem in a testable prediction, and linked to the comment where my pessimism is located. I hope my solution idea, my supports for my beliefs, and my pessimism link explain my view better, because I think there is hope for the many people in our social network who have issues similar to what we're seeing with Gleb. This could be valuable, so I really would like to test it. :)
Occam's razor:
It is possible that each of your four points has a completely different cause from the others (I offered a few, Qiaochu offered a few). However, my explanation that Gleb underestimates reputation issues due to social status instinct differences makes fewer assumptions than that, because it explains all four at once. (Explained in "My take on each of your 4 points" below.)
It's possible that Qiaochu_Yuan is correct that Gleb is an aggressive self-promoter, with an intent to take advantage of EA conversational norms, with a goal of milking the EA community for money and attention, and that Gleb intends to be manipulative. Other information I have about Gleb does not match this. He sacrifices a lot of money and financial security for InIn, so if he were motivated by greed, that would be surprising. He is doing charity work, so he seems less likely to have the motivations of a selfish jerk like the one Qiaochu describes. Gleb hates doing fundraising work, which supports my belief that he has a skill-related problem more than it supports Qiaochu's belief that he wants to milk people for money.
Testable Prediction:
I find that Occam's razor helps me select explanations upon which I can build hypotheses that end up testing positive, so I'll present a hypothesis and turn it into a testable prediction.
If my hypothesis is correct, then Gleb would have the chance to succeed if he heard enough descriptions specifying how others go about modeling other people's minds when status is involved, what consequences they guess will happen if specific reputations are applied to InIn, and what quantity of negative/positive impact each specific reputation would result in. To turn it into a testable prediction: if Gleb received this information on every promotion-related idea he was seriously considering for the next three months, I think he'd learn enough to delegate successfully. The changes we'd see are that people would no longer complain about InIn and that InIn would attract good people who were not interested in volunteering there before.
To prevent disaster during the three-month period, perhaps InIn could take a break from most or all promotion-type work, including publishing most/all articles.
My Pessimism Is Located Here:
I can see how I came across as overly optimistic in the comment Qiaochu_Yuan was replying to. My first comment on this post did a much better job of summarizing my overall take on the situation than that one. That one was only intended to explain a much narrower set of thoughts than my overall perspective. I gave Qiaochu a quick sample of my pessimism here:
http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8qt
My take on each of your 4 points:
1.) "I'm convinced that EA outreach has massive potential upside if done well enough, and minimal downside even if done poorly."
My take: People with different social status instincts can have a tendency to drastically underestimate the reputation damage that can be done if outreach is low quality. I think anyone who underestimates the downsides enough would be likely to end up thinking the way you describe in 1.
2.) "I think I have a lot of good outreach skills and know-how, and while I'm not perfect, I'm sufficiently good at 'updating' and accepting criticism that I'm likely to improve a lot over time."
My take: If Gleb believes he is good enough at outreach for now, then this could be the Dunning-Kruger effect, anosognosia, or underestimating the negative impact his imperfections are having. Any of these three reasons would be likely to cause a person to think their skill level is sufficient for now and/or easy enough to improve, when it is not.
3.) "Therefore InIn's long-run value is huge no matter how many small hiccups there are at the moment."
My take: I believe InIn's long-run value will be small or negative if the impacts of reputation risks continue to be underestimated. I think it is unfortunately far too likely that InIn will only end up producing important problems. These may include causing people to feel averse to rationality, confusing people about effective altruism, or drawing the wrong people into the EA movement. The risk of counter-productive results has been far too high for me to offer InIn anything other than things which could help reduce the risk of such problems (like feedback). However, the reason I think InIn's long-run value is likely to be low or negative is that I am not underestimating the impact of InIn's reputation problems the way Gleb is. You and I may be having something like hindsight bias or illusion of transparency here. I think anyone who has a pattern of underestimating reputation problems would be pretty likely to end up believing 3.
4.) "The upside is so large and the need so great that some amount of dishonesty is justified for the greater good. Or, if not dishonesty: emphasizing the good over the bad; not always being fully forthcoming; etc. Not being too stringent about which exact means you use, as long as you aren't literally injuring anyone and as long as the ends are sufficiently good."
My take: I suspect that you probably do not expect Gleb to be deontological about this or use virtue ethics or anything. Instead, I suspect that you would probably require him to meet a much higher standard with his trade-off decisions. To you and me, the negative reputation impact of the behavior you describe in 4 seems large. My reaction to this is to automatically model other people's minds, guess some consequences for this dishonest behavior, and feel disgust. One guess is that people may feel suspicion toward Intentional Insights and regard their rationality teachings with skepticism. That alone could toast all of the value of the organization. Therefore, it is a major reputation disaster which would need to be rectified in a satisfactory manner before we can believe InIn will have a positive impact. Probably, we need to overcome the mind projection fallacy to see why Gleb would think this way. My model of Gleb says the problem is that he models other people differently from the way I do when status is involved, does not guess the same consequences of reputation problems, and this is how he ends up underestimating the impact of reputation disasters. Underestimating the negative impact of dishonesty would, of course, result in Gleb choosing different risk vs. reward trade-offs than we would.
I am actually in favor of a shape-up-or-ship-out policy with stuff like this. I replied to Gregory_Lewis with: "I strongly relate to your concerns about the damage that could be done if InIn does not improve. I have severely limited my own involvement with InIn because of the same things you describe. My largest time contribution by far has been in giving InIn feedback about reputation problems and general quality. A while back, I felt demoralized with the problems, myself, and decided to focus more on other things instead. That Gleb is getting so much attention for these problems right now has potential to be constructive." ... "Perhaps I didn't get the memo, but I don't think we've tried organizing in order to demand specific constructive actions first before talking about shutting down Intentional Insights and/or driving Gleb out of the EA movement."
(Perhaps you didn't read all of my comments because this thread has too many to read, but that one is located here: http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8o8)
One of the main reasons I have hope is that I've given this specific class of problem, social status instinct differences, a lot of thought. I have seen people improve. I think I am able to explain enough to Gleb to help get him on the right track. I have decided to give it a shot. We'll see if it works.
True, I don't have a very good perception of social status instincts. I focus more on the quality of someone's contributions and expertise rather than their status. I despise status games.
Also, there's a basic inference gap between me and the people who perceive InIn and me as being excessively self-promotional. I am trying to break the typical and very unhelpful humility characteristic of do-gooders. See more about this in my piece here.
FWIW, I read quite a bit of the self-promotional stuff as being status-gamey. I expect I'm not all that unusual in this.
That it gets read this way is a challenge here, and indeed a challenge to the general problem of trying to dial back humility re. good deeds. I think some humility about good deeds is instrumentally pretty important for sending the right signals and encouraging others to be attracted to the idea (not of course to the point of keeping them all private).
I observe that people seem to evaluate a very large number of things in terms of status. It's actually ridiculously hard to write something that contains absolutely no status message about anybody whatsoever. If you don't believe me, try writing something that's interesting or useful but does not contain a single line or other element that can be interpreted in terms of status.
Ironically, I think it's the people who are worst at conveying status messages who are most often accused of playing status games. Not to say that you're accusing anyone! I can see that you are not! :)
The people who are very good at making status messages simply receive status. Part of what popular people do is to be smooth enough that most people don't think about the fact that they're even presenting status messages. To be unskilled with status messages is awkward, which attracts attention to the fact that status messages are present.
So, from what I have observed, it seems like the people who are best at actually playing status games are rarely called out for it (even though their skill level suggests that they may, in fact, practice it on purpose!), while the people who are terrible at it can't seem to avoid making status messages altogether, nor manage to consistently craft smooth status messages that don't stick out like a sore thumb.
It makes things a bit confusing for someone who doesn't do status things the stereotypical way. Do you "stop" playing status games so people do not complain? How do you get around the major limitations on expression you'd impose on yourself by being unable to say anything that anyone might possibly interpret as a status message? Do you just swallow the irony, dive in, and intentionally practice playing status games smoothly so that nobody complains to you about status games anymore?
Perhaps you agree about Gleb's intentions, or have no opinion on this, but I just wanted to say that if Gleb appears to be playing status games, he probably isn't very good at actually playing them. This supports Gleb's claim that he hates status games more than any claim that he is playing them. Though I do acknowledge that all you're saying here is that he comes across as playing status games. That is not an accusation. It's feedback. I agree with you.
What I'm curious about is: what do people think Gleb should do? Should he learn to play status games smoothly and in a way that will lead people to believe an accurate view of reality? Should Gleb try to limit himself to expressions that no one will interpret as status messages? Something else?
I agree that Gleb appears to be bad at status games. I don't have a view about whether he is deliberately engaging in them (I'd kind of expect him to be better if he conceived of himself as engaging in them, but I observe that he has generated status among some group of supporters of InIn).
I think he should take a break from EA promotion and try to learn how to do better in this domain, in a way that doesn't take up large slices of time and attention from the EA community. It seems possible that he could come to be a productive member of the community, although I'm a bit pessimistic on the basis of the amount of feedback he has received without apparently fixing the important issues. "Learning to do better" means not necessarily getting very good at status games, but getting good enough to recognise what might be construed as engaging in them, and avoiding that. I also think it's crucial that he moves from a position of trying to avoid saying strictly-false things to trying to avoid saying things that could lead people to take away false impressions.
(Views my own, not my employer's.)
One of the things I'm trying to do, as I noted above, is a meta-move to change the culture of humility about good deeds. I generally have an attitude of trying to be the change that I want to see in the world and leading by example. It's a long-term strategy that has short-term costs, clearly :-)
I understand the long-term goal. I'm claiming that this strategy is actually instrumentally bad for that long-term goal, as it is too widely read as negative (hence reinforcing cultural norms towards humility). More effective would be to embody something which is superior to current cultural norms but will still be seen as positive.
I will think about this further, as I am not in a good space mentally to give this the consideration it deserves.
I think liberating altruists to talk about their accomplishments has potential to be really high value, but I don't think the world is ready for it yet. I think promoting discussions about accomplishments among effective altruists is a great idea. I think if we do that enough, then effective altruists will eventually manage to present that to friends and family members effectively. This is a slow process but I really think word of mouth is the best promotional method for spreading this cultural change outside of EA, at least for now.
I totally agree with you that the world should not shut altruists down for talking about accomplishments; however, we have to make a distinction between what we think people should do and what they are actually going to do.
Also, we cannot simply tell people "You shouldn't shut down altruists for talking about accomplishments," because it takes around 11 repetitions for them to even remember that. One cannot just post a single article and expect everyone to update. Even the most popular authors in our network don't get that level of attention. At best, only a significant minority reads all of what is written by a given author. Only some, not all, of those readers remember all the points. Fewer choose to apply them. Only some of the people applying a thing succeed in making a habit.
Additionally, we currently have no idea how to present this idea to the outside world in a way that is persuasive. That part requires a bunch of testing. So, we could repeat the idea 11 times and succeed at absolutely no change whatsoever. Or we could repeat it 11 times and be ridiculed, succeeding only at causing people to remember that we did something which, to them, made us look ridiculous.
Then there's the fact that the friends of the people who receive our message won't necessarily receive the message too. Friends of our audience members will not understand this cultural element. That makes it very hard for the people in our audience to practice. If audience members can't consistently practice a social habit like sharing altruistic accomplishments with others, they either won't develop the habit in the first place, or the habit will be lost to disuse.
Another thing is that there could be some unexpected obstacle or Chesterton's fence we don't know about yet. Sometimes when you try to change things, you run face-first into something really difficult and confusing. It can take a while to figure out what the heck happened. If we ask others to do something different, we can't be sure we aren't causing those others to run face-first into some weird obstacle... at which point they may just wonder if we have any sense at all, lol. So, this is something that takes a lot of time and care. It takes a lot of paying close attention to look for weird, awkward details that could be a sign of some sort of obstacle. This is another great reason to keep our efforts limited to a small group for now. The small group is a lot more likely to report weird obstacles to us, giving us a chance to do something sensible about it.
Changing a culture is really, really hard. To implement such a cultural change just within a chunk of the EA movement would take a significant amount of time. To get it to spread to all of EA would take a lot of time, and to get it spreading further would take many years.
Unless we one day see good evidence that a lot of people have adopted this cultural change, it's really best to speak for the audience that is actually present, whatever their culture happens to be. Even if we have to bend over backwards while playing contortionist to express our point of view to people, we just have to start by showing them respect no matter what they believe, and do whatever it takes to reach out across inferential distances and get through to them properly. It takes work.
Both of these statements sound right! Most of my theater friends from university (who tended to have very good social instincts) recommend that, to understand why social conventions like this exist, people like us read the "Status" chapter of Keith Johnstone's Impro, which contains this quote:
Emphasis mine. Of course, a large fraction of EA folks and rationalists I've met claim to not be bothered by others bragging about their accomplishments, so I think you're right that promoting these sorts of discussions about accomplishments among other EAs can be a good idea.
This makes sense for spreading the message among EAs, which is why we have the Effective Altruist Accomplishments Facebook group. I'll have to think further about the most effective ways of spreading this message more broadly, as I'm not in a good mental space to think about it right now.
I don't believe you.
EDIT: Comment here was about a video by InIn, where I incorrectly speculated that they might've misused trademarks to signal affiliation with several other EA orgs. At least one of those orgs has confirmed that they did review the video prior to publication, so in fact there was not an issue. I apologize; it was wrong to speculate about that when it wasn't true, and without adequately investigating first.
The video description does say "All the organizations involved in the video reviewed the script and provided a high-resolution copy of their logo. Their collaboration in the production of this video does not imply their specific support for any other organizations involved in the video."
You're right, I missed that. I'll edit the parent post to fix the error.
(Given the history, I'm curious to find out what "reviewed the script and provided a high-resolution copy of their logo" means, and in particular whether they saw the entire script, and therefore knew they were being featured next to InIn, or whether they only reviewed the portion that was about themselves.)
Thanks for this. I volunteer for The Life You Can Save and I am checking in on this for the organization. I will get back to you shortly.
An update from The Life You Can Save: we saw and approved this particular video for publication. We did not check with other non-profits as we assumed that was not our responsibility.
Hope that helps.
Jim, in light of the statement in the video description, I think you should edit this post further to reduce snark based on a questionable hypothesis (and put the edits on top). I think this is also a good example of the value of a careful and cautious approach to these things.
Also, while that pronunciation of "GiveWell" is not the one usually used by GiveWell staff, pronouncing the words separately actually makes it easier to understand.
If the organizations concerned give permission, I am happy to share documentary evidence in my email of them reviewing the script and giving access to their high-quality logo images. I am also happy to share evidence of me running the final video by them and giving them an opportunity to comment on the wording of the description below the video, which some did to help optimize the description to suit their preferences. I would need permission from the orgs before sharing such email evidence, of course.
I am confident this is true.
And at least some of the orgs have been contacted (see Neela's comment) and have the opportunity to disclaim if they wish. [ETA: and have said this was true in their own case, see Neela's second comment.]
I'm half wondering how much of the upset was influenced by a general suspicion of, or aversion to, advertising and persuasion.
From one perspective, it's almost as if Gleb used to be one of the "advertising/persuasion is icky" people, and decided to bite the bullet and just do this thing, even if it seemed whacked out and icky...
At first I thought maybe part of the problem was that Gleb didn't have any vision of how it could be done better. Now, I think it might actually be part of a systemic problem I keep noticing. Our social network generally does not have a clear vision of how it could be done better.
How many of us can easily think of specific strategies to promote InIn that sit well with all of our ethical standards and effectiveness criteria?
If a lot of people here are beginning with the belief that promotion is either icky or ineffective, we have set ourselves up for failure. This may encourage us to behave as if one either needs to accept being ineffective, or needs to allow oneself to be icky... which may result in choosing whichever things appear to be the icky-effective ones.
I think effective altruism can have both ethics and effectiveness at the same time. I do not believe there is actually a trade-off where choosing one necessarily sacrifices the other. I believe there are probably even ways in which one can enhance and build on the other.
I keep thinking that it would really benefit the whole movement if more people became more aware of what sorts of things result in disasters and how to promote things well. This is another way that such awareness could be beneficial.
Huh, this is a good point. Having a clear sense of what to do with advertising (both within the community and without) would be really helpful.
In 5.3. Twitter:
The question asked of Gleb is "How many of those are payed [sic] and how many organic?"
I double-checked, and some Internet sources define the term "organic" as "unpaid". Following other accounts that will, in turn, follow your account is not the same thing as giving people money to follow you. I understand that this question was intended to inquire about how many Twitter followers actually genuinely want to follow the Intentional Insights account. This is a perfectly valid question.
What I'm saying is that the 5.3 Twitter section can be misinterpreted. People might think it means "Gleb was asked how many real followers he had and he misled the person," when what really happened, as I see it, is that Gleb was asked how many of his followers he paid money to in exchange for their follow.
If the 5.3 section used different wording / presentation, I think it would depict the situation more accurately.
I appreciate the huge amount of work it must have taken to put this post together. Nothing is perfect, and it's hard to edit out every single flaw in something this long.
My stance is currently that Gleb most likely has a learning disorder (perhaps he is on the spectrum) and is also ignorant about marketing, resulting in a low skill level with promotion. Some people here are claiming things that make it seem like they believe Gleb intends to do something bad, like a con. It's also possible Gleb was following, to the letter, marketing instructions written by people who are less scrupulous than most EAs (perhaps because he thought it was necessary to follow such instructions to be effective). I wouldn't be surprised if Gleb perceived what he was doing as "white lies" (thinking that there would be a strong net positive impact). It's also possible that some of these were ordinary mistakes (though probably not all of them, because there are a lot).
I'd like to discover why people believe things like "this is a con" and see whether I change my mind or not. Anyone up for that?
I don't care if it is intentionally a con or not. Given that cons exist, the EA community needs an immune system that will reject them. The immune system has to respond to behavior, not intentions, because behavior is all we can see, and because good intentions are not protection from the effects of behavior.
I no longer believe things Gleb says. In the Facebook thread he made numerous statements that turned out to be fundamentally misleading. Maybe he wasn't intentionally lying; I don't know, I'm not psychic. But the immune system needs to reject people when the things they say turn out to be consistently misleading and a certain number of attempts to correct fail.
I don't think everyone needs to draw the line in the same place; I approve of people helping others after some people have given up on them as a category, even if I think it's not going to work in this case. But before you invest, I encourage you to write out what would make you give up. It can't be "he admits he's a scam artist", because scam artists won't do that, and because that may not be the problem. What amount of work, lack of improvement from him, and negative effects from his work and interactions would convince you helping was no longer worth your time?
These are some really strong arguments, Elizabeth. This has a good chance to change my mind. I don't know whether I agree or disagree with you yet because I prefer to sleep on it when I might update about something important (certain processing tasks happen during sleep). I do know that you have made me think. It looks like the crux of a disagreement, if we have one, would be between one or both of the first two arguments vs. the third argument:
1.) EA needs a set of rules which cannot be gamed by con artists.
2.) EA needs a set of rules which prevent us from being seen as affiliated with con artists.
vs.
3.) Let's not ban people and organizations who have good intentions.
A possible compromise between people on different sides would be:
Previously, there had been no rule about this. (Correct me if I'm wrong about this!) Therefore, we cannot say InIn had broken any rule. Let's make a rule to limit dishonesty and misleading mistakes to a certain number in a certain time period / number of promotional pieces / volunteers / whatever. *
If InIn breaks the new rule after it is made, then we'll both agree they should be banned.
If you think they should be banned right now, whether there was an existing rule or not, please tell me why.
* Specifying a time period or whatever would prevent discrimination against the oldest, most prolific, or largest organizations simply because they made a greater total number of mistakes due to having a greater volume of output.
The ratio between mistakes and output seems really important to me. Thirty mistakes in ninety articles is really egregious because that's a third. Three mistakes in three hundred articles is only 1%, which is about as close to perfection as one can expect humans to get.
Comparing 1/3 vs. 1/100 is comparing apples to oranges.
I'm not sure what the best limit is, but I hope you can see why I think this is an important factor. Maybe this was obvious to everyone who may read this comment. If so, I apologize for over-explaining!
I have a bunch of different unorganized thoughts on this.
One, the absolute number is obviously the incorrect thing to use. A ratio is an improvement, but I feel it loses a lot of information. "Better wrong than vague" is a valuable community norm, and how people respond to criticism and new information is more important than whether they were initially correct. It also matters how public and formal the statement was: an article published in a mainstream publication is different from spitballing on Tumblr.
I'm unsure what you mean by "ban". There is no governing body or defined EA group. There are people clustering around particular things. I think banning him from the FB group should be based on the expected quality of his contribution to the FB group, incorporating information from his writing elsewhere. Whether people give him money should depend on their judgement about how well the money will be used. Whether he attends or speaks at EAG should be based on his expected contribution. None of these are independent, but they can have different answers.
I don't think any hard and fast rule would work, even if there were a body to choose and enforce it, because anything can be gamed.
What I want is for people to feel free to make mistakes, and other people to feel free to express concerns, and for proportionate responses to occur if the concerns aren't addressed. I think immune system is exactly the right metaphor. If a foreign particle enters your body, a lot of different immune molecules inspect it. Most will pass it by. Maybe one or two notice a concern. They attach to it and alert other immune molecules that they should maybe be concerned. This may go nowhere, or it may cause a cascading reaction targeting the specific foreign particle. If a lot of foreign particles show up, you may get an organ-wide reaction (runny nose) or a whole-body one (fever). The system coordinates without a coordinator.
Every time an individual talked to Gleb privately (which I'm told happened a lot), that was the first bout of the immune system. Then, when people complained publicly about specific things in specific posts here, on LessWrong, or on FB, that was the next step. I view the massive Facebook thread and public letter as system-wide responses necessary only because he did not adjust his behavior after the smaller steps. (Yes, he said he would, and yes, small things changed in the moment, but he kept making the same mistakes.) Even now, I don't think you should be "banned" from helping him, if you're making an informed choice. You're an individual and you get to decide where your energy goes.
I do want to see changes in our immune system going forward. There is something of a halo effect around the big organizations, and I would like to see them criticized more often, and be more responsive to that criticism. Ben Hoffman's series on GiveWell is exactly the kind of thing we need more of. I'd also like to see us be less rigorous in evaluating very new organizations, because it discourages people from trying new things. I've been guilty of this: I was pretty hard on Charity Science originally, and I still don't think their fundraising was particularly effective, but they grew into Charity Entrepreneurship, which looks incredible.
I don't think the consequences of Gleb's actions should wait until there is a formal rule and he has had sufficient time to shoot himself in the foot, for a lot of reasons. One, I don't think a formal rule and enforcement is possible. Two, I think the information he has been receiving for over a year should have been sufficient to produce acceptable behavior, so the chances he actually improves are quite small. Three, I think he is doing harm now, and I want to reduce that as quickly as possible.
I realize the lack of a hard and fast rule is harder for some people than for others, e.g. people on the autism spectrum. That's sad and unfair and I wish it weren't true. But as a community we're objectively very welcoming to people on the spectrum, far more so than most, and in this particular case I think the costs of being more accommodating would outweigh the benefits.
There isn't currently one, but Will is proposing setting up a panel: Setting Community Norms and Values: A response to the InIn Open Letter.
The panel wouldn't have any direct power, but it would "assess potential egregious violations of those principles, and make recommendations to the community on the basis of that assessment."
I'm glad we agree that the absolute number of mistakes is obviously an incorrect thing to use. :) I like your addition of "better wrong than vague" (though I am not sure exactly how you would go about implementing it as part of an assessment beyond "if they're always vague, be suspicious", which doesn't seem actionable).
Considering how people respond to criticism is important for at least two reasons. If you can communicate with the person, and they can change, this is far less frustrating and far less risky. A person you cannot figure out how to communicate with, or who does not know how to change the particular flaw, will not be able to reduce frustration or risk fast enough. People are going to lose their patience or total up the cost-benefit ratio and decide that it's too likely to be a net negative. This is totally understandable and totally reasonable.
I think the reason we don't seem to have the exact same thoughts on that is my main goal in life: understanding how people work. This has included tasks like challenging myself to figure out how to communicate with people when that is very hard, and challenging myself to figure out how to change things about myself even when that is very hard. By practicing on challenging communication tasks, and learning more about how human minds may work through my self-experiments, I have improved both my ability to communicate and my ability to understand the nature of conflicts between people and other people-related problems.
I think a lot of people reading these comments do feel bad for Gleb, or do acknowledge that some potential will be lost if EA rejects InIn, despite the high risk that their reputation problems may result in a net negative impact.
Perhaps the real crux of our apparent disagreement is something more like differing levels of determination / ability to communicate about problems and persuade people like Gleb to make all the specific necessary changes.
The way some appear to be seeing this is: "The community is fed up with InIn. Therefore, let's take the opportunity to oust them."
The way I appear to be seeing this is: "The community is fed up with InIn. Therefore, let's take the opportunity to persuade InIn to believe they need to do enough 2-way communication to understand how others think about reputation and promotion."
Part of this is because I think Gleb's ignorance about reputation and marketing is so deep that he didn't see a need to spend a significant amount of time learning about these. Perhaps he is/was unaware of how much there is for him to learn. If someone could just convince him that there is a lot he needs to learn, he would be likely to make decisions comparable to: taking a break from promotion while he learns, granting someone knowledgeable veto power over all promotion efforts that aren't good enough, or hiring an expert and following all their advice.
(You presented a lot more worthwhile thoughts in your comment and I wish I could reply intelligibly to them all, but unfortunately, I don't have the time to do all of these thoughts justice right now.)
Just a thought on the big picture: EAs have tended to be more comfortable with EAs doing things that many would consider unethical (like being a lawyer or banker) as long as those people use their money or influence for the greater good. But here it appears that EAs want to hold other EAs to higher ethical standards than society does. I understand that this is not a great analogy because an EA organization (especially an outreach one) gets more scrutiny. Still, I think that marketing to a broad audience almost implies a certain amount of exaggeration in order to be competitive. And even though that makes many EAs (myself included) uncomfortable, might it be for the greater good?
My sense is that honest and accurate evaluation of opportunities to do good, and high standards that enable that, has been a core value of EA.
I disagree that exaggeration is more effective in broad outreach; e.g., GiveWell's reputation for honesty and care was central to letting it reach its current large scale (and its astroturfing scandal hurt badly because of that).
Accurate communication tends to work better for things that actually are better, and thus has good incentive properties as a standard.
In any case, the focus in the document is mostly on InIn's interactions with the EA community rather than the general public, and it was precipitated by InIn's self-promotion and fundraising directed at the EA community.
Thinking people are sometimes mistaken about how they assess different impacts of a job (e.g. most jobs result in increased carbon emissions, pay for the employee, consumer surplus) is not the same as lower ethical standards.
Fair enough - just thought I would ask.
Note: I will make separate responses as my original comment was too long for the system to handle. This is part two of my comments.
Some of you will be tempted to just downvote this comment because I wrote it. I want you to think about whether that's the best thing to do for the sake of transparency. If this post gets significant downvotes and is invisible, I'll be happy to post it as a separate EA Forum post. If that's what you want, please go ahead and downvote.
I disagree with other aspects of the post.
1) For instance, the points about affiliation, of which there were 2 substantial ones, about GWWC and ACE (I noted earlier it was a mistake to post about the conversation with Kerry).
A) After Michelle Hutchinson sent the email, we changed the wording to be very clear regarding what we mean, stating that we engaged in "collaboration with Against Malaria Foundation, GiveDirectly, The Life You Can Save, GiveWell, Animal Charity Evaluators, Giving What We Can, and others about them providing us with numbers of clicks and donations that they can trace to our article": see link
In other words, to prevent any semantic and philosophical discussion about the meaning of the term "collaboration," we gave a very specific and clear statement about the nature of the collaboration at hand to prevent folks from getting confused about what it means. I am very comfortable standing by this statement.
B) Leah's words were not in any way indicative of a formal endorsement of InIn, nor did we claim they were. They were just a statement of the kind of positive impact that InIn had for ACE. And in fact, we did ask Leah about quoting her in our internal documentation, which is where this information is located (our internal document about our EA impact): see image
2) The claims about astroturfing are way out of line: by comparing it to what GiveWell did, the authors are creating a harsh horns effect (smearing by association, in other words). For context, GiveWell's senior staff, on their paid time as employees, went to forums where donations were discussed, and made up fake names to pose as forum members singing the praises of GiveWell. I and many other folks were very disappointed upon finding out what GiveWell did, although I appreciate the way they handled it. I would never want to do anything of the sort.
So let's compare it to InIn. What the authors of this document point to are instances of InIn volunteers and volunteer/contractors, on their own non-paid time, without any direction from the leadership, and using their real names, engaging with InIn content and posting mostly supportive messages, although with some criticism as well. They did not at all try to hide their identity, nor did they do so on paid time, as did GiveWell employees. We pay people only for specific things, such as doing video editing or social media management, and our minuscule budget does not cover low-impact and unethical activities such as the kind of thing done by GiveWell employees in the past.
I do not control what our volunteers or volunteer-contractors do on their non-paid time. I don't have time to monitor all that our volunteers do, and I generally leave it up to them to figure out, as I have an attitude of trust and faith in them. Volunteer management is a delicate balance, as anyone who has actually managed volunteers knows. So I only intervene when I hear about problems; otherwise I focus on more high-impact activities such as actually doing the work of outreach to a broad audience that makes a difference in improving the world. When folks engaged in things that got pushback, such as posting on Less Wrong without sharing their role with InIn in their introduction statements, I asked them politely, in one-on-one conversations, to revise their introduction statements.
Now, since this blowup, I have had a thorough conversation with the Board of Directors and our Advisory Board, and we decided to institute a more formal Conflict of Interest policy. We decided it would be appropriate to have a systematic policy that applies to anyone with an official position in the organization, meaning holding an office or being paid. Hopefully this will help guide people's behavior in a way that results in appropriate disclosures. However, we anticipate it will take some time to shift behavior and not everything will go right. You are welcome to point out to me any instances where there's an issue, and I'll talk to the person who engaged in problematic behavior.
3) I'm not sure why the volunteer/contractor arrangement is listed as a dubious practice. All people who are contractors started off by being volunteers. Over time, as we had a need for more work being done, we approached some volunteers who we knew already had a background as contractors on Odesk to do some part-time work for the organization. You can see the screenshots with my description for more details.
It is very common for nonprofit organizations to offer part-time work to people who volunteer for them. This is how many other EA organizations besides InIn got started: with volunteers who then went on to do some part-time work. Eventually, these organizations became large enough to have full-time employees, and we'd like InIn to get there eventually.
Some folks expressed disbelief that the volunteer/contractors are really there because they support the mission, and instead believe that they are just there for the money. Well, that's simply not the case. Let's take the example of Ella, who in October 2015, in response to a fundraising email, made a $10/month donation: see image. She voluntarily, out of her own desire, chose to make this donation. Let me repeat: she voluntarily, of her own volition, in response to a fundraising call that went out to all of our supporters, chose to make this donation. Just to be clear, we send out fundraising letters regularly, so it's not like this was some special occasion. She did not have to do it; it's just something she wanted to do of her own volition.
Nor is this in any way an explicit or implicit obligation for contractors: about half of the contractors are also donors, and the others are not. I value, respect, and treasure Ella and all the other contractors; they are a great team and I feel close to them. We have a family environment in the organization, and care about and support each other. It makes me very upset and frustrated to see the relationship between us described in this twisted way as a "dubious practice."
4) The claims about it being bad to call oneself a best-selling author if one did not do it through making the New York Times best-seller list are silly. There are many best-seller lists, and authors who make it to the top of any list describe themselves as best-selling authors: see link. The document makes it seem like I'm not following standard author practice here, and that's simply false.
5) The claims about not disclosing paid support are not backed up by any real evidence. I said that I ran the t-shirts by multiple people. Sure, some of them were volunteer/âcontractors for InIn. Does that fact cause them to not count as âpeople?â Wouldnât they be more likely to want higher-quality products so the organization succeeds more? In fact, they gave some of the more stringent criticism of the initial design, because they are more invested in the success of the design.
6) Regarding the Huffington Post piece, the person â Jeff Boxell â did not hear of effective giving before. Now he did, and he intends to use GiveWell and TLYCS as the guide for his donations. I am very comfortable standing by that claim.
P. S. Based on past experience, I learned that back and forth online about this will not be productive, so I did not plan to engage with, and if someone wants to learn more about my perspective, they are welcome to contact me privately by my email.
Also, worrying about the acceptability of policies towards contractors and volunteers acting of their own free choice, for a movement that is all about the consequentialist big picture, is a red herring.
He claims to have 1000+ hours per week (25 people full-time-equivalent) of volunteers and contractors working on InIn projects, with very little to show for it in terms of output.
When you read comments by contractors/volunteers, even longtime ones, they don't show anywhere near the understanding of InIn material you would expect from people spending this much time reading InIn writing. Examples: John, Beatrice, Cha.
InIn appears to have developed a culture where, whenever Gleb posts something, it's expected that members will show up to comment with vacuous praise.
I'm not convinced that the contractors are acting on their own, as opposed to because Gleb is paying them or because they hope to be paid in the future, even for things that are nominally unpaid.
Doesn't change the point of my post.
Whether they're paid or not is beside the point.
Let's take a pair of examples:
1) Person A respects person B deeply, reads everything B writes, upvotes B's posts, comments to say how insightful they find B's writing, etc.
2) Person C is an employee of person D and is paid to read everything D writes, upvote D's posts, comment to say how insightful they find D's writing, etc.
Person B's actions are fine, person A's actions are fine but maybe annoying, person C's actions are kind of scummy, and person D's actions are very scummy.
Sometimes people do things because they want them to happen, sometimes they do things because someone else is paying them to, and sometimes it's in between: it's a continuum. Situation (1) is the sort of thing you expect at the unpaid end, (2) at the paid end.
I don't think those are good actions. I was just talking about whether he was treating them appropriately. The post implied that people were not being paid enough. I'm using the same reasoning as in GWWC's position on fair trade.
It sounds like I misunderstood your objection. Are you saying that if InIn had an explicit rule like "we pay 1/3 of the Upwork minimum wage, but we cast this as a 2:1 volunteering:working policy in order to get around their requirements" you would be fine with it? The idea being that minimum wages are harmful because they keep people from making mutually beneficial exchanges?
So, first, I think EA organizations should pay at least the legal minimum wage as part of a general work-within-the-law approach. Here we're talking about an Upwork policy, though, which is weaker than a law, and it's more debatable whether to violate it. But if it were just that, I agree this piece of things would be much more minor. The problem is that Gleb is insisting that this is not what's going on, and that all the unpaid work is fully voluntary. And further, that actions they take in their allegedly fully-voluntary time shouldn't be attributable at all to Gleb/InIn.
Note: I will make separate responses, as my original comment was too long for the system to handle. This is part three of my comments.
Now that we have gotten through the specifics, let me share my concerns with this document.
1) This document is a wonderful testament to bikeshedding, motte-and-bailey, and confirmation bias.
It's an example of bikeshedding because the much larger underlying concerns are quite different from the relatively trivial things brought up in this document: see link
Consider the disclosures. Heck, even one of the authors of this document, who is a GiveWell employee, has very recently engaged in the same kind of non-disclosure of her official employment, despite GiveWell having a clear disclosures policy and being triply careful after having gotten in trouble for actual astroturfing before: see image The same goes for one of the core issues throughout this whole thread, of InIn volunteer/contractors engaging with InIn content on Facebook through likes/shares. This is something that is widely done within the EA sphere. Why does the disclosures part of the document not list people's actual motivations and beliefs that led them to write this document?
A) For example, let's take the originator of the thread, Jeff Kaufman. He stated that his real concerns were not with the engagement by contractors, the original topic of his post, but with the nature of the content: see image
Now, I responded here: see image with Jeff not raising points in response. This is a classic motte-and-bailey situation: making a strong claim, and then backing away to a narrower one after being called out on it: see link
This is similar to the concerns that Gregory Lewis raised in his comments in response to the post.
B) Let's consider Oliver Habryka's real concerns, about how I personally made the experience of nearly everyone in EA worse: see image Gregory Lewis says something similar, but not quite as strong, in his comments here
The claim that the experience of nearly everyone in EA is worse because of me is highly questionable, as a number of EAs have upvoted the following comments supportive of InIn/myself: see image or this comment: see image or this comment: see image
I find it hard to fathom how Oliver can say what he said, as all three comments and the upvotes happened before Oliver's comment. This is a clear case of confirmation bias, twisting the evidence to make it agree with one's pre-formed conclusion: see link To me, Oliver is now fundamentally discredited, either as someone with integrity or as someone who has a good grasp of the mood and dynamics of EAs overall, despite being a central figure in the EA movement and a CEA staff member.
2) This document engages in unethical disclosures of my private messages with others.
When I corresponded with Michelle, I did so in my capacity as a member of GWWC and the head of another EA organization. I was neither asked for, nor did I implicitly give, permission for my personal email exchange to be disclosed publicly. In other words, it was done without my permission, in an explicit attempt to damage InIn.
After this situation, both with Michelle and with Oliver, how can anyone trust CEA and its arm GWWC not to use private emails against them in the future, should they want to damage them after some potential disagreement? And it's not as though I was accusing CEA/GWWC/Michelle of anything they were trying to defend themselves against. This is a purely aggressive, not defensive, use of emails. It's especially ironic in a document where I received criticism for sharing my impressions of a phone call, something I later acknowledged was inappropriate to do, but which was done in the heat of the moment and in no way intended to damage the other person.
Now, I do not know if Michelle herself provided the email, or if Oliver found it through his access to CEA email servers, or if it ended up in the document through other means. Regardless, it had to come from CEA staff. Why would CEA/GWWC permit its staff to use confidential access to information they have only as CEA staff to criticize a nonprofit whose mission is at least somewhat competing, as Vipul Naik pointed out: see image? How is CEA/GWWC going to be perceived as a result of this? What is the reputation cost there?
I bet some of you might be hating on me right now for pointing this out. Well, you're welcome to commit the cognitive bias of shooting the messenger, but I am simply pointing out the reality of the situation. In fact, I am using only publicly available statements, and am not revealing in my comment any of the reputationally damaging information I have in personal communications, despite the fact that my own personal communications with CEA staff have been revealed publicly by CEA staff. I do not consider it ethical to share personal exchanges with others in a way that damages those people. I hope we as a movement can condemn this practice.
3) I am incredibly frustrated by all the time and resources, and therefore money, and therefore lives, that this episode cost. And for what? For finding out about our volunteer/contractors doing social media engagement on their volunteer time? For finding out about my use of the term best-selling author, a standard practice even if I didn't make the NYT best-seller list? I very much appreciate the information about Facebook boosting being problematic, and the other things I pointed out as correct in my comments, but this could have been done in a way that didn't drain so much money, time, lives, reputation, and other costs. This is a black mark in the history of the EA movement.
4) And speaking of black marks, how many people were driven away from the EA movement because of this? Many people approached me about how this caused them to be alienated from the EA movement. I asked, and one of them allowed me to share his comments and impressions publicly: see image
5) Building on the last point, this episode is a classic example of the "Founder effect" that plagues the EA movement: see link The authors and their supporters are trying to drive away people who share their goals and aspirations but are somewhat different in their methods. The result of such activities is an evaporative cooling effect, where only those who share the same methods remain part of the movement. This results in the movement being increasingly limited to a small demographic category. And it's not as though InIn is trying to bring people into the movement; we are trying to spread the ideas of effective giving in an effective manner. Such intolerance is deeply damaging to the movement as a whole.
6) The document and the episode as a whole reveal a fundamental misunderstanding of human nature among the authors. What if I were a different person than I am? What if this drove me to break away from the EA movement and criticize it publicly? Emotions have a funny way of reversing themselves sometimes when people feel rejected. Think of a bad breakup you might have had. How much damage do you think would be done in that case to the movement itself, considering the media sway that InIn has? We regularly appear on TV, on radio, and in prominent public venues. Why would the authors of this document risk such damage? Now, there's certainly a part of me that wants to do it, but fortunately I am enough of an aspiring rationalist to recognize that this is an emotion that will pass, and I am not overwhelmed by it. What if I were not?
To conclude:
I don't expect that those who start from an established conclusion that "InIn is bad" or "Gleb is bad" or something in that style will update. Many have already committed themselves to this belief, and for the sake of consistency they will hold on to it.
Just keep in mind the deep damage done by this episode, and consider focusing on how to strengthen others aligned with your goals, not bring them down and drive them away. When conflicts within the EA movement grow personal and irrational ("Gleb made the experience of almost all EAs significantly worse"), this tears apart the movement as a whole.
While still enthusiastic about the ideas of EA, and excited to work with many people in the movement, I am deeply disappointed in some EAs at the higher levels of the movement. For the sake of my own mental health, I have been taking a break from the EA Forum, and to a large extent from the main EA Facebook group as well. I will continue to be happy to work with those people who want to build up, create, and actually do as much good as they can to spread effective giving ideas broadly. Contact me by email if interested, as I anticipate I won't be checking the Forum much: gleb (at) intentionalinsights (dot) org
Finally, let's get on with it. Let's orient toward leaving this in the past, learning from it, avoiding doing anything like this again, and trying to work together to do the most good that we can, even if we may disagree somewhat on the best ways of accomplishing these goals.
P.S. Based on past experience, I have learned that back and forth online about this will not be productive, so I do not plan to engage further; if someone wants to learn more about my perspective, they are welcome to contact me privately by email.
Regarding point #2, Gleb writes above:
Here is the entirety of section 1.2, which does not cite or quote any statement from Gleb's email to Michelle, but rather cites Michelle regarding her own statements:
I had emailed GWWC after seeing it mentioned as a collaborator in InIn promotional documents, inquiring as to whether this had been with the knowledge or consent of GWWC. Michelle replied that it had not been, and explained that to the contrary she had previously made the request quoted. I then asked whether I could cite her, to which she replied affirmatively.
GWWC allowed citation of its own statements regarding the use of its own name in promotional materials against its organizational objection, in response to my question.
Additionally, Gleb has done himself exactly what he's accusing Michelle of doing! In a comment in the megathread from August, he included a screenshot (archived copy) of an email I had sent him.
I have indeed shared private emails when I have been improperly accused of something and doing so was the only way to address the accusation. I did the same in my statement above with regard to Leah's email allowing me to share her comment on how InIn helped ACE. I have never done so as an aggressive move to defame someone.
I am confused by how you believe that citing words from an email written by Michelle Hutchinson to me, without my consent to the email being cited, does not constitute disclosure of a private email exchange. The specific method by which this citation got out doesn't matter; what matters is that it happened.
Gleb, there is a social norm that things one says in private email will not be publicized without consent. In the document, quotes attributed to you from private messages are only included where you have been asked for consent, it has been given, and you have had the opportunity to review prior to publication.
The same expectation does not apply to you vetoing Michelle's statements about what she said (not what you said).
Carl, I guess we have a basic disagreement about the ethics of this. I think it is unethical to disclose any aspect of the exchange without the consent of the other person. You believe it is appropriate to disclose one's own side of the exchange without the consent of the other person. We can let other people make up their minds about what they consider ethical.
Indeed. However, I will note that my understanding (based on experience, analogy to law, and some web searching) is that my view is standard, while yours is not.
I'd be curious to learn more about the analogy to law, so that I can update. Perhaps you can post some links here for the basis of your perspective?
No "exchange" has been disclosed. Michelle has disclosed her own words and the fact that she said them to you. Are you claiming people cannot report their own speech without the permission of their audience?
I am claiming that it is highly problematic ethically to disclose private email exchanges in order to damage other people, absent an accusation against oneself that can be rectified only through disclosing those exchanges. I am comfortable standing by that statement.
Regarding Gleb's point #1, I would like to agree in particular that harsh hyperbole like "Gleb made the experience of almost all EAs significantly worse" is objectionable, and Oliver should not have used it.
It's also worth signal-boosting and reiterating to all commenters on this thread that public criticism on the internet, particularly with many critics and one or a few people being criticized, is very stressful, and people should be mindful of that and empathize with Gleb's difficult situation. I will also add that my belief is that Gleb is genuinely interested in doing good, and that one can keep this in mind even while addressing recurring problems. And further, people should address individual and organization-specific issues separately from the general issue of popularization.
Regarding the point, or lack thereof, of the document: I agree that this exercise has been costly in several ways. I have been personally frustrated at spending so much time on it at the expense of valuable work, and dislike getting involved in such a controversy. I don't think the document will instantly solve all problems with InIn and its relationship to EA. However, it documents a pattern of aggressive self-promotion and self-favoring errors, including on indicators used to appeal to and fundraise from the EA community, that is relevant to a number of EAs as individuals.
Jeff's August post was triggered by discussions about InIn at EA Global, where InIn was fundraising, and much of the document (and in particular the parts I worked on most) addresses claims and connotations from documents making the case for the EA impact of InIn. In particular, sections 1, 2.4.1, 3, 4.2, and 5 speak to the reliability of InIn's claims of impact. Section 7 explicitly rejects previous false hypotheses considered as possibly relevant to that, and most of 4.1.2 is in the same vein.
"Regarding Gleb's point #1, I would like to agree in particular that harsh hyperbole like 'Gleb made the experience of almost all EAs significantly worse' is objectionable, and Oliver should not have used it."
I agree, and am aware that I tend towards hyperbole in discourse in general. I apologize for that tendency, and am working on finding a communication style that successfully communicates all the aspects of a message that I want to convey, without distorting the accuracy of its denotative message. I am sorry for both the potentially false implications of using such hyperbole, as well as the negative contribution to the conversational climate.
Replacing the fairly vague and somewhat hyperbolic "almost all" with a more precise "about 70-90%" seems like a strict improvement, and I think captures my view on this more correctly. I do think that something in the 70-90% range is accurate, and this mostly leaves the core of the argument intact (though I do still think that the kind of hyperbole I am prone to use creates an unnecessarily adversarial conversational style, which generally isn't very productive).
I have more or less two kinds of concerns:
Gleb/InIn acting unethically, overstating impact, manufacturing the illusion of support
InIn content turning people off of EA and EA ideas by presenting them badly
While I think the second category is more serious, the first category is much easier to document and communicate. And, crucially, the concerns in the first category are bad enough that we can just focus there.
When I originally started writing this document I included quite a bit about my concerns in the second category, as you can see in this early draft. Carl and Gregory convinced me that we should instead focus just on the first category.
(Also, the section of conversation you cite doesn't show that I didn't care about the first category, just that I thought the second category was even more serious.)
Jeff, in your comments above you describe yourself as having two kinds of concerns, with the ones about the content being more serious. However, in your comments here, you describe your "primary concern" as the nature of the content. I am now not sure about your actual position.
So question 1: Were you revealing your true beliefs earlier or now?
I also want to point out what you said above
This seems to me a classic example of bikeshedding (not focusing on the "primary concern") and motte-and-bailey (defending a much narrower but stronger position after making initially grand but indefensible claims).
So question 2: Do you agree or disagree that these are examples of bikeshedding and motte-and-bailey?
I hope Jeff will forgive me for answering this comment on his behalf, and Gleb will forgive me for ceasing to pretend he is asking in good faith, rather than engaging in risible mudslinging in a misguided attempt at a damage-limitation exercise (I particularly like the catty "Are you revealing your true beliefs earlier or now?", setting an example for aspiring rationalists on how to garb their passive-aggressiveness with the appropriate verbiage).
Jeff notes here, and in what you link, that there are two broad families of concerns: 1) your product is awful, and 2) your grasping duplicity in self-promotion, among the other problems illustrated above. He says that although [1] is actually the biggest problem, [2] is what he decided to focus on; he also notes in another comment that he was happy to talk about [1], but was persuaded otherwise by Carl and me.
BTW, as both his comments agree on [1] being the big problem, there's no inconsistency. I aver that even had Jeff been inconsistent, Gleb's uncharitable "were you lying then or now?" is a much meaner measure than Jeff would have dealt to him had the tables been turned; we'd have been keen to note the possibility of one sincerely changing one's mind, etc. etc.
Given your cargo-cult understanding of most concepts, or perhaps more likely your propensity to misuse them in some slanderous hail-mary when the facts plainly aren't on your side, it is unsurprising you misuse both motte-and-bailey and bikeshedding.
It would only be motte-and-bailey if [1] were some logical extension of [2], and [2] was retreated to in the face of criticism. So: "Gleb lies about literally everything about InIn" would be a potential bailey, and "Gleb told some half-truths about topic X and nothing else" a motte here (note we documented areas where we were mistaken, making this accusation even less plausible).
Yet Jeff's [1] is not some exaggeration of [2]: you could have rubbish content without being dodgy, or be dodgy with great content. Further, Jeff is not "backing away" from [1], but affirming it in addition, while suggesting that [2] is easier to make headway on and that the problems are more than "bad enough" to explain why he has an extremely adverse view of InIn and you. He is correct: thanks in part to your endless self-promotion, I was acquainted enough with your content to have appreciated for a while that it is rubbish, yet I was unaware of all the other shady stuff you were up to. Thanks to Jeff's post, I found it out (and helped find still further areas where you were up to no good), so I also now hold an extremely adverse view of InIn and yourself, as you kindly link above. I hope the post I contributed to above can pay this valuable understanding forward to anyone else under the misapprehension that from your crooked timber anything straight can be made.
Bikeshedding applies to an organisation deciding to look at trivial issues because they are easy to work on. As someone who did a lot of the work on this document, it was neither easy to work on nor trivial. It's not trivial that for an "outreach org" almost all of your social media engagement is illusory. It's not trivial how you somehow manage to use 1000 hours a week and produce so little of value. It's not trivial how you denied ever soliciting upvotes from the InIn group, got caught, then doubled down before making a tactical retreat. It's not trivial how again and again you plant astroturf under the excuse that it's "their volunteer time", doubly so given the murky relationship between your VAs' paid work and their volunteering. It's not trivial how many times you "update" only to get caught red-handed doing something cosmetically different. It's also not trivial that you think this is trivial, and that despite having weeks to see what we were writing, the best you can dredge up in response is some risible complaints against Michelle, Oliver, Jeff, CEA, etc., while flatly denying or ignoring the rest.
I am happy to discover current and prospective donors also don't find it trivial, and I hope that this work has made InIn's financial difficulty even less trivial than it was before.
1) I would prefer to hear Jeff's answers to my questions; he's more than capable of speaking for himself.
2) I will not stoop to engaging with the level of discourse you present in this comment.
These are entirely compatible. I had and have multiple concerns, and described the one I was most worried about as my "primary concern". There's no contradiction here.
I think you're confused about what bikeshedding means: http://bikeshed.org/
In this case it's more of a motte-and-motte, with the document authors agreeing to focus on motte-A because we didn't have consensus that motte-B should be defended.
(I also appreciate Gregory's response to your comment.)
I don't have much interest in engaging further in this discussion, since I think most things have been covered by other people, and I've already spent far more time than I think is warranted on this issue.
I mostly wanted to quickly reply to this section of your comment, given that it directly addresses me:
"I find it hard to fathom how Oliver can say what he said, as all three comments and the upvotes happened before Oliver's comment. This is a clear case of confirmation bias, twisting the evidence to make it agree with one's pre-formed conclusion: see link To me, Oliver is now fundamentally discredited, either as someone with integrity or as someone who has a good grasp of the mood and dynamics of EAs overall, despite being a central figure in the EA movement and a CEA staff member."
I've responded to Carl Shulman's comment below regarding my thoughts on the hyperbole used in the linked comment, which I do think muddled the message, and for which I do apologize.
I do also think that your strict dismissal of my observation here is worrying, and that it misses the point I was trying to make with my comment. I do agree with Gregory's top comment on this post, in that I think your engagement with Effective Altruism has had a large negative impact on the community, and I do also think that you worsened the experience of being a member of the EA community for at least 70% of its members, and more likely something like 80%. If you disagree, I am happy to send Facebook messages to a random sample of 10-20 people who were recently active on the EA Facebook group, ask them whether they felt that the work of InIn had a negative impact on their experience as an EA, and bet with you on the outcome.
I think your judgement of me as someone "fundamentally discredited", "without integrity", or out of touch with the EA community would be misguided, and the way you wrote it feels like a fairly unjustified social attack to me.
I am happy to have a discussion about the content of my comment, i.e. the fraction of the community that was negatively influenced by InIn's actions, though I think most of the evidence on this has already been brought up by others or by myself, and the implication follows fairly naturally from your having made sure that every potential EA communication channel has featured one or more pieces written by InIn at some point, pieces which I generally think worsen people's experience of the intellectual discourse in the community.
I see the less hyperbolic claim (worsening rather than significant worsening of experience as an EA, 70% rather than almost) and still doubt it.
Online fora where InIn can post are only a subset of the experience of being an EA, InIn material is still a small minority of the content on those forums, readers who find InIn content unwelcome can and do scroll past it, and some like it or parts thereof. I expect a large portion of people don't know or care either way about InIn's effect on their EA experience.
I would still be interested to see the results of such a mini-poll on attitudes toward InIn content from a random sample of some kind (posters/commenters vs. group members is a significant distinction for that).
I'll be happy to take that bet. So if I understand correctly, we'd choose a random 10 people from the EA FB group, ones who are not FB friends with you or me, to avoid potential personal factors coming into play, and then ask them if their experience of the EA community has been "significantly worsened" by InIn. If 8 or more say yes, you win. I suggest $1K to a charity of the winning party's choice? We can let a third party send the messages to prevent any framing effects.
Since the majority of the FB group is inactive, I propose that we limit ourselves to the 50 or 100 most recently active members of the FB group, which will give a more representative sample of people who are actually engaging with the community (and because I don't want to get into debates about what precisely an EA is).
Given that I am friends with a large chunk of the core EA community, I don't think it's sensible to exclude my circle of friends, or your circle of friends for that matter.
Splitting this into two questions seems like a better idea. Here is a concrete proposal:
Do you identify as a member of the EA community? [Yes] [No]
Do you feel like the engagement of Gleb Tsipursky or Intentional Insights with the EA community has had a net negative impact on your experience as a member of the EA community? [Yes] [No]
I am happy to take a bet that, of the people chosen from the top 50 most recent posters on the FB group (at this current point in time), 7 out of 10 who say yes to the first question will say yes to the second. Or, since I would prefer a larger sample size, 14 out of 20 people.
(Since this is obviously a high-noise setup, I only assign about 60% probability to winning this bet.)
I sadly don't have $1,000 left right now, but would be happy with a $50 bet.
[Posting to note I have agreed to bet against Oliver on his proposed terms above.]
Why do people keep betting against Carl Shulman???
No shame if you lose, so much glory if you win
I wasn't super-confident, and so far it looks neck-and-neck (albeit on a smaller and noisier dataset than we had hoped for, 10 instead of 20).
Any outcome yet?
And the results are in!
The bet was resolved with 6 yes votes and 4 no votes, which means a victory for Carl Shulman. I will be sending Carl $10, as per our initial agreement.
I should note that this provided the maximum possible evidence for Oliver's hypothesis given that outcome, and that as a result I have updated in his direction (although less so because of the small sample).
We had 8/10 responses, and just sent out messages to another batch to get the last two responses. Should be resolved soon.
What will you do about people who don't reply to your messages?
(I haven't run this by Carl yet, but this is my current plan for how to interpret the incoming data.)
Since our response rates were somewhat lower than expected (mostly because we chose an account that was friends with only one person from our sample, so messages probably ended up in people's secondary inbox), we decided to only send messages until we get 10 responses to (1), since we don't want to spam a ton of people with a somewhat shady-looking question (I think two people expressed concern about conducting a poll like this).
Since our stopping criterion is 10 people, we will also stop if we get more than 7 yes responses, or more than 3 no responses, before we reach 10 people.
I agree to this.
I'm interpreting this as "go until you get 20 'yes' responses to (1) and then compare their responses to (2)".
I am unwilling to take "active members of the EA group" as representative of the EA community, since your actual claim was that I made the experience of the EA community significantly worse, and that includes all members, not simply activists. On average, only 1% of any internet community contributes, but the rest are still community members. Instead, I am fine with taking the bet that Benito, who is clearly far from friendly to InIn, describes.
I am even fine with going with your lower estimate of 14 out of 20.
I am fine including friends.
I am fine with the two questions, although I would insist the second question say "significantly worse" and not simply "negative impact," since that is the claim we are testing, and the same for "significant preference for Gleb or InIn to not have engaged." Words matter.
I am fine with having a pledge of $1K to be contributed as either of us has the money to do so in the future. I presume you will eventually have $1K.
I read "active" to mean actually involved in things, whether socially, online, funding, or campaigning.
The word "activist" has a stronger connotation in spite of the same root.
Fair enough
Actually, I'd suggest just taking a random sample from the FB group. My guess is that your positive connections should be taken into account in this bet, Gleb: if you've personally had a significant positive impact on many people's lives in the movement (and helped them be better effective altruists), then that's something this is trying to measure.
Also, 10 seems like a small sample; 20 seems better.
I'm fine with taking a random sample of 20 people.
Regarding positive connections, the claim made by Oliver is what we're trying to measure: that I made the experience of being a member of the EA community "significantly worse" for "something like 80%" of the people there. I had not made any claims about my positive connections.
After some private conversation with Carl Shulman, who thinks that I am miscalibrated on this, and whose reasoning I trust quite a bit, I have updated away from winning a bet with the words "significantly worse", and I also think it is probably unlikely I would win a bet at 8/10 instead of 7/10.
I have, however, taken on a bet with Carl with the exact wording I supplied below, i.e. with the words "net negative" and 7/10. Though given Carl's track record of winning bets, I have a feeling of doom about the outcome, and on some level expect to lose that bet as well.
At this point, my epistemic status on this is definitely more confused, and I assign significant probability to my overestimating the degree to which people will report that InIn or Gleb have had a negative impact on their experience (though I am even more confused about whether I am just updating about people's reports, or about the actual effects on the EA community, both of which seem like plausible candidates to me).
FYI, my initial reaction was that people in the community would feel very averse to being so boldly critical, and would want to be charitable to InIn (as they've been doing for years).
Unfortunately, you and InIn have lost all credibility. There may be nuance to be had, there may be a few errors in the document, there may even be additional deeper reasons why Carl Shulman, Jeff Kaufman, and the other excellent members of our community have spent so much of their time trying to explain their discomfort with you; however, when the core community has wasted this much time on you, and has shouted this strongly about its discomfort, I simply will not engage further. I'll not be reading any comments or posts by you in future, or continuing any conversation with you. This is where the line is drawn in the sand.
I would like to strongly encourage you to keep posting in this thread, and I would like to encourage others to upvote your posts here to show that your continued participation in this discussion is valued. Having this dialog out in the open helps keep everyone on the same page.
EDIT: Rob has convinced me that my recommendation that people upvote Gleb's responses was not a good idea. Instead, also per Rob's suggestion, I've added links to Gleb's three response comments at the end of the top-level post.
Upvoting can also be construed as community endorsement. (Gleb himself just cited "a number of EAs have upvoted the following comments supportive of InIn/myself..." as an important line of evidence in his denunciation of Oliver Habryka.)
I think people should upvote comments if they think they're sufficiently good/helpful, and downvote comments if they think they're sufficiently bad/unhelpful. Rather than trying to artificially inflate upvote totals (as Gleb also does when he says that downvotes = "I'll repost this as a top-level thread"), just edit the OP to link directly to Gleb's reply.
I mention this partly because the top-level comment here is seriously concerning. "InIn's content is so low-quality that it's doing more harm than good" and "InIn regularly engages in dishonest promotional techniques" are both really, really serious charges. Using the fact that people have made one serious substantive criticism to try to discredit any other serious substantive criticism they raise is really bad at the community-norms level. More generally, responding to fair, correct, relevant criticisms in large part by trying to discredit the critics is super bad form and shouldn't be seen as normal or OK. Repeatedly accusing people raising (basically fair) concerns of "costing lives" because they took the time to fix your mistakes for you is also super bad form and definitely shouldn't be seen as normal or OK. I really don't want casual readers to skim through the comments here, see a highly upvoted comment, and assume that the comment therefore reflects EA's community standards / beliefs / etc.
"a number of EAs have upvoted the following comments supportive of InIn/myself..."
This is especially rich given the accusations (which have been proved to my satisfaction) of astroturfing. At a minimum it's another example of behaving very responsively towards criticism in the moment but making no changes to core beliefs.
Good idea. Done, and edited my comment above.
Note: I will make separate responses, as my original comment was too long for the system to handle. This is part one of my comments.
Some of you will be tempted to just downvote this comment because I wrote it. I want you to think about whether that's the best thing to do for the sake of transparency. If this post gets significant downvotes and becomes invisible, I'll be happy to post it as a separate EA Forum post. If that's what you want, please go ahead and downvote.
I'm very proud of and happy with the work that Intentional Insights does to promote rational thinking, wise decision-making, and effective giving to a broad audience. To be clear, we focus on spreading rational thinking in all areas of life, not only charitable giving, with the goal of raising the sanity waterline and ameliorating x-risk. We place articles in major venues, appear on radio and television, and spread our content through a wide variety of other channels. It is not an exaggeration to say we have reached millions of people through our work. Now, we don't have a large resource base. We have a minuscule budget of just over $40K, mostly provided by my wife and me. It's thanks to our broad network of over 50 volunteers that we can make this difference. A few of these volunteers also provide some contract work, and I'm really happy they can do so. Thanks to all the folks who helped make this happen!
Let's go on to the content of the post. I appreciate the constructive part of the criticism from the authors of this post, and think some of the points are quite correct.
1) I do think we made some mistakes with our social media, especially on Facebook, and we are working to address that.
2) We have instituted a Conflict of Interest policy to provide clear guidance to anyone in an official position with InIn to disclose their affiliations when making public statements about the organization.
3) Unfortunately, the person I asked to update our social media impact figures after Jacy Reese thoughtfully pointed out the "shares" vs. "likes" issue forgot to update the EA Impact document, although she did update the others. Thanks for bringing it to our attention; it's now fixed.
4) While I was careful to avoid explicitly soliciting upvotes, my actions were intended to bring information about opportunities to upvote to supporters of Intentional Insights. I should have been clear about that, and I noted that later in the FB post.
5) I am at heart a trusting person. I trusted the figures from TLYCS, and why shouldn't I? They are the experts on their own figures. I'm glad that this situation led to a revision of the figures, as I want to know the actual impact that we are making, and not hold a false and inflated belief about our impact.
In part two, I will describe what aspects of the post I disagreed with.
P.S. Based on past experience, I have learned that back and forth online about this will not be productive, so I do not plan to engage further; if someone wants to learn more about my perspective, they are welcome to contact me privately by email.
I have downvoted this comment because I think that, as a community, we should strongly disapprove of this sort of threat.
The criticisms have been raised in an exceptionally transparent manner: Jeff made a public post on Facebook, and Gleb was tagged in to participate. Within that thread the plans to make this document were explained and even linked to: anybody (Gleb included) could read and contribute to that document while it was under construction.
This statement, that all criticism in the form of downvoting is likely to be driven by personal animosity or an attempt to hide negative feedback, is both baseless and appears to be an attempt to ward off all criticism. While I feel that Gleb is currently in a very difficult position, this framing of the conversation makes engagement impossible, hence the downvote.
I don't know much about Intentional Insights, so I won't comment on the organization. However, I'd like to say that I have found Gleb's comments on the forum to be consistently of the highest quality, so I don't want to see him stop posting. (Even though I never upvote or downvote, on principle; I think people should determine the quality of a post for themselves.) He is the voice of reason in an extremely elitist and conservative community that punishes anyone who is not likewise close-minded. Gleb acts in the public interest, trying to get the public to maximize their impact, whereas most in the EA community are seemingly pursuing personal interests, using EA to justify their belief in technology as the supreme power and to discredit spirituality.
If you don't think InIn produces good quality content, it's not a given that they should be boycotted. Another solution would be to support them more so they can improve their quality. CEA, on the other hand, has critical moral issues, such as shunning people outside of its elite atheist demographic, even if they are GWWC members.
Huh? I am genuinely confused as to what you mean by that.