Project lead of LessWrong 2.0, often helping the EA Forum with various issues. If something is broken on the site, there's a good chance it's my fault (sorry!).
Habryka
Hmm, from what I can tell from my analysis of movements so far (looking in particular at communism, the Enlightenment, the free-market economists, the women's rights movement and the climate change movement), I see dilution and value drift as among the main risks that reduced the effectiveness of these movements (judged by the values of their founders).
Importantly, the core ideas do not tend to be "watered down" in a naive fashion. Instead, they slowly get replaced by ideas that are easier to understand and easier to spread (which can often also mean that they are more radical), and the intellectual integrity of the movement deteriorates as more and more members join who have not yet understood the arguments, or who have simply gone through a weaker filtering process.
I agree with xccf here that the natural direction in which I expect EA ideas to morph, in order to become more self-propagating, is towards existing charity. That said, I can also imagine more extreme ideas being more adaptive in our current context, though I would assign lower probability to that.
As a disclaimer: I am not yet satisfied with my understanding of past social movements, and this is very much a gut judgement. I hope that I will be able to make better and more rigorous arguments in the future (for either side).
Very happy to see an article on this, since I think EA will have to rely a lot on assessing expertise in domains we don’t know very well.
I think it's outside the scope of this article, and seems pretty hard to do, but I would be interested in how this breakdown stacks up against empirical data (maybe we have something from the forecasting and expertise literature?). I would also, in general, like to see a bit more justification for why you chose this specific set of markers to look out for.
But in general, I am happy to see concrete instructional posts on important topics on the forum.
(Reviving a bit of an old thread, but just noticed this response in my inbox)
I think you make a good point here, and I think I might have underestimated the risk of EA becoming too radicalized. I will think about this more, and maybe try to do some concrete scenario planning on specific ways in which I can imagine the EA movement becoming too radical.
It’s a really important thing to look out for and understand well, so I am very happy about your contribution. Thanks!
I know this is a nitpick, and probably no one outside the EA community would complain, but I do feel a bit uncomfortable with the following:
“To raise money for AMF, rated the world’s most effective charity by GiveWell.”
Though it is GiveWell's top-rated charity, it was not chosen for its overall value as an organization, but for its cost-effectiveness compared to other marginal donations. If I remember correctly, the Gates Foundation's vaccination program is both more effective and more cost-effective, but currently does not have additional room for funding.
"Rated the world's most cost-effective charity" seems better, though the statement "rated the best charity to donate to, by GiveWell" strikes me as most accurate, or simply "GiveWell's top recommended charity".
Hmm, though I agree with the idea that people tend to be overconfident, the critique of this style of reasoning is exactly that it leads to overconfidence. So the argument "people tend to be underconfident, not overconfident" does not seem to bear much on the truth of this critique.
(E.g. underconfidence is having beliefs that tend towards the middling ranges, while overconfidence is having extreme beliefs. Eliezer argues that this style of reasoning leads one to assign extremely low probabilities to events, which should be classified as overconfidence.)
Interesting idea. The main thing that bothers me is the conflation of "animal lives" with "animal life-years". It is standard in development economics to use life-years as the relevant measure, not the total number of deaths averted. So I think using life-years as the measure would yield more accurate results (e.g. asking how much a person is willing to pay to save 15 years of pig-life).
I am quite confused about why you decided to post this here. As kbog correctly pointed out, this reads like a normal news article on something that seems somewhat relevant to EA, but not clearly so. If you are looking for feedback on your writing, I can imagine people being happy to help, but posting this without much context feels strange to me.
Excellent, thanks a lot for doing this analysis!
Would it be possible to extract the concrete average growth rates and add them to this post?
E.g. 5 members/month for GWWC, 3 signups/day, etc.
I am very much in favor of posts like this, and would love to see a lot more of them.
Among other things, I was the person who designed the website, so I am really happy to get feedback on this.
When it comes to classifying the design language I used for EA Global, I think "minimalist" fits quite well. I don't think using basic background imagery, especially when it's the only visual element on the page and is clearly related to the brand identity, counts much against a minimalist style. In general the use of images is limited, and the whole style is monochromatic (with very few exceptions) to put full focus on the UI elements.
In particular, if you scroll down on any of the content pages, you will find a fully minimalist style, with a complete absence of distracting elements and a strong focus on content.
Is there actually anything you would change about the website? In particular, the comparison with .impact doesn't really work, since that page doesn't have much content, and also somewhat fails in its navigation because of the absence of a navbar or any other classical navigation element.
(E.g. I definitely didn't expect the team link to actually go anywhere on the .impact page, but expected it to be an external link, since the page itself communicated a one-page design without any hierarchical structure. This is compounded by the absence of breadcrumbs or other hierarchical context elements on the teams page and other sub-pages. I feel like in this case someone took the minimalist idea too far and actually removed important UI elements from the page.)
I would expect that if we held the event in Madison or a similar location, a much larger percentage of attendees than 10% would have to travel. My rough guess, based on the distribution of EAs around the globe, is that if we wanted a good share of the community to attend, at least 50% of attendees would have to travel to Madison.
My estimates of time costs usually came out somewhere in the $200,000 - $400,000 range for holding the event in a more remote location, usually with a fairly low bound on how much people value their time (i.e. $20/hour or so), which fits well with your estimate once you increase the percentage of attendees who would have to travel.
I would also expect in practice that, because of the heavy-tailed distribution of income in the community (and in the population at large), the actual value of the average person's time would be a good bit higher than that of the median person, which is where my $20 intuition comes from. So I am not that sure that $30 is actually high; my guess is that it would still be an underestimate, though I wouldn't be confident. (E.g. if Dustin Moskovitz wants to attend, he is pretty justified in valuing his time at something on the order of $1,000 - $10,000 an hour, and weaker versions of this are true for many other high-earning members of the community.)
I do think that ~20 hours of travel time seems a bit high; something on the order of ~10 hours is probably correct.
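To make the arithmetic explicit, here is a minimal sketch of how an estimate like this comes together. The attendee count and the specific parameter combinations are purely illustrative assumptions on my part, not actual EAG figures:

```python
# Minimal sketch of the travel time cost estimate above. All numbers are
# illustrative assumptions (the attendee count in particular is a
# hypothetical placeholder), not actual EAG figures.

def travel_time_cost(attendees, fraction_traveling, hours_per_traveler, value_per_hour):
    """Total cost of attendee travel time, in dollars."""
    return attendees * fraction_traveling * hours_per_traveler * value_per_hour

# Two scenarios bracketing the $200,000 - $400,000 range mentioned above:
# ~10 hours of travel per traveler, time valued at $20/hour vs. $40/hour.
low = travel_time_cost(attendees=2000, fraction_traveling=0.5,
                       hours_per_traveler=10, value_per_hour=20)
high = travel_time_cost(attendees=2000, fraction_traveling=0.5,
                        hours_per_traveler=10, value_per_hour=40)
print(f"${low:,.0f} - ${high:,.0f}")  # $200,000 - $400,000
```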
Regarding 3:
The EA Hub, the EA survey, the traffic numbers for the forum, and the locations of EAG attendees, as well as most other survey data we have, all tend to agree quite well on the distribution of the EA community. They all look roughly like the EA Hub map:
For reference, here is the distribution of people who answered the EA survey (restricted to people who filled out the whole thing and gave additional information):
https://awesome-table.com/-K7ENuzdTLgyTnE0kf2r/view
For reference, here is the distribution of traffic to the Forum:
https://drive.google.com/file/d/0B09ZIi0x-2JtM2h3Tlo1MmhhNG8/view?usp=sharing
And here is the distribution by country:
https://drive.google.com/file/d/0B09ZIi0x-2JtanMybDBpNjZfREE/view?usp=sharing
It would take me a while to make the origins of the participants for EAG 2015 into a nice map, but it generally follows a similar distribution, with the East Coast being naturally somewhat underrepresented (since we didn’t have an event there).
In general, San Francisco is the biggest hub, the East Coast has a good number of people but is quite spread out, and London+Oxford is about half the size of the Bay Area, with a good number of people spread around the UK. Usually London + Berlin is still only about 50%-60% of the size of the Bay Area. (For the Google Analytics data above, make sure to add up Oakland, Berkeley and San Francisco to get an accurate number for the Bay Area, and probably add up Cambridge, Oxford and London to get a somewhat comparable number for the London area.)
Ah, sorry about that. I was a bit unclear in what I wanted to express above.
Here are the two separate things I wanted to say:
London itself is about as closely connected, via public transport, group houses, people visiting each other, and sheer physical density of people, as the East Bay and SF are. Let's call the latter the "core Bay Area" and the former the "core London area". It takes about 30-45 minutes to get from any point in the East Bay to any location in SF, and similarly about 30-45 minutes to get from any point in London to any other point in London, and both cost about $10.
The core London area, which in the EA Forum statistics is just London, has 2118 sessions in the period from July 1st to now. The core Bay Area, which in the statistics above consists of San Francisco, Berkeley and Oakland, has 3776 sessions in the same period, making the core London area about 56% the size of the core Bay Area.
The wider Bay Area, including the South Bay, San Jose, etc., is about as closely connected as Cambridge, London and Oxford are. I.e. it takes about 2 hours to get from any point in the area to another, and it costs around $30-$50 to do so. The total number of sessions from the wider Bay Area in the same period is 4771, versus 4144 for the wider London area, making the wider London area about 87% the size of the wider Bay Area.
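For transparency, here is the arithmetic behind those two percentages, using the session counts quoted above:

```python
# Quick check of the session ratios quoted above (Google Analytics numbers,
# July 1st onward, as reported in this comment).
core_london, core_bay = 2118, 3776
wider_london, wider_bay = 4144, 4771

print(f"Core London / core Bay Area:   {core_london / core_bay:.0%}")    # ~56%
print(f"Wider London / wider Bay Area: {wider_london / wider_bay:.0%}")  # ~87%
```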
I did an analysis of the traffic data a while ago, and forgot to make this distinction clear.
(To quickly check the above, I went through the top 50 entries in the Google Analytics account sorted by city, and added all of the ones close to London and all of the ones close to SF to this spreadsheet, together with their number of sessions. I did not find any cities on that list, other than Oxford, London and Cambridge, that had a significant number of sessions and were comparably close to London, though I might have missed one or two with <50 sessions, since I am not as familiar with British city names. Here is the spreadsheet with my numbers:
Yeah, LA and San Diego are probably a bit farther away than the other cities. I would be happy with a comparison that removes them.
Though removing them from the comparison doesn't change too much. The general point I was trying to make was that the highest-density areas of the two locations, in which frequent travel is actually feasible, aren't of equal size, though their wider areas are indeed quite comparable. And there is a really big difference between a 2 hour drive and a 30 minute drive (e.g. over the past two years I've been to Oxford more often than to Stanford, simply because Stanford is so far away).
For the sake of EA Global travel times, I think treating them both as about similar size seems reasonable to me. Which is what we did in the analysis for this year’s EAG. Though for everyday community building considerations, the difference in density is actually pretty important (and is reflected in the number of meetups, social events, EA orgs, etc.).
Yep, I agree with this. Sadly, since online surveys tend to be the easiest way to gather this kind of data, we don't really have many other sources. There are a few things we will hopefully be able to estimate soon, which might help us spot inconsistencies between these:
Number of members in EA student chapters in different locations
Number of people who attend different EAGx events
Origin of people who attend EAG (sadly we only have country-level data for this year's EAG, since our registration completion rates dropped quite a bit when we increased the form's length, so we had to cut some questions)
Distribution of people engaging with the EA Facebook groups
Distribution of people having taken the GWWC pledge
Distribution of people who donate to meta-EA charities
Geographic distribution of newsletter subscribers for 80K and the EA newsletter
I would guess that at some point CEA will look into all of these, though I would be somewhat surprised if any of them massively disagree with the EA Hub/survey data. Still, it seems valuable to check.
“Regarding Gleb’s point #1 I would like to agree in particular that harsh hyperbole like “Gleb made the experience of almost all EAs significantly worse” is objectionable, and Oliver should not have used it.”
I agree, and am aware that I tend towards hyperbole in discourse generally. I apologize for that tendency, and am working on finding a communication style that successfully conveys all the aspects of a message that I want to convey, without distorting the accuracy of its denotative content. I am sorry both for the potentially false implications of using such hyperbole, and for the negative contribution to the conversational climate.
Replacing the fairly vague, and somewhat hyperbolic, "almost all" with a more precise "about 70-90%" seems like a strict improvement, and I think captures my view more correctly. I do think that something in the 70%-90% range is accurate, which mostly leaves the core of the argument intact (though I do still think that the kind of hyperbole I am prone to use creates an unnecessarily adversarial conversational style, which generally isn't very productive).
I don’t have much interest in engaging much further in this discussion, since I think most things are covered by other people, and I’ve already spent far more time than I think is warranted on this issue.
I mostly wanted to quickly reply to this section of your comment, given that it directly addresses me:
“I find it hard to fathom how Oliver can say what he said, as all three comments and the upvotes happened before Oliver’s comment. This is a clear case of confirmation bias – twisting the evidence to make it agree with one’s pre-formed conclusion: see link To me Oliver right now is fundamentally discredited as either someone with integrity or as someone who has a good grasp of the mood and dynamics of EAs overall, despite being a central figure in the EA movement and a CEA staff member.”
I’ve responded to Carl Shulman’s comment below regarding my thoughts on the hyperbole used in the linked comment, which I do think muddled the message, and for which I do apologize.
I also think that your strict dismissal of my observation here is worrying, and misses the point I was trying to make with my comment. I agree with Gregory's top comment on this post: I think your engagement with Effective Altruism has had a large negative impact on the community, and I also think you worsened the experience of being a member of the EA community for at least 70% of its members, and more likely something like 80%. If you disagree, I am happy to send Facebook messages to a random sample of 10-20 people who were recently active on the EA Facebook group, ask them whether they felt that the work of InIn had a negative impact on their experience as an EA, and bet with you on the outcome.
I think your judgement of me as someone "fundamentally discredited", "without integrity", or out of touch with the EA community is misguided, and the way you wrote it feels like a fairly unjustified social attack to me.
I am happy to have a discussion about the content of my comment, i.e. the fraction of the community that was negatively affected by InIn's actions, though I think most of the evidence on this has already been brought up by others or by myself. The implication follows fairly naturally from you having made sure that every potential EA communication channel has at some point featured one or multiple pieces written by InIn, which I generally think worsen people's experience of the intellectual discourse in the community.
Since the majority of the FB group is inactive, I propose that we limit ourselves to the 50 or 100 most recently active members of the FB group, which will give a more representative sample of people who actually engage with the community (and I don't want to get into debates about what precisely counts as an EA).
Given that I am friends with a large chunk of the core EA community, I don’t think it’s sensible to exclude my circle of friends, or your circle of friends for that matter.
Splitting this into two questions seems like a better idea. Here is a concrete proposal:
Do you identify as a member of the EA community? [Yes] [No]
Do you feel like the engagement of Gleb Tsipursky or Intentional Insights with the EA community has had a net negative impact on your experience as a member of the EA community? [Yes] [No]
I am happy to take a bet that, sampled from the top 50 most recent posters on the FB group (at this point in time), 7 out of 10 people who say yes to the first question will also say yes to the second. Or, since I would prefer a larger sample size, 14 out of 20 people.
(Since this is obviously a high-noise measurement, I only assign about 60% probability to winning this bet.)
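To give a sense of where a number like 60% could come from: if we assume, purely for illustration, that each respondent independently answers yes with some fixed probability p, the win probability of the bet is a binomial tail. A minimal sketch, with hypothetical values of p:

```python
# Minimal sketch: probability of winning the bet (at least 7 of 10 "yes"
# answers, or 14 of 20), assuming each respondent independently says yes
# with probability p. The values of p below are illustrative assumptions.
from math import comb

def p_win(n, k, p):
    """P(at least k successes in n independent trials with success probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for p in (0.65, 0.70, 0.75):
    print(f"p={p:.2f}: P(>=7/10) = {p_win(10, 7, p):.0%}, "
          f"P(>=14/20) = {p_win(20, 14, p):.0%}")
```

At p = 0.7 this gives roughly 65% for the 7-out-of-10 version, in the same ballpark as my 60% estimate once you account for the noise in sampling and question wording.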
I sadly don't have $1000 left right now, but would be happy to make a $50 bet.
After some private conversation with Carl Shulman, who thinks that I am miscalibrated on this, and whose reasoning I trust quite a bit, I have updated away from winning a bet phrased with the words "significantly worse", and also think it's probably unlikely I would win a bet at 8/10 instead of 7/10.
I have, however, taken on a bet with Carl with the exact wording I supplied below, i.e. with the words "net negative" and 7/10. Given Carl's track record of winning bets, I have a feeling of doom about the outcome, and on some level expect to lose this bet as well.
At this point, my epistemic status on this is definitely more confused, and I assign significant probability to having overestimated the degree to which people will report that InIn or Gleb had a negative impact on their experience (though I am even more confused about whether I am just updating on people's reports, or on the actual effects on the EA community, both of which seem like plausible candidates to me).
I strongly agree with many points in the article. But the main thing I want to call into question, though not necessarily disprove, is the thesis that "good associations" equals "warm words". I would even go so far as to suggest that warm words might not be a good idea at all for Effective Altruism.
A good case to compare ourselves to here is Science, and how it is perceived. Science is famous for lacking warm words in its language: if you go to the page of a research institute or a scientific journal, you will be bombarded with cold and analytic words. Government is another interesting reference class; it uses no warm words, and has avoided them from the very beginning. Yet Science and Government are probably the two most successful institutions to have arisen in the last thousand years. How come both are so successful, even though they completely lack warmth in their presentation?
It is true that warm words can create positive associations, but so can cold words. The space of associations is large, and "warmth" is only one positive attribute you can use. Equally important dimensions are "growth", "stability", "authority", "honesty", "wealth", "reliability" and "consistency" (and many more). An analysis that argues for using more warm words needs to make a more precise claim about why we should choose warmth, particularly since warmth is often hard to combine with authority, ambition and objectivity (as other commenters have noted).
The second thing I want to highlight is the question "what kind of person do we want to attract?". As EAs, our goal is not just to grow; it is to create a healthy ecosystem in which ideas can thrive, projects can be started, and the climate of discussion remains friendly and at a high intellectual level (among many other things). In creating this ecosystem, the question "will this keep some people from joining the movement?" is comparatively unimportant next to the question "how can we get the people that the current movement is lacking?".
And so I pose the question: "does using warm language attract any groups of people who would significantly improve the health of the EA ecosystem?". To answer it, we need to understand what kind of person is attracted by warm and compassionate language, and what kinds of people we want more of in EA. To answer these sub-questions, we might start by looking at existing communities that use warmer language than we do, and asking whether members migrating from there into our community would be a net improvement for EA.
This analysis would require a bit more space than I have in this comment, but from the outside it isn't immediately clear to me that the communities known for warmth would be particularly valuable for EA. That said, communities that are somewhat more associated with warmth, such as the education, biology and social science communities, are indeed groups in which EA appears to be lacking, and might be valuable additions to the EA ecosystem. The social impact sector also emphasizes warmth more than we do (and would be compatible with other EA ideas), though it is not clear to me that growth from that sector would be a strong improvement.
In conclusion, I mostly want to highlight that using more warm language is not clearly a good choice, and might come with higher costs than naively expected. To answer that question I would love to see more analysis along the lines of this post, and this comment, from the broader EA community. I also want to emphasize that the language we use is a really important choice, and that almost every decision in this domain comes with tradeoffs. Emphasizing warmth will almost always mean a lower emphasis on the other attributes we have been highlighting. We need to be aware of what those tradeoffs are, and choose our signaling carefully and consciously, according to an analysis of what community we want to build. This is a difficult task, but one with large potential payoffs.