The value of n is so high that Peter wouldn’t want to embarrass the rest of us with how smooth he is by disclosing that information. Yet I’ve got access to it! It’s a well-kept secret that 8% of all historical growth of the EA movement is due to Peter bringing cute girls into the movement by telling every one he passes by about the drowning child thought experiment.
Would you mind sharing a link to this startup ‘Roam’? It sounds interesting, but I’ve not heard of it. I’d look it up myself, but I doubt I’d find the right website just by searching the word “roam.”
Summary: There are multiple reasons why, in my opinion, we in EA should not encourage intra-community dating beyond how it arises organically in the community. Yet that’s not the same thing as not thinking about it. A modicum of public discussion about intra-community dating is probably not ‘culty’ compared to much of what the EA community already engages in regardless. One solution may be for those of us who are personal friends with each other in EA to make a greater effort to support each other in our mutual pursuits of romantic partners amenable to an EA lifestyle, including outside the EA community.
I agree the EA community should not systematically think about us dating each other. By “systematically,” I mean that I don’t think the EA community ought to seek a programmatic way for us to date each other. There are multiple reasons I expect doing so would be a poor choice for the EA community. The concern we’ve discussed in this thread is that it could make EA look ‘culty,’ which I agree is a legitimate concern. One issue I’ve got with how the EA community tends to think about brand management and public relations, or whatever the social-movement equivalent of those concepts is, is that we tend to reflexively care about them only when they come up at random, as opposed to thinking about them systematically.

That’s relevant because, relative to much more significant aspects of EA, whether we openly “think about dating each other” is not that ‘culty.’ There is at least one op-ed in a semi-popular magazine, print and/or online, about how communities concerned about AI alignment as an existential risk amount to doomsday cults. Much of the population perceives veganism as a cult. I’ve met a lot of people over the years who have told me that the widespread adoption of common lifestyle changes among community members still gives EA ‘culty’ vibes. Meanwhile, plenty of cultures within global society publicly and systematically encourage dating within their cultures. Doing this along lines of national or religious identity seems more publicly acceptable than doing so along racial lines. As with the form it would likely take in EA, plenty of subcultures and movements that lend themselves to particular ways of life have online dating websites dedicated to their communities.
Thus, I think the other downsides of systematically encouraging dating within the EA community, such as the skewed gender ratio perhaps quickly leaving such a system unable to satisfy the needs of most involved individuals, are greater than EA appearing ‘culty.’ It’s important to distinguish why I think we shouldn’t systematically encourage intra-community dating, because I also expect it would be wrong for us to “not think about” each other’s dating needs at all. For example, I don’t think it’s a negative thing that this EA Forum post and all these discussions in the comments are publicly taking place on the EA Forum. It seems to me the majority of community members never check the EA Forum on anything approaching a regular basis, never mind the millions of people who hear about EA but never become part of the movement. I think the solution is for us to extend private offers, as peers in the same community, to talk about each other’s efforts to find romantic partners to spend our lives with in a way that also fits the EA-inspired lives we each want to live out.
Strongly upvoted. This is an approach I’ve taken to dating outside the EA community. Most of my dating is typically outside the EA community. I’ve not found success in long-term romance, but I’m pretty confident that’s due to factors in my private life unrelated to this specific approach. I’d recommend more in EA try it as well.
I responded to Marisa with this comment which pushes back on the notion that inter-EA dating is a particularly culty and insular phenomenon. Upshots:
Some public accusations of cultishness should be taken seriously, but EA should respond to them by doing what we do best: reviewing the scientific research on cults and evaluating such allegations against ourselves. This is a more sensible approach than hand-wringing about hypothetical accusations of cultishness that haven’t been levelled yet, which only plays into the hands of moral panics over cults in public discourse, panics that don’t themselves typically lessen the harms of cults, real or perceived.
Dozens if not hundreds in EA have dated, formed relationships, gotten married or started families in ways that have benefited themselves personally and also their capacity to do good. This is similarly true, in its own ways, of the tens of millions of people who marry and start families within their own religions, cultures or ethnic groups, including in more diverse and pluralistic societies. While EA ought to be worried about ways in which it could be cult-like, the common human tendency to spend our lives with those who share our own respective ways of life doesn’t appear to be high on that list.
One could argue that that’s a problematic tendency within societies at large and EA should aspire to more than that. Given my perception that those in EA who’ve formed flourishing relationships within the community have done so organically as individuals, there doesn’t seem to me to be a reason to encourage intra-community dating. Yet to discourage it based on a concern it may appear cult-like would be to impel community members to a kind of romantic asceticism for nobody’s benefit.
Summary: Concerns about apparent or actual cultishness are serious but ought to be worked through in a more rational way than is typical of popular discourse about cults. EA pattern-matches to being a small, niche community on the fringe of mainstream society, which is also a common characteristic and tell of a cult. Yet there is widespread cognitive dissonance in society at large about how social structures involving tens of millions of people also have harmful, cult-like aspects. Perhaps a majority of people, even in more diverse societies, marry and raise families within their own religion, culture or ethnic group.
That many of us in EA are strongly inclined to spend our lives with those who share our own way of life doesn’t distinguish us as problematic relative to the rest of society. One could argue that almost all cultures are cult-like and that EA should aspire to be(come) a super-rational community free of the social problems plaguing all others. That seems to me like molehill mountaineering that can be disregarded as a vain attempt to impel EA to be(come) quixotically perfect.
Regarding ‘culty-ness,’ I feel like too many subcultures or countercultures play into the hands of the paranoid accusations of a generic and faceless public. Several years ago, when I was both aware of evidence-based definitions of cults and in extreme disagreement with mainstream society, I thought accusations of being a cult levelled at movements that weren’t unambiguously cults ought to be disregarded. I no longer feel this way, as I now recognize that cultishness in an organization or community exists on a spectrum. Ergo, some public accusations of appearing to be, or actually being, a cult ought to be taken very seriously.
EA is a small, niche community on the fringes of society. Putting it that way may seem to stigmatize EA as pattern-matching to those fringe movements that pose a serious threat to society at large. That’s not what I meant. I was just pointing out that this is a crucial fault line between society at large and subcultures, which may begin alienating themselves to the point of falling down the rabbit hole of becoming a cult.
Yet it seems there are mass groups in all mainstream societies that, were they small, fringe groups, would be labelled cults, and are not only because they’ve been normalized over several decades. Such groups can constitute tens or even hundreds of millions of people. I believe such groups are often whole religions, or social structures similar to religions, which as they transform into mainstream institutions are sanitized in a way that makes them less harmful per individual than small, niche cults like the Church of Scientology.
Nonetheless, they often cause significant harm. So, much of humanity has severe cognitive dissonance about what is and isn’t a cult, and about why all kinds of mainstream institutions shouldn’t be considered just as harmful as cults. This should cause us to take concerns about being culty with a grain of salt when they come from a source that is selective in its opposition to cult-like groups. What I’ve never understood is, if some in EA are concerned that EA may seem like a cult, or take on actual cult-like tendencies, why none of us try assessing this for ourselves. As a movement that aspires to be scientific, we in EA ought to be able to assess to what extent our community is like a cult by reviewing the scientific research and literature on cults.
With all this in mind, we can put in context the concern that some features of EA might make it appear like a cult to the rest of the world. While optics matter, they aren’t everything. Of all the things EA has been accused of being a cult for, the tendency of those in EA to form relationships with one another isn’t a frequent one. Perhaps a majority of people in diverse societies tend to date, marry or start families with those from their own ethnic, religious and cultural background. Most people don’t call those groups cults, because there’s a common understanding that individuals are drawn to spend their lives with those who share a common way of life. Outsiders to most ways of life understand that, even if they don’t totally understand a given way of life itself.
One lingering concern for some in EA might be that we ought to aspire to be far better in how we conceive of and do good than the rest of the world. That might include being less cult-like than even entire cultures which themselves aren’t technically cults. There are freethinking, cosmopolitan atheists who would call all religions and most cultures cults. Such accusations may claim that members of one culture intermarry, simply because they share a culture, only to irrationally preserve and perpetuate that culture and its traditions and institutions.
I don’t totally disagree with such freethinkers myself but I wouldn’t take that criticism to heart to the point of discouraging relationships among those in EA. Relationships within the EA community are imperfect in their own ways, as is the case with all kinds of relationships inspired by a particular way of life. Yet I’ve seen dozens if not hundreds in EA personally flourish and enhance the good they’re doing by being in relationships with other community members. Taking every naysayer to heart won’t free us of problems. After all, we in EA are only human (and, I’ll postulate, will be imperfect even in light of potentially becoming post-human super-beings in the future).
I appreciate this informative comment. I’ve got a couple of relevant points to add.
1. As a community coordinator for EA, a few years ago I became aware that more people in EA were interested in dating others in the community. I shared a link to reciprocity.io in EA Facebook groups like EA Hangout. This got a few hundred more people onto reciprocity.io. I talked to Katja Grace, who originally had the idea.
Reciprocity.io was written to support the much smaller Bay Area rationality community, which at the time had over 100 people but not too many more than that. So many in EA getting on reciprocity.io caused it to crash. The code wasn’t particularly worth saving, and at the time Katja suggested that if someone wanted, it might be better to build a newer, better site from scratch.
2. As far as I’m aware, LGBTQ+ people are significantly overrepresented in the EA community relative to the background population. I don’t know how much of this is determined by feeder communities for EA, i.e., how much the communities people find EA from are themselves disproportionately representative of the LGBTQ+ community. Feeder communities for EA include:
animal advocacy movements
organizations focused on particular causes in the non-profit sector
Caveats: I don’t know more specifically than that how the representation for LGBTQ+ folks in EA skews. By representation I mean statistical representation, not representation of LGBTQ+ as identities. Neither am I suggesting that anyone ought to infer anything else about the experiences and status of LGBTQ+ folks in the EA community based just on the fact they’re overrepresented in the EA community.
I haven’t put any thought into how this otherwise impacts the gender ratio of the EA community or the dating prospects of individual community members therein. I just offer the info in case it inspires others’ insights about intra-community dating and relationships.
This still neglects the possibility that, if governments across the world are acting suboptimally, then their cooperating with each other, and a close and cozy relationship between expert communities and governments, may come at the cost of a negative relationship with broad sections of the public. Who and what ‘the public’ is should usually be unpacked, but suffice to say there are sections of civil society that, as far as expert communities are concerned, are closer than governments to correctly diagnosing problems and solutions regarding social crises. For example, expert communities sometimes have more success in achieving their goals by working with environmental movements around the world to indirectly move government policy than by working with governments directly. This is sometimes observed today in the progress made in tackling the climate crisis. Similarly, during the Cold War, social movements (anti-war, anti-nuclear, environmental) in countries on both sides played a crucial role in moving governments towards policies that deescalated nuclear tensions, like the SALT treaties, which an expert organization like the Bulletin of the Atomic Scientists (BAS) would advocate for. It’s not clear that movements within the scientific community to deescalate nuclear tensions between governments would have succeeded without broader movements in society pursuing the same goals.
Obviously such movements can be a hindrance to the world-improving goals pursued by expert communities when governments are otherwise the institutions that would advance progress towards those goals better than the movements would. A key example: while environmental movements played a positive role in combating pollution and deescalating nuclear tensions during the Cold War, they’ve been counterproductive in decreasing public acceptance and the political pursuit of the safest forms of nuclear energy. Many governments around the world that would otherwise build more nuclear reactors to replace fossil fuels don’t do so because they rightly fear the public backlash that would be whipped up by environmental movements. Some sections of the global environmental movement have become quite effective at freezing the progress on climate change that could be made by governments building more nuclear reactors.
There are trade-offs expert communities face in building relationships with sections of the public, like social movements, versus governments. I haven’t done enough research to know whether there is a super-effective strategy an expert community can follow under any conditions. Suffice to say, there are no easy answers for effective altruism as a social and intellectual movement, or for the expert communities to which we’re connected, in resolving these issues.
While we are on this topic, I thought it would be fitting to acknowledge the similar issues effective altruism as a movement faces. Effective altruism as a global community has been crucial to the growing acceptance of AI alignment as a global priority among some institutions in Silicon Valley and other influential research institutions across the world, both academic and corporate. We’ve also influenced some NGOs involved in policymaking, and some world governments, to take seriously transformative AI and the risks it poses. Yet our influence has mostly been indirect, has had little visible impact and hasn’t produced a better, ongoing relationship between EA as a set of institutions and governments.
We’re now in a position where, as much as EA might be integrated with efforts in AI security in Silicon Valley and universities around the world, the governments of countries like Russia, China and South Korea, the European Union, and at least the military and intelligence institutions of the American government are focused on it. Those governments focusing more on AI security is in part a consequence of EA raising public consciousness regarding AI alignment (the far bigger factor being the corporate and academic sectors achieving major research progress in AI, as recognized through significant milestones and breakthroughs). There are good reasons why some EA-aligned organizations would keep private that they’ve developed working relationships with the research arms of world governments on the subject of AI security. Yet from what we can observe publicly, it’s not clear that at present the perspectives of EA and the expert communities we work with would have more than a middling influence on the choices world governments make regarding security in AI R&D.
I’ve identified the chapters in OHSM where, if there is an answer to these questions to be found in the book, it will be. There are five chapters, totaling roughly 100 pages. Half of them focus on ties to other social movements, and half focus on political parties/ideologies. I can and will read them, but to give a complete answer to your questions, I’d have to read most of at least a couple of chapters. That will take time. Maybe I can provide specific answers to more pointed questions. If you’ve read this comment, pick one goal from one cause area, and decide whether you think the achievement of that goal depends more on EA’s relationship to another social movement or to a political ideology. At that level of specificity, I expect I can give one or two academic citations that should answer the question. I could also answer the question at the highest level, but at that point I’m writing a mini-book review on the EA Forum that will take a couple of weeks to complete.
I’m aware of a practical framework that social movements, along with other kinds of organizations, can use. There are different versions of this framework, for example in start-up culture. I’m going to use the version I’m familiar with from social movements. I haven’t yet taken the time to look up in the OHSM whether this framework is widely and effectively employed by social movements overall.
A mission is what a movement seeks to ultimately accomplish. It’s usually the very thing that inspires the creation of a movement. It’s so vast it often goes unstated. For example, the global climate change movement has a mission of ‘stopping the catastrophic impact of climate change.’ Yet that’s so obvious that environmentalists don’t need to open their meetings by establishing that they’ve gathered to stop climate change. It’s common knowledge.
The mission of effective altruism is, more or less, “to do the most good”. Cause areas exist in other movements as broad as effective altruism, but they’re not the same thing as a mission. The cause area someone focuses on follows from their perception of how to do the most good, or their evaluation of how they personally can do the most good. So each cause area in EA represents a different interpretation of how to do the most good, as opposed to being a mission or goal in and of itself.
Goals are the milestones a movement believes must be completed to fulfill its mission. The movement believes each goal by itself is a necessary condition for completing the mission, and that the full set of goals combined is sufficient to complete it. So for the examples you gave, the setup would be as follows:
Cause: Global poverty alleviation
Mission: End extreme global poverty.
Goals: Improve trade and foreign aid.

Cause: Factory farming
Mission: End factory farming.
Goals: Gain popular support for legal and corporate reforms.

Cause: Existential risk reduction
Mission: Avoid extinction.
Goals: Mitigate extinction risk from AI, pandemics, and nuclear weapons.

Cause: Climate change
Mission: Address climate change.
Goals: Pursue cap-and-trade, carbon taxes and clean tech.

Cause: Wild animal welfare
Mission: Improve the welfare of wild animals.
Goals: Do research to figure out how to do that.

Having laid it out like this, it’s easier to see (1) why a “cause” isn’t a “mission” or “goal”; and (2) how this framework can be crucial for clarifying what a movement is about at the highest level of abstraction. For example, while the mission of the cause of ‘global poverty alleviation’ is to end extreme global poverty, the goals of systemic international policy reform don’t match what EA primarily focuses on to alleviate global poverty, which is a lot of fundraising, philanthropy, research and field activity focused on global health, not public policy. Your framing also assumes ‘existential risk reduction’ refers to ‘extinction risk’, but ‘existential risk’ has been defined as long-term outcomes that permanently and irreversibly alter the trajectory of life, humanity, intelligence and civilization on Earth or in the universe. That includes extinction risks but can also include risks of astronomical suffering.

If nitpicking the difference between missions and goals seems like needless semantics, remember that because EA as a community doesn’t have a clear and common framework for defining these things, we’ve been debating and discussing them for years.

Below goals are strategy and tactics. The strategy is the framework a movement employs for how to achieve its goals. Tactics are the set of concrete, action-oriented steps the movement takes to implement the strategy. The mission is to the goals as the strategy is to the tactics. There is more to say about strategy and tactics, but this discussion is too abstract to get into that here. For figuring out what an effective social movement is, and how it becomes effective, it’s enough to start thinking in terms of missions and goals.
This isn’t from the OHSM, but two resources to learn more about this topic are the Wikipedia article on ‘satisficing’, a commonly suggested strategy for adapting utilitarianism in response to the demandingness criticism, and this section of the ‘consequentialism’ article on the Stanford Encyclopedia of Philosophy focused on the demandingness criticism.
Why have you found it underwhelming?
Same as with my response to your other questions in your other comment, it’s easier to operationalize ‘success’, ‘failure’, and ‘support’ with missions, goals and objectives in mind. The other questions I believe I can find answers for more easily, but these ones aren’t answerable without specified goals.
These questions seem too general to provide a satisfying answer. I’d have to quote a few whole chapters to give a complete answer. An answer applicable to effective altruism depends on making assumptions about what the community’s goals are. I think it’s safe to make some assumptions here for the sake of argument. To start off, it’s safe to say effective altruism is in practice a reformist, as opposed to revolutionary, movement. Beyond that, it’d be helpful to specify what kind of goals you have in mind, and what means of achieving them are preferred and/or believed to be most effective.
Whether effective altruism should be sanitized seems like an issue separate from how big the movement can or should grow. I’m also not sure questions of sanitization should be reduced to just either doing weird things openly, or not doing them at all. That framing ignores the possibility of how something can be changed to be less ‘weird’, like has been done with AI alignment, or, to a lesser extent, wild animal welfare. Someone could figure out how to make it so betting on pandemics or whatever can be done without it becoming a liability for the reputation of effective altruism.