I am also skeptical of democratizing EA funding decisions because I share your concerns about effective implementation, but want to note that EA norms around this are somewhat out of step with norms around priority setting in other fields, particularly global health. In recent years, global health funders have taken steps to democratize funding decisions. For instance, the World Health Organization increasingly emphasizes the importance of fair decision-making processes; here’s a representative title: “Priority Setting for Universal Health Coverage: We Need Evidence-Informed Deliberative Processes, Not Just More Evidence on Cost-Effectiveness.”
Plausibly the increased emphasis on procedural approaches to priority setting is misguided, but there has been a decent amount of philosophical and empirical work on this. At a minimum, it seems like funders should engage with this literature before determining whether any of these approaches are worth considering.
lilly
Thanks for highlighting this!
My impression is also that the track record of so-called “fair processes” is really spotty for the reasons you note here—it’s hard to actually make them inclusive, they’re incredibly costly and burdensome, they often serve as a cover for NGOs/multilaterals doing what they wanted to do anyways, etc.
I just think there is a more charitable case to be made for democratizing funding decisions than is made in this post, and also a much broader range of implementation strategies that could be considered. (I’m personally a fan of surveying potential beneficiaries on their priorities, as others have suggested.) I worry about EA becoming an echo chamber, so think it’s worth figuring out why multiple large fields (and, I think, most academics who work on these issues) have reached different conclusions than EA about this.
“I am sceptical of the idea that a substantial amount of EA funds should be allocated democratically.”
I appreciate the spirit of this post, and agree with many of the specific concerns you raise. However, you seem to be making assumptions about what it’d look like to allocate funding democratically (with regard to who would get to weigh in, what they’d be weighing in on, and how they’d be weighing in), and then drawing much broader conclusions about whether democratizing funding is a good idea. I think you persuasively argue against one such approach, but that others are worth considering.
“Individual voters do not have sufficient incentives to allocate collective funds in an effective manner”
For instance, I agree with this point with regard to EA voters, but why would the voters have to be EAs? Why not find the stakeholders who do have sufficient incentives to allocate funding well; i.e., those who stand to benefit most from the decisions that are made? And why have them vote on different cause areas, versus weighing in on specific initiatives within a given cause area? Etc.
Any proposal to democratize funding is going to face challenges—I suspect it will inevitably be expensive and burdensome to do this well. But presumably, the way to figure out whether this is worth pursuing is to determine the point of allocating funding (more) democratically, generate specific proposals on this basis, and then evaluate whether any could be implemented effectively.
Sure! A couple of thoughts:
I agree that democratizing funding is easier for GHW causes than for more longtermist ones, and there is correspondingly more precedent for this in global health. I’m not going to do a lit review, but Tables 3 and 4 here list some of the things that have been tried (though I wouldn’t read the paper). Personally, I think the move is probably to survey potential beneficiaries—rather than doing something more deliberative—and then factor their preferences/values into decisions about which projects within a given cause area to prioritize (rather than having them choose causes). The case is trickier for longtermist causes—both normatively and practically—but Will MacAskill and Tyler John’s WaPo op-ed touches on some creative ways of doing this.
But my point is really: EA has developed some excellent, remarkably creative solutions to other problems in priority setting. My sense is that when GiveWell was started, the perception of many people in global health was that it would be impossible to do what they were trying to do. Open Philanthropy has also developed some innovative approaches to priority setting, and seems to do a great job of implementing them.
When we look at efforts by many non-EA organizations to allocate funding democratically, the track record does not look good (to me). But the EA community has solved other, likely far more challenging priority-setting problems, so I think it’d be a mistake to say “this is impossible to do well” without seriously interrogating all of the options.
Thanks for your work on this report; I look forward to reviewing it more carefully, but on a first pass, this seems comprehensive and great.
I just came to make a small point, which is that there seems to be good evidence regarding the non-linear health effects of smoking. For instance, here’s what UpToDate’s article on smoking cessation says:
“Data are inconsistent as to whether reducing cigarette smoking from higher to lower levels is associated with improved outcomes. At least two prospective cohort studies found that people who reduced smoking by at least 50 percent had no change in all-cause mortality, whereas those who quit smoking completely had decreased risks of all-cause mortality. However, another cohort study did find reduced risk for mortality associated with smoking reduction (HR 0.85, 95% CI 0.77-0.95); the benefit with smoking reduction was mainly seen among those with heavy smoking use and was mainly due to a reduction in cardiovascular mortality. Nevertheless, consistent benefits in cardiovascular disease risk have not been seen with reduction in smoking short of quitting. This is because even low levels of tobacco smoke exposure increase cardiovascular risk. A separate cohort study found that a reduction in smoking may decrease the risk of lung cancer. One reason that a reduction in smoking may not consistently improve health outcomes is that people may compensate for smoking reduction with increased puffs, volume, or duration in order to maintain nicotine intake and forestall nicotine withdrawal symptoms.” (These are the cited studies.)
For me, this raises the question of whose behavior is primarily affected by higher cigarette taxes (non-smokers who don’t start, light smokers, or heavy smokers) and how (cutting back vs. quitting). You mention that taxes lower both smoking prevalence and intensity, but given the non-linear relationship, I’d expect them to be more effective if they primarily reduced prevalence.
“Avoid using too many metaphors, analogies and extremely technical words when they are not needed”
Thanks for highlighting these issues! I think implementing this recommendation would also make EA more inclusive for many native English speakers. EAs often use words and phrases that aren’t used routinely (or in the same way) by native speakers who aren’t EAs. Some native speakers—perhaps especially people who are just learning about EA or didn’t study philosophy/economics—may similarly worry about being judged when they aren’t familiar with certain terms. So when we can minimize jargon use while still communicating clearly, we should.
Thanks for looking into this! This is super interesting.
A couple of thoughts:
- I wouldn’t expect child marriage to be bad primarily because it is bad for girls’ health (although it is hard for me to imagine becoming pregnant before 15 isn’t linked to higher maternal morbidity, as you note). Child marriage seems bad because aspects of being a child bride/mother seem intrinsically hard/undesirable. Here’s an example of what I mean: I assume many married children are having sex that isn’t very consensual, since their partners likely are not chosen by the children themselves and many are quite young. I agree that DALYs can indirectly convey the mental health harms of, e.g., non-consensual sex, but also think non-consensual sex is bad independent of whether it is well captured by DALYs. (I assume “sexual violence” is referring to something different?)
- The idea of assessing whether a pregnancy was “wanted” in a kid who grew up in a culture that permits child marriage seems somewhat fraught. Presumably, most of these cultures place a high premium on women having kids (and probably a lot of kids, hence the child marriages). This isn’t to say that we should completely ignore kids’ preferences (adapted preferences are still preferences!), but I don’t think we should take them at face value, either. I’d be eager to see more qualitative research on child brides, since this could help clarify nuances here.
- My prior would be that outcomes for kids married before 15 and kids married between 15 and 18 would be different. You also suggest before the table that there is some evidence for this. My reaction to the “unsures” in the table for kids < 15 is thus “we’d probably find an effect if there was more data,” but I’m curious whether you drew different conclusions.
- And lastly, one of the cited studies mentions that there is a relationship between child marriage and fertility (i.e., child brides have more kids and start earlier). I didn’t read the whole paper, but if this is true, it seems potentially quite important, because it likely has implications for socioeconomic development. Preventing child marriage strikes me as a relatively non-problematic way of reducing fertility (which I know not everyone thinks is good) and deferring fertility (which definitely does seem good—having a 15-year-old mother probably isn’t great for the kid).
Thanks again for this really interesting and thoughtful report!
Thanks so much for this really thoughtful response!
I appreciate how carefully you’ve reviewed the data, and how open-mindedly you’re approaching this issue. I think your assessments largely seem reasonable, but we might diverge here:
“My intuition from the outset of this project [was] that child marriage would be intrinsically bad because it would expose children to unwanted sex and pregnancy. I have tried to leave that intuition to one side as much as possible because I know ideas about gender and adulthood/childhood are pretty culturally entangled.”
I agree that ideas about gender and adulthood (and marriage, the value of tradition, etc) are informed by our cultures, personal experiences, political views, and so on. And when it’s possible to study an issue rigorously and reach meaningful results without relying on priors, we should do so. My worry here is that we’ve both identified challenges inherent to researching this issue (e.g., that many girls who are having non-consensual sex will not screen positive on existing surveys). Given this, I think it’d be a mistake to draw conclusions solely on the basis of existing data. In my view, we have many independent reasons to think child marriage is bad and not a lot of reasons to think it is good, and the data doesn’t move me much in this regard (i.e., I’m not being confronted with a lot of information that makes me think child marriage is good; it seems like most existing data suggests it is bad or unclear).
I’m also struck by the differences between Burkina Faso, Ethiopia, and Tanzania in the table you shared (e.g., 76% of girls in Ethiopia did not want to get married, and 75% of girls in Tanzania did!). This makes me think that child marriage might be a much more important cause in some places than in others, which would be consistent with other GHD initiatives, for which there is often substantial variation in cost-effectiveness for the same kind of program across contexts. Thank you again for all of your work on this!
I think collecting data is a great idea, and I’m really glad this is happening. Thank you for doing this! Because one of your goals is to “better [understand] the experiences of women and gender minorities in the EA community,” I wanted to relay one reaction I had to the Community Health Team’s website.
I found some of the language offputting because it seems to suggest that instances of (e.g.) sexual misconduct will be assessed primarily in terms of their impact on EA, rather than on the people involved. Here’s an example:
“Our goal is to help the EA community and related communities have more positive impact, because we want a radically better world. A healthy community is a means to that end.”
My basic reaction is: it is important to prevent sexual harassment (etc) because harassment is bad for the people who experience it, regardless of whether it affects the EA community’s ability to have a positive impact.
This language is potentially alienating in and of itself, but also risks contributing to biased reporting by suggesting that the Community Health Team’s response to the same kind of behavior might depend, for instance, on the perceived importance to EA of the parties involved. People are often already reluctant to disclose bad experiences, and I worry that framing the Community Health Team’s work in this way will compound this, particularly in cases where accusations are being made against more established members of the community.
I think it would be a bad idea for the Community Health Team to view their goal as promoting the EA community’s ends, rather than the well-being of community members. Here is a non-exhaustive list of reasons why:
The Community Health Team can likely best promote the ends of the EA community by promoting the well-being of community members. I suspect doing more involved EV calculations will lead to worse community health, and thus a less impactful EA community. (I think the TIME story provides some evidence for this.)
Harassment is intrinsically bad (i.e., it is an end we should avoid).
Treating instances of harassment as bad only (or primarily) for instrumental reasons risks compounding harms experienced by victims of harassment. It is bad enough to be harassed, but worse to know that the people you are supposed to be able to turn to for support will do an EV calculation to decide what to do about it (even if they believe you).
If I know that reporting bad behavior to the Community Health Team may prompt them to, e.g., assess the accused’s and my relative contributions to EA, then I may be less inclined to report. Thus, instrumentalizing community health may undermine community health.
Suggesting that harassment primarily matters because it may make the community less impactful is alienating to people. (I have strong consequentialist leanings, and still feel alienated by this language.)
If the Community Health Team thinks that repercussions should be contingent upon, e.g., the value of the research the accused party is doing, then this renders it difficult to create clear standards of conduct. For instance, this makes it harder to create rules like: “If someone does X, the punishment will be Y” because Y will depend on who the “someone” is. In the absence of clear standards of conduct, there will be more harmful behavior.
It’s intuitively unjust to make the consequences of bad behavior contingent upon someone’s perceived value to the community. The Community Health Team plays a role that is sort of analogous to a university’s disciplinary committee, and most people think it’s very bad when a university gives a lighter punishment to someone who commits rape because they are a star athlete, or their dad is a major donor, etc. The language the Community Health Team uses on their website (and here) feels worryingly close to this.
That this is reasonably well-ingrained in the community is less clear to me, especially post FTX. If the Community Health Team does see their goal as simply “support the community by supporting community members,” why not just plainly state that?
I’d actually love the Community Health Team to clarify:
1. Holding fixed the facts of a case, would the Community Health Team endorse a policy of considering the value of the accused/their work to EA when deciding how forcefully to respond? For example, if someone did something bad at an EAG, would “how valuable is this person’s work to the community?” be considered when deciding whether to ban them from future EAGs?
2. If the Community Health Team does endorse (1), how much weight does the “value to the community” criterion get relative to other criteria in determining a response?
3. If the Community Health Team does not endorse (1), are there any policies or procedures on the books to prevent (1) from happening?
I think your general point is a good one—EA has been criticized for a lot of things, many critiques of EA are unfair, and journalists score points by writing salacious stories. I also agree that it’s really hard to interpret some of the anecdotes in the TIME article without more context. But I don’t agree with this:
“I’m not saying that EA is perfect or that nothing in the article is true, but rather that reading it, my gut instinct was that roughly 80% was entirely misleading”
I think we have good reason to believe the article is broadly right, even if some of the specific anecdotes don’t do a good job of proving this. Here’s a rough summary of the main (non-anecdote) points of the article:
EA involves many “complex professional relationships” (true)
“Most of the access to funding and opportunities within the movement [is] controlled by men” (true)
“Effective altruism’s overwhelming maleness, its professional incestuousness, its subculture of polyamory and its overlap with tech-bro dominated ‘rationalist’ groups have combined to create an environment in which sexual misconduct can be tolerated, excused, or rationalized away.” This language is inflammatory (“overwhelming”, “incestuous”), but we can boil this down to a more sterile sounding claim: “EA is very male and there is a complex mixing of professional and personal relationships, partly because there are many polyamorous people in the EA/rationalist communities; this creates an environment in which sexual misconduct may be addressed suboptimally.” This strikes me as totally plausible.
“Several women say that the way their allegations were received by the broader EA community was as upsetting as the original misconduct itself. ‘The playbook of these EAs is to discourage victims to seek any form of objective, third-party justice possible.’” This is an ambiguous one: the Community Health Team explicitly says otherwise (“if a crime happened, we think the victim should consider going to the police”), but boy is there a lot of discussion within EA about the importance of protecting EA’s reputation, so someone might understandably feel pressure not to go to the police, even if no one ever told them not to.
“The movement’s high-minded goals can create a moral shield, they say, allowing members to present themselves as altruists committed to saving humanity regardless of how they treat the people around them.” (seems fair)
“The hard question for the Effective Altruism community is whether the case of the EA house in San Francisco is an isolated incident, with failures specific to the area and those involved, or whether it is an exemplar of a larger problem for the movement.” (I think people agree with this)
“That balancing of interests is a starkly different approach than the one espoused by the #MeToo” (true)
“#MeToo urged society to ‘believe women;’ EAs tend to be a bit more skeptical.” (also seems true)
“EA’s polyamorous subculture was a key reason why the community had become a hostile environment for women.” To be clear, I think this article is bigoted about polyamory, and I agree with this. But it’s plausible that the polyamorous subculture contributes to a hostile environment for women, if only because it is combined with a 2:1 male:female ratio and high tolerance for people hitting on each other in EA spaces. In general, the fact that a lot of people are polyamorous would seem to have two important implications: 1) more people are open to dating other people and 2) casually mentioning that you’re in a relationship is less likely to signal to someone that they shouldn’t hit on you. As a result of all of these things (not just the polyamorous subculture), women who don’t want to be hit on at EA events may have a set of relatively suboptimal options—being hit on, or saying “please stop hitting on me”. There are also upsides to the polyamorous subculture, including for women, but if there are more women in the “I don’t want to be hit on at this event” camp than not, it’s conceivable that the net effect is bad for women.
Importantly, these kinds of claims are hard to substantiate with stories: I mean, how do you prove that the combination of gender imbalance in EA, men having disproportionate power in EA, mixture of professional and personal relationships, and polyamorous subculture have created an environment in which sexual misconduct is not addressed optimally? You suggest:
“The default orientation to this should be ‘Let’s carefully evaluate each claim by the evidence we have for it, and assess the context of those claims’, instead of an automatic ‘believe and support every claim in this article, both the facts and the narrative the writer is trying to use those facts for.’”
And sure; we should evaluate evidence critically. My impression was that people did basically adopt this orientation, at least as was evidenced in the comments section here.
But it’s just hard to present evidence that conclusively proves the very reasonable—if possibly incorrect—hypothesis put forward by both the reporter and interviewees. In addition, this is not the first time these kinds of allegations have been made, other women have since said they’ve grappled with gender issues, and the article includes unambiguously bad anecdotes (“an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs… arranged for her to be flown to the U.K. for a job interview. [She was] surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, ‘he told me he needed to masturbate before seeing me’).
In sum, I think we have good reason to think that the article is broadly true, even if some of the specific stories leave a lot of questions unanswered. And I’m glad the Community Health Team is trying to answer those questions.
I took Aella’s post to be making a point about how EAs should read the article (“I’m mostly posting this because to me, it feels like there’s an imbalance in the models people are using to make sense of this. I don’t want EA to overcorrect, but I want it to reach a reasonable equilibrium”).
I agree that “most of the audience,” i.e., readers of TIME who largely aren’t familiar with EA, may well walk away from this article with an impression that’s inaccurate. But why shouldn’t EAs steelman it, especially when we have independent reasons to think many of the major claims in the article are true?
I admit that my desire to steelman this article stems from my personal experiences in EA, and my general sense—as a woman in EA, who is friends with other women in EA, and has heard stories from still other women in EA—that this article does get at something important about the Bay Area EA community, even if it makes some important mistakes, many of which Aella helpfully identifies.
To steelman your comment, I assume by “your” you mean the EA community’s (not my) “tendency to undercompensate and not realize how distorted these things actually are” (although I’m still not sure quite what this means), but as I said before, I have seen very little evidence that the community’s response to the TIME article has been uncritical or unreasonable. I’d be eager to hear specific examples of the mistakes you think the community has made since the article came out.
“EAs, evidently, have not risen above that.”
Again, I would love for you to provide an example of something unreasonable the community has done in response to the TIME article. As far as I can tell, the community is trying to figure out what is going on, but people’s responses have generally been compassionate, open-minded, and reasonable.
“If you think there’s an actual problem, I think the correct avenue is doing a real investigation and a real writeup.”
This would not be a good use of my time, in part because others are much better positioned than I am to do this (and are doing so). I also don’t think the bar for making the point in an EA Forum comment that “these kinds of claims are hard to substantiate with stories” should be that I myself have to go substantiate these claims with stories.
Notably: I think what happened to Aella is really bad. I don’t think we should steelman the claims made about Aella, which I have no reason to doubt are lies, and are cruel. I’m really sorry this happened; it shouldn’t have.
But I think steelmanning the TIME article is importantly different: among other things, this article is based on interviews with dozens of EAs who level critiques against a community of tens of thousands of people backed by billions of dollars in funding; this isn’t about a single individual being harassed and doxxed. And the article could be much more inflammatory than it is, I think. Many of the central claims in the article are true, and people don’t really seem to be contesting this; what seems to be under dispute is whether features of EA culture that are bad for women (uncontested) have led to higher rates of sexual misconduct against women (contested). This is a question the reporter gestures at herself (“The hard question for the Effective Altruism community is whether the case of the EA house in San Francisco is an isolated incident, with failures specific to the area and those involved, or whether it is an exemplar of a larger problem for the movement”). I hope the Community Health Team will get to the bottom of this.
Thanks for your comment, which has been helpful in clarifying my own thinking. Particularly this:
“If someone invests a lot of effort into searching for good evidence and comes up empty, that’s a signal for the availability of good evidence.”
I take the article’s thesis to be:
(1) The culture of EA is characterized by a skewed gender ratio, gendered power imbalances, mixing of professional/personal relationships, etc.; (2) this (increases the risk of? leads to more of? undermines reporting of?) sexual misconduct.
I think the article does a pretty good job of proving (1), which is what I meant when I said the article is “broadly right.” Perhaps the crux of our disagreement is that it’s not exactly clear what claim (2) is.
I think it’s pretty intuitive that (1) would increase the risk of a weak version of (2): i.e., the cultural dynamics in EA lead to women having encounters of a sexual nature that they don’t want to be having (e.g., getting hit on at professional events, feeling like they’ll suffer professional repercussions for rejecting people, etc). I also think this kind of claim is hard to prove—it’s just difficult to establish causation between one amorphous cultural phenomenon and another. Given that, I think the article does a good job of showing that women EAs in the Bay Area were repeatedly made uncomfortable by men’s behavior towards them. Without more context, it’s hard to know how much to condemn the men here (e.g., were people making innocent mistakes, or was this more malicious?), but clearly a lot of women felt bad about the encounters they had. I don’t think we have reason to doubt this, and I think this is something the EA community should reflect on and investigate further.
There’s also a separate, stronger claim that the article seems to be gesturing at, which is maybe closer to: “rates of sexual harassment and assault are higher in the EA community than elsewhere, at least in the Bay Area.” And I agree with you that the article hasn’t really provided much evidence for this (beyond that one extremely troubling anecdote), although I think we have some independent reasons to worry about this. I’ll be curious to see what conclusions the Community Health Team reaches on this point.
I just wanted to flag one issue that may have contributed to this situation, as well as some of the others described in the TIME article:
As far as I am aware, CEA has no clear, public, working definition of sexual harassment, and there are no clear guidelines regarding appropriate and inappropriate behavior at different types of EA events. This is a significant problem in a community where personal and professional relationships are frequently intertwined.
The lack of guidelines will predictably lead to bad consequences:
People may engage in behavior that seems non-problematic to them, but that puts others in situations they don’t want to be in. (Guidelines alleviate the pressure on individuals to consistently exercise good judgment, and can also serve an educational purpose by clarifying that/why certain behaviors are harmful.)
People may not report situations that made them uncomfortable because no rules were technically broken. (When some mixing of professional/personal relationships is considered appropriate, it is harder to discern when lines have been crossed, especially when these lines have not been explicitly drawn.)
The Community Health Team may have substantial leeway in resolving disputes, potentially leading to more biased or unfair decisions.
I am really glad to hear there will be an external investigation by an independent law firm, and I hope one of the things they will recommend is developing clearer standards regarding appropriate conduct.
Thanks for pointing this out; I agree. I feel like the TIME article was held to a standard of scrutiny that was unusual and unwarranted, which was frustrating.
[Edit: My reaction was informed by the “People Will Sometimes Just Lie About You” post having 330 upvotes, and the comments there suggesting many people were reluctant to update much/at all in the direction of “EA has a problem with sexual harassment” on the basis of the TIME article. Unless people had good reasons to strongly hold the prior that EA doesn’t have a problem with sexual harassment—which some may—this seemed misguided to me, given the reporter had spoken with 30 EAs who shared anecdotes that ranged from “ambiguous but worrisome” to “clearly bad.” That is relatively good evidence in the context of the kind of evidence we generally get about sexual harassment, which is notoriously difficult to study and report on, and in the absence of much other evidence about sexual harassment in EA, seemed worth taking seriously. But I also understand why some people felt differently.]
I suspect it’s because many people like the culture around dating in EA, and are glad EA hasn’t taken a hard line on, e.g., workplace relationships. And people may be worried about the difficulty of drawing lines well, or having outside actors do it, and the chilling effect this could have on people’s ability to date within EA (which I can understand and am sympathetic to).
I think, though, that the solution is to create guidelines, and then develop creative solutions in accordance with them (e.g., more events/apps designed for EAs interested in dating).
Even if there are cases in which it would theoretically be reasonable to employ different priors for men vs. women, I doubt people will be able to reliably identify these cases, choose appropriate priors, and correctly apply the priors they’ve chosen. When you couple these challenges with the fact that there are significant downsides associated with trying to discriminate in a principled way (e.g., harming people, alienating people, creating self-fulfilling prophecies, making it harder for members of an already disadvantaged group to succeed, etc.), it seems like a bad idea to base priors on the variability hypothesis in basically any context.