I ultimately agree with you (pure time discounting is wrong…even if our increasing wealth makes it a useful practical assumption) but I don’t think your argument is quite as strong as you think (nor is Cowan’s argument very good).
In particular, I’d distinguish my selfish emotional desires regarding my future mental states from my ultimate judgements about the goodness or badness of particular world states. Indeed, I think we can show these have to be distinct notions[1]. Someone defending pure time discounting could just say: while, as far as my selfish preferences go, I don’t care whether I have another 10 happy years now or in 500 years, it’s nevertheless true that, morally speaking, the world in which that utility is realized now is much better than the one in which it is realized later.
This is also where Cowan’s argument falls apart. The Pareto principle is only violated if a world in which one person is made better off and everyone else’s position is unchanged isn’t preferable to the default. But he then makes the unjustified assumption that Sarah isn’t ‘made worse off’ by having her utility moved into the future. That just begs the question since, if we believe in pure time discounting, Sarah’s future happiness really is worth only a fraction of what it would be worth now. In other words, we are simply being asked to assume that only Sarah’s subjective experience, and not the time at which it happens, affects her contribution to overall utility/world value.
Having said all this, I think that every reason one has for adopting something like utilitarianism (or, hell, any form of consequentialism) screams out against accepting pure time preferences even if rejecting them isn’t formally required. The only reason people are even entertaining pure discounting is that they are worried about the paradoxes you get into if you end up with infinite total utility (yes, difficulties remain even if you just try to directly define a preference relation on possible worlds).
—-
^1: I mean your argument basically assumes that, other things being equal, a world where my selfish desires are satisfied is better than one in which they are not. While that is a coherent position to hold (it’s basically what preference-satisfaction accounts of morality hold) it’s not (absent some a priori derivation of morality) required.
For instance, I’m a pure utilitarian, so what I’d say is that while I selfishly wish to continue existing, I realize that if I suddenly disappeared in a poof of smoke (suppose I’m a hermit with no affected friends or relatives) and was replaced by an equally happy individual, that would be just as good a possible world as the one in which I continued to exist.
Could you provide some evidence that this rate of growth is unusual in history? I mean, it wouldn’t shock me if we looked back at the last 5,000 years and saw that most societies’ real production grew at similar rates during times of peace/tranquility, but that this resulted in small absolute growth that was regularly wiped out by invasion, plague or other calamity. In which case the question becomes whether or not you believe our technological accomplishments make us more resistant to such calamities (another discussion entirely).
Moreover, even if we didn’t see similar levels of growth in the past there are plenty of simple models which explain this apparent difference as the result of a single underlying phenomenon. For instance, consider the theory that real production over and above the subsistence agricultural level grows at a constant rate per year. As this value was almost 0 for most of the past 5,000 years, that growth wouldn’t be very noticeable until recently. And this isn’t just some arbitrary mathematical fit; it has a decent justification, e.g., productivity improvements require free time, invention, etc., so they only happen in the fraction of people’s time not devoted to avoiding starvation.
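To make that toy model concrete, here’s a minimal sketch (all of the numbers are invented purely for illustration, not fitted to any data):

```python
# Toy model (my own invented numbers, purely illustrative): output above
# subsistence grows at a constant percentage rate every year. Because the
# surplus starts out tiny, total output looks essentially flat for centuries
# and then appears to "take off" -- even though the growth rule never changes.

SUBSISTENCE = 1.0        # per-capita output needed just to survive (arbitrary units)
INITIAL_SURPLUS = 1e-8   # assumed tiny initial surplus, relative to subsistence
GROWTH_RATE = 0.02       # constant 2%/year growth of the surplus

def total_output(years_elapsed: int) -> float:
    """Per-capita output after `years_elapsed` years of constant surplus growth."""
    return SUBSISTENCE + INITIAL_SURPLUS * (1 + GROWTH_RATE) ** years_elapsed

for year in (0, 200, 400, 600, 700, 800, 900, 1000):
    print(f"year {year:>4}: total output ~ {total_output(year):.3f}")
```

Whether anything like this actually fits the historical record is exactly the empirical question above; the sketch is only meant to show that “constant growth of the surplus” is compatible with a hockey-stick-looking curve.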
Also, it’s kinda weird to describe the constant-rate-of-growth assumption as business as usual but then pick a graph where we have an economic singularity (a constant rate of growth gives an exponential curve, which doesn’t escape to infinity at any finite time). Having said all that, sure, it seems wrong to just assume things will continue this way forever, but it seems equally unjustified to reach any other conclusion.
Could you say a bit more about what you want this flag to symbolize/communicate? Flags for nations need to symbolize what holds the members of that country together and unifies them but, when it comes to an idea, it seems the flag is more a matter of what you want to communicate to others about the virtues of your idea. I mean I’m having trouble imagining that a utilitarian flag could do $1000 worth of good unless it does some important PR work for utilitarianism.
If it were me I’d be trying to pick a flag to communicate the idea that utilitarianism is (or, well, is a natural consequence of/close to) universal love/empathy/concern. My sense is that opposition to utilitarianism is frequently rooted in the idea that it’s cold, uncaring calculation. But since you are the one putting up the money maybe you can lay out a bit more what you want to communicate and what use you see this flag being put to.
I feel like there is some definitional slipping going on when you suggest that a painful experience is less bad when you are also experiencing a pleasurable one at the same time. Rather, it seems to me the right way to describe this situation is that the experience is simply not as painful as it would be otherwise.
To drive home this intuition, consider S&M play. It’s not that the pain of being whipped is just as bad there...it literally feels different than being whipped in another context; the context simply makes it less painful.
Better yet, notice the way opiates work: they leave you aware of the physical sensation of the pain but make you mind it less. Isn’t it just that when we experience a pleasure and a pain at the same time the neurochemicals created by the pleasure literally blunt the pain, much like opiates would?
On a related note, I’m a bit uncomfortable about inferring too much about the structure of pain/pleasure from our evolved willingness to seek it out/endure it, and I worry that this also conflates reward and pleasure, but it’s a hard problem.
The “go extinct” condition is a bit fuzzy. It seems like it would be better to express what you want to change your mind about as something more like (I forget the term for this) P(go extinct | AGI)/P(go extinct).
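In symbols (my notation, just to make the suggestion precise):

\[
R \;=\; \frac{P(\text{go extinct}\mid \text{AGI is developed by 2070})}{P(\text{go extinct})}
\]

A value of \(R\) well above 1 would mean AGI genuinely raises the chance of extinction; \(R\) close to 1 would mean AGI merely shows up somewhere in whatever causal chain leads there, which is the failure mode the rest of this comment worries about.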
I know you’ve written the question in terms of going extinct because of AGI, but I worry this invites ways of shifting that value upward that are relatively trivial and uninformative about AI.
For instance, consider a line of argument:
1. AGI is quite likely (probably by your own lights) to be developed by 2070.

2. If AGI is developed, either it will suffer from serious alignment problems (so reason to think we go extinct) or it will seem reliable and extremely capable and so will quickly be placed into key roles controlling things like nukes, military responses, etc.

3. The world is a dangerous place, and there is a good possibility of a substantial nuclear exchange between countries before 2070 which would substantially curtail our future potential (e.g., by causing a civilizational collapse from which, having used up much of the easily available fossil fuels/minerals, we can’t recover).

4. By 2, that exchange will, with high probability, have AGI serving as a key element in the causal pathway that leads to it. Even though the exchange may well have happened without AGI, it will be the case that the people who press the button relied on critical intel collected by AGI, or that AGI was placed directly in charge of some of the weapons systems involved in one of the escalating incidents, etc.
I think it might be wise to either
a) Shift to a condition in terms of the ratio between the chance of extinction and the chance of extinction conditional on AGI, so the focus is on the effect of AGI on the likelihood of extinction.
b) If not that, at least clarify the kind of causation required. Is it sufficient that the particular causal pathway that occurred include AGI somewhere in it? Can I play even more unfairly and point out that, by a butterfly-effect-style argument, the particular incident that leads to extinction is probably but-for caused by almost everything that happens before it (if not for some random AI thing years ago, the soldiers who provoked the initial confrontation would probably have behaved differently or been different people, and instead of that year and incident it would have been another)?
But hey, if you aren’t going to clarify away these issues, or say that you’ll evaluate according to the spirit of the question rather than its technical formulation, I’m going to include in my submission (if I find I have the time for one) a whole bunch of technically responsive but not-really-what-you-want arguments about how extinction from some cause is relatively likely and how AGI will appear in that causal chain in a way that makes it a cause of the outcome.
I mean, I hope you actually judge on something that ensures you’re really learning about the impact of AGI, but one has to pick up all the allowed percentage points one can ;-).
-
I ultimately agree with you but I think you miss the best argument for the other side. I think it goes like this:
Humans are particularly bad at coordinating to reduce harms that are distant in time or that are small risks of large harms. In other words, out of sight, out of mind. We are much better at solving problems we currently experience at least some harm from, and we prefer to push harms off into the future or into a low-probability event.
The argument for this point is buttressed by the very fact that we aren’t doing anything about warming right now.
Geoengineering takes the continuous harms from increasing temperatures, renders them discontinuous, and increases the risk of sudden major negative effects.
The argument here is that geoengineering lets us eliminate all the negative effects as long as it keeps working, but if the geoengineering mechanism ever fails we experience all the built-up warming at once. Maybe we get hit by a big solar flare and can’t launch our sunshade or shoot our sulfur into the stratosphere.
You make a lot of claims here that seem unsupported and based on nothing but vague analogy with existing, primitive means of altering our brain chemistry. For instance, a key claim on which most of your consequences seem to depend is this: “It is great to be in a good working mood, where you are in the flow and every task is easy, but if one feels “too good”, one will be able only to perform “trainspotting”, that is mindless staring at objects.”
Why should this be true at all? The reason heroin abusers aren’t very productive (and, imo, heroin isn’t the most pleasurable existing drug) is the effect opiates have as depressants, making users nod off etc. The more control we achieve over brain stimulation, the less likely wireheading is to have the kinds of side effects which limit functioning. Now, one might make a more subtle argument that the ability of even a directly stimulated brain to feel pleasure will be limited, and thus that if we directly stimulate too much pleasure we will no longer have the appropriate rewards to incentivize work, but it seems equally plausible that we will be able to separate pleasure from motivation/effort and actually enhance our inclination to work while instilling great pleasure.
I simply don’t believe that anyone is really (when it comes down to it) a presentist or a necessitist.
I don’t think anyone is willing to actually endorse making choices which eliminate the headache of an existing person at the cost of bringing an infant into the world who will be tortured extensively for all time (but no one currently existing will see it and be made sad).
More generally, these views have more basic problems than anything considered here. Consider, for instance, the problem of personal identity. For either presentism or necessitism to be true there has to be a PRINCIPLED fact of the matter about when I become a new person if you slowly modify my brain structure until it matches that of some other possible (but not currently actual) person. The right answer to these Theseus’s-ship-style worries is to shrug and say there isn’t any fact of the matter, but the presentist can’t take that line because, for them, there are huge moral implications to where we draw the line.
Moreover, both these views face serious puzzles about when an individual comes to exist. Is it when they actually generate qualia (if not, you risk saying that the fact they will exist in the future means they exist now)? How do we even know when that happens?
Before going further I should admit my bias here. I have a pet peeve about posts about mental illness like this. When I suffered from depression, and my friend killed himself over it, there was nothing that pissed me off more than people passing on the same useless facts and advice to get help (as if that magically made it better) with the self-congratulatory attitude that they had done something about the problem and could move on. So what follows may be a result of unjust irritation/anger, but I really do believe it causes harm when we pass on truisms like that and think of ourselves as helping...either by making those suffering feel like failures/hopeless/misunderstood (just get help and it’s all good) or by causing us to believe we’ve done our part. Maybe this is just irrational bias, I don’t know.
--
While I like the motivation, I worry that this article does more to make us feel better that ‘something is being done’ than it does anything for EA community members with these problems. Indeed, I worry that sharing what amounts to fairly obvious truisms that any Google search would reveal actually saps our limited moral energy/consideration for those with mental illness (ohh good, we’ve done our part).
Now I’m sure the poster would defend this piece by saying that maybe most EA people with these afflictions won’t get any new information from this, but some might, and it’s good to inform them. Yes, if informing them were cost-free it would be. However, there is still a cost in terms of attention, time, and pushing readers away from other issues. Indeed, unless you honestly believe that information about every mental illness ought to be posted on every blog around the world, it seems we ought to analyze how likely this content on this site is to be useful. I doubt EA members suffer these diseases at a much greater rate than the population in general, while I suspect they are informed about these issues at a much greater rate, making this perhaps the least effective place to advertise this information.
I don’t mean to downplay these diseases. They are serious problems, and to the extent there is something we can do with a high benefit/cost ratio we should do it. So maybe a post identifying media that are particularly likely to serve afflicted individuals who would benefit from this information, and urging readers to submit it there, would be helpful.
I fear that we need to do geoengineering right away or we will be locked into never undoing the warming. The problem is that a few countries like Russia massively benefit from warming, and once they have seen that warming and taken advantage of the newly opened land they will see any attempt to artificially lower temperatures as an attack they will respond to with force. They also have enough fossil fuels to maintain the warm temperatures even if everyone else stops carbon emissions (which they can easily scuttle anyway).
IMO this concern is more persuasive than the risk of trying geoengineering.
But I disagree that geoengineering isn’t going to happen soon. All the same reasons we aren’t doing anything about global warming now are reasons we’ll flip on a dime when we start seeing real harms.
While this isn’t an answer, I suspect that anyone interested in insect welfare first needs a philosophical/scientific program to get a grip on what that entails.
First, unlike with other kinds of animal suffering, it seems doubtful there are any interventions for insects that will substantially change their quality of life without also making a big difference to the total population. Thus, unlike with large animals, where one can find common ground between various consequentialist moral views, it seems quite likely that whether a particular intervention is good or actually harmful for insects will often turn on subtle questions about one’s moral views, e.g., average utility or total, does the welfare of possible future beings count, is the life of the average insect a net plus or minus.
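Just to illustrate how much turns on those questions, here’s a toy worked example (the numbers are entirely made up). Suppose an intervention triples the insect population while slightly lowering average welfare \(\bar{w}\):

\[
\begin{aligned}
\text{before: } & N = 10^{6},\ \bar{w} = 0.02 &&\Rightarrow\ \text{total welfare} = 2\times 10^{4},\\
\text{after: } & N = 3\times 10^{6},\ \bar{w} = 0.015 &&\Rightarrow\ \text{total welfare} = 4.5\times 10^{4}.
\end{aligned}
\]

A total view calls this an improvement, an average view calls it a harm, and if the average insect life is actually a net negative (\(\bar{w} < 0\)) then tripling the population makes things worse on the total view too.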
As such simply donating to insect welfare risks doing (what you feel is) a great moral harm unless you’ve carefully considered these aspects of your moral view and chosen interventions that align with them.
Secondly, merely figuring out what makes insects better off is hard. While our intuitions can go wrong, it’s not too unreasonable to think we can infer other mammals’ and even vertebrates’ levels of pain/pleasure based on analogies to our own experiences (a dog yelping is probably in pain). However, when it comes to something as different as an insect, it’s unclear whether it’s even safe to assume an insect’s neural response to damage feels unpleasant at all. After all, surely at some simple enough level of complexity we don’t believe a lifeform’s response to damage manifests as a qualitative experience of suffering (even though the tissues in my body can react to damage, and even change behavior to avoid further damage, without interaction with my brain, we don’t think my liver can experience pain on its own). At the very least, figuring out what kinds of events might induce pain/pleasure responses in an insect would require some philosophical analysis of what is known about insect neurobiology.
Finally, it is quite likely that the indirect effects of any intervention on the wider insect ecosystem, rather than any direct effect, will have the largest impact. As such, it would be a mistake to try to engage in any interventions without first doing some in-depth research into the downstream effects.
The point of all this is that, with respect to insects, we need to support more academic study and consideration before actually engaging in any interventions.
This is, IMO, a pretty unpersuasive argument. At least if you are willing, like me, to bite the bullet that SUFFICIENTLY many small gains in utility could make up for a few large gains. I don’t even find this particularly difficult to swallow. Indeed, I can explain away our feeling that somehow this shouldn’t be true by appealing to our inclination (as a matter of practical life navigation) to round down sufficiently small hurts to zero.
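To make the bullet I’m biting explicit, the arithmetic (with numbers I’ve invented purely for illustration) is just:

\[
10^{9}\times 0.01 \;=\; 10^{7} \;>\; 10^{6} \;=\; 10^{3}\times 10^{3},
\]

i.e., a billion barely noticeable gains of 0.01 utils outweigh a thousand large gains of 1,000 utils each. The only real question is whether you accept that sums of very many very small terms can be large, and I do.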
Also, I would suggest that many of the examples that seem problematic are deliberately rigged so the overt description (a world with many people each with a small amount of positive utility) presents the situation one way while the flavor text is phrased so as to trigger our empathetic/what-it’s-like response as if it didn’t satisfy the overt description. For instance, if we removed the flavor about it being a very highly overpopulated world and simply said ‘consider a universe with many, many beings each with a small amount of utility,’ then finding that superior no longer seems particularly troubling. It just states the principle allowing addition of utilities in the abstract. However, sneak in the flavor text that the world is very overcrowded and the temptation is to imagine a world which is ACTIVELY UNPLEASANT to be in, i.e., one in which people have negative utility.
More generally, I find these kinds of considerations far more effective at convincing me that I have very poor intuitions for comparing the relative goodness/badness of some kinds of situations, and that I had better eschew any attempt to rely MORE on those intuitions and instead dive into the math. In particular, the worst response I can imagine is to say: huh, wow, I guess I’m really bad at deciding which situations are better or worse in many circumstances, indeed one can find cases where A seems better than B, B better than C, and C better than A considered pairwise...guess I’ll throw over this helpful formalism and just use my intuition directly to evaluate which states of affairs are preferable.
The idea that EA charities should somehow court epistemic virtue among their donors seems to me to be over-asking in a way that will drastically reduce their effectiveness.
No human behaves like some kind of Spock stereotype making all their decisions merely by weighing the evidence. We all respond to cheerleading and upbeat pronouncements and make spontaneous choices based on what we happen to see first. We are all more likely to give when asked in ways which make us feel bad/guilty for saying no or when we forget that we are even doing it (annual credit card billing).
If EA charities insist on cultivating donations only in circumstances where the donors are best equipped to make a careful judgement, e.g., eschewing ‘Give Now’ impulse donations and fundraising parties with liquor and peer pressure, and insisting on reminding us each time another donation is about to be deducted from our account, they will lose out on a huge amount of donations. Worse, because of the role of overhead in charity work, the lack of sufficient donations will actually make such charities bad choices.
Moreover, there is nothing morally wrong with putting your organization’s best foot forward or using standard charity/advertising tactics. Despite the joke it’s not morally wrong to make a good first impression. If there is a trade off between reducing suffering and improving epistemic virtue there is no question which is more important and if that requires implying they are highly effective so be it.
I mean, it’s important that charities are incentivized to be effective, but imagine if the law required every charitable solicitation to disclose the fraction of donations that went to fundraising and overhead. It’s unlikely the increased effectiveness that resulted would make up for the huge losses caused by forcing people to face the unpleasant fact that even the best charities can send only a fraction of their donation to the intended beneficiaries.
What EA charities should do, however, is pursue a market segmentation strategy. Avoid any falsehoods (as well as annoying behavior likely to result in substantial criticism) when putting a good face on their situation/effectiveness and make sure detailed truthful and complete data and analysis is available for those who put in the work to look for it.
Everyone is better off this way. No one is lied to. The charities get more money and can do more with it. The people who decide to give for impulsive or other less-than-rational reasons can feel good about themselves rather than feeling guilty they didn’t put more time into their charitable decisions. The people who care about choosing the most effective, evidence-backed charitable efforts can access that data and feel good about themselves for looking past the surface. Finally, by having the same institution chase both the smart and the dumb money, the system works to funnel the dumb money toward smart outcomes (charities which lose all their smart money will tend to wither or at least change practices).
I think this post is confused on a number of levels.
First, as far as ideal behavior is concerned integrity isn’t a relevant concept. The ideal utilitarian agent will simply always behave in the manner that optimizes expected future utility factoring in the effect that breaking one’s word or other actions will have on the perceptions (and thus future actions) of other people.
Now, the post rightly notes that as limited human agents we aren’t truly able to engage in this kind of analysis. Both because of our computational limitations and because of our inability to perfectly deceive, it is beneficial to adopt heuristics about not lying, not stabbing people in the back, etc. (which we may judge to be worth abandoning in exceptional situations).
However, the post gives us no reason to believe its particular interpretation of integrity, “being straightforward”, is the best such heuristic. It merely asserts the author’s belief that this somehow works out to be the best.
This brings us to the second major point: even though the post acknowledges that the very reason for considering integrity is that “I find the ideal of integrity very viscerally compelling, significantly moreso than other abstract beliefs or principles that I often act on,” it proceeds to act as if it were considering what kind of integrity-like notion would be appropriate to design into (or socially construct in) some alternative society of purely rational agents.
Obviously, the way we should act depends hugely on the way in which others will interpret our actions and respond to them. In the actual world WE WILL BE TRUSTED TO THE EXTENT WE RESPECT THE STANDARD SOCIETAL NOTIONS OF INTEGRITY AND TRUST. It doesn’t matter if some alternate notion of integrity might have been better to have; if we don’t show integrity in the traditional manner we will be punished.
In particular, “being straightforward” will often needlessly imperil people’s estimation of our integrity. For example, consider the usual kinds of assurances we give to friends and family that we “will be there for them no matter what” and that “we wouldn’t ever abandon them.” In truth pretty much everyone, if presented with sufficient data showing their friend or family member to be a horrific serial killer with every intention of continuing to torture and kill people, would turn them in even in the face of protestations of innocence. Does that mean that instead of saying “I’ll be there for you whatever happens” we should say “I’ll be there for you as long as the balance of probability doesn’t suggest that supporting you will cost more than 5 QALYs” (quality adjusted life years)?
No, because being straightforward in that sense causes most people to judge us as weird and abnormal and thereby trust us less. Even though everyone understands at some level that these kinds of assurances are only true ceteris paribus, actually being straightforward about that fact is unusual enough that it causes other people to suspect that they don’t understand our emotions/motivations and thus to give us less trust.
In short: yes, the obvious point that we should adopt some kind of heuristic of keeping our word and otherwise modeling integrity is true. However, the suggestion that this nice simple heuristic is somehow the best one is completely unjustified.
I think all the problems with involving EA in causes that require political change (increase government funding for mental health...even all Gates’ billions wouldn’t go very far if he tried to directly fund a substantial slice of first-world mental health expenditures) apply to changing government funding, and many of the issues are even harder because they derive from hard-to-shift societal attitudes. These make even direct funding of many types of research difficult.
For instance, a big problem (imo) with the way depression drugs are researched is that it sets as its goal finding a drug that makes depressed people feel better without improving the mood of non-depressed individuals (i.e., we impose a higher standard in terms of safety and risk of abuse for these drugs than for similarly serious conditions). Yes, I agree that depression involves a cluster of symptoms, but you could say the same about the disorders that result from a failure to produce various growth hormones, and it doesn’t follow that treating those conditions should have to be possible using medications that wouldn’t make normal people grow larger or produce more muscle or whatever.
Sure, those drugs will likely have side effects when taken at high levels (mania has lots of drawbacks even without the depressive part) and concerns about abuse aren’t unfounded, but if we were willing to treat the loss of QALYs from depression as seriously as we take those from a heart attack there would be no question we would risk it. As is, however, we have a culture in which doctors who treat mental health are more answerable to the families of those suffering (who may sue if their loved one uses medication to commit suicide...but, more importantly, the doctor will be seen as having failed their patient in a way that an oncologist whose patient dies because of well-judged risks wouldn’t be).
Unfortunately, I think the combination of a general discomfort with anything that sounds transhumanist (would it be so bad if some people were a bit unnaturally happier?) plus the fact that the majority of society is more interested in what makes them feel good (lack of guilt and keeping that friend or family member around) than in making the depressed feel better when that involves risk stands in the way here.
And I’m afraid this is a broader issue for EA in mental health. Too often the real limitations are hard-to-change social attitudes, which are hard to fix with charitable donations.
I’m not sure I completely followed #1, but maybe this will answer what you are getting at.
I agree that the following argument is valid:
Either the time discounting rate is 0 or it is morally preferable to use your money/resources to produce utility now than to freeze yourself and produce utility later.
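Spelled out with the standard exponential-discounting formula (my formulation of the premise, not a quote from you):

\[
V(\text{utility } U \text{ realized } t \text{ years from now}) \;=\; \frac{U}{(1+r)^{t}},
\]

so for any pure discount rate \(r > 0\) the same utility is worth strictly more when produced now (\(t = 0\)) than after being frozen for, say, \(t = 200\) years; only \(r = 0\) makes the timing irrelevant.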
However, I still don’t think you can argue from the fact that time discounting is irrelevant to what I selfishly prefer to any conclusion about whether discounting should be applied when evaluating what is morally preferable. And I think this substantially reduces just how compelling the point is. I mean, I do lots of things I’m aware are morally non-optimal. I probably should donate more of my earnings to EA causes, etc., but sometimes I choose to be selfish, and when I consider cryonics it’s entirely as a selfish choice (I agree that, even without discounting, it’s a waste in utilitarian terms).
(Note that I’d make a distinction between saying something else is morally preferable and saying that what you do instead is bad or blameworthy, but that’s getting a bit into the weeds.)
—-
Regarding the theoretical problems, I agree that they aren’t enough of a reason to accept a pure discounting rate. Indeed, I’d go further and say that one is making a mistake to infer things about what’s morally good from the fact that we’d like our notion of morality to have certain nice properties. We don’t get to assume that morality is going to behave the way we would like it to…we’ve just got to do our best with the means of inference we have.
I thought the archetypal example was one where everyone had a mild preference to be with other members of their race (even if just because of somewhat more shared culture) and didn’t personally much care whether or not they ended up in a mixed group. But I take your point to be that, at least in the gender case, we do have a preference not to be entirely divided by gender.
So yes, I agree that if the effect leads to too much sorting then it could be bad but it seems like a tough empirical question whether we are at a point where the utility gains from more sorting are more or less than the losses.
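For what it’s worth, the dynamic I have in mind is the familiar Schelling-style result that mild in-group preferences can still produce a lot of sorting. Here’s a minimal one-dimensional simulation sketch (entirely my own construction, with made-up parameters) of that mechanism:

```python
# Toy 1-D Schelling-style sorting model (my own construction, illustrative only).
# Agents of two types sit on a line with some empty slots. An agent is "content"
# if at least THRESHOLD of its occupied neighbours (within RADIUS) share its
# type -- a fairly mild preference. Discontent agents move to a random empty slot.
import random

SIZE, RADIUS, THRESHOLD, STEPS = 200, 2, 0.4, 20000
random.seed(0)

# 1 and -1 are the two types, 0 is an empty slot (~20% of slots).
line = [random.choice([1, -1, 1, -1, 0]) for _ in range(SIZE)]

def content(i):
    """True if the agent at i has at least THRESHOLD same-type occupied neighbours."""
    neigh = [line[j] for j in range(max(0, i - RADIUS), min(SIZE, i + RADIUS + 1))
             if j != i and line[j] != 0]
    if not neigh:
        return True
    return sum(1 for n in neigh if n == line[i]) / len(neigh) >= THRESHOLD

def sortedness():
    """Fraction of adjacent occupied pairs that are same-type (1.0 = fully sorted)."""
    pairs = [(line[i], line[i + 1]) for i in range(SIZE - 1)
             if line[i] != 0 and line[i + 1] != 0]
    return sum(1 for a, b in pairs if a == b) / len(pairs) if pairs else 1.0

print("initial sortedness:", round(sortedness(), 2))
for _ in range(STEPS):
    i = random.randrange(SIZE)
    if line[i] != 0 and not content(i):
        empties = [k for k in range(SIZE) if line[k] == 0]
        if empties:
            j = random.choice(empties)
            line[j], line[i] = line[i], 0
print("final sortedness:  ", round(sortedness(), 2))
```

Even with a preference this mild (an agent is content when up to 60% of its neighbours are the other type), the line typically ends up noticeably more sorted than it started. Whether the real-world utility gains from that sorting outweigh the losses is, as I said, an empirical question this kind of toy model can’t answer.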
Yes, and reading this again now I think I was way too harsh. I should have been more positive about what was obviously an earnest concern and desire to help, even if I don’t think it’s going to work out. A better response would have been to suggest other ideas to help, but, other than reforming how medical practice works so that mental suffering isn’t treated as less important than being physically debilitated (docs will agree to risky procedures to avoid physical loss of function but won’t with mental illness…likely because the family doesn’t see the suffering from the inside but does see the loss in a death, so they are liable to sue/complain if things go bad), I don’t have any.
The parent post already responded to a number of these points but let me give a detailed reply.
First, the evidence you cite doesn’t actually contradict the point being made. Just because women rate EA as somewhat less welcoming doesn’t mean that this is the reason they return at a lower rate. Indeed, the alternate hypothesis that says it’s the same reason women are less likely to be attracted to EA in the first place seems quite plausible.
As far as the quotes go, we can ignore the people simply agreeing that something should be done to increase diversity and talk about the specific reactions. I’ll defer the one about reporting a sexist remark till the end and focus on the complaints about the environment. These don’t seem to be complaints suggesting any particular animus toward or bad treatment of women or other underprivileged groups, merely people expressing a distaste for the kind of interactions they associate with largely male groups. However, other people do like that kind of interaction, so, like the question of what to serve for dinner or whether alcohol should be served, you can’t please everyone. While it’s true that in our society there is a correlation between male gender and a preference for a combative, interrupting, challenging style of interaction, there are plenty of women who also prefer this interaction style (and in my own experience at academic conferences gay men are just as likely as straight men to behave this way). Indeed, the argument that it’s anti-woman to interact in a way that involves interrupting etc. when some women do prefer this style is the very kind of harmful gender essentialism that we should be fighting against.
Of course, I think everyone agrees that we should do what we can to make EA more welcoming *when that doesn’t impose a greater cost than benefit.* Ideally, there would be parts of EA that appeal to people who like every kind of interaction style, but there are costs in terms of community cohesion, resources, etc.
The parent was arguing, persuasively imo, that imposing many of the suggested reforms would impose substantial costs elsewhere not that it might not improve diversity or offer benefits to some people. I don’t see you making a persuasive case that the costs cited aren’t very real or that the benefits outweigh them.
This finally brings us to the complaint about where to report a sexist comment. While I think no one disagrees that we should condemn sexist comments, creating an official reporting structure with disciplinary powers is just begging to get caught up in the moderator’s dilemma and to create strife and argument inside the community. Better to leave that to informal mechanisms.