Hello, I’m Devin. I blog here along with Nicholas Kross. Currently working on a bioethics MA at NYU.
I think this lack of ability to self-advocate is actually crucial to our failures to treat non-human animals with minimum decency. In fact that difference, and its arbitrariness, is one of my favorite alternatives to the argument from marginal cases:
“Say that you go through life neglecting, or even contributing to, the suffering of factory farmed animals. One day, you meet someone who tells you that she used to be a battery cage hen. She is, understandably, not pleased with how she was treated before magically transforming into a conversant agent who could confront you about it. How would you justify yourself to her?”
“This, I think, is importantly different from a closely related case, in which a rock you once kicked around, and which suffered from this, transforms and confronts you. In such a case, you could honestly say that you didn’t think you were hurting the rock at all, because you didn’t think the rock could be hurt. If this rock person were reasonable, and you could convince the rock that your extremely low credence in a scenario like this was reasonable, then it seems as though this would be a perfectly adequate excuse. There is no parallel between this reason and what you might say to the humanized hen, unless you were mistaken about the fact that, as a hen, she was suffering in her conditions. Perhaps you could instead say that you had, quite reasonably, very very low credence that she would ever be in a position to confront you about this treatment. Do you think she would accept this answer? Do you think she should? What differs between this case and the real world, in terms of what is right or wrong in your behavior, if we agree that your lack of credence that she would transform would be reasonable, but not a good enough answer? It is generally accepted that one should be held as blameworthy or blameless based on one’s actual beliefs. If these lead you astray in some act, it is a forgivable accident. Given that you are in the same subjective position in this world as in the real world, in terms of your credence that you actually will be confronted by a humanized hen, it seems as though, if you have an adequate justification in the real world, there is also something you could give as an adequate justification to this hen. Working backwards, if you have no adequate excuse you can tell the hen, you have no adequate excuse in the real world either.”
Anyway, I think this is my favorite piece of Julian’s so far!
I didn’t downvote (I rarely engage in karma voting), but if I had to guess, I would say that having the entire content of the comment be “downvote me” misled people who didn’t immediately understand the connection to your previous comment (i.e. more confusion than some specific plan to go against your stated purpose).
A nit-picking (and late) point of order I can’t resist making because it’s a pet peeve of mine, re this part:
“the public perception seems to be that you can’t be an effective altruist unless you’re capable of staring the repugnant conclusion in the face and sticking to your guns, like Will MacAskill does in his tremendously widely-publicised and thoughtfully-reviewed book.”
You don’t say explicitly here that staring at the repugnant conclusion and sticking to your guns is specifically the result of being a bullet-biting utilitarian, but it seems heavily implied by your framing. To be clear, this is roughly the argument in this part of the book:
-population ethics provably leads every theory to one or more of a set of highly repulsive conclusions most people don’t want to endorse
-out of these, the least repulsive one (my impression is that this is the most common view among philosophers, though don’t quote me on that) is the repugnant conclusion
-nevertheless, the wisest approach is to apply a moral uncertainty framework that balances all of these theories, which roughly adds up to a version of the critical level view; that view bites a sandpapered-down version of the repugnant conclusion as well as (editorializing a bit here, I don’t recall MacAskill noting this) a version of the sadistic conclusion more palatable and principled than the averagist one (see the sketch after this list)
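For readers who want the shape of that view and not just the label, here is a minimal sketch of the critical level view as it is usually stated; the notation (w_i for individual welfare, c for the critical level) is mine, not MacAskill’s.

```latex
% A minimal sketch of the critical level view (notation mine, not MacAskill's):
% the value of a population P is total welfare measured relative to a fixed
% positive critical level c, rather than relative to zero.
V(P) = \sum_{i \in P} \left( w_i - c \right), \qquad c > 0
% Two consequences follow directly:
% 1. Adding a life with welfare strictly between 0 and c lowers V(P), which is
%    the sense in which the view bites a version of the sadistic conclusion.
% 2. For any population, some sufficiently large population whose members all
%    have welfare just above c scores higher: a "sandpapered down" repugnant
%    conclusion, since those lives sit above the critical level rather than
%    being barely worth living.
```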
Note that his argument doesn’t invoke utilitarianism anywhere; it just invokes the relevant impossibility theorems and some vague principled gesturing around semi-related dilemmas for person-affecting ethics. Indeed, many non-utilitarians bite the repugnant conclusion bullet as well; what is arguably the most famous paper in defense of it was written by a deontologist.
I can virtually guarantee you that whatever clever alternative theory you come up with, it will take me all of five minutes to point out the flaws. Either it is in some crucial way insufficiently specific (this is not a virtue of the theory; actual actions are specific, so all this does is hide which bullets the theory will wind up biting, and when), or it winds up biting one or more bullets, possibly different ones at different times (as, for instance, theories that deny the independence of irrelevant alternatives do). There are other moves in this game, in particular making principled arguments for why different theories lead to these conclusions in more or less acceptable ways, but just pointing to the counterintuitive implication of the repugnant conclusion is not a move in that game; it is rather a move, not obviously worse than any other, in the already solved game of “which bullets exist to be bitten”.
Maybe the right approach to this is to just throw up our hands in frustration and say “I don’t know”, but then it’s hard to fault MacAskill, who, again, does a more formalized version of essentially this rather than just biting the repugnant conclusion bullet.
Part of my pet peeve here is with discourse around population ethics, but it also feels like discourse around WWOTF is gradually drifting further away from anything I recognize from its contents. There’s plenty to criticize in the book, but skimming the secondary readings from a few months after its release, you would think it was basically arguing “classical utilitarianism, therefore future”, which is not remotely what the book is actually like.
I can understand some of these even where I disagree, but could you elaborate on why a group being more “aspie” contributes to sexual harassment? (Disclosure: I am an aspie, but in fairness I’m also male, and I feel that I understand that one much more.)
I don’t agree with MIRI on everything, but yes, this is one of the things I like most about it.
For what it’s worth, speaking as a non-comms person, I’m a big fan of Rob Bensinger-style comms people. I like seeing him get into random twitter scraps with e/acc weirdos, or turning obnoxious memes into FAQs, or doing informal abstract-level research on the state of bioethics writing. I may be biased specifically because I like Rob’s contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI’s preferred public image was poured, but, look, I also just find that type of job description obviously offputting. In general I have liked getting to know the EAs I’ve gotten to know, and I don’t know Shakeel that well, but I would like to get to know him better. I am certainly averse to the idea of wrist-slapping him back into this empty vessel to the extent that we are blaming him for carelessness even when he specifies very clearly that he isn’t speaking for his organization. I do think that his statement was hasty, but I also think we need to be forgiving of EAs whose emotions are running a bit hot right now, especially when they circle back to self-correct afterwards.
Equality is always “equality with respect to what”. In one sense, giving a beggar a hundred dollars and giving a billionaire a hundred dollars is treating them equally, but only with respect to money. With respect to the important, fundamental things (improvement in wellbeing) the two are very unequal. I take it that the natural reading of “equal” is “equal with respect to what matters”, as otherwise it is trivial to point out some way in which any possible treatment of beings that differ in some respect must be unequal in some way (either you treat the two unequally with respect to money, or with respect to welfare, for instance).
The most radical view of equality of this sort is that, for any being for whom what matters can to some extent matter to them, one ought to treat them equally with respect to it. This is, for instance, the view of people like Singer, Bentham, and Sidgwick (yes, including non-human animals, which is my view as well). It is also, if not universally then at least to a greater degree than average, one of the cornerstones of the philosophy and culture of Effective Altruism, and it is the reading implied by the post linked in that part of the statement.
Even if you disagree with some of the extreme applications of the principle, race is easy mode for this. Virtually everyone today agrees with equality in this case, so given what a unique cornerstone of EA philosophy this type of equality is in general, in cases where it seems that people are being treated with callousness and disrespect based on their race, it makes sense to reiterate it; it is an especially worrying sign for us. Again, you might disagree that Bostrom is failing to apply equal respect of this sort, or think that this use of the word equality is not how you usually think of it, but I find it suspicious that so many people are boosting your comment given how common, even mundane, statements like this are in EA philosophy, and that the statement links directly to a page explaining it on the main EA website.
At the risk of running afoul of the moderation guidelines, this comment reads to me as very obtuse. The sort of equality you are responding to is one that I think almost nobody endorses. The natural reading of “equality” in this piece is the one very typical of, even to an extent uniquely radical about, EA: the one Bentham invokes when he says “each to count for one and none for more than one”, Sidgwick when he talks about the point of view of the universe, and Singer when he discusses equal consideration of equal interests. I would read this charitably and chalk it up to an isolated failure to read the statement charitably, but it is incredibly implausible to me that this becoming the top-voted comment can be accounted for by mass reading comprehension problems. If this were not a statement critical of an EA darling, but rather a more mundane statement of EA values that said something about how people count equally regardless of where in space and time they are, or how sentient beings count equally regardless of their species, I would be extremely surprised to see a comment like this make it to the top of the post. I get that taking this much scandal in a row hurts, but guys, for the love of god, just take the L; this behavior is very uncharming.
For what it’s worth, I think that you are a well-liked and respected critic not just outside of EA, but also within it. You have three posts and 28 comments but a total karma of 1203! Compare this to Emile Torres or Eric Hoel or basically any other external critic with a forum account. I’m not saying this to deny that you have been treated unfairly by EAs; I remember one memorable occasion when someone else was accused by a prominent EA of being your sock-puppet on basically no evidence. This is just to say, I hope you don’t get too discouraged by this. Overall I think there’s good reason to believe that you are having some impact, slowly but persistently, and many of us would welcome you continuing to push, even if we have various specific disagreements with you (as I do). This comment reads to me as very exhausted, and I understand if you feel you don’t have the energy to keep it up, but I also don’t think it’s a wasted effort.
Personally I think the Most Important Century series is closest to my own thinking, though there isn’t any single source that would completely account for my views. Then again I think my timelines are longer than some of the other people in the comments, and I’m not aware of a good comprehensive write up of the case for much shorter timelines.
The impact for me was pretty terrible. There were two main components to the devastating part of my timeline changes, which probably both had a similar amount of effect on me:
-my median estimated year moved significantly closer, with the time remaining cut down by more than half
-my probability mass on AGI arriving significantly sooner even than that bulked up
The latter gives me a nearish-term estimated prognosis of death somewhere between being diagnosed with prostate cancer and colorectal cancer, something probably survivable but hardly ignorable. Also, everyone else in the world has it. Also, it will be hard for you to get almost anyone else to take you seriously if you tell them the diagnosis.
The former change puts my best guess arrival for very advanced AI well within my life expectancy, indeed when I’m middle-aged. I’ve seen people argue that it is actually in one’s self-interest to hope that AGI arrives during their lifetime, but as I’ve written a bit about before, this doesn’t really comfort me at all. The overwhelming driver of my reaction is more that, if things go poorly and everything and everyone I ever loved is entirely erased, I will be there to see it (well, see it in a metaphorical sense at least).
There were a few months, between around April and July of this year, when this caused me some serious mental health problems; in particular, it worsened my insomnia and some other things I was already dealing with. At this point I am doing a bit better, and I can sort of put the idea back in the abstract-idea box AI risk used to occupy for me, where it feels like it can’t hurt me. Sometimes I still get flashes of dread, but mostly I think I’m past the worst of it for now.
In terms of donation plans, I donated to AI-specific work for the first time this year (MIRI and Epoch; the process of deciding which places to pick was long, frustrating, and convoluted, but probably the biggest filter was that I ruled out anyone doing significant capabilities work). More broadly, I became much more interested in governance work, and generally in work to slow down AI development, than I was before.
I’m not planning to change career paths, mostly because I don’t think there is anything very useful I can do, but if there’s something related to AI governance that comes up that I think I would be a fit for, I’m more open to it than I was before.
I think the overall balance of positive and negative sources is fair when viewed only from a “positive versus negative” standpoint. As I think Habiba Islam pointed out somewhere, much of the positive reading is much, much longer. Where I think this will wind up running into trouble is something like this:
-While there is some primary reading in this list, most of the articles, figures, events, ideas, etc. that are discussed across these readings appear in the secondary sources.
-This is pretty much inevitable; the list would multiply out far too much if she added all of the primary sources needed to evaluate the secondary sources from scratch.
-Most of the secondary sources are negative, and often misleading in some significant way.
-The standard way to try to check these problems without multiplying out primary sources too much is to read other pieces arguing with the original ones.
-The trouble is, there are very few of those on these topics outside of blogs and the EA forum, something I’ve been hand-wringing about for a while, and Thorn seems to only be looking at more official sources like academic/magazine/newspaper publications.
-I think Thorn will try to be balanced and thoughtful, but I think this disparity will almost ensure that the video will inherit many of the flaws of its sources.
Endorsed. A bunch of my friends had been recommending that I read the sequences for a while, and honestly I was skeptical it would be worth it, but I was actually quite impressed. There aren’t a ton of totally new ideas in it, but where it excels is homing in on specific, obvious-in-retrospect points about thinking well and thinking poorly, being clear, engaging, and catchy in describing them, and going through a bit of the relevant research. In short, you come out intellectually with much of what you went in with, but with reinforcements and tags put in some especially useful places.
As a caveat, I take issue with a good deal of the substantive material as well. Most notably, I don’t think he always describes those he disagrees with fairly, for instance David Chalmers, and I think “Purchase Fuzzies and Utilons Separately” injected a basically wrong and harmful meme into the EA community (I plan to write a post on this at some point when I get the chance). That said, if you go into them with some skepticism of the substance, you will come out satisfied. You can also listen to it as an audiobook here, which is how I read it.
Interesting, I’ll have to think about this one a bit, but I tend to think that something like Shiffrin’s gold bricks argument is the stronger antinatalist argument anyway.
Thanks, I appreciate the added information! I’m not sure I’m convinced that this was worthwhile, but I feel like I now have a much better understanding of the case for it.
Thanks, this is indeed helpful. I would also like to know, though, what made this property “the most appropriate” out of the three, in a bit greater detail if possible. How did its cost compare to the others? Its amenities? I think many people in this thread agree that it might have been worth it to buy some center like this, but still question whether this particular property was the most cost-effective one.
Larry Temkin is a decent candidate. I think he has plenty of misunderstandings about EA broadly, but he also defends many views that are contrary to common EA approaches and wrote a whole book about his perspective on philanthropy. As far as philosopher critics go, he is a decent mixture of a) representing a variety of perspectives unpopular among EAs and b) doing so in a rigorous and analytic way EAs are reasonably likely to appreciate; in particular, he has been socially close to many EA philosophers, especially Derek Parfit.
I’ve tended to be pretty annoyed by EA messaging around this. My impression is that the following things are true about EAs talking to the media:
-Journalists will often represent what you say in a way you would not endorse, and will rarely revise based on your feedback on this, or even give you the opportunity to give feedback
-It is often imprudent to talk to the media, at least if you are not granted anonymity first, because it shines a spotlight on you that is often distorted, and always invites some possible controversy directed at you
However, the advice is often framed as though a third thing is also true:
-It is usually bad for Effective Altruism if Effective Altruists talk to the media without extreme care
My personal impression has been that the articles about EA that are most reflective of the EA I know tend to involve interviews with EAs, and that the parts of those articles that are most reflective of EA are often the parts where the interviewed EAs are quoted. The worst generally contain no interviews at all. Interviews like this might grant unearned credibility, but at minimum they also humanize us and depict some part of the real people that we are. I guess this might not be everyone’s experience, but it’s worth remembering that even if the parts where the EAs are interviewed are often misrepresentative, so are the parts where they aren’t, often to a greater degree. This is especially true of articles that are written in relative good faith but by outsiders briefly glancing in for their impressions, and it is my impression that this describes the overwhelming majority of pieces written on EA, especially where interviewed EAs get quoted.
Still, I don’t think this advice is the main reason EA has failed so badly with PR recently. FTX was the obvious one, but in terms of actual media strategy I stand by this comment as my main diagnosis of our mistake. With some honorable exceptions, EA’s media strategy these past few months seems to me to have been something like: shine highbeams on ourselves, especially this rather narrow part of ourselves; mostly don’t respond to critics directly in any very prominent non-EA-specific place, except maybe Will MacAskill will occasionally tweet about it; and don’t respond to very harsh critics even this much. I think pretty much every step in this strategy crashed and burned.
No problem, welcome to the forum! You can feel free to share whatever you’re comfortable with, but personally I would recommend you don’t post your email address in the comments, as there was recently someone webcrawling the forum for email addresses to send a scam email to. I would reserve information like that for DMs; my own plan is to DM you his email address if and when he gives me his approval. If you’d prefer, feel free to give me your email address and I can send it to him instead; again, whatever works best for you.
I almost never engage in karma voting because I don’t really have a consistent strategy for it that I’m comfortable with, but I just voted on this one. Karma voting in general has been kind of confusing to me, but I feel like I have noticed a significant amount of wagon-circling recently. How critical a post was of EA didn’t use to be very predictive of its karma, but I’ve noticed that recently, since around the Bostrom email, it has become much more predictive. Write something in defense of EA, and you get mostly upvotes, potentially to the triple digits. Write something negative, and you get very mixed to net-negative voting, and if it reaches high enough karma, possibly even more comments. Hanania’s post on how EA should be anti-woke just got downvoted into the ground twice in a row, so I don’t think the voting reflects much ideological change by comparison (being very famous in EA is also moderately predictive, which is probably some part of the Aella post’s karma at least, and is a more mundane sort of bad, I guess).
I’m still hopeful this will bounce back in a few hours, as I often see happen, but I still suspect the overall voting pattern will be a karmic tug of war at best. I’m not sure what to make of this. Is it evaporative cooling? Are the same people just exhausted and taking it out on the bad news? Is it that the same people who were upvoting criticism before are exhausted and just not voting much at all, leaving the karma to the naysayers? (I doubt this one because of the voting patterns on moderately high karma posts of the tug-of-war variety, but it’s the sort of thing that makes me worry about my own voting: how I don’t even need to vote wrong to vote in a way that creates unreasonable disparities based on what I’m motivated to vote on at all, and just voting on everything is obviously infeasible.) Regardless, I find it very disturbing; I’m used to EA being better than this.