English teacher for adults and teacher trainer, a lover of many things (languages, literature, art, maths, physics, history) and people. Head of studies at the satellite school of Noia, Spain.
Manuel Del Río Rodríguez
The question that heads this post obviously answers itself, in that only actual perpetrators of bad deeds and their direct instigators (intellectual or otherwise) are to be held accountable for them; nevertheless, I must admit that I found Eliezer Yudkowsky's analogy unconvincing, and (not quite, but a little) disingenuous. Whenever we see adherents of some creed, ideology, or religious or thought system going to nefarious places, it is natural to wonder whether said ideas (properly or mistakenly interpreted) influenced or condoned the path they took. Some articles I have read lately have pointed the finger at the hubristic hazards of miscalculating for optimal results, and the concomitant dangers of risky betting and of cutting corners. As is well known, the road to hell is paved with good intentions. And besides, as has been stated, a lot of the people involved in this weren't just 'fellow travelers' or occasional readers of EA material. A lot of them were very visibly engaged in the movement and seen as poster children for it. And I am sure most of them were innocent victims, especially the rank-and-file workers of FTX and Alameda.
Having said that, I do not find it reasonable either to go to masochistic extremes of self-flagellation. Humans being what they are, there will always be cases of wolves in sheep’s clothing, and never enough controls to catch them in advance. Which is humbling, in a not necessarily bad way. My impression is that the EA community and its members are a wonderful group of people and they will probably come out of this situation wiser, if sadder. And that obviously, it is wrong to blame EA for what has happened.
As Eliezer Yudkowsky mentions Caroline Ellison's blog, I would like to say that I have been reading it of late, and even taking into account the potential deceitfulness of words and the pictures we build with them, I do not get from either its contents or her general trajectory that she could be a morally bankrupt person. On the contrary, the impression I got is of a true believer, and a good person. This does not preclude the possibility that, given a certain naiveté and inexperience in a field as murky as crypto, she might have let herself go along with what she perceived as temporary and 'bad' expedient means. But to believe this person ever intended to purposely and maliciously scam people out of their money, or knowingly be privy to a fraud, is, for me, completely out of the question. I believe the best option is to be charitable and wait to see what the courts of law have to say once the dust has settled. As for SBF, after reading some of the things he has said and done, that's a completely different story.
Caroline might have tripped in a pretty awful way, but I am sure she is good at the core and acted with ultimately good intentions in mind (even if the path to hell is paved with good intentions; that is why pure utilitarianism can take you to dark places). Her beauty or lack thereof has nothing to add to that. And anyway, de gustibus non est disputandum: for me, she is the opposite of not pretty.
I rather liked this comment, and think it really hits the nail on the head. As someone who has only recently come into contact with EA and developed an interest in it, and who therefore has a mostly 'outsider' perspective, I would add that there's a big difference between the perception of 'effective altruism', which almost anybody would find reasonable and morally unobjectionable, and 'Effective Altruism' / Rationalism as a movement with some beliefs and practices that many people will find weird and off-putting (basically, all those mentioned by S.E. Montgomery: elitism, long-termism, utilitarianism, a generally hubristic and nerdy belief that complex issues and affairs are reducible to numbers and optimization models, etc.).
Hello there! I found your post and the comments really interesting (as soon as I finish writing this, I will be checking The Best Textbooks on Every Subject list on LW), but would like to contribute an outsider's 2¢, as I have only recently discovered and started to take an interest in EA. The thing is, without trying to be disrespectful, that this Rationalist movement that possibly led many of you to EA feels really, really, really weird and alien at first glance, like some kind of nerdy, rationalist religion with unconventional and controversial beliefs (polyamory, or obsessing over AI) and a guru who does not appear to be well known and respected as a scientist outside his circle of followers, and whose main book seems to be a fanfiction-esque rewrite of a Harry Potter book with his ideas intertwined. I repeat that I do not mean this as an evaluation (it is probably 'more wrong', if you'll allow the pun), but from an external perspective, it almost feels like some page from a book with entries on Scientology and Science Fiction. I feel that pushing the message that you have to be a Rationalist or Rationalist-adjacent as a prerequisite to really appreciating and valuing EA can very easily backfire.
The Sequences being as long as you say, perhaps even a selection might not be the best way to get people interested if their curiosity isn't already piqued or they don't have a disproportionate amount of free time on their hands. Like, if a Marxist comes and tells you that you need to read through the three volumes of Capital and the Grundrisse before making up your mind on whether the doctrine is interesting, personally relevant, or a good or a bad thing, or if a theologian makes the same move and points towards Thomas Aquinas' very voluminous works, you would be justified in first requiring some short and convincing expository work with the core arguments and ideas, to see if they look sufficiently appealing and worth engaging with. Is there something of the kind for Rationalism?
Best regards.
M.
Thanks for the recommendations! I wouldn't have any issues either with a moderately sized book (say, 200 to 400 pages long).
Cheers.
M.
I listened to the interview yesterday. My take on what he said about this was rather that EA's core principles don't have to necessarily restrict it to its de facto socially liberal, coastal, Democratic-party demographic, and that socially conservative people could perfectly well buy into them, if they aren't packaged as 'this lefty thing'.
I find myself agreeing with quite a lot of what he says in this video as, on a personal level, the greatest difficulty I find when trying to wrap my mind around the values and principles of EA (as opposed to effective altruism, in lower case) is the axiom of impartiality, and its extension to a degree to animals. Like, in some respects, it is trivially obvious that all humans (and by extension, any creatures with sufficient reason and moral conscience) should possess an equal set of rights, but if you try to push this into moral-ethical obligations towards them, I can't quite understand why we are not supposed to make distinctions, like valuing more those who are closest to us (our community) and those we consider wise, good, etc. That would not preclude possessing, at the same time, a more generalist and abstract empathy for all sentient beings, and feeling a degree of moral obligation to help them (even if to a lesser degree than those closest to one).
Book critique of Effective Altruism
Thanks for the advice! I have also discovered the ‘block quote’ and inserted it too.
Thanks a lot for this post. I have found it a superb piece and well worth meditating on, even if I have to say that I am probably biased because, a priori, I am not too inclined towards Utilitarianism in the first place. But I think the point you make is complex and not necessarily against consequentialism as such, and would probably go some way towards accommodating the views of those who find a lot of it too alien and unpalatable.
I haven’t read enough on the topic yet, but my impression is that my train of belief would indeed be something somewhat like ‘a contractualist who wants to maximize utility’.
Hi there, and thanks for the post. I find myself agreeing a lot with what it says, so probably my biases are aligning with it, and that has to be said. I am still trying to catch up with the main branches of ethical thought and giving them a fair chance, which I think utilitarianism deserves (by instinct and inclination I am probably a very Kantian deontologist), even if it instinctively feels ‘wrong’ to me.
Online EA bookclub, anyone?
Well, that looks a bit like some Twitter-level trolling and a textbook example of 'begging the question', doesn't it? But let me follow the guidelines...
I wouldn't say I am a 'convinced EA', or accept the assumption that posting on the forum is a necessary and sufficient condition thereof. I am interested in EA, and feel that some degree of 'effective altruism' in lower case is probably a valid moral obligation whatever your philosophical stance.
As for the books, I am a bit of a bookworm and appreciate being persuaded by detailed arguments, which I tend to find more often in books (they are also less taxing on my eyes). And there are aspects of EA that I probably need to read solid arguments for, as they feel alien to some of my presuppositions (utilitarianism as a moral framework, rights of non-rational and non-moral creatures, etc.).
Thank you, Alex! I quickly checked with the search engine whether there were any ongoing book clubs, but didn't find yours.
Just joined the EA Anywhere Slack channel, and might join you for your book club, although I imagine you’ve already gone through the most obvious first choices.
Thanks for the other links too!
Thanks, Martijn. I would like to give it a go, even if I am rather busy with work, reading and studying at the moment.
Wonderful! This will make me feel (slightly) less stupid for asking very basic stuff. I actually had 3 or so in mind, so I might write a couple of comments.
Most pressing: what is the consensus on the tractability of the Alignment problem? Have there been any promising signs of progress? I've mostly just heard Yudkowsky portray the situation in terms so bleak that, even if one were to accept his arguments, the best thing to do would be nothing at all and just enjoy life while it lasts.
Another probably very silly question: in what sense isn't AI alignment just plain inconceivable to begin with? I mean, given the premise that we could and did create a superintelligence many orders of magnitude superior to ourselves, how could it even make sense to have any type of fail-safe mechanism to 'enslave it' to our own values? A priori, it sounds like trying to put shackles on God. We can barely manage to align ourselves as a species.
Just listened to a podcast interview of yours, Geoffrey Miller (Manifold, with Steve Hsu). Do you really believe it is viable to impose a very long pause (you mention 'just a few centuries')? The likelihood of such a thing taking place seems to me more than extremely remote, at least until we get a pragmatic example of the harm AI can do, a Trinity test of sorts.
Hello! My name is Manuel del Río. I am a 42-year-old Spanish trilingual (Galician, English and Spanish) TEFL teacher and satellite school head of studies, and I have been working for the last twelve years in the Official Language Schools of the Galician Autonomous Region (the northwestern corner of Spain). I have also worked in the fields of translation and cultural journalism.
I studied at the University of Santiago de Compostela, where I took degrees in History and English Philology, and a Master’s in Literary Theory, Comparative Literature and Cultural Studies.
I recently learned about Effective Altruism in the aftermath of the FTX bankruptcy affair (some marginal good can come out of the bad, I guess) and am interested in learning more about it and possibly getting engaged. I find its minimalistic program of helping others in a rationally optimized way quite appealing, and have informed myself about some of the wonderful work the people in this community are doing. I do have some intellectual qualms about strict utilitarianism as an ethical compass, and have a Confucian affinity for helping those near us and therefore making distinctions, but I don't find these incompatible with a juxtaposed broader sweep. One can help the community AND the wider circle of humanity.
I look forward to getting to know more about EA and everyone in this community, and to getting myself involved.
Have a nice day!