English teacher for adults and teacher trainer, a lover of many things (languages, literature, art, maths, physics, history) and people. Head of studies at the satellite school of Noia, Spain.
Manuel Del Río Rodríguez
I’d say there are two main aspects that negatively affect the portrayal of EA. One I’ve mentioned below: Lewis goes out of his way to establish that the inner circle were ‘the EAs’, and implicitly seems to be making the point that Sam’s mentality is a perfect match for EA mentality. But much more damning is how he depicts the Schism in early Alameda. Even though he practically sides with Sam in the dispute, from what he describes it beggars belief that the EA community, and more so its top figures, didn’t react more strongly after hearing what the Alameda quitters were saying. The pattern of the early Alameda mess very eerily prefigured what would happen, and Sam’s shadiness.
Reading the book right now like everybody else, I guess. If Lewis is to be believed (complicated in parts, as he is clearly seeing all this through Sam-tinted glasses), ALL the members of his inner circle (Caroline, but also Nishad and Wang) were committed EAs, which is something I find disturbing.
I was surprised too, and would be more so were it not for my awareness of human fallibility and what suckers we are for a good story. I don’t doubt that some of what Lewis said in that interview might be true, but it is being massively distorted by his affinity and closeness to Sam.
Dunno, but I’d guess it would depend on the rough percentages by which you weigh the different moral stances. Myself, I tend to feel like 70% deontologist, 30% consequentialist, which means I would definitely write the negative review (I’m not vegan or vegetarian either, so it’s really a no-brainer for me here, though). Ultimately, you have to make the choice you think is best given the limited information available.
I think it has been said that, among the leadership, Nishad Singh was pretty close to EA too. Further down the line, it is commonly said that Alameda especially attracted a lot of EA people, as that was part of its appeal from the beginning. Needless to say, though, these people would have been completely in the dark about what was happening until they were told, at the very end.
I mostly agree that people seem to have overreacted and castigated themselves over SBF-FTX, but I also feel the right amount of reaction should be non-trivial. We aren’t just talking about SBF: the whole affair included other insiders who were arguably as much ‘true believers’ in EA as it is reasonable to expect (like Caroline Ellison), and SBF-FTX became poster children of the movement at a very high level. But I think you are mostly right: one can’t expect omniscience and character-detection among EAs when those fooled included much more cynical, savvy and skeptical professionals in finance.
For what it’s worth, I feel some EA values might have fueled some of Sam’s bad praxis, but they weren’t the first mover. From what I’ve read, he absorbed (naive?) utilitarianism and an appetite for high-stakes risk at home. As for the counterfactual of him having ended up where he has without any involvement with EA… I just don’t know. The story that is usually told is that he intended to work in charity NGOs before Will MacAskill steered him towards an ‘earning to give’ path. Perhaps he would have gone into finance anyway after some time. It’s very difficult to gauge intentions and mental states. I have never been a fan of Sam’s (I discovered his existence, along with that of EA, after and because of the FTX affair), but I can still assume that, if it comes to ‘intent’, his thoughts were probably more in a naive utilitarian frame (‘rules are for the sheep; I am smart enough to take dangerous bets and do some amoral stuff towards creating the greater good’) than ‘let me get rich by a massive scam and fleece the suckers’. Power and vanity would probably have reinforced this as well.
You’re right, but it does feel like some pretty strong induction: grounds not just for refusing to accept the claim at face value, but for demanding some extraordinary evidence. I’m speaking from the point of view of a person ignorant of the topic, just making the inference from the perennially recurring apocalyptic discourses.
Just listened to a podcast interview of yours, Geoffrey Miller (Manifold, with Steve Hsu). Do you really believe it is viable to impose a very long pause (you mention ‘just a few centuries’)? The likelihood of such a thing taking place seems to me more than extremely remote, at least until we get a practical example of the harm AI can do: a Trinity test of sorts.
Another probably very silly question: in what sense isn’t AI alignment just plain inconceivable to begin with? I mean, given the premise that we could and did create a superintelligence many orders of magnitude superior to ourselves, how could it even make sense to have any type of fail-safe mechanism to ‘enslave it’ to our own values? A priori, it sounds like trying to put shackles on God. We can barely manage to align ourselves as a species.
Wonderful! This will make me feel (slightly) less stupid for asking very basic stuff. I actually had 3 or so in mind, so I might write a couple of comments.
Most pressing: what is the consensus on the tractability of the alignment problem? Have there been any promising signs of progress? I’ve mostly just heard Yudkowsky portray the situation in terms so bleak that, even if one were to accept his arguments, the best thing to do would be nothing at all and just enjoy life while it lasts.
Thanks, Martijn. I would like to give it a go, even if I am rather busy with work, reading and studying at the moment.
Thank you, Alex! I quickly checked with the search engine whether there were any ongoing book clubs but didn’t find yours.
Just joined the EA Anywhere Slack channel, and might join you for your book club, although I imagine you’ve already gone through the most obvious first choices.
Thanks for the other links too!
Well, that looks a bit like some twitter-level trolling and a textbook example of ‘begging the question’, doesn’t it? But let me follow the guidelines...
I wouldn’t say I am a ‘convinced EA’, nor do I consider correct the assumption that posting on the forum is a necessary and sufficient condition thereof. I am interested in EA, and feel that some degree of ‘effective altruism’ in lowercase is probably a valid moral obligation whatever your philosophical stance.
As for the books, I am a bit of a bookworm and appreciate being persuaded by detailed arguments, which I tend to find more often in books (they are also less taxing on my eyes). And there are aspects of EA that I probably need to read solid arguments for, as they feel alien to some of my presuppositions (utilitarianism as a moral framework, the rights of non-rational and non-moral creatures, etc.).
Online EA bookclub, anyone?
Hi there, and thanks for the post. I find myself agreeing a lot with what it says, so probably my biases are aligning with it, and that has to be said. I am still trying to catch up with the main branches of ethical thought and giving them a fair chance, which I think utilitarianism deserves (by instinct and inclination I am probably a very Kantian deontologist), even if it instinctively feels ‘wrong’ to me.
I haven’t read enough on the topic yet, but my impression is that my train of belief would indeed be something somewhat like ‘a contractualist who wants to maximize utility’.
Thanks a lot for this post. I have found it a superb piece and well worth meditating about, even if I have to say that I am probably biased because, a priori, I am not too inclined towards Utilitarianism in the first place. But I think the point you make is complex and not necessarily against consequentialism as such, and would probably go some way to accommodate the views of those who find a lot of it too alien and unpalatable.
Thanks for the advice! I have also discovered the ‘block quote’ and inserted it too.
It was for me. Also, I had read about Tara and others leaving Alameda and having issues with Sam, but not the gory details.