English teacher for adults and teacher trainer, a lover of many things (languages, literature, art, maths, physics, history) and people. Head of studies at the satellite school of Noia, Spain.
Manuel Del Río Rodríguez 🔹
I feel one is always allowed not to speak about what they don’t want to, but that if one does decide to speak about something, they should never make a statement they know is a lie. This is sad, because depending on the issue and how it relates to your career and other things, you might not be able to just keep quiet, and besides, your silence is going to be interpreted uncharitably. People who have shown themselves to consistently value and practice truth-telling should be allowed some sort of leeway, like ‘I will only answer n randomly chosen questions today (n also randomized) and you are not entitled to press further on anything I don’t answer’.
I am not being precise with language, but what I meant was something like this: sometimes you know that stating certain truths, or merely accepting the possibility of certain things being true and being willing to explore and publicize them no matter what, might have negative consequences, like being hurtful and/or offensive to people, frequently for good, pragmatic and historical reasons. Prioritizing the avoidance of harm would feel like a perfectly valid, utilitarian consideration, even if I disagree with it trumping all others. In terms of Haidt’s moral framework, one can prioritize Care/Harm over Liberty/Oppression. Myself, I have a deontological, quasi-religious belief in truth and truth-seeking as an end in itself.
I agree with that, and that our goal should be to achieve both, but reality being what it is, there are going to be times when truth-seeking and kindness conflict, and one has to make a trade-off. Ultimately, I choose truth-seeking in cases of conflict, even after weighing the negative effects it can generate. But to each his own.
Really agree with this take. Ultimately, I get the impression that there is a growing divide in EA between people who prioritize truth-seeking and those who prioritize better PR and kindness. And these are complex topics with difficult trade-offs that each person has to navigate and settle on a personal basis.
I respect you and your opinions a lot, Geoffrey Miller, but I feel Scott is really in the right on this one. I fear that EA is right now giving too much of an impression of being in full-blown war mode against Sam Altman, and I can see this backfiring spectacularly, as in him (and the industry) burning all bridges with anything EA- and Rationalist-adjacent in AI safety. It looks too much like Classical Greek tragedy: actions taken to avoid a certain outcome actually making it come to pass. I do understand this is a risk you might consider worth taking if you are completely convinced of the need to dynamite and stop the whole AI industry.
“This is like saying that if I break into the Federal Reserve Bank, make off with a million bucks, spend it all on Powerball tickets and happen to win, it was okay.” – Judge Kaplan
Mostly agree and have found your post insightful, but am not too sure about the ‘confront this a bit’ part. I feel most EAs and most Rationalists are very solidly on the left (not the radical, SJW fringe, but very clearly left of center, Democratic-leaning). I vaguely remember having read somewhere Tyler Cowen describing EA as ‘what SJW should be like’. Still, I feel that political partisanship and accepting labels is such a generally toxic and counterproductive affair that it is best avoided. And I think there’s probably some inevitable tension inside EA between people who prioritize the search for veracity and effectiveness, and a high degree of respect for the freedom to explore unconventional and inconvenient truths, and others who might lean more towards left-coded practices and beliefs.
It was for me. Also, I had read about Tara and others leaving Alameda and having issues with Sam, but not the gory details.
I’d say there are two main aspects that reflect negatively on EA’s portrayal. One I’ve mentioned below: Lewis goes out of his way to establish that the inner circle were ‘the EAs’, and implicitly seems to be making the point that Sam’s mentality was a perfect match for EA mentality. But much more damning is how he depicts The Schism in early Alameda. Even though he practically sides with Sam in the dispute, from what he describes it beggars belief that the EA community, and even more so its top figures, didn’t react more strongly after hearing what the Alameda quitters were saying. The pattern of the early Alameda mess eerily prefigured what would later happen, as well as Sam’s shadiness.
Reading the book right now like everybody else, I guess. If Lewis is to be believed (complex in parts, as he is clearly seeing all this through Sam-tinted glasses), ALL the members of his inner circle (Caroline, but also Nishad and Wang) were committed EAs, which is something I find disturbing.
I was surprised too, and would be more so were it not for my awareness of human fallibility and of how much of a sucker we all are for good stories. I don’t doubt that some of what Lewis said in that interview might be true, but it is being massively distorted by his affinity and closeness to Sam.
Dunno, but I’d guess it would depend on the rough percentages with which you weigh the different moral stances. Myself, I tend to feel like 70% deontologist, 30% consequentialist, which means I would definitely write the negative review (I’m not vegan or vegetarian either, so it’s really a no-brainer for me here, though). Ultimately, you have to make the choice you think is best given the limited information available.
I think it has been said that, among the leadership, Nishad Singh was pretty close to EA too. Further down the line, it is commonly said that Alameda attracted a lot of EA people, as that was part of its appeal from the beginning. Needless to say, though, these people would have been completely in the dark about what was happening until they were told, at the very end.
I mostly agree that people seem to have overreacted and castigated themselves over SBF-FTX, but I also feel the right amount of reaction should be non-trivial. We aren’t just talking about SBF: the whole affair included other insiders who were arguably as much ‘true believers’ in EA as it is reasonable to expect (like Caroline Ellison), and SBF-FTX became poster children of the movement at a very high level. But I think you are mostly right: one can’t expect omniscience and a special level of character-detection from EAs when those fooled included much more cynical, savvy and skeptical professionals in finance.
For what it’s worth, I feel some EA values might have fueled some of Sam’s bad praxis, but they weren’t the first mover. From what I’ve read, he absorbed (naive?) utilitarianism and a high-risk-taking disposition from home. As for the counterfactual of him ending up where he has without any involvement with EA… I just don’t know. The story that is usually told is that his intent was to work in charity NGOs before Will MacAskill steered him towards an ‘earning to give’ path. Perhaps he would have gone into finance anyway after some time. It’s very difficult to gauge intentions and mental states. I have never been a fan of Sam’s (I discovered his existence, along with that of EA, after and because of the FTX affair), but I can still assume that, if it comes to ‘intent’, his thoughts were probably more in a naive utilitarian, ‘rules are for the sheep, I am smart enough to take dangerous bets and do some amoral stuff towards creating the greater good’ frame than ‘let me get rich by a massive scam and fleece the suckers’. Power and vanity would probably have reinforced this as well.
You’re right, but it does feel like some pretty strong induction, though not just for not accepting the claim at face value but for demanding some extraordinary evidence. I’m speaking from the p.o.v. of a person ignorant of the topic, just making the inference from the perennially recurring apocalyptic discourses.
Just listened to a podcast interview of yours, Geoffrey Miller (Manifold, with Steve Hsu). Do you really believe it is viable to impose a very long pause (you mention ‘just a few centuries’)? The likelihood of such a thing taking place seems to me more than extremely remote, at least until we get a pragmatic example of the harm AI can do, a Trinity test of sorts.
Another probably very silly question: in what sense isn’t AI alignment just plain inconceivable to begin with? I mean, given the premise that we could and did create a superintelligence many orders of magnitude superior to ourselves, how could it even make sense to have any type of fail-safe mechanism to ‘enslave it’ to our own values? A priori, it sounds like trying to put shackles on God. We can barely manage to align ourselves as a species.
Wonderful! This will make me feel (slightly) less stupid for asking very basic stuff. I actually had 3 or so in mind, so I might write a couple of comments.
Most pressing: what is the consensus on the tractability of the Alignment problem? Have there been any promising signs of progress? I’ve mostly just heard Yudkowsky portray the situation in terms so bleak that, even if one were to accept his arguments, the best thing to do would be nothing at all and just enjoy life while it lasts.
Thanks, Martijn. I would like to give it a go, even if I am rather busy with work, reading and studying at the moment.
Part A.
This will not be fully theoretical: I’ve already been donating 5% for the last two years. First pick would be the Malaria Consortium. It seems to be very cost-effective ($5,000 per life saved on average, $7 per child treated with a full course of medicine). It also has strong evidence of impact.
Second option would be the Against Malaria Foundation. It is pretty similar to the first choice in target, effectiveness and evidence of impact, but its numbers are slightly worse (perhaps?). Cost per life saved is about $2,000 more, which looks worse, but cost per output (a bednet, as opposed to the Consortium’s full course of medicine per child) is a bit lower, at $6. Also, working on prevention seems more far-sighted and perhaps more controllable.
Third choice: Helen Keller International. Cost per outcome is practically the same as in the two previous cases, and it is much cheaper per output (just $2 for supplements), but I am more uncertain about the specific results. (A rough back-of-the-envelope comparison of the three is sketched below.)
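For concreteness, here is a minimal sketch of the kind of comparison behind these picks, using only the rough per-life and per-output figures quoted above (they are the illustrative numbers from my own reading, not official GiveWell estimates), with a hypothetical $1,000 budget as a placeholder:

```python
# Back-of-the-envelope comparison of the three picks.
# Figures are the rough ones quoted in the comment above, not authoritative estimates.

BUDGET = 1_000  # hypothetical yearly donation in USD

charities = {
    # name: (approx. cost per life saved, approx. cost per unit of output)
    "Malaria Consortium": (5_000, 7),          # output: full course of medicine per child
    "Against Malaria Foundation": (7_000, 6),  # output: one bednet
    "Helen Keller International": (5_000, 2),  # output: one supplement; cost per life "roughly the same"
}

for name, (cost_per_life, cost_per_output) in charities.items():
    lives = BUDGET / cost_per_life
    outputs = BUDGET / cost_per_output
    print(f"{name}: ~{lives:.2f} lives saved, ~{outputs:.0f} outputs for ${BUDGET:,}")
```

Crude as it is, this is the shape of the reasoning: roughly similar cost per life saved across the three, with differences mainly in what a marginal dollar buys and in how confident I am in the underlying evidence.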
Part B.
For all the reasons set out above, if I had to choose only one, it would be the Malaria Consortium.
Part C.
Generally, decisions relating to investments for retirement in 20 years’ time. Perhaps I should also consider alternative jobs or a job promotion through this quantitative mindset.