I am Issa Rice. https://issarice.com/
Eliezer’s tweet is about the founding of OpenAI, whereas Agrippa’s comment is about a 2017 grant to OpenAI (OpenAI was founded in 2015, so this was not a founding grant). It seems like, to argue that Open Phil’s grant was net negative (and so strongly net negative as to swamp other EA movement efforts), one would have to compare OpenAI’s work in a counterfactual world where it never got the extra $30 million in 2017 (and Holden never joined the board) against the actual world in which those things happened. That seems a lot harder to argue for than what Eliezer is claiming (Eliezer only has to compare a world where OpenAI was never founded against the actual world where it exists).
Personally, I agree with Eliezer that the founding of OpenAI was a terrible idea, but I am pretty uncertain about whether Open Phil’s grant was a good or bad idea. Given that OpenAI had already disrupted the “nascent spirit of cooperation” that Eliezer mentions and was going to do things, it seems plausible that buying a board seat for someone with quite a bit of understanding of AI risk is a good idea (though I can also see many reasons it could be a bad idea).
One can also argue that EA memes re AI risk led to the creation of OpenAI, and that therefore EA is net negative (see here for details). But if this is the argument Agrippa wants to make, then I am confused why they decided to link to the 2017 grant.
What textbooks would you recommend for these topics? (Right now my list is only “Linear Algebra Done Right”)
I would recommend not starting with Linear Algebra Done Right unless you already know the basics of linear algebra. The book does not cover some basic material (like row reduction, elementary matrices, solving linear equations) and instead focuses on trying to build up the theory of linear algebra in a “clean” way, which makes it enlightening as a second or third exposure to linear algebra but a cruel way to be introduced to the subject for the first time. I think 3Blue1Brown videos → Vipul Naik’s lecture notes → 3Blue1Brown videos (again) → Gilbert Strang-like books/Treil’s Linear Algebra Done Wrong → 3Blue1Brown videos (yet again) → Linear Algebra Done Right would provide a much smoother experience. (See also this comment that I wrote a while ago.)
Many domains that people tend to conceptualize as “skill mastery, not cult indoctrination” also have some cult-like properties like having a charismatic teacher, not being able to question authority (or at least, not being encouraged to think for oneself), and a social environment where it seems like other students unquestioningly accept the teachings. I’ve personally experienced some of this stuff in martial arts practice, math culture, and music lessons, though I wouldn’t call any of those a cult.
Two points this comparison brings up for me:
EA seems unusually good compared to these “skill mastery” domains in repeatedly telling people “yes, you should think for yourself and come to your own conclusions”, even at the introductory levels, and also just generally being open to discussions like “is EA a cult?”.
I’m worried this post will be condensed into people’s minds as something like “just conceptualize EA as a skill instead of this cult-like thing”. But if even skill-like things have cult-like elements, maybe that condensed version won’t help people make EA less cult-like. Or maybe it’s actually okay for EA to have some cult-like elements!
He was at UW in person (he was a grad student at UW before he switched his PhD to AI safety and moved back to Berkeley).
Setting expectations without making it exclusive seems good.
“Seminar program” or “seminar” or “reading group” or “intensive reading group” sound like good names to me.
I’m guessing there is a way to run such a group in a way that both you and I would be happy about.
The actual activities that the people in a fellowship engage in, like reading things and discussing them and socializing and doing giving games and so forth, don’t seem different from what a typical reading club or meetup group does. I am fine with all of these activities, and think they can be quite valuable.
So how are EA introductory fellowships different from a bare reading club or meetup group? My understanding is that the main differences are exclusivity and the branding. I’m not a fan of exclusivity in general, but especially dislike it when there doesn’t seem to be a good reason for it (e.g. why not just split the discussion into separate circles if there are too many people?) or where self-selection would have worked (e.g. making the content of the fellowship more difficult so that the less interested people will leave on their own). As for branding, I couldn’t find a reason why these groups are branded as “fellowships” in any of the pages or blog posts I looked at. But my guess is that it is a way to manufacture prestige for both the organizers/movement and for the participants. This kind of prestige-seeking seems pretty bad to me. (I can elaborate more on either point if you want to understand my reasoning.)
I haven’t spent too much time looking into these fellowships, so it’s quite possible I am misunderstanding something, and would be happy to be corrected.
I didn’t. As far as I know, introductory fellowships weren’t even a thing in EA back in 2014 (or if they were, I don’t remember hearing about them back then despite reading a bunch of EA things on the internet). However, I have a pretty negative opinion of these fellowships, so I don’t think I would have wanted to start one even if they had been around at the time.
(I tried starting the original EA group at UW in 2014. I’m no longer a student at UW and don’t even live in the Seattle area currently.)
Seems like you found the Messenger group, which is the most active thing I am aware of. You’ve also probably seen the Facebook group and could try messaging some of the people there who joined recently.
I don’t want to discourage you from trying, but here are some more details: I was unable to start an EA group at UW in 2014 (despite help from Seattle EA organizers). At the time I thought this was mainly due to my poor social skills (and, to be honest, I think my poor social skills were still a significant factor). But then Rohin Shah (who was one of the organizers or creators of the successful group at UC Berkeley) tried starting the group again in 2016, and it still didn’t take off. I think a bunch of factors make it pretty difficult to start an EA group at UW (less curious/smart students, people being more narrowly career-oriented, UW being a commuter school, etc.; given how big the school is, the students there are surprisingly weak), and this is something I wish I had known back in 2014 (at the time I had only heard of successful student groups, so I thought it would be easy to get a group going and meet Really Cool People).
Scott Garrabrant has discussed this (or some very similar distinction) in some LessWrong comments. There’s also been a lot of discussion about babble and prune, which is basically the same distinction, except happening inside a single mind instead of across multiple minds.
There are already websites like Master How To Learn and SuperMemo Guru, the various guides on spaced repetition systems on the internet (including Andy Matuschak’s prompt-writing guide, which is presented in the mnemonic medium), and books like Make It Stick. If I were working on such a project, I would try to lay out more clearly what is missing from these existing resources.
My personal feeling is that enough popularization of learning techniques is already taking place (though one exception I can think of is to make SuperMemo-style incremental reading more accessible). So I would be much more interested in having people push the field forward (e.g. What contexts other than book learning can spaced repetition be embedded in? How do we write even better prompts, especially when sharing them with other people? Why are the people obsessed with learning not often visibly more impressive than people who don’t think about how to learn, and what can we do about that?).
(I read the non-blockquote parts of the post, skimmed the blockquotes, and did not click through to any of the links.)
It seems like the kind of education discussed in this post is exclusively mass schooling in the developing world, which is not clear from the title or intro section. If that’s right, I would suggest editing the title/intro to be clearer about this. The reason is that I am quite interested in improving education so I was interested to read objections to my views, but I tend to focus on technical subjects at the university level so I feel like this post wasn’t actually relevant to me.
For the past five years I have been doing contract work for a bunch of individuals and organizations, often overlapping with the EA movement’s interests. For a list of things I’ve done, you can see here or here. I can say more about how I got started and what it’s like to do this kind of work if there is interest.
Vipul Naik asked a similar question near the beginning of the pandemic.
What are your thoughts on chronic anxiety and DP/DR induced by psychedelics? Do you have an idea of how common this kind of condition is and how best to treat or manage it?
What do you think of the research chemicals scene (e.g. r/researchchemicals)?
For me, I don’t think there is a single dominant reason. Some factors that seem relevant are:
Moral uncertainty, both at the object-level and regarding metaethics, which makes me uncertain about how altruistic I should be. Forming a community around “let’s all be altruists” seems like an epistemic error to me, even though I am interested in figuring out how to do good in the world.
On a personal level, not having any close friends who identify as an effective altruist. It feels natural and good to me that a community of people interested in the same things will also tend to develop close personal bonds. The fact that I haven’t been able to do this with anyone in the EA community (despite having done so with people outside the community) is an indication that EA isn’t “my people”.
Too few people who I feel truly “get it” or who are actually thinking. I think of most people in the movement as followers or promoters, who are not even doing an especially good job at that.
Generic dislike of labels and of having identities. This doesn’t explain everything though, because some labels repel me less than others (e.g. calling myself a “rationalist” bothers me less than calling myself an “effective altruist”).
How is Nonlinear currently funded, and how does it plan to get funding for the RFPs?
Another idea is to set up conditional AMAs, e.g. “I will commit to doing an AMA if at least n people commit to asking questions.” This has the benefit of giving each AMA its own time (without competing for attention with other AMAs) while trying to minimize the chance of time waste and embarrassment.
That one is linked from Owen’s post.
In the April 2020 payout report, Oliver Habryka wrote:
I’ve also decided to reduce my time investment in the Long-Term Future Fund since I’ve become less excited about the value that the fund can provide at the margin (for a variety of reasons, which I also hope to have time to expand on at some point).
I’m curious to hear more about this (either from Oliver or any of the other fund managers).