I am Issa Rice. https://issarice.com/
Scott Garrabrant has discussed this (or a very similar distinction) in some LessWrong comments. There’s also been a lot of discussion about babble and prune, which is basically the same distinction, except happening inside a single mind instead of across multiple minds.
There are already websites like Master How To Learn and SuperMemo Guru, the various guides on spaced repetition systems on the internet (including Andy Matuschak’s prompt-writing guide, which is presented in the mnemonic medium), and books like Make It Stick. If I were working on such a project, I would try to lay out more clearly what is missing from these existing resources.
My personal feeling is that enough popularization of learning techniques is already taking place (though one exception I can think of is to make SuperMemo-style incremental reading more accessible). So I would be much more interested in having people push the field forward (e.g. What contexts other than book learning can spaced repetition be embedded in? How do we write even better prompts, especially when sharing them with other people? Why are the people obsessed with learning not often visibly more impressive than people who don’t think about how to learn, and what can we do about that?).
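For readers who haven’t looked under the hood of these systems, here is a minimal sketch of an SM-2-style review scheduler (the algorithm behind SuperMemo 2, and loosely behind Anki). The parameter values follow the published SM-2 defaults, but treat this as an illustration of the general technique rather than a faithful implementation of any particular tool.

```python
# Minimal SM-2-style scheduler sketch. Constants follow the published
# SM-2 defaults; real systems (Anki, SuperMemo) differ in many details.

def sm2_update(interval_days: float, ease: float, repetition: int, grade: int):
    """Return (new_interval, new_ease, new_repetition) after one review.

    grade: self-assessed recall quality from 0 to 5 (>= 3 counts as success).
    """
    if grade >= 3:
        if repetition == 0:
            interval_days = 1
        elif repetition == 1:
            interval_days = 6
        else:
            interval_days = interval_days * ease
        # Ease-factor update from SM-2, clamped at the conventional 1.3 floor.
        ease = max(1.3, ease + (0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02)))
        repetition += 1
    else:
        # Failed recall: restart the repetition sequence; SM-2 leaves the
        # ease factor unchanged in this case.
        repetition = 0
        interval_days = 1
    return interval_days, ease, repetition

# Example: a card recalled successfully three times, then forgotten once.
state = (0.0, 2.5, 0)
for grade in [4, 5, 4, 2]:
    state = sm2_update(*state, grade)
    print(state)  # intervals grow to 1, 6, ~15.6 days, then reset to 1
```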
(I read the non-blockquote parts of the post, skimmed the blockquotes, and did not click through to any of the links.)
It seems like the kind of education discussed in this post is exclusively mass schooling in the developing world, which is not clear from the title or intro section. If that’s right, I would suggest editing the title/intro to be clearer about this. The reason is that I am quite interested in improving education so I was interested to read objections to my views, but I tend to focus on technical subjects at the university level so I feel like this post wasn’t actually relevant to me.
For the past five years I have been doing contract work for a bunch of individuals and organizations, often overlapping with the EA movement’s interests. For a list of things I’ve done, you can see here or here. I can say more about how I got started and what it’s like to do this kind of work if there is interest.
Vipul Naik asked a similar question near the beginning of the pandemic.
What are your thoughts on chronic anxiety and depersonalization/derealization (DP/DR) induced by psychedelics? Do you have an idea of how common this kind of condition is and how best to treat or manage it?
What do you think of the research chemicals scene (e.g. r/researchchemicals)?
For me, I don’t think there is a single dominant reason. Some factors that seem relevant are:
Moral uncertainty, both at the object-level and regarding metaethics, which makes me uncertain about how altruistic I should be. Forming a community around “let’s all be altruists” seems like an epistemic error to me, even though I am interested in figuring out how to do good in the world.
On a personal level, not having any close friends who identify as an effective altruist. It feels natural and good to me that a community of people interested in the same things will also tend to develop close personal bonds. The fact that I haven’t been able to do this with anyone in the EA community (despite having done so with people outside the community) is an indication that EA isn’t “my people”.
Too few people who I feel truly “get it” or who are actually thinking. I think of most people in the movement as followers or promoters, and not especially good ones at that.
A generic dislike of labels and of having identities. This doesn’t explain everything though, because I feel less repulsed by some labels than others (e.g. calling myself a “rationalist” bothers me less than calling myself an “effective altruist”).
How is Nonlinear currently funded, and how does it plan to get funding for the RFPs?
Another idea is to set up conditional AMAs, e.g. “I will commit to doing an AMA if at least n people commit to asking questions.” This has the benefit of giving each AMA its own time (without competing for attention with other AMAs) while trying to minimize the chance of time waste and embarrassment.
That one is linked from Owen’s post.
[Question] Why “cause area” as the unit of analysis?
In the April 2020 payout report, Oliver Habryka wrote:
I’ve also decided to reduce my time investment in the Long-Term Future Fund since I’ve become less excited about the value that the fund can provide at the margin (for a variety of reasons, which I also hope to have time to expand on at some point).
I’m curious to hear more about this (either from Oliver or any of the other fund managers).
I am wondering how the fund managers are thinking, longer term, about encouraging more independent researchers and projects to come into existence and stay in existence. So far as I can tell, there hasn’t been much renewed granting to independent individuals and projects (i.e. granting a second or third time to grantees who have already received an LTFF grant). Do most grantees have a solid plan for securing funding after their LTFF grant money runs out, and if so, what do they tend to do?
I think LTFF is doing something valuable by giving people the freedom to not “sell out” to more traditional or mass-appeal funding sources (e.g. academia, established orgs, Patreon). I’m worried about a situation where receiving a grant from LTFF isn’t enough to be sustainable, so that people go back to doing more “safe” things like working in academia or at an established org.
Any thoughts on this topic?
Ok I see, thanks for the clarification! I didn’t notice the use of the phrase “the MIRI method”, which does sound like an odd way to phrase it (if MIRI was in fact not involved in coming up with the model).
MIRI and the Future of Humanity Institute each created models for calculating the probability that a new researcher joining MIRI will avert existential catastrophe. MIRI’s model puts it at between and , while the FHI estimates between and .
The wording here makes it seem like MIRI/FHI created the model, but the link in the footnote indicates that the model was created by the Oxford Prioritisation Project. I looked at their blog post for the MIRI model but it looks like MIRI wasn’t involved in creating the model (although the post author seems to have sent it to MIRI before publishing the post). I wonder if I’m missing something though, or misinterpreting what you wrote.
Did you end up writing this post? (I looked through your LW posts since the timestamp of the parent comment but it doesn’t seem like you did.) If not, I would be interested in seeing some sort of outline or short list of points even if you don’t have time to write the full post.
I think the forum software hides comments from new users by default. You can go here (and click the “play” button) to search for the most recently created users. You can see that Nathan Grant and ssalbdivad have comments on this post that are so far visible only via their user pages, not yet on the post itself.
Edit: The comments mentioned above are now visible on this post.
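For anyone who wants to reproduce this kind of lookup programmatically rather than through the in-browser query editor, here is a rough sketch that POSTs a GraphQL query to the forum’s public /graphql endpoint. The query shape below (the `users` resolver and its `terms` arguments) is an assumption about how the LessWrong-derived forum software exposes data; check the schema docs in the in-browser editor before relying on it.

```python
# Rough sketch: fetch the most recently created forum users via GraphQL.
# The `users` resolver and the `sort`/`limit` terms are assumptions about
# the LessWrong-derived forum schema; verify against the schema browser
# in the forum's /graphql editor before use.
import requests

ENDPOINT = "https://forum.effectivealtruism.org/graphql"

QUERY = """
{
  users(input: {terms: {sort: {createdAt: -1}, limit: 10}}) {
    results {
      username
      createdAt
    }
  }
}
"""

response = requests.post(ENDPOINT, json={"query": QUERY})
response.raise_for_status()
for user in response.json()["data"]["users"]["results"]:
    print(user["createdAt"], user["username"])
```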
So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.
Can you say how you came up with the “moving from 1% to 0.8%” part? Everything else in your comment makes sense to me.
(I tried starting the original EA group at UW in 2014. I’m no longer a student at UW and don’t even live in the Seattle area currently.)
Seems like you found the Messenger group, which is the most active thing I am aware of. You’ve also probably seen the Facebook group and could try messaging some of the people there who joined recently.
I don’t want to discourage you from trying, but here are some more details: I was unable to start an EA group at UW in 2014 (despite help from Seattle EA organizers). At the time I thought this was mainly due to my poor social skills (and, to be honest, I think my poor social skills really were a significant factor). But then Rohin Shah (who was one of the organizers or creators of the successful group at UC Berkeley) tried starting the group again in 2016, and it still didn’t take off. I think a bunch of factors make it pretty difficult to start an EA group at UW (less curious/smart students, people being more narrowly career-oriented, UW being a commuter school, etc.; given how big the school is, the people at UW are a surprisingly bad fit for this kind of group). I wish I had understood this better back in 2014 (at the time, I had only heard of successful student groups, so I thought it would be easy to get a group going and meet Really Cool People).