I may be misremembering, but I have the cached belief that GiveWell records and publishes something like all of its meetings, including board meetings. If so, you could listen to the most recent board meeting to see how things stood.
A high-quality podcast has been made (for free, by the excellent fanbase). It’s at www.hpmorpodcast.com.
I think this comment suggests there’s a wide inferential gap here. Let me see if I can help bridge it a little.
If the goal is to teach Math Olympiad winners important reasoning skills, then I question this goal. They just won the Math Olympiad. If any group of people already had well-developed logic and reasoning skills, it would be them. I don’t doubt that they already have a strong grasp of Bayes’ rule.
I feel fairly strongly that this goal is still important. I think that the most valuable resource that the EA/rationality/LTF community has is the ability to think clearly about important questions. Nick Bostrom advises politicians, tech billionaires, and the founders of the leading AI companies, and it’s not because he has the reasoning skills of a typical Math Olympiad winner. There are many levels of skill, and Nick Bostrom’s is much higher.
It seems to me that these higher level skills are not easily taught, even to the brightest minds. Notice how society’s massive increase in the number of scientists has failed to produce anything like linearly more deep insights. I have seen this for myself at Oxford University, where many of my fellow students could compute very effectively but could not then go on to use that math in a practical application, or even understand precisely what it was they’d done. The author, Eliezer Yudkowsky, is a renowned explainer of scientific reasoning, and HPMOR is one of his best works for this. See the OP for more models of what HPMOR does especially right here.
In general I think someone’s ability to think clearly, in spite of the incentives around them, is one of the main skills required for improving the world, much more so than whether they have a community affiliation with EA. I don’t think that any of the EA materials you mention helps people gain this skill. But I think that, for some people, HPMOR does.
I’m focusing here on the claim that the intent of this grant is unfounded. To help communicate my perspective: when I look over the grants, this feels to me like one of the ‘safest bets’. I am interested to know whether this perspective makes the grant’s intent feel more reasonable to anyone reading who initially felt pretty blindsided by it.
I am not sure exactly how widespread this knowledge is, so let me just say: it’s not Bostrom’s political skills that got him where he is. When the future-head-of-IARPA decided to work at FHI, Bostrom’s main publication was a book on anthropics. I think Bostrom did excellent work on important problems, and this is the primary thing that has drawn people to work with and listen to him.
Although I think being in these circles changes your incentives, which is another way to get someone to do useful work. Though again, I think the first part matters more for getting people to do the useful work you’ve not already figured out how to incentivise; I don’t think we’ve figured it all out yet.
Ah yes, agree. I meant coordination, not collusion. Promotion also seems fine.
MIRI helped us know how much to donate and how much of a multiplier it would be, and updated this recommendation as other donors made their moves. I added something like $80 at one point because a MIRI person told me it would have a really cool multiplier, but not if I donated a lot more or a lot less.
I imagined Alex was talking about the grant reports, which are normally built around “case for the grant” and “risks”. Example: https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology
I haven’t yet finished thinking about how the EA Forum Team should go about doing this, given their particular relationship to the site’s members, but here are a few thoughts.
I think that, for a platform to be able to incentivise long-term intellectual progress in a community, it’s important that there are trusted individuals on the platform who promote the best content to a place on the site that is both lasting and clearly marked as more important than other content, as I and others have done on the AI Alignment Forum and LessWrong. Otherwise the site devolves into a news site, with a culture that depends on who turns up that particular month.
I do think the previous incarnation of the EA Forum was much more of a news site, where the most activity occurred when people turned up to debate the latest controversy posted there, and that the majority of posts and discussion on the new Forum are much more focused on the principles and practice of EA than on conflict in the community.
(Note that, while it is not the only or biggest difference, LessWrong and Hacker News use the same sorting algorithm on their posts lists, yet LW shows its best content above the recent content, and is thus more clearly a site that rewards the best content over the most recent.)
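For concreteness, here’s a minimal sketch of the time-decay ranking commonly attributed to Hacker News; the function name, the exact constants, and whatever formula either site actually runs in production are all assumptions on my part:

```python
def ranking_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Commonly cited HN-style ranking: votes push a post up, age pulls it
    down polynomially. The constants here are illustrative, not the real ones."""
    return (points - 1) / (age_hours + 2) ** gravity

# Sorting the posts list by this score alone eventually buries the best
# content under the newest, which is why a separate curated slot above the
# list changes what the site rewards.
```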
It’s okay to later build slower and more deliberative processes for figuring out what gets promoted (although you must move much more quickly than the present-day academic journal system, and with more feedback between researchers and evaluators). I think the Forum’s monthly prize system is a good way to incentivise good content, but it crucially doesn’t ensure that the rewarded content will continue to be read by newcomers 5 years after it was written. (Added: And similarly, current new EAs on the Forum are not reading the best EA content of the past 10 years, just the most recent content.)
I agree it’s good for members of the community to be able to curate content themselves. Right now anyone can build a sequence on LessWrong, and then the LW team promotes some of them into a curated section that later gets highlighted on the front page (see the library page, which will become more prominent on the site after our new frontpage rework). I can imagine this being an automatic process based on voting, but I have an intuition that it’s good for humans to be in the loop. One reason is that when humans make decisions, you can ask why; when 50 people vote, it’s hard to interrogate that system as to the reasons behind its decision, or to improve its reasoning next time.
(Thanks for your comment Brian, and please don’t feel any obligation to respond. I just noticed that I didn’t intuitively agree with the thrust of your suggestion, and wanted to offer some models pointing in a different direction.)
I did spend a day or two collating some potential curated sequences for the forum:

- I still have a complete chronological list of all public posts between Eliezer and Holden (& friends) on the subject of Friendly AI, which I should publish at some point.
- I spent a while reading through the work of people like Nick Bostrom and Brian Tomasik (I didn’t realise how much amazing stuff Tomasik had written).
- I found a bunch of old EA blogs by people like Paul Christiano, Carl Shulman, and Sam Bankman-Fried that would be good to collate the best pieces from.
- I constructed mini versions of things like the Sequences, the Codex, and Owen Cotton-Barratt’s excellent intro to EA (Prospecting for Gold) as ideas for curated sequences on the Forum.
I think it would be good from a long-term community norms standpoint to know that great writing will be curated and read widely.
Alas, CEA did not seem to have the time to work through any sequences (it seemed like there were a lot of worries about what signals the sequences would send, and working through those worries was very slow going). If this ever gets going again, it would be good to have a discussion pointing to any good old posts that should be included.
+1, a friend of mine thought it was an official statement from CEA when he saw the headline, and was thoroughly surprised and confused.
(Your crossposting link goes to the edit page of your post, not the post itself.)
Woop! Congrats to all the prize winners. Great posts!
Conceptually related: SSC on Joint Over- and Underdiagnosis.
I think this is a good comment about how the brain works, but do remember that the human brain can both hunt in packs and do physics. Most systems you might build to hunt are not able to do physics, and vice versa. We’re not perfectly competent, but we’re still general.
+1 on being confused; I’ve heard good things about CC. Just now checking the Wikipedia page, their actual priorities list is surprisingly close to GiveWell’s priority lists (macronutrients, malaria, deworming, and then further down cash transfers), and I see Thomas Schelling was on the panel! In particular, he seems to have criticised the use of discount rates in evaluating the impact of climate change (which sounds close to an x-risk perspective).
I would be interested in a write-up from anyone who looked into it and made a conscious choice to not associate with / to not try to coordinate with them, about why they made that choice.
+1, Distill is excellent and high-quality, and plausibly has important relationships to alignment. (FYI, some of the founders recently joined OpenAI, if you’re figuring out which org to put it under, though Distill is probably its own thing.)
That all makes a lot of sense! Thanks.
I think it does; it’s just unlikely to change it by all that much.
Imagine there are two donor lotteries, each of which has had 40k donated to it: one whose participants you think are very thoughtful about what projects to donate to, and one whose participants you think are not. You’re considering which to add your 10k to. In either one the returns are good in expectation, purely because you get a 20% chance (10k out of the resulting 50k pot) to allocate 5x your donation (which is good if you think there are increasing marginal returns to money at this level); but in the other 80% of worlds you’d prefer your money to be allocated by the more thoughtful people.
This isn’t the main consideration, though, unless you think the other people will do something actively very harmful with the money: you’d have to think that the other participants would (in expectation) do more damage with a marginal 10k than the good you’d do by giving away 10k yourself.
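To make the arithmetic explicit, here’s a toy expected-value check; the value-per-dollar numbers are made-up assumptions, purely for illustration:

```python
pool, mine = 40_000, 10_000
p_win = mine / (pool + mine)   # 0.2: your chance of allocating the whole pot

my_value = 1.0      # assumed good done per dollar you allocate (illustrative)
their_value = 0.5   # assumed good done per dollar the other entrants allocate

# Enter the lottery: in 20% of worlds you allocate all 50k; in 80%, they do.
ev_lottery = (p_win * (pool + mine) * my_value
              + (1 - p_win) * (pool + mine) * their_value)

# Stay out: you give your 10k directly; they run their 40k lottery without you.
ev_direct = mine * my_value + pool * their_value

print(ev_lottery - ev_direct)  # 0.0, whatever values you plug in above
```

Under linear returns the two options come out exactly equal no matter what numbers you plug in, which is why the other entrants’ thoughtfulness isn’t the main consideration; the case for entering at all rests on the increasing marginal returns that this toy model deliberately leaves out.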
I think there are busy people who will have the connections to make a good grant but won’t have the time to write a full report. In fact, I think many competent people are very busy.
You’re right that I had subtly become nervous about joining the donor lottery because “then I’d have to do all the work that Adam did”. Thanks for reminding me I don’t have to if it doesn’t seem worth the opportunity cost, and that I can just donate to whatever seems like the best opportunity given my own models :)
I also think this sort of question might be useful to ask on a more individual basis—I expect each fund manager to have a different answer to this question that informs what projects they put forward to the group for funding, and which projects they’d encourage you to inform them about.