Larks’ post was one of the best of the year, so it’s nice of him to effectively make a hundreds-of-dollars donation to the EA Forum Prize!
Yep, that’s it.
Have you heard of Neumeier’s naming criteria? It’s designed for businesses, but I think it’s an OK heuristic. I’d agree that there are better available names, e.g.:
| Name | Distinctiveness | Brevity | Appropriateness | Easy spelling and pronunciation | Likability | Extendability | Protectability |
|---|---|---|---|---|---|---|---|
| CEEALAR | 1 | 1 | 4 | 1 | 2 | 1 | 4 |
| Athena Centre | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| EA Study Centre | 3 | 3 | 4 | 3 | 3 | 3 | 3 |
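For what it’s worth, here’s a minimal sketch (my own illustration, not part of Neumeier’s framework; equal weighting of the criteria is an assumption) of how the totals above compare:

```python
# Totalling the 1-4 scores above, with every criterion weighted equally.
criteria = ["Distinctiveness", "Brevity", "Appropriateness",
            "Easy spelling and pronunciation", "Likability",
            "Extendability", "Protectability"]

scores = {
    "CEEALAR":         [1, 1, 4, 1, 2, 1, 4],
    "Athena Centre":   [4, 4, 4, 4, 4, 4, 4],
    "EA Study Centre": [3, 3, 4, 3, 3, 3, 3],
}

# Print names from best to worst total score.
for name, vals in sorted(scores.items(), key=lambda kv: -sum(kv[1])):
    print(f"{name}: {sum(vals)} / {4 * len(criteria)}")
# Athena Centre: 28 / 28
# EA Study Centre: 22 / 28
# CEEALAR: 14 / 28
```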
Tom Inglesby on the nCoV response is one example from just the last few days. I’ve generally known Stefan Schubert, Eliezer Yudkowsky, Julia Galef, and others to make very insightful comments there. I’m sure there are many other examples.
Generally speaking, though, the philosophy would be to go to the platforms that top contributors are actually using and offer our services there, rather than trying to pull those contributors onto our own platforms, or at least to complement the latter approach with the former.
Possible EA intervention: just like the EA Forum Prizes, but for the best Tweets (from an EA point-of-view) in a given time window.
Reasons this might be better than the EA Forum Prize:
1) Popular tweets have greater reach than popular forum posts, so this could promote EA more effectively.
2) The prizes could go to EAs who are not regular forum users, which could also help to promote EA more effectively.
One would have to check the rules and regulations.
Hmm, but is it good or sustainable to repeatedly switch parties?
Interesting point of comparison: the Conservative Party has ~35% as many members, and has held government ~60% more often over the last 100 years, so the leverage per member is ~4.5x higher. Although for many people, their ideology means they could not credibly be involved in one party or the other.
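(Sketching the arithmetic behind that estimate: ~1.6x the time in government divided by ~0.35x the members gives 1.6 / 0.35 ≈ 4.6, i.e. roughly 4.5x the leverage per member.)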
The obvious approach would be to invest in the stock market by default (or maybe a leveraged ETF?), and only move money from that into other investments when they have higher EV.
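As a toy sketch of that decision rule (my own illustration; the option names and EV figures below are made up, not recommendations):

```python
# Default-to-equities rule: hold the stock market index unless some
# other use of the money has higher expected value.

def allocate(default_ev: float, alternatives: dict[str, float]) -> str:
    """Pick the highest-EV option, defaulting to the index."""
    best_name, best_ev = "stock market (default)", default_ev
    for name, ev in alternatives.items():
        if ev > best_ev:
            best_name, best_ev = name, ev
    return best_name

# Hypothetical numbers, for illustration only.
print(allocate(default_ev=1.07,
               alternatives={"grant to project X": 1.05,
                             "leveraged ETF": 1.10}))
# -> leveraged ETF
```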
I think Pablo is right about points (1) and (3). Community Favorites is quite net-negative for my experience of the forum (because it repeatedly shows the same old content), and probably likewise for users on average. “Community” seems to needlessly complicate the posting experience, whose simplicity should be valued highly.
Of these categories, I am most excited by the Individual Research, Event and Platform projects. I am generally somewhat sceptical of paying people to ‘level up’ their skills.
If I’m understanding the categories correctly, I agree here.
While generally good, one side effect of this (perhaps combined with the fact that many low-hanging fruits of the insight tree have been plucked) is that a considerable amount of low-quality work has been produced. Furthermore, the conventional peer review system seems to be extremely bad at dealing with this issue… Perhaps you, enlightened reader, can judge that “How to solve AI Ethics: Just use RNNs” is not great. But is it really efficient to require everyone to independently work this out?
I agree. I think part of the equation is that peer review does not just filter papers “in” or “out”: it accepts them to a journal of a certain quality. Many bad papers will get into weak journals, but will usually get read much less. Researchers who read these papers cite them, taking their quality into account, thereby boosting the readership of good papers. Finally, some core of elite researchers bats down arguments that, being weirdly attractive yet misguided, manage to make it through the earlier filters. I think this process works okay in general, and can also work okay in AI safety.
I do have some ideas for improving our process though, basically to establish a steeper incentive gradient for research quality (in the dimensions of quality that we care about): (i) more private and public criticism of misguided work, (ii) stronger filters on papers being published in safety workshops, probably by agreeing to have fewer workshops, with fewer papers, and by largely ignoring any extra workshops from “rogue” creators, and (iii) funding undersupervised talent-pipeline projects a bit more carefully.
One thing I would like to see more of in the future is grants for PhD students who want to work in the area. Unfortunately, at present I am not aware of many ways for individual donors to practically support this.
Filtering ~100 applicants down to a few accepted scholarship recipients is not that different to what CHAI and FHI already do in selecting interns. The expected outputs seem at least comparably-high. So I think choosing scholarship recipients would be similarly good value in terms of evaluators’ time, and also a pretty good use of funds.
It’s an impressive effort, as in previous years! One meta-thought: if you stop providing this service at some point, it might be worth reaching out to the authors of the Alignment Newsletter to ask whether they or anyone they know would step in to fill the breach.
Yep, I’d actually just asked to clarify this. I’m listing schools that are good for doing safety work in particular. The list may also be biased toward places I know about. If people are trying to become professors, or are not interested in doing safety work during their PhD, then I agree they should look at a usual CS university ranking, which would look like what you describe.
That said, at Oxford there are ~10 CS PhD students interested in safety, a few researchers, and FHI scholarships, which is why it makes it into the Amazing tier. At Imperial, there are two students and one professor. But I’m happy to see this list improved.
On a short skim, this seems more like a research agenda? There are a few research agendas by now…
The only lit review I’ve seen is [1]. I probably should’ve said I haven’t seen any great lit reviews, because I felt this one was OK: it covered a lot of ground. However, it is a couple of years old, and it didn’t organize the work in a way that was satisfying for me.
[1] Everitt, Tom, Gary Lea, and Marcus Hutter. “AGI safety literature review.” arXiv preprint arXiv:1805.01109 (2018).
I think the option of having a (possibly renamed) EA Grants as one option in EA Funds is interesting. It could preserve almost all of the benefits (one extra independent grantmaker picking different kinds of targets) while cutting maybe half the overhead, and clarifying the difference between EA Grants and EA Funds.
Given that community groups are much more homogeneous funding targets than EA projects in general, it makes perfect sense that we allocate one CEA team to evaluating them, while allocating a few teams to evaluating other small-scale EA projects.
Many infamous ideologies have impaired decision-making in important positions, leading to terrible consequences like wars and harmful revolutions: communism, fascism, ethno-nationalism, racism, etc.
I’ve become pretty pessimistic about rationality-improvement as an intervention, especially to the extent that it involves domain-general techniques with a large subjective element and placebo effect/participant cost. Basically, most interventions of this sort haven’t worked, though they induce tonnes of biases that allow them to display positive testimonials: placebo effects, liking instructors, having a break from work, getting to think about interesting stuff, branding of techniques, choice-supportive bias, biased sampling of testimonials, etc.
The nearest things that I’d be interested in would be: 1) domain-specific training that delivers skills and information from trained experts in a particular area, such as research, 2) freely available online reviews of the literature on rationality interventions, similar to what gwern does for nootropics, 3) new controlled experiments on existing rationality programs such as Leverage and CFAR, and 4) training in risk assessment for high-risk groups like policymakers.
I think it’s a reasonable concern, especially for AI and bio, and I guess that is part of what a grantmaker might investigate. Any such negative effect could be offset by: (1) associating scientific quality with EA and recruiting competent scientists into EA, (2) improving the quality of risk-reducing research, and (3) improving commentary/reflection on science (which could help with identifying risky research). My instinct is that (1-3) outweigh the risk-increasing effects, at least for many projects in this space, and that most relevant experts would think so, but it would be worth asking around.
I don’t have any inside info, and perhaps “pressure” is too strong, but Holden reported receiving advice in that direction in 2016:
“Paul Christiano and Carl Shulman–a couple of individuals I place great trust in (on this topic)–have argued to me that Open Phil’s grant to MIRI should have been larger. (Note that these individuals have some connections to MIRI and are not wholly impartial.) Some other people I significantly trust on this topic are very non-enthusiastic about MIRI’s work, but having a couple of people making the argument in favor carries substantial weight with me from a “let many flowers bloom”/”cover your bases” perspective. (However, I expect that the non-enthusiastic people will be less publicly vocal, which I think is worth keeping in mind in this context.)”
[My views only]
Thanks for putting up with my follow-up questions.
Out of the areas you mention, I’d be very interested in:
Improving science: things like academia.edu and sci-hub have been interesting, as are replacing LaTeX and working on publishing incentives. In general, there seems to be plenty of room for improvement!
I’d be interested in:
Improving political institutions and political wisdom: EA might need to escalate its involvement in many areas adjacent to this, such as policy intersected with great power relations or pivotal technologies. It would be very interesting to better understand what can be done with funding alone.
Reducing political bias and partisanship: this seems hard, but somewhat important. Most lobbyists are not trying to do this. Russia is actively trying to do the opposite. It would be interesting if more could be done in this space. Fact-checking websites and investigative journalism (e.g. Bellingcat) are interesting here too. Another interesting area is counteracting political corruption.
Sundry x-risks/GCRs
I’d be a little interested in:
Increasing economic growth
I think the other might be disadvantageous, based on my understanding that it’s better for EA to train people up in longtermist-relevant areas, and to be perceived as being focused on the same.
Out of those you haven’t mentioned, but that seem similar, I’d also be interested in:
Promotion of effective altruism
Scholarships for people working on high-impact research
More on AI safety: OpenPhil seems to be funding high-prestige, mostly-aligned figures (e.g. Stuart Russell, OpenAI) and high-prestige unaligned figures (e.g. their fellows), but has mostly not funded low-to-mid prestige, highly-aligned figures (with the notable exceptions of MIRI, Michael C and Dima K). Other small but comparably informed funders favor low-to-mid prestige, highly-aligned targets to a greater extent, e.g. Paul’s funding for AI safety research, and Paul and Carl argued to OpenPhil that they should fund MIRI more. I think there are residual opportunities to fund other low-to-mid prestige, highly-aligned figures. [edited for clarity]