Researcher of causal models and human-aligned AI at FHI | https://twitter.com/ryancareyai
For Covid-19 spread, what seems to be the relative importance of: 1) climate, 2) behaviour, and 3) seroprevalence?
The comment was probably strong-downvoted because it is confidently wrong in two dimensions:
1. The EA Forum only exists to promote impactful ideas. So to say that the question “where are impactful ideas?” is a distraction from the question “when should we post on the Forum?” is to have things entirely backwards. To promote good ideas, we do need to know where they are.
2. We are trying to address what a community-builder should do, not a content-creator. It is a non-sequitur to try to replace the important meta-questions of what infrastructure and incentives there should be, with the question of when an individual should post to the forum.
Almost all content useful to EAs is not written on the forum, and almost all authors who could write such content will not write it on the forum. So it would be a lot more valuable to reward good content whether or not it is on the forum. Evaluating all content is harder, but one could consider just the content that gets nominated. If this is outside one’s job description, then can one change the job description?
One relevant datapoint is Stripe Press. The tech company Stripe promotes some books on startups and progress studies, with the stated goal of sharing ideas that would inspire startups (that might use their product). They outsource the printing.
Does the rate of consumption of books increase when Stripe reprints them?
Of its 600 ratings, The Dream Machine has received 300 since Nov 2018 (published in 2001, re-published in Sep 2018), based on viewing the 10th page of ratings sorted by new. So it’s being rated, and presumably read, at ~10x the previous rate.
Of its 900 ratings, Stubborn Attachments has received 300 since Jun 2019 (published in Jul 2016, re-released in Oct 2018). So it seems to have roughly doubled the previous rate.
But these books are unpopular relative to Superintelligence, which has 12k ratings, and TLYCS, which has 4k. We can see that reprinting can help revive unpopular books. But it’s far from clear that it would help already-thriving ones, if it cut the flow of those books into physical bookstores. It could just as easily hinder. So it’ll be interesting to see more data.
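A minimal sanity check on those two rate estimates (a rough Python sketch; the dates and counts are the approximate figures above, and the ~early-2020 observation date is my own assumption, not stated in the comments):

```python
# Rough before/after rating-rate comparison for the two Stripe Press reprints.
# All figures are approximate, eyeballed from Goodreads as described above.

def ratings_per_year(ratings, years):
    return ratings / years

NOW = 2020.1  # assumed date the rating counts were observed

books = {
    # title: ((ratings before, period start, period end), (ratings after, start, end))
    "The Dream Machine":    ((300, 2001.0, 2018.8), (300, 2018.8, NOW)),
    "Stubborn Attachments": ((600, 2016.5, 2019.4), (300, 2019.4, NOW)),
}

for title, (before, after) in books.items():
    rate_before = ratings_per_year(before[0], before[2] - before[1])
    rate_after = ratings_per_year(after[0], after[2] - after[1])
    print(f"{title}: {rate_before:.0f}/yr before, {rate_after:.0f}/yr after "
          f"(~{rate_after / rate_before:.1f}x)")
# Output is roughly consistent with the ~10x and ~2x figures above.
```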
Nice. We could check how many actually read the book by noting whether the book accumulated Goodreads ratings more quickly after the 10-year anniversary—especially once another 1-2 years have passed.
The key question here is whether (and if so, to what degree) free download is a more effective means of distribution than regular book sales. So we should ask Peter Singer how the consumption of TLYCS changed when he put the book online. Or, if any other books have been distributed simultaneously through conventional and unconventional channels, how many people did each distribution method reach?
Hey Catherio, sure, I’ve been puzzled by this for long enough that I’ll probably reach out for a call.
Community effects could still be mediated by the relevance of participants’ research interests. Anyway, I’m also pretty uncertain and interested to see the results as they come in over the coming years.
Here’s an updated ipynb with OpenPhil’s annual spending, showing the breakdown with respect to EA-relevant areas.
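For anyone wanting to reproduce that breakdown, here’s a minimal sketch of the kind of computation involved (the CSV filename, column names, and list of focus areas below are illustrative assumptions, not the notebook’s actual code):

```python
import pandas as pd

# Hypothetical export of OpenPhil's public grants database; the filename and
# column names ("date", "focus_area", "amount") are assumptions for illustration.
grants = pd.read_csv("openphil_grants.csv", parse_dates=["date"])

# Focus areas treated as EA-relevant for this breakdown (an illustrative list).
ea_areas = {
    "Potential Risks from Advanced AI",
    "Biosecurity and Pandemic Preparedness",
    "Effective Altruism Community Growth",
}

ea_grants = grants[grants["focus_area"].isin(ea_areas)].copy()
ea_grants["year"] = ea_grants["date"].dt.year

# Annual spending, broken down by focus area.
breakdown = ea_grants.pivot_table(index="year", columns="focus_area",
                                  values="amount", aggfunc="sum", fill_value=0)
print(breakdown)
```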
My main impressions:
Having Ben Delo’s participation is great.
OpenPhil and its staff working hard on allocating these funds is absolutely great (it’s obvious, yet worth saying over and over again).
It would be nice to see more new kinds of grants (to longtermist causes) from EA, via OpenPhil and otherwise. The kinds of grants have been relatively stagnant over the last few years; e.g. the typical x-risk grant is a few million dollars to an academic research group. Can we also fund more interventions, or projects in other sectors?
The OpenPhil AI Scholarships place substantial weight on the excellence of applicants’ supervision, institutional affiliation, and publication record, but seem to place very little weight on the relevance of the work done: I’ve only come across a few papers by any of the 2018-2020 applicants through my work on various aspects of AI x-risk. I’ve heard many people better-informed than me argue that this is likely to be relatively unproductive, in the sense that excellent researchers working in unrelated areas will tend to accept funding without substantially shifting their research direction. I’m as excited about academic excellence as almost anyone in AI safety, yet in the case of the OpenPhil Scholarships this assessment sounds about right to me, and I haven’t really heard anyone argue the opposing view; it would be interesting to understand the thinking here better.
Larks’ post was one of the best of the year, so it’s nice of him to effectively make a hundreds-of-dollars donation to the EA Forum Prize!
Yep, that’s it.
Have you heard of Neumeier’s naming criteria? They’re designed for businesses, but I think they’re an OK heuristic. I’d agree that there are better available names, e.g.:
CEEALAR. Distinctiveness: 1, Brevity: 1, Appropriateness: 4, Easy spelling and pronunciation: 1, Likability: 2, Extendability: 1, Protectability: 4.
Athena Centre. 4,4,4,4,4,4,4
EA Study Centre. 3,3,4,3,3,3,3.
Tom Inglesby on nCoV response is one recent example from just the last few days. I’ve generally known Stefan Schubert, Eliezer Yudkowsky, Julia Galef, and others to make very insightful comments there. I’m sure there are very many other examples.
Generally speaking, though, the philosophy would be to go to the platforms that top contributors are actually using, and offer our services there, rather than trying to push those contributors onto ours; or at least, to complement pushing people to our platforms with reaching out on theirs.
Possible EA intervention: just like the EA Forum Prizes, but for the best Tweets (from an EA point-of-view) in a given time window.
Reasons this might be better than the EA Forum Prize:
1) Popular tweets have greater reach than popular forum posts, so this could promote EA more effectively
2) The prizes could go to EAs who are not regular forum users, which would also help promote EA more broadly.
One would have to check the rules and regulations.
Hmm, but is it good or sustainable to repeatedly switch parties?
Interesting point of comparison: the Conservative Party has ~35% as many members, and has held government ~60% more often over the last 100 years, so the leverage per member is ~4.5x higher. Although for many people, their ideology means they couldn’t credibly be involved in one or the other party.
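For transparency, the back-of-envelope arithmetic behind that ~4.5x (a sketch using only the rough ratios above):

```python
# Back-of-envelope leverage-per-member comparison, using the rough figures above.
members_ratio = 0.35   # Conservative membership relative to the comparison party
govt_ratio = 1.60      # relative time in government over the last 100 years

print(f"Leverage per member: ~{govt_ratio / members_ratio:.1f}x")  # ~4.6x
```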
The obvious approach would be to invest in the stock market by default (or maybe a leveraged ETF?), and to move money from that into other investments only when they have higher EV.
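As a toy formalisation of that rule (a minimal sketch; the opportunity names and EV numbers are hypothetical):

```python
# Toy version of the rule: funds sit in a market default and move out only when
# an alternative's estimated EV beats it. All names and numbers are hypothetical.

DEFAULT_EV = 0.07  # assumed long-run expected annual return of the default holding

def worth_moving_into(opportunities, default_ev=DEFAULT_EV):
    """Return the alternatives whose estimated EV beats the market default."""
    return [name for name, ev in opportunities.items() if ev > default_ev]

print(worth_moving_into({
    "bonds": 0.03,
    "cash": 0.01,
    "promising startup round": 0.15,
}))
# -> ['promising startup round']
```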
I think Pablo is right about points (1) and (3). Community Favorites is quite net-negative for my experience of the forum (because it repeatedly shows the same old content), and probably likewise for users on average. “Community” seems to needlessly complicate the posting experience, whose simplicity should be valued highly.
Of these categories, I am most excited by the Individual Research, Event and Platform projects. I am generally somewhat sceptical of paying people to ‘level up’ their skills.
If I’m understanding the categories correctly, I agree here.
While generally good, one side effect of this (perhaps combined with the fact that many low-hanging fruits of the insight tree have been plucked) is that a considerable amount of low-quality work has been produced. Furthermore, the conventional peer review system seems to be extremely bad at dealing with this issue… Perhaps you, enlightened reader, can judge that “How to solve AI Ethics: Just use RNNs” is not great. But is it really efficient to require everyone to independently work this out?
I agree. I think part of the equation is that peer review does not just filter papers “in” or “out”: it accepts them into a journal of a certain quality. Many bad papers will get into weak journals, but will usually get read much less. Researchers who read these papers cite them, taking their quality into account, thereby boosting the readership of good papers. Finally, a core of elite researchers bats down arguments that, being weirdly attractive yet misguided, manage to make it through the earlier filters. I think this process works okay in general, and can also work okay in AI safety.
I do have some ideas for improving our process though, basically to establish a steeper incentive gradient for research quality (in the dimensions of quality that we care about): (i) more private and public criticism of misguided work, (ii) stronger filters on papers being published in safety workshops, probably by agreeing to have fewer workshops, with fewer papers, and by largely ignoring any extra workshops from “rogue” creators, and (iii) funding undersupervised talent-pipeline projects a bit more carefully.
One thing I would like to see more of in the future is grants for PhD students who want to work in the area. Unfortunately, at present I am not aware of many ways for individual donors to practically support this.
Filtering ~100 applicants down to a few accepted scholarship recipients is not that different from what CHAI and FHI already do in selecting interns. The expected outputs seem at least comparably high. So I think choosing scholarship recipients would be a similarly good use of evaluators’ time, and also a pretty good use of funds.
It’s an impressive effort, as in previous years! One meta-thought: if you stop providing this service at some point, it might be worth reaching out to the authors of the Alignment Newsletter to ask whether they or anyone they know would step in to fill the breach.