I currently hold an EA Grant to improve and expand the EA Wiki content. If you have any feedback about my work, you are welcome to submit it, anonymously or otherwise, here:
How about something like beliefs vs. impressions?
What is the easiest, most efficient way to buy the global “agnostic” portfolio?
Coincidentally, I asked this question here a couple of months before your comment and got some useful answers.
Oh, I was assuming this was their first project, but on reflection the assumption was unwarranted. This other post, from August 2021, describes Redwood Research as a “new… organization”, but I wasn’t able to find their launch date. I’ve edited the article to address the issue.
A year ago I suggested the use of randomization to estimate the causal effects of the prize on the parameters of interest. In general, I’m surprised that the EA community (including EA orgs such as CFAR or 80k) doesn’t use randomization more. One of Jeff Kaufman’s policy proposals is to “randomize everything”, and I think this should be an EA policy as much as a government policy.
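To make the suggestion concrete, here is a minimal sketch of the kind of design I have in mind. Everything in it is hypothetical (the applicant pool, the outcome measure, the group sizes); it just shows how random assignment supports a simple difference-in-means estimate of the prize's effect:

```python
import random

random.seed(0)  # for reproducibility of the assignment

# Hypothetical applicant pool; outcomes (e.g. a survey-based
# productivity score one year later) would be measured afterwards.
applicants = [f"applicant_{i}" for i in range(100)]

# Randomly assign half to receive the prize and half not.
shuffled = random.sample(applicants, len(applicants))
treatment = set(shuffled[: len(shuffled) // 2])
control = set(shuffled[len(shuffled) // 2:])

def estimate_effect(outcomes: dict) -> float:
    """Difference-in-means estimate of the prize's causal effect.

    `outcomes` maps each applicant to their later measured outcome;
    randomization makes the two groups comparable in expectation.
    """
    t = [outcomes[a] for a in treatment]
    c = [outcomes[a] for a in control]
    return sum(t) / len(t) - sum(c) / len(c)
```

The point of the randomization step is that any systematic difference between the two groups' later outcomes can then be attributed to the prize rather than to selection effects.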
The website is down. Is this project still active?
Thanks for the answer.
By the way, I relied on your report and some other literature to write a brief EA Wiki entry on think tanks. If you decide to read it and have any critical feedback, feel free to leave a comment or get in touch.
Thank you for taking the time to write this valuable report.
Do you happen to be familiar with systematic attempts to estimate the impact of think tanks on policy, or to identify the main paths to impact? I am aware of various anecdotes and case studies—you discuss the Center for Budget and Policy Priorities, Rob mentions the Center for Global Development and others, and I would add the Institute of Economic Affairs—but it’s unclear to me what can be inferred from this type of impressionistic evidence.
One thing that might help would be “meta-forecasting”. We could later have some expert forecasters predict the accuracy of average statements made by different groups in different domains. I’d predict that they would have given pretty poor scores to most of these groups.
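One simple way to make "scores for the accuracy of a group's statements" concrete is a proper scoring rule such as the Brier score; a meta-forecaster would then predict a group's mean score before the underlying statements resolve. This is only an illustrative sketch (the scoring rule choice and the data are mine, not anything proposed above):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (forecast - outcome) ** 2

def mean_brier(predictions) -> float:
    """Average Brier score over a group's probabilistic statements.

    `predictions` is a list of (probability, outcome) pairs;
    lower scores indicate better calibration and accuracy.
    """
    return sum(brier_score(p, o) for p, o in predictions) / len(predictions)
```

A "meta-forecast" in this setup is just a prediction of what `mean_brier` will turn out to be for a given group and domain.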
I agree with your meta-meta-forecast.
Okay, I went ahead and renamed it.
I tried to incorporate parts of that section, and in the process reorganized and expanded the article. Feel free to edit anything that seems inadequate.
Thanks, David. In light of this comment, I now lean towards renaming the entry resilient food. Michael, what do you think?
Thanks for linking to that article, which I hadn’t seen. I updated the ‘certificates of impact’ entry with a brief summary of the proposal.
Thanks for creating these entries. My sense is that Scheffler doesn’t satisfy the criteria for inclusion. Thoughts?
This may be a good opportunity to mention that although I spent quite a bit of time thinking about these criteria, I’m still rather uncertain and am open to adopting a more inclusivist approach to entries for individual people. If you have any views on what the criteria should be, feel free to share them here.
Sounds good. I haven’t reviewed the relevant posts, so I don’t have a clear sense of whether “management” or “mentoring” is a better choice; the latter seems preferable other things equal, since “management” is quite a vague term, but this is only one consideration. In principle, I could see a case for having two separate entries, depending on how many relevant posts there are and how much they differ. I would suggest that you go ahead and do what makes most sense to you, since you seem to have already looked at this material and probably have better intuitions. Otherwise I can take a closer look myself in the coming days.
Thank you for this very thoughtful and useful comment.
It may help to distinguish two separate claims you make, and address them separately:
1. The “impersonal citation style” is bad for clarity and mutual understanding.
2. Academic style is worse than informal style.
Most of your comment focuses on (1), but towards the end you seem to suggest this is part of a much broader argument for (2).
I fully agree with you that this is how citations are often used in academia and that this is bad for the reasons you note.
I don’t think the problem is inherent to either citations or academia: sentences like “The most cited academic article on reference class forecasting is Kahneman & Lovallo 1993” or “The most cited academic article on reference class forecasting (Kahneman & Lovallo 1993)” conform to an academic style equally well. Citations are so often used in the annoying way you describe because doing so requires less effort, perhaps also because it protects authors from criticism, and because there is no strong academic norm requiring citations to be more informative.
The Wiki doesn’t encourage citing in that way: the only requirement is that citations be used instead of hyperlinks. So instead of writing e.g. “Nick Bostrom has discussed the vulnerable world hypothesis in numerous publications”, editors are asked to write “Nick Bostrom has discussed the vulnerable world hypothesis in numerous publications (Bostrom 2019; Bostrom & van der Merwe 2021)”. This is orthogonal to the issues you raise.
In the specific EA Wiki example you mention, the source of the problem was probably just carelessness on my part. I’ve made a note to improve that paragraph and also check for similar problems in other articles. I’ll also revise the style guide to encourage editors to be mindful of this issue and cite in ways that minimize ambiguity and communicate relevant information.
My own view is that academic and informal writing each have their pros and cons, and I don’t have a settled position on which of the two is better on balance. An informal style seems better for many of the reasons you and Eliezer note, while an academic style is better for other reasons, such as requiring certain standards of clarity, precision and concision. I do think academic norms could be revised in a way that mostly retained the positives and avoided the negatives, and I think that revision would constitute a major improvement over what we have today.
With that said, it doesn’t seem to me that the problems with academic writing extend to an encyclopedia like the Wiki. Perhaps I’m not understanding you well, but I don’t quite see how the issues Eliezer complains about apply to a work of reference, which is supposed to offer a neutral summary of existing research rather than produce original research. To make this more concrete: Do you find Wikipedia’s style constraining? If so, in what ways? The EA Wiki is meant to be written in that same style, so any problems you can identify with the former would help me diagnose potential issues with the latter. Alternatively, perhaps you can take a look at a decent EA Wiki article (e.g. the one on iterated embryo selection) and indicate some ways in which you wish it were written differently.
I was assuming that “descendant” already carries a certain connotation that excludes these cases, but I agree ideally the definition should rule them out explicitly. Unfortunately, since Holden has dropped the explicit definition in terms of human ability and moral status, it’s not entirely clear what sort of revision would be adequate. Maybe add something like “sufficiently similar to humans in the relevant respects”, though it would later have to be clarified that these entities can also be very different from humans in other respects.
Further to my previous comment, Holden kindly got back to me and provided a helpful answer. In short, his original draft of “Digital people would be an even bigger deal” used (a) and (b) as a definition of “digital person”, but he later revised it (for reasons he cannot currently remember) and instead offered the vaguer statement included as the first quote in the current version of the article as his main characterization of digital personhood.
In light of Holden’s clarification, I propose the following definition:
A digital person is a human-like entity running on digital computing hardware or a descendant of such an entity.
I also think that parts of the rest of the article should be revised. Given Holden’s clarification, it doesn’t seem correct to state that he is “arguing” for the claims in question. I’m inclined to just remove the final two paragraphs (i.e. the text starting with “In particular...”), perhaps expanding the article to include other things Holden has said about digital people that are less open to multiple interpretations.
Thanks for your contributions. To address your points in turn:
After consulting some references, I conclude that you are right that this is how the terms are commonly used. I’m still confused as to why the terms are used in this way, given that many common definitions of AI do not warrant this use. E.g. Wikipedia defines ‘AI’ as “intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans or animals.” I’ve made a note to look into this more and perhaps add a section on terminology to the ‘AI’ entry.
You say that in the quoted passage Holden is making an argument, but my interpretation of it is rather that he is clarifying the concept of a digital person, and in particular noting two of its central characteristics. Moreover, defining “digital people” as “people that are digital” seems pretty unhelpful, since “people” is a notoriously contested term and Holden explicitly says that digital people may be very different from present-day people (and one of the meanings of “people” is precisely “moral patient”). One consideration favoring this interpretation is the final sentence in the passage: “With enough understanding of how (a) and (b) work, it could be possible to design digital people without imitating human brains.” If (a) and (b) (i.e. moral personhood, and ability equal to or greater than the human level) were not central for digital personhood, why would an understanding of these two characteristics be singled out as relevant for designing digital people? I emailed Holden in case he wants to chime in.
I think that quotes should not be used to reduce the risk of mischaracterization; the safeguard against this is provided by the citations. One of the ways in which the Wiki contributes value is by sparing the reader the need to consult the original source, and instead providing a succinct statement of the claims and arguments made in those sources. Quoting from the original, unless the quote itself makes the point succinctly (or is appropriate for other reasons, such as historical interest), is to that extent a failure to realize this ideal. I do agree it would be convenient to have the quotes that support a particular claim handy, and originally my plan was to provide such quotes as footnotes, but currently the editor does not support footnotes, as far as I know. (Sidenotes would be an even better solution.)
Why do you choose an arithmetic mean for aggregating these estimates?
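For context on why the choice of mean matters, here is a toy illustration with made-up numbers: when estimates span several orders of magnitude, the arithmetic mean is dominated by the largest estimate, whereas the geometric mean tracks the central estimate on a log scale.

```python
import math

# Hypothetical estimates spanning four orders of magnitude.
estimates = [0.01, 0.1, 1.0, 10.0, 100.0]

# Arithmetic mean: sum divided by count.
arithmetic = sum(estimates) / len(estimates)

# Geometric mean: exponential of the mean of the logs.
geometric = math.exp(sum(math.log(x) for x in estimates) / len(estimates))

print(arithmetic)  # ≈ 22.22, dominated by the largest estimate
print(geometric)   # ≈ 1.0, the central estimate on a log scale
```

Which aggregate is appropriate depends on the question: the arithmetic mean is the right expectation over mutually exclusive scenarios, while the geometric mean is often preferred for multiplicative, log-normally distributed quantities.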
FYI, this post by Jaime has an extended discussion of this issue.
Thoughts on how this entry should be used to cover different EA causes related to China? For example, should farmed animal welfare in China be discussed here? One option is to subdivide the article into different sections, each corresponding to a different cause or area, such as “animal welfare”, “artificial intelligence”, “great power wars”, etc. Another approach is to reserve this entry for something like “improving Sino-Western coordination”, which I guess is broadly what most people think of when they see the “China” tag? Though on reflection the latter seems overly restrictive.