CEO of Rethink Priorities
Marcus_A_Davis
We in fact do (1) then (2). However, to continue your example, donations to animal work still end up going to animals. If it were the case, say, that we hit the animal total needed for 2020 before the overall total, additional animal donations would go to animal work for 2021.*
It is true in this scenario that in 2020 we’d end up spending less unrestricted funding on animals, but the total spent on animals that year wouldn’t change and the animal donations for 2020 would not then be spent on non-animal work.
*We would very much state publicly when we have no more room for further donations in general, and by cause area.
Internally, as part of Rethink Charity, we have fairly standard formal anti-harassment, discrimination, and reasonable accommodation policies. That is, we comply with all relevant anti-discrimination laws, including Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA). We explicitly prohibit offensive behavior (e.g., derogatory comments toward colleagues of a specific gender or ethnicity).
We also provide a way for any of our staff to offer anonymous feedback and information to senior management (which can help someone report a claim of harassment or discrimination).
Finally, I’d note that during our hiring round last year we pretty actively sought out and promoted our job to a diverse pool of candidates, and we tracked the performance of our hiring on these metrics. We plan to continue this going forward.
Thanks for the question. We have forthcoming work on ballot initiatives which will hopefully be published in January and other work that we plan to keep unpublished (though accessible to allies) for the foreseeable future.
In addition, we have some plans to investigate potentially high value policies for animal welfare.
On CE’s work, we communicate with them fairly regularly about their work and their plans, in addition to reading and considering the outputs of their work.
I honestly don’t know. I’d probably be doing research at another EA charity, or potentially leading (or trying to lead) a slightly different EA charity that doesn’t currently exist. Generally, I have previously seriously considered working at other EA organizations but it’s been some time since I’ve seriously considered this topic.
Thanks for the question and thanks for the compliment about our work! As to the impact of the work, from our Impact survey:
Invertebrate sentience was the second most common piece of work cited as changing beliefs (13 respondents). It also prompted the second-largest number of changed actions of any of our work (alongside the EA Survey), including 1 donation influenced, 1 research inspiration, and 4 unspecified actions.
Informally, I could add that many people (probably >10) in the animal welfare space have personally told me they think our work on invertebrates changed their opinion about invertebrate sentience (though there is, of course, a chance these people were overemphasizing the work to me). A couple of academics have also privately told us they thought our work was worthwhile and useful to them. These people largely aren’t donors, though, and I doubt many of them have started to give to invertebrate charities.
That said, I think the impact of this project in particular is difficult to judge. The diffuse impact of possibly introducing or normalizing discussion of this topic is difficult to capture in surveys, particularly when the answers are largely anonymous. And even if people have been convinced to take these questions seriously, the payoff may not occur until there is an actionable intervention for them to support.
We have raised half his salary for 2020 and 2021 on a grant explicitly for this purpose. If you’d like to talk more about this, I’d be happy for you to shoot me an email: marcus [at] rtcharity.org
Thanks for the question! We do research informed by input from funders, organizations, and researchers that we think will help funders make better grants and help direct the work organizations do toward higher-impact work.
So our plans for distribution vary by the audience in question. For funders and particular researchers, we make direct efforts to share our work with them. Additionally, we try to have regular discussions about our work and priorities with the relevant existing EA research communities (researchers themselves and org leaders). However, as we said recently in our impact and strategy update, we think we can do a better job of this type of communication going forward.
For the wider EA community, we haven’t undertaken significant efforts to drive more discussion on posts but this is something potentially worth considering. I’d say one driver of whether we’d actually decide to do this would be if we came to believe more work here would potentially increase the chances we hit the goals I mentioned above.
Thanks for the question! We do not view our work as necessarily focused on the West. To the extent our work so far has focused on such countries, it’s because that’s where we think our comparative advantage currently lies, but as our team learns, and possibly grows, this won’t necessarily hold over time.
Thanks for the question! To echo Ozzie, I don’t think it’s fair to directly compare the quality of our work to the quality of GPI’s work given we work in overlapping but quite distinct domains with different aims and target audiences.
Additionally, we haven’t prioritized publishing in academic journals, though we have considered it for many projects. We don’t believe publishing in academic journals is necessarily the best path towards impact in the areas we’ve published in given our goals and don’t view it as our comparative advantage.
All this said, we don’t deliberately err more towards quantity over quality, but we do consider the time tradeoff of further research on a given topic during the planning and execution phases of a project (though I don’t think this is in any way unique to us within EA). We do try to publish more frequently because of our desire for (relatively) shorter feedback loops. I’d also say we think our work is high quality but I’ll let the work speak for itself.
Finally, I take no position on whether EA organizations in general ought to err more or less towards academic publications as I think it depends on a huge number of factors specific to the aims and staffs of each organization.
My ranges represent what I think a reasonable position is on the probability of each creature’s sentience, given all current input and expected future input. Still, as I said:
...the range is still more of a guideline for my subjective impression than a declaration of what all agents would estimate given their engagement with the literature
I could have given a 90% subjective confidence interval, but I wasn’t confident that making that an explicit goal in forming or communicating my estimates would be helpful.
I meant to highlight a case where I downgraded my belief in a scenario in which there are multiple ways to update on a piece of evidence.
To take an extreme example for purposes of clarification, suppose you begin with a theory of sentience (or a set of theories) which suggests behavior X is possibly indicative of sentience. Then, you discover behavior X is possessed by entities you believe are not sentient, say, rocks. There are multiple options here as to how to reconcile these beliefs. You could update towards thinking rocks are sentient, or you could downgrade your belief that behavior X is possibly indicative of sentience.* In the instance I outlined, I took the latter version of the fork here.
As to the details of what I learned, the vast bulk of it is in the table itself, in the notes for various learning attributes across taxa. The specific examples I mentioned, along with similar learning behaviors being possible in certain plants and protists, are what made me update negatively on the importance of these learning behaviors as indicative of sentience. For example, it seems classical conditioning, sensitization, and habituation are possible in protists and/or plants.
*Of course, these are not strictly the only options in this type of scenario. It could be, for example, that behavior X is a necessary precondition of behavior Y which you strongly (perhaps independently but perhaps not) think is indicative of sentience. So, you might think, the absence of behavior X would really be evidence against sentience, while its presence alone in a creature might not be relevant to determining sentience.
I think the proposed karma system, particularly when combined with the highly rated posts being listed higher, is a quite bad idea. In general, if you are trying to ensure quality of posts and comments while spreading the forum out more broadly there are hard tradeoffs with different strengths and weaknesses. Indeed, I might prefer some type of karma weighting system to overly strict moderation but even then the weights proposed here don’t seem justifiable.
What problem is being solved by giving up to 16 times maximum weight that would not be solved with giving users with high karma “merely” a maximum of 2 times the amount of possible weight? 4 times?
However, we obviously don’t want this to become a tyranny of a few users. There are several users, holding very different viewpoints, who currently have high karma on the Forum, and we hope that this will help maintain a varied discussion, while still ensuring that the Forum has strong discussion standards.
While it may be true now that there are multiple users with high karma holding very different viewpoints, any imbalance among competing viewpoints at the start of a weighted system could feed back on itself. That is to say, if viewpoint X has 50% of the top posters (by weight in the new system), Y has 30%, and Z 20%, viewpoint Z could easily see its share shrink relative to the others, because the differential voting will compound over time.
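The compounding worry can be made concrete with a toy simulation. This is purely my illustrative model, not anything from the Forum proposal: it assumes each camp’s votes mostly support its own posts, and that a camp’s average vote weight tracks its current share of high-karma users.

```python
def step(shares, max_weight):
    """One voting round. Each viewpoint's posts accrue karma in proportion
    to the voting weight behind them; as a toy assumption, a camp's average
    vote weight grows with its current share of top posters (standing in
    for 'more high-karma users in that camp'). Returns the new shares.
    """
    karma = [s * (1 + (max_weight - 1) * s) for s in shares]
    total = sum(karma)
    return [k / total for k in karma]

# Viewpoints X, Y, Z start with 50% / 30% / 20% of top-poster weight.
shares = [0.5, 0.3, 0.2]
for _ in range(10):
    shares = step(shares, max_weight=16)
# Under a 16x maximum weight, Z's share collapses within a few rounds;
# rerunning with max_weight=2 shrinks Z's share far more slowly.
```

Under these (admittedly crude) assumptions, the larger the maximum weight, the faster an initial imbalance snowballs, which is why the size of the multiplier matters and not just its existence.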
Announcing PriorityWiki: A Cause Prioritization Wiki
Lessons for estimating cost-effectiveness (of vaccines) more effectively
How beneficial have vaccines been?
Sorry for the extremely slow reply, but yes. That topic is on our radar.
Announcing Rethink Priorities
Charity Science: Health—A New Direct Poverty Charity Founded on EA Principles
It might be helpful if you elaborated more on what you mean by ‘aim for neutrality’. What actions would that entail, if you did that, in the real world, yourself?
I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or another. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have biases, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for.
An even broader selection tool I think worth considering alongside this is simply “people who know about AI risk” but that’s basically the same as Rob’s original point of “have some association with the general rationality or AI community.”
Edit: Should say “Naturally, we all have priors...”
I’m unclear why you think proportion couldn’t matter in this scenario.
I’ve written a pseudo-program in Python below in which proportion does matter, removing neurons that don’t fire alters the experience, and the raw number of neurons involved is incidental to the outputs (10 out of 100 gets the same result as 100 out of 1000) [assuming there is a set of neurons to be checked at all]. I don’t believe consciousness works this way in humans or other animals, but I don’t think anything about this is obviously incorrect given the constraints of your thought experiment.
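A minimal sketch of the kind of program in question, where the names, threshold, and output labels are arbitrary illustrations:

```python
def experience(neurons):
    """Map a set of neuron states (True = firing) to an 'experience' label.

    The output depends only on the PROPORTION of neurons firing, so the raw
    count is incidental: 10 firing out of 100 yields the same result as
    100 firing out of 1000. Removing the non-firing neurons changes the
    proportion, and therefore the output.
    """
    if not neurons:  # no set of neurons to be checked at all
        return None
    proportion = sum(neurons) / len(neurons)
    return "experience A" if proportion >= 0.5 else "experience B"

# 10 of 100 firing vs. 100 of 1000 firing: same proportion, same output.
small = [True] * 10 + [False] * 90
large = [True] * 100 + [False] * 900
assert experience(small) == experience(large)

# Removing the inactive neurons changes the proportion, hence the output.
pruned = [n for n in small if n]
assert experience(pruned) != experience(small)
```

The key design choice is that the output is a function of `proportion` alone, never of `len(neurons)` directly, which is what makes the raw count incidental.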
One place where this might be incorrect is that checking whether a neuron is firing might be seen as violating the constraint that the inactive neurons actually be inactive. But the check could be conceived of as a third group of neurons looking for input from this set. Even if this particular program is slightly astray, it seems plausible that an altered version of it would meet the criteria for proportion to matter.