Have you considered blinded case work / decision making?
Like one person collects the key information, anonymises it, and then someone else decides the appropriate response without knowing the names/orgs of the people involved.
Could be good for avoiding some CoIs. Has worked for me in similar situations in the past.
Thank you for the correction
Thank you Saulius. Very helpful to hear. This sounds like a really positive story of good management of a difficult situation. Well done to Marcus.
If I read between the lines a bit, I get the impression that maybe more junior managers at Rethink (be that less competent or just newer to the org), with less confidence that their actions would not rock the Rethink<->funder relationship, were more likely to put unwelcome pressure on researchers about what to publish. Just a hypothesis, so it might be wrong. But it is also the kind of thing that good internal policies, good onboarding, good example-setting by senior staff, or just discussions of this topic can all help with.
I found this reply made me less confident in Rethink’s ability to address publication bias. Some things that triggered my ‘hmmm not so sure about this’ sense were:
The reply did not directly address the claims in Saulius's comment. E.g. “I’m sorry you feel that way”, not “I’m sorry”. There was no acknowledgement that if, as Saulius claimed, a senior staff member told him it was wrong to have expressed his views to OpenPhil (when asked by OpenPhil for his views), this might have been a mistake by that staff member. I guess I was expecting more of a ‘huh, that sounds bad, let us look into it’ style response than a ‘nothing to see here, all is great in Rethink land’ style response.
The story about Saulius’ WAW post. Judging from Marcus’ comment, it sounds like Rethink stopped Saulius from posting a WAW post (within a work context), and it also looks like there was a potential conflict of interest here for senior staff, as posting could affect funding. Marcus says that RP senior staff were deciding based only on the quality of the post. But I notice the post itself (published a few months after Saulius left Rethink) seems to have significantly more upvotes than almost any of Rethink’s posts (and lots of positive feedback). I recognise upvotes are not a great measure of research quality, but it does make me worry about the possibility that this post was actually of sufficient quality, and that the conflict of interest biased senior Rethink staff to stop the post being published as a Rethink post.
The stated donor engagement principles seem problematic. E.g. “there are also times when we think it is important for RP to speak with a unified voice to our most important donors” is exactly the kind of reason I would use if I was censoring staff’s interactions with donors. It is not that “unified organisational voice” policies are wrong, just that without additional safeguards they risk being abused and facilitating the kinds of conflict-of-interest-driven actions under discussion here. Also, as Saulius mentions, such policies can be one-sided, where staff are welcome to say anything that aligns with a certain worldview but need to get sign-off to disagree with that worldview, which is another source of bias.
I really really do like Rethink. And I do not think that this is a huge problem, or enough of a problem to stop donors giving to Rethink in most cases. But I would still be interested in seeing additional evidence of Rethink addressing this risk of bias.
Hi Peter, Rethink Priorities is towards the top of the places I’m considering giving this year. This post was super helpful. And these projects look incredible, and highly valuable.
That said, I have a bunch of questions and uncertainties I would love to get your answers to before donating to Rethink.
1. What is your cost/benefit? Specifically, I would love to know any or all of:
Rethink’s average cost per researcher-year (i.e. the total org cost divided by total researcher-years, not salary).
Rethink’s cost per published research report (again, total org cost, not the amount spent on a specific project, divided by the number of published reports, where a research-heavy EA Forum post of typical Rethink quality would count as a published report).
Rethink’s cost per published research report that is not majority-funded by “institutional support”.
2. Can you assure me that Rethink’s researchers are independent?
Rethink seems to be very heavily reliant for its existence on OpenPhil (and maybe some other very large institutional supporters?). I get the impression from a few people that Rethink’s researchers may sometimes be restricted in their freedom to publish, or their freedom to say what they want to funders, where doing so would go against the wishes of Rethink’s institutional funders (OpenPhil) or would in some way pose a risk to Rethink’s funding stream.
This creates a risk of bias, e.g. a publication bias that skews Rethink towards only publishing information that OpenPhil is happy with. This is a key reason I feel happier donating to support research from more independent sources (such as the EA Infrastructure Fund or HLI or Animal Ask) rather than giving to Rethink.
I would be reassured if I saw evidence of Rethink addressing this risk of bias. This could look like any or all of:
A public policy stating that researchers have the freedom to publish even where donors disagree, setting out steps to minimise the risk of bias (e.g. ignoring the views of donors when deciding if something is an info-hazard) and the accountability mechanisms to ensure that the policy is followed (e.g. a whistleblowing process).
Some case studies of how Rethink has made this trade-off between publication bias and upsetting funders in the past, especially evidence of Rethink going against OpenPhil’s direct wishes.
An organisational risk register entry from Rethink setting out plans to mitigate these risks, or why senior staff are not worried about these risks.
A policy from OpenPhil setting out an aim not to bias Rethink in this way.
3. How will you prioritise amongst the projects listed here with unrestricted funds from small donors?
Most of these projects I find very exciting, but some more than others. Do you have a rough priority ordering, or a sense of what you would do in different scenarios? E.g. if you ended up with unrestricted funding of $0.25m/$0.5m/$1m/$2m/$4m etc., how would you split it between the projects you list?
4. [Already answered] Can you clarify how much the examples you cite are funded by OpenPhil?
[This was one of my key questions but Marcus answered it on another thread, I included it as I thought it would be relevant to others.]
I noted that you say that some research (“invertebrate sentience, moral weights and welfare ranges, the cross-cause model, the CURVE sequence”) has “historically not had institutional support”. But OpenPhil suggest here that they funded some of this work.
Marcus clarifies here that the moral weights work was ~50% OpenPhil-funded and the CURVE work ~10% OpenPhil-funded.
Thank you for answering these questions. Keep up the wonderful work and keep improving our ability to do good in the world!!! <3
[Minor edits made to q2]
Hi Marcus, thanks, very helpful to get some numbers and clarification on this. And well done to you and Rethink for driving forward such important research.
(I meant to post a similar question asking for clarification on the Rethink post too, but my perfectionism ran away with me: I never quite found the wording and then ran out of drafting time. So it is great to see your reply here.)
Hi Emily, sorry this is a bit off topic but super useful for my end-of-year donations. I noticed that you said that OpenPhil has supported “Rethink Priorities … research related to moral weights”. But in his post here Peter says that the moral weights work has “historically not had institutional support”. Do you have a rough, very quick sense of how much of Rethink Priorities’ moral weights work was funded by OpenPhil? Thank you so much!
Hi, debugging worked. It was a Chrome extension I had installed to hide cookie messages that was killing it. Thank you so much!!
Hi. I started drafting a reply but had to stop and now a week later I cannot find where I was drafting it. I would love to be able to see all the places where I have draft comments/replies autosaved. Thank you!
Can this be updated? This is the default “Contact us” page (if I click the sidebar on the right and click “Contact us” it brings me here), but this page seems very out of date. Could be worth updating it. There is no Intercom bubble on the right, nor is there a “hide Intercom” button on the edit profile page. There is a “hide Intercom” button on the account settings page, but it does not do anything. There are also a bunch of comments below saying similar things, but they have not been replied to.
Alternatively, one might adjust ambiguous probability assignments to reduce their variance. For example, in a Bayesian framework, the posterior expectation of some value is a function of both the prior expectation and evidence that one has for the true value. When the evidence is scant, the estimated value will revert to the prior. Therefore, Bayesian posterior probability assignments tend to have less variance than the original ambiguous estimate, assigning lower probabilities to extreme payoffs.
Would you say this is being ambiguity averse in terms of aiming for a lower EV option, or would you say that this is just capturing the EV of the situation more precisely and still aiming for the highest EV option?
E.g. maybe in your story, by picking option 2, Pat is not being truly ambiguity averse (aiming for a lower-EV option) but has a prior that someone is more likely to be trying to scam her out of $5 than to give her free money, and is thus still trying to maximise EV.
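(For concreteness, here is a minimal sketch of the shrinkage mechanism from the quoted paragraph, assuming a normal-normal conjugate model; the specific model and numbers are my own illustrative choices, not from the post.)

```python
# Toy normal-normal Bayesian update: prior N(prior_mean, prior_var),
# one noisy observation obs with variance obs_var.
def posterior(prior_mean, prior_var, obs, obs_var):
    # Precision-weighted average: scant evidence (large obs_var) keeps
    # the posterior close to the prior; strong evidence pulls it to the data.
    w = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# An extreme but very noisy payoff estimate barely moves the posterior:
print(posterior(prior_mean=0.0, prior_var=1.0, obs=10.0, obs_var=100.0))
# -> (0.099..., 0.990...): the extreme estimate is heavily discounted.
```

If that is the right reading, the adjustment looks less like deliberately picking a lower-EV option and more like computing EV against a better-calibrated probability assignment, which is what prompts my question above.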
Sorry to be annoying, but after reading the post “Animals of Uncertain Sentience” I am still very confused about the scope of this work.
My understanding is that any practical guidance on how to make decisions is out of the scope of that post. You are only looking at the question of whether the tools used should in theory be aiming to maximise true EV or not (even in the cases where those tools do not involve calculating EV).
If I am wrong about the above do let me know!
Basically, I find phrases like “EV maximization decision procedure” and “using EV maximisation to make these decisions” etc. confusing. EV maximisation is a goal that might or might not be best served by an EV-calculation-based decision procedure, or by a decision procedure that does not involve any EV calculations. I am sorry, I know this is persnickety, but I thought I would flag the things I am finding confusing. I do think being a bit more precise about this would help readers understand the posts.
Thank you for the work you are doing on this.
“The team is only a few months old and the problems you’re raising are hard”
Yes, a full and thorough understanding of this topic and its rigorous application to cause prioritisation research would be hard.
But for what it’s worth, I would expect there are some easy quick wins in this area too. Lots of work has been done outside the EA community, just not applied to cause prioritisation decision making, at least as far as I have noticed so far...
Amazing, super helpful to hear. Useful to understand what you are and are not currently covering, and what the limits are. I very much hope that you get the funding for more and more research.
I am very, very excited to see this research. It’s the kind of thing that I think EAs should be doing a lot more of, and it seems shocking that it has taken us more than a decade to get round to such basic, fundamental questions on cause prioritisation. Thank you so much for doing this.
I do however have one question and one potential concern.
Question: My understanding from reading the research agenda and plan here is that you are NOT looking into the topic of how best to make decisions under uncertainty (Knightian uncertainty, cluelessness, etc). It looks like you are focusing on resolving the question of WHAT exactly decision making should aim for (e.g. maximise true EV or not) but not the topic of HOW best to make those decisions (e.g. what decision tools to use, to what extent to rely on calculated EV as a tool versus other tools, when practically to satisfice or maximize, etc). It looks like you might touch on the HOW within the specific sub-question of uncertainty over time but not otherwise. Is this a correct reading of your research aims and agenda?
If so, this does put limits on the conclusions you could draw.
I think that the majority (but by no means all) of the people I know in EA who have a carefully considered view that pushes them to focus on, say, global health above x-risk issues do so not because they disagree on the WHAT but because they disagree on the HOW. They are not avoiding maximising EV, or non-consequentialist, or risk averse; they just put less weight on simple EV calculations as a decision tool, and the set of tools that they do use directs them away from x-risk work.
As such, conclusions or models built on just the WHAT question would be of limited use: not only because you need both HOW to decide and WHAT to aim* for to make a decision, but specifically because it is not hitting what, in my experience, is the primary (although not only) crux of people’s actual disagreement here.
I’d be curious to hear if you agree with this analysis of the limits of the, still very very important, work you are doing.
* As an aside, I actually think in some cases it’s possible to make do with the HOW but not the WHAT, but not the other way round. For example, you might believe that it has been shown empirically that in deep uncertainty situations a strategy of robust satisficing rather than maximising allows players to win more war game scenarios, or to feel more satisfied with their decision at a later point in time, and therefore believe that adopting such a strategy in situations of deep uncertainty is optimal. You could believe this without taking a stance on, or knowing, whether or not such a strategy maximises true EV, is risk averse, etc.
I think 90% of the answer to this is risk aversion from funders, especially LTFF and OpenPhil (see here). As such, many things struggled for funding (see here).
We should acknowledge that doing good policy research often involves actually talking to and networking with policy people. It involves running think tanks and publishing policy reports, not just running academic institutions and publishing papers. You cannot do this kind of research well in a vacuum.
That fact, combined with funders who were (and maybe still are) somewhat against funding people (except for people they knew extremely well) to network with policymakers in any way, has led to (and maybe is still leading to) very limited policy research and development happening.
I am sure others could justify this risk-averse approach, and there are definitely benefits to being risk averse. However, in my view this was a mistake (and is maybe an ongoing mistake). I think it was driven by the fact that funders were/are: A] not policy people, so did/do not understand the space and were hesitant to make grants; B] heavily US-centric, so did/do not understand the non-US policy space; and C] heavily capacity constrained, so did/do not have time to correct for A or B.
(P.S. I would also note that I am very cautious about saying there is “a lack of concrete policy suggestions”, or at least I would want to be clear about what is meant by this. This phrase gets used as a reason for not funding policy engagement, and for saying we should spend a few more years just doing high-level academic work before ever engaging with policymakers. I think this is just wrong. We have more than enough policy suggestions to get started, and we will never get very, very good policy design unless we get started and interact with the policy world.)
Thank you Asya for all the time and effort you have put in here, and the way you have managed the fund. I’ve interacted with the LTFF a number of times and you have always been wonderful: incredibly helpful and sensible.
Thanks Linch. Agree feedback is time-consuming and often not a top priority compared to other goals.
These short summary reasons in this post for why grants are not made are great and very interesting to see.
I was wondering: do the unsuccessful grant applicants tend to receive this feedback (of the paragraph-summary kind in this post), or do they just get told “sorry, no funding”?
I wonder if this could help the situation. If applicants have this feedback, and if other grantmakers know that applicants get feedback, then they can ask for it. I’ve definitely been asked “where else did you apply and what happened?” and been like “I applied for x grant and got feedback xyz, of which I agree with this bit but not that bit”.
(Or maybe that doesn’t help, for some of the reasons in your “against sharing reasons for rejection” section.)
(Also, FWIW, if there is a private behind-the-scenes grantmaker feedback channel, I’m not sure I would be comfortable with the idea of grantmakers sharing information with each other that they weren’t also willing to share with the applicants.)