Hi there, I was wondering what you mean by “real estate speculation”: what is the issue, and in what ways is it tractable? Thank you for any insights you can give. I am hoping to do some research into housing issues in LMICs :-)
No, this seems like more than just semantics. It does seem like I have underestimated the ability to influence B2B companies. I stand corrected. Thank you.
Thank you for considering my comments
To be clear, I would consider the target of the campaign in those cases to be the hospital or the university, and those to be B2C organisations in some meaningful way.
Additionally, if you want to show that you can credibly engage policymakers (which I think you might need to do in order to put pressure on these companies), I would expect transparency about people and funding sources to help a lot.
What are the key leverage points to get these companies to listen to campaigners such as yourself? How does this differ from the animal rights space, and how will this affect your strategy? What do you have in terms of strategy documents or a theory of change?
Some thoughts on my mind are:
- To the best of my understanding, the animal rights corporate campaigning space has been unable to exert much if any influence on B2B (business to business) companies. Animal campaigns only appear to have influenced B2C (business to consumer) companies. An autonomous coding agent feels more B2B, and by analogy having any influence here could be extremely difficult. That said, I don't think this should be a huge problem as...
- The leverage points for influencing companies in the AI space are very different from those in the animal space. In particular, AI companies are probably much more concerned than food companies about losing employees to other companies. I expect they are also concerned about regulation that could restrict their actions, and much less concerned about public image. As such...
- This does suggest a somewhat different approach to corporate campaigning: potentially targeting employees more (although probably not picking on individuals), and focusing more on presenting the targeted company negatively to regulators/policymakers or to investors than to the public.
These are just quick thoughts and I might be wrong about much of this. I just wanted to flag it, as your post seemed to suggest that this work would be similar to work in the animal space, and in many ways it is, but I think there is a risk of not seeing the differences. I wish you the best of luck with your campaigning.
Hi, thank you. All good points. I fully agree with ongoing iterative improvement to our CEAs, and hopefully you will see such improvements happening over the various research rounds (see also my reply to Nick). I also agree with picking up on specific cases where this might be a bigger issue (see my reply to Larks). I don't think it is fair to say that we treat those two numbers as zero, but it is fair to say we are currently using a fairly crude approximation to get at what those numbers represent in our lives-saved calculations.
For a source on discounting see here: https://rethinkpriorities.org/publications/a-review-of-givewells-discount-rate#we-recommend-that-givewell-continue-discounting-health-at-a-lower-rate-than-consumption-but-we-are-uncertain-about-the-precise-discount-rate
“Discounting consumption vs. health benefits | Discount health benefits using only the temporal uncertainty component”
Hi Nick, Thank you very much for the comment. These are all good points.
I fully agree with you and Larks that, where a specific intervention will have reduced impact due to long-run health effects, this should be included in our models, and I will check this is happening.
I apologise for the defensiveness, and I have made a few minor edits to the post, trying to keep the content the same.
“That’s not a reason not to continuously be improving models.”
To be clear, we are always improving our CEA models. This is an ongoing iterative process, and my hope is that they get better year on year. However, I don't have confidence right now that a −10% change to this number would actually improve the model or affect our decision making.
If we dive into these numbers just a bit, I immediately notice that the discount rate in the GBD data is higher than ours, which suggests that, if we are adjusting these numbers, we probably want a significant increase, not a decrease. But that raises the question of which discount rate we are using and why, something that has a huge effect on some of the models; there are currently internal debates in the team about this, and we are looking at changing it. That in turn raises the question of how to represent the uncertainty about these numbers in our models, and how to ensure decision makers and readers are more aware of the inherent estimations that can have a big effect on CEA outputs. Improving this is probably towards the top of my list.
Thank you Larks. This is a very good point and I fully agree.
In any case where this happens, it should be incorporated into our current models. That said, I will check this for our current research and make sure that any such cases (such as, say, pulmonary rehabilitation for COPD, where patients are expected to have a lower quality of life if they survive) are accounted for.
Hi there. I am Research Director at CE/AIM.
Note that Charity Entrepreneurship (CE) has now rebranded to AIM to reflect our widening scope of programs. [Edited for tone]
Thank you so much for engaging with our work in this level of detail. It is great to get critical feedback and analysis like this. I have made a note of this point on my long list of things to improve about how we do our CEAs, although for the reasons I explain below it is fairly low down on that list. Ultimately, what we are using now is a very crude approximation. That said, it is one I am extremely loath to start fiddling with without putting in the effort to do this well.
You are right that the numbers used for comparing deaths and disability are a fairly crude approximation. A reasonable change in moral weights can lead to a large change in the comparison between YLDs and YLLs. Consider that when GiveWell last reviewed their moral weights (between 2019 and 2020) they increased the value of an under-5 life saved compared to YLDs by +68% (from 100/3.3 to 116.9/2.3). Another very valid criticism is that (as you point out) the current numbers we are using were calculated with a 3% discount rate, yet we are now using a 1.5% discount rate for health effects, so perhaps to ensure consistency we should increase the numbers by roughly +42%. Or, taking the HLI work on the value of death seriously could suggest a huge decrease of −50% or more. The change you suggest would be nice, but I think getting this right really needs a lot of work.
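For what it's worth, the arithmetic behind those two figures can be checked in a few lines. This is just a sketch: the ~60 years of remaining life expectancy and the simple present-value annuity formula are illustrative assumptions on my part, not numbers from our models.

```python
def pv_life_years(years: float, rate: float) -> float:
    """Present value of one life-year per year over `years` years at `rate`."""
    return (1 - (1 + rate) ** -years) / rate

# GiveWell's 2019 -> 2020 moral weight review: the value of an under-5 life
# saved relative to a YLD moved from 100/3.3 to 116.9/2.3.
old_ratio = 100 / 3.3    # ~30.3
new_ratio = 116.9 / 2.3  # ~50.8
print(f"moral weight change: {new_ratio / old_ratio - 1:+.0%}")  # -> +68%

# Re-discounting the years of life lost from the GBD's 3% rate to a 1.5%
# health discount rate. The ~60-year remaining life expectancy is an
# illustrative assumption, not a figure from our models.
increase = pv_life_years(60, 0.015) / pv_life_years(60, 0.03) - 1
print(f"discount rate change: {increase:+.0%}")  # -> +42%
```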
Right now I am uncertain how best to update these numbers. A −10% change is reasonable, but so are many other changes. I would very much like AIM to have our own calculated moral weightings that account for various factors, including life expectancy, a range of ethical views, quality of life, beneficiary preferences, etc. However, getting this correct is a complicated and lengthy process. This is on the to-do list but has not happened yet, unfortunately.
So what do we do in the meantime?
• We use numbers that seem justifiable and close to what I understand to be standard and reasonably acceptable within Global Health and Development (from here, table 5.1; I believe they have been used by GiveWell, DCP2, GBD, etc.). These numbers are also close to (but somewhat below) a very crude staff survey we did on the moral weight of saving a life. That said, I admit I would be interested in updates on what organisations are currently using.
• We are aware of the limits of our CEAs, use them cautiously in our decision-making process, and would encourage others to be cautious about over-relying on them. We have written about this here: https://www.charityentrepreneurship.com/cea. We are well aware, in making decisions, that some of the numbers used to compare different kinds of interventions rest on a lot of shaky assumptions.
• We tend to pick a range of interventions across reasonable moral weights and moral views: some interventions that save lives, some that improve health, some that improve lives in other ways. That said, I expect we may have over- (or under-) valued lives saved.
Ultimately I believe that this is sufficient for the level of decision making we need to make.
I hope that someday soon we have the time to work this out in detail.
ACTIONS.
• [Edited: I won't change anything straight away, as a bunch of modelling in this research round has already been done, and for now I would rather use numbers I can back up with a source than numbers that are tweaked for one reason but not another.]
• I have added a note about the point you raise to our internal list of ways to improve our CEAs. [Edit: I really would like to make some changes here going forward. I expect that if I put a few hours into this, the number is more likely to go up than down, given the discount rate difference (and the staff survey).]
• I might also do some extra sensitivity analysis on our CEAs to highlight the uncertainty around this factor and ensure it is flagged to decision makers. So thank you for raising this.
Antony, if you are looking for early-stage funding and support for your charity or project, you could consider applying to the Charity Entrepreneurship program when applications re-open in a few months. There is an option to apply with your own idea.
See https://www.charityentrepreneurship.com/
(Disclaimer: commenting in a personal capacity.)
Hi, I am Charity Entrepreneurship (CE, now AIM) Director of Research. I wanted to quickly respond to this point.
– –
Quality of our reports
I would like to push back a bit on Joey's response here. I agree that our research is quicker, scrappier, and goes into less depth than that of other orgs, but I am not convinced that our reports have more errors or worse reasoning than the reports of other organisations (thinking of non-peer-reviewed global health and animal welfare organisations like GiveWell, OpenPhil, Animal Charity Evaluators, Rethink Priorities, Founders Pledge).
I don't have strong evidence for thinking this. Mostly I am going off the number of errors that incubatees find in the reports. In each cohort we have ~10 potential founders digging into ~4-5 reports for a few weeks. I estimate the potential founders highlight on average roughly 0.8 non-trivial non-major errors (i.e. something that would change a CEA by ~20%) and 0 major errors. This seems to be in the same order of magnitude as the number of errors GiveWell gets under scrutiny (e.g. here).
And ultimately all our reports are tested in the real world by people putting the ideas into practice. If our reports do not line up with reality in any major way, we expect to find out when founders do their own research or a charity pivots or shuts down, as MHI has done recently.
One caveat to this: I am more confident about the reports on the ideas we do recommend than about the reports on non-recommended ideas, which receive less oversight internally (as they are less decision-relevant for founders) and less scrutiny from incubatees and from being put into action.
I note also that in this entire critique, and having skimmed over the threads here, no one appears to have pointed out any actual errors in any CE report, so I find it hard to update on anything written here. (The possible exception is me, in this post, pointing to the MHI case, which does seem, unfortunately, to have shut down in part due to an error in the initial research.)
So I think the quality of our research is comparable to other orgs', but my evidence for this is weak and I have not done a thorough benchmarking. I would be interested in ways to test this. It could be a good idea for CE to run a change-our-mind contest like GiveWell's in order to test the robustness of our research; something for me to consider. It could also be useful (although I doubt it would be worth the effort) to have some external research evaluator review our work and benchmark us against other organisations.
[EDIT: To be clear, I am talking here about quality in terms of the number of mistakes/errors. I agree our research is often shorter and, as such, more willing to take shortcuts to reach conclusions.]
– –
That said, I do agree that we should make very clear in all our reports the context: who the report is written for, why, and what the reader should take from it. We do this in the introduction section of all our reports, and I will review the introductions of future reports to make sure this is absolutely clear.
I went through the old emails today and I am confident that my description accurately captured what happened and that everything I said can be backed up.
“Another animal advocacy research organization supposedly found CE plagiarizing their work extensively including in published reports, and CE failed to address this.”
Hi, I am Charity Entrepreneurship (CE, now AIM) Director of Research. I wanted to quickly respond to this point.
I believe this refers to an incident that happened in 2021. CE had an ongoing relationship with an animal advocacy policy organisation, occasionally providing research to support their policy work. We received a request for some input, and over the next 24 hours we helped that policy organisation draft a note on the topic at hand. In doing so, a CE staff member copied and pasted text from a private document shared by another animal advocacy research organisation. This was plagiarism and should not have happened. I would like to note two things: firstly, this did not happen in the course of our business-as-usual research process but in a rushed piece of work that bypassed our normal review process; secondly, this report was not directly published by us, and it was not made clear to the CE staff member involved that the content was going to be made into a public report (most other work for that policy organisation was used only privately), although we should of course have considered this possibility. These facts do not excuse our mistake, but they are relevant for assessing the risk that this was any more than a one-off mistake.
I was involved in responding when this issue came to light. On the day the mistake was realised we: acknowledged the mistake, apologised to the injured party, pulled all publicity for the report, and drafted an email to the policy org asking to have the person whose text was copied added as a co-author (the email was not sent until the following day, as we waited for approval from the injured party). The published report was updated. Over the next three weeks we carried out a thorough internal risk assessment, including reviewing all past reports by the same author. The other animal rights research organisation acknowledged they were satisfied with the steps taken. We found no cases of plagiarism in any other reports (the other research org agreed with this assessment), although one other tweak was made to a report to make an acknowledgment clearer. FWIW, I find mildlyanonymous's description of this event somewhat misleading, in that it refers to multiple “reports” and claims that “CE failed to address this”.
“CE’s reports on animal welfare consistently contain serious factual errors … noticing immediately that it had multiple major errors, sharing that feedback, and having it ignored due to their internal time capping practices.”
I don't know what this is about. I know of no case where we have ignored feedback. We are always open to receiving feedback on any of our reports, from anyone, at any time. I am very sorry if I or any CE staff ignored you, and I am open to hearing more about this and/or hearing about any errors you have spotted in our research. If you can share any more information on this I can look into it; please contact me (I will PM you my email address; note I am about to go on a week's leave). It is often the case that if we receive minor critical feedback after a report is published we do not go back and edit the report, but instead note the feedback in our Implementation Note for future Charity Entrepreneurship founders. Maybe that is what happened.
This is a great post and captures something that I feel. Thank you for writing it, Michelle!!
Thank you for a nuanced and interesting reply.
I think people working on animal welfare have more incentives to post during debate week than people working on global health.
The animal space feels (when you are in it) very funding-constrained, especially compared to the global health and development space (and I expect it gets a higher % of its funding from EA / EA-adjacent sources). So along comes debate week and all the animal folk are very motivated to post and make their case and hopefully shift a few $. This could somewhat bias the balance of the debate. (Of course, the fact that one side of the debate feels it needs funding so much more is itself relevant to the debate.)