I think this can be a useful concept, so thanks for sharing.
I think this post could be usefully expanded on in the following ways:
a bit more detail (vignettes and, if possible, clear definitions) about what makes a decision important and influenceable
what would we have to forecast in order to adjust our credences about whether a crunch time is coming soon
Thank you for your work on this.
I’d be interested in your opinion on the number of people who should be working on this.
I appreciate that this isn’t a straightforward question to answer. The truth is probably that returns diminish as the number of people working on this increases, and there probably isn’t an obvious way to delineate a clear cut-off point between “still useful to have another person” and “don’t need any more people”.
I think this is useful because I suspect your view is that there should be lots more people working on this, but from reading the problem profile, I don’t think readers would know whether 80k would want the 400 to increase to 500 or to 500,000. (I’ve only skimmed it, so sorry if it is explained somewhere.)
Knowing the difference between “the area is somewhat under-resourced” and “the area is extremely under-resourced” is useful for readers.
Yes, we can arrange via DM
Oh really? I’m no expert on Google ads, but I thought it was common to have “conversions”, and to pay more if a certain pre-defined event occurs (a purchase being an example of a conversion).
I suspect Jeff knows more about Google ads than I do, so maybe I should adjust my 60% number down.
I found this clear and reassuring. Thank you for sharing
EDIT: what I wrote here probably isn’t correct (see comments from Jeff below)
My understanding (I can’t remember my source for this) is that it’s less about charitable giving and more motivated by a war against Google for revenue. I’d give a c.60% chance that this accurately describes Amazon’s motivations.
Without Amazon Smile:
Someone googles “Trousers from Amazon” (or whatever)
When the user clicks on an ad in Google’s search results and goes to an Amazon page, Amazon gives Google some money
If the customer then goes on to make a purchase, Amazon gives Google a bit more money
I’m imagining a (fictional) dialogue between two Amazon employees:
“Can we convince the user to go to a copy of this webpage which has a different URL? Then we don’t pay the money to Google.”
“Why would they do that?”
“We could pay the customer an amount less than the amount we pay to Google?”
“But the amount Amazon would give to the customer would be so paltry”
“What if the money goes to charity instead? People are much more scope insensitive about charitable giving”
My inclination to believe this story comes mostly from the fact that it seems to explain Amazon’s behaviour in a way that is otherwise difficult to understand. My credence would be higher than c.60% if the story were verified by a high-quality source.
So if they’re closing the programme, I wonder whether the benefit of recouping ad spend from Google is no longer big enough to warrant the costs of running the Smile system.
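To make the economics of that story concrete, here’s a toy calculation. The basket size and cost-per-conversion figures are made-up placeholders (I don’t know Amazon’s actual rates); the 0.5% donation rate is the one AmazonSmile published for eligible purchases.

```python
# Toy numbers for the (speculative) story above. The basket size and the
# cost-per-conversion fee are placeholders, not Amazon's actual rates;
# AmazonSmile's published donation rate really was 0.5% of eligible purchases.

order_value = 50.00          # hypothetical basket size, in dollars
cost_per_conversion = 2.00   # hypothetical fee paid to Google for an ad-driven sale
smile_rate = 0.005           # 0.5%: AmazonSmile's donation rate

donation = order_value * smile_rate       # $0.25 to charity
saving = cost_per_conversion - donation   # $1.75 kept by Amazon

print(f"Donation to charity: ${donation:.2f}")
print(f"Ad fee avoided: ${cost_per_conversion:.2f}")
print(f"Net saving per order routed via Smile: ${saving:.2f}")
```

On these (made-up) numbers the donation is indeed paltry relative to the ad fee, which is exactly what the fictional dialogue predicts.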
In a post this long, most people are probably going to find at least one thing they don’t like. I’m trying to approach this post as constructively as I can, i.e. asking “what do I find helpful here?” rather than “how can I most effectively poke holes in this?” I think there’s enough merit in this post that the constructive approach will likely yield something positive for most people as well.
You argue that funding is centralised much more than it appears. I find myself learning that this is the case more and more over time.
I suspect it probably is good to decentralise to some degree; however, there is a very real downside to this:
some projects are dangerous and probably shouldn’t happen
the most dangerous of those are ones which are run by a charismatic leader and appear very good
if we have multiple funders who are not “informally centralised” (i.e. talking to each other) then there’s a risk that dangerous projects will have multiple bites at the cherry, and with enough different funders, someone will fund them
I appreciate that there are counters to this, and I’m not saying this is a slam-dunk argument against decentralisation.
I appreciated “Some ideas we should probably pay more attention to”. I’d be pretty happy to see some more discussion about the specific disciplines mentioned in that section, and also suggestions of other disciplines which might have something to add.
Speaking as someone with an actuarial background, I’m very aware of the Solvency 2 regime, which makes insurers think about extreme/tail events which have a probability of 1-in-200 of occurring within the next year. Solvency 2 probably isn’t the most valuable item to add to that list; I’m sure there are many others.
I think I’m probably sympathetic to your claims in “EA is open to some kinds of critique, but not to others”, but I think it would be helpful for there to be some discussion around Scott Alexander’s post on EA criticism. In it, he argued that “EA is open to some kinds of critique, but not to others” was an inevitable “narrative beat”, and that “shallow” criticisms which actually focus on the more actionable implications hit closer to home and are more valuable.
I was primed to dismiss your claims on the basis of Scott Alexander’s arguments, but on closer consideration I suspect that might be too quick.
I feel it would be easier for me to judge this if someone (not necessarily the authors of this post) provided some examples of the sorts of deep critiques (e.g. by pointing to examples of deep critiques made of things other than EA). The examples of deep critiques given in the post did help with this, but it’s easier to triangulate what’s really meant when there are more examples.
There are presumably ways in which donating a material amount makes a difference to financial advice, at least in the sense that financial planning should take this into account, and perhaps there are tax implications as well. On this basis I think I’m tentatively favourable to this idea, but I’d be more confident about it if I had seen a bit more detail in your post.
(BTW I’m not criticising you for not having more detail in your post, it’s totally reasonable to jot down something on the forum and hear people’s opinions as a first step)
Pricing: It might be worth considering how much work you will have per client. I don’t know about the US, but in the UK and EU the regulatory burden for IFAs has been increasing substantially over the last decade. I haven’t spoken to IFAs much recently, so I don’t know whether they would be able to cope with as many as 100 clients per advisor. If 100 is too many for one person, you may need to increase your price. Having said that, if you know that $2k fees are the norm in the rest of the market, you could simply infer from that that $2k pricing is OK.
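As a rough illustration of how client capacity interacts with pricing (nothing here comes from your post except the $2k fee and the ~100-client threshold; the lower client counts are hypothetical):

```python
# If regulation caps how many clients one advisor can handle, the fee must
# rise to keep per-advisor revenue constant. The $2k fee and 100-client
# threshold come from the post; the lower client counts are hypothetical.

target_revenue = 2_000 * 100  # $2k fee x 100 clients = $200k per advisor

for max_clients in (100, 60, 40):
    required_fee = target_revenue / max_clients
    print(f"{max_clients} clients per advisor -> ${required_fee:,.0f} per client")

# 100 clients -> $2,000; 60 clients -> $3,333; 40 clients -> $5,000
```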
Market sizing: you indicate that you would need c. 100 clients for this to work out from a profitability perspective. Sizing this is easier if we have a clearer understanding of your target market. Presumably the defining feature – from the perspective of why the client would want to choose you – is the fact that your clients will be significant donors (as opposed to being EAs? I can’t imagine that the choice of EA-aligned vs non-EA-aligned charity is going to matter from, e.g., a tax perspective). What are the characteristics of donation decisions where getting advice matters? (E.g. is it the absolute amount, or something about the relationship with tax thresholds, or something else?) Once that’s more clearly defined, it’s easier to size (a) the addressable market within EA and (b) the addressable market more widely (non-EAs who also donate substantial amounts are presumably also of interest to you).
(Update: I’ve now seen you’ve written a comment where you consider allowing for differing views on x-risks in the next few years. I had assumed that people with short timelines wouldn’t bother getting long term financial advice in the first place, so I imagined that this would not be part of your offering)
Also, I’d certainly see this as a for-profit venture. I’d at least expect you to be donating yourself (presumably that’s linked to your motivations). However, doing this as a non-profit means taking scarce donation dollars, when this project, if worth doing, really ought to be fundable without relying on donations.
Lastly, I believe I’ve seen another post on the forum with a very similar idea. I can’t remember much about the post, but you might want to track it down and reach out to the person.
Re item 4, it’s fair to note that I haven’t checked how conservative you’ve been on other assumptions, so if I did a replication of your work and it ended up being similar, then I agree that could be a reason.
Great that you’ve looked into this Akhil! Speaking as someone with a wife and daughter (and a mother, and other female family members, and female friends...) this is close to my heart.
A key problem with all of these is how to assess effectiveness. IPV typically occurs behind closed doors, which makes it hard to know what’s really happening.
Largely because of these considerations, I predict that on further analysis, I will probably be less positive than you.
While this sounds consistent with a generalised GiveWellian sceptical prior, I say this with some sadness, because I would very much like reducing VAWG to be a high impact cause area.
Also, thank you for asking me for comments before publishing.
My main reason for being more pessimistic than you is that your internal and external validity adjustments seem very generous:
Source: your model
For brevity, I’ll focus on Community Based Social Empowerment, since it’s the one you’re most positive about.
You have a 95% internal validity (aka replicability) adjustment and a 90% external validity (aka generalisability) adjustment. I’d consider these numbers to be high (i.e. more prone to lead to generous cost-effectiveness evaluations).
Your model’s 95% internal validity adjustment is the same internal validity adjustment that GiveWell uses for bednets. For comparison…
… malaria nets do merit a 95% internal validity adjustment. We have seen plenty of positive evidence for the effectiveness of bednets, and I’m told that there is so much evidence that it’s difficult to get ethics approval for more RCTs because ethics boards argue that it’s unethical to do studies with controls on something that is such a robustly proven intervention.
… cash transfers do merit a 95% internal validity adjustment. They are a robustly effective way of reducing poverty.
… Community Based Social Empowerment does not merit a 95% internal validity adjustment, in my view. Gathering this sort of evidence from surveys is very difficult, and I’d be surprised if the protocols are robust enough to give us the same confidence we have about the effect of malaria nets on mortality (deaths are relatively easy to count).
I also suspect the external validity adjustment is too generous. The intervention relies heavily on cultural context; several GiveWell external adjustments are high too, but human bodies are pretty consistent from one place to the next, whereas cultures vary a lot with geography.
Therefore I predict that:
in 90% of worlds where I (or someone from SoGive) sat down and reviewed this carefully, we would have validity adjustments lower than yours (i.e. lower than 95% and 90%).
in 50% of worlds where I (or someone from SoGive) sat down and reviewed this carefully, we would have validity adjustments substantially lower than yours (i.e. lower than 50%).
In summary, I think there’s a 75% chance that we would conclude the cost-effectiveness is >2x worse than yours, and a 25% chance that it is >4x worse, for Community Based Social Empowerment.
This would be unlikely to be at the levels of cost-effectiveness where we would deem the intervention high impact.
I haven’t thought enough about the other interventions, apart from Self-defence (IMPower, which has been done by No Means No). As Matt has alluded to, SoGive has done some work on this topic, and received some information which is not in the public domain. I can’t say too much about this, but I can discuss it privately and guide you to the relevant researchers. SoGive plans to press for permission to publish on this, and to finalise within the next few months.
For clarity, I’ve alluded to SoGive in this comment, but this is not an official SoGive comment. Content written in a SoGive capacity has to go through a certain level of review, which has not happened here, so this is written in a personal capacity.
For those less familiar with these models: they are applied in a straightforward, intuitive way, roughly equivalent to (Step 1) calculate the benefit assuming full trust in the evidence; (Step 2) multiply the benefit by the validity adjustments; (Step 3) divide by costs.
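As a minimal sketch of those three steps (all inputs here are illustrative placeholders, not figures from Akhil’s model or SoGive’s):

```python
# Minimal sketch of the three-step adjustment logic described above.
# All inputs are illustrative placeholders, not actual model figures.

def adjusted_cost_effectiveness(raw_benefit, internal_validity,
                                external_validity, cost):
    # Step 1: raw_benefit is the benefit assuming full trust in the evidence.
    # Step 2: scale it by the internal and external validity adjustments.
    # Step 3: divide by costs.
    return raw_benefit * internal_validity * external_validity / cost

generous = adjusted_cost_effectiveness(100, 0.95, 0.90, 10)   # 8.55
sceptical = adjusted_cost_effectiveness(100, 0.50, 0.50, 10)  # 2.50

print(generous / sceptical)  # ~3.4x
```

This is why the two adjustment percentages are the crux: moving them from (95%, 90%) to something like (50%, 50%) worsens the bottom line by the 2-4x margins I predicted above.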
For those who want access to data to help them form their own view on whether these adjustments are high or not: at SoGive, we have pulled together a spreadsheet of GiveWell’s internal and external validity adjustments (we’re supposed to also add SoGive’s own adjustments at the bottom, not just GiveWell’s, but have been less diligent about doing that). It’s meant to be a (not-rigorously-vetted) internal resource, but I’m sharing it here in case it helps. It’s also probably a couple of years out of date now, but from memory I don’t think there have been changes material enough to matter in the last couple of years.
I’ll just add that from SoGive’s perspective, this proposal would work. We have various views on charities, but only the ones which are in the public domain are robustly thought through enough that we would want an independent group like GWWC to pick them up.
The publication process forces us to think carefully about our claims and be sure that we stand by them.
(I appreciate that Sjir has made a number of other points, and I’m not claiming to answer this from every perspective)
SoGive is not currently on GWWC’s list of evaluators—GWWC plans to look into us in 2023.
Thank you for this. It’s a useful contribution, and I upvoted it.
I’d be interested in some discussion about when we’d expect this mathematics to be materially useful, especially when compared with other hard elements of doing this sort of forecast.
Example: if I want to estimate the extent to which averting a gigatonne of greenhouse gas (GHG) emissions influences the probability of human extinction, I suspect that the Fisher-Tippett-Gnedenko theorem isn’t very important (shout if you disagree). Other considerations (like: “have I considered all the roundabout/indirect ways that GHG emissions could influence the chance of human extinction?”) are probably more important.
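For readers who haven’t met it, here is the standard textbook statement of the theorem (my paraphrase, not taken from the post):

```latex
% Fisher-Tippett-Gnedenko: for i.i.d. X_1, ..., X_n with
% M_n = max(X_1, ..., X_n), if there exist constants a_n > 0 and b_n
% such that
\[
\frac{M_n - b_n}{a_n} \xrightarrow{\;d\;} G
\quad \text{with } G \text{ non-degenerate},
\]
% then G is of Gumbel, Frechet, or Weibull type
% (together, the generalised extreme value family).
```

In other words, the theorem constrains the shape of the tail of a maximum; it doesn’t help with the harder task of enumerating causal pathways, which is my point above.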
I agree this is valuable, thank you for doing this.
I’ll just echo something Matt said about possible lack of independence...
Prior to doing our formal Delphi process for determining our moral weights, we at SoGive had been using a placeholder set of moral weights. The placeholder was heavily influenced by GiveWell’s moral weights.
Our process did then incorporate lots of other perspectives, including a survey of the EA community, and a survey of the wider population, as well as explicit exhortations to think things through independently. Despite all these things, I think it’s possible that our process might have ended up anchoring on the previous placeholder weights, i.e. indirectly anchoring on GiveWell’s moral weights. I don’t think anyone in the team was looking at or aware of FP’s or HLI’s moral weights, so I don’t expect there was any direct influence there.
I don’t understand how that’s possible.
Ishaan’s work isn’t finished yet, and he has not yet converted his findings into the SoGive framework, or applied the SoGive moral weights to the problem. (Note that we generally try to express our findings in terms of the SoGive framework and other frameworks, such as multiples of cash, so that our results are meaningful to multiple audiences).
Just to reiterate, neither Ishaan nor I have made very strong statements about cost-effectiveness, because our work isn’t finished yet.
I would be very happy to speak to you (or Ishaan) about the academic literature.
That sounds great, I’ll message you directly. Definitely not wishing to misunderstand or misinterpret—thank you for your engagement on this topic :-)
Thanks for your question Simon, and it was very eagle-eyed of you to notice the difference in moral weights. Good sleuthing! (and more generally, thank you for provoking a very valuable discussion about StrongMinds)
I run SoGive and oversaw the work (then led by Alex Lawsen) to produce our moral weights. I’d be happy to provide further comment on our moral weights, however that might not be the most helpful thing. Here’s my interpretation of (the essence of) your very reasonable question:
“SoGive has a tendency to put a quite high value on tackling depression. Is this enough to explain why SoGive sounds like they might be more positive about StrongMinds than Simon M is?”
I have a simple answer to this: no, it isn’t.
Let me flesh that out. We have (at least) two sources of information:
1. The academic literature on the intervention
2. Data from StrongMinds (e.g. their own evaluation report on themselves, or their regular reporting)
And we have (at least) two things we might ask about:
(a) How effective is the intervention that StrongMinds does, including the quality of evidence for it?
(b) How effective is the management team at StrongMinds?
I’d say that the main crux is the fact that our assessment of the quality of evidence for the intervention (item (a)) is based mostly on item 1 (the academic literature) and not on item 2 (data from StrongMinds).
This is the driver of the comments made by Ishaan above, not the moral weights.
And just to avoid any misunderstandings: I have not said here that the evidence base from the academic literature is really robust, since we haven’t finished our assessment yet. I am saying that (unless our remaining work throws up some surprises) it will warrant a more positive tone than your post, and that it may well demonstrate a strong enough evidence base and good enough cost-effectiveness to put StrongMinds in the same ballpark as other charities on the GWWC list.
This all makes sense, thank you :-)
I’ve argued before that the EA community should be paying more attention to for-profit investing, so I’m glad to see this, thank you :-)
A few comments from me:
Your title leads with “Safety sells”, but it was unclear from this write-up whether you actually believe that companies which promote civilisational resilience genuinely get more investor interest than their financials alone would predict. E.g. I’m sure there are some investors ticking a box about this in their ESG frameworks, but I’d imagine this is a mere box-ticking exercise with minimal influence on actual decisions for about 99% of investors. In short, I suspect that safety does not, in fact, sell. If you disagree with this, I’d be interested to hear why.
Thank you for your comments on the bioeconomy. I would have liked to see more on how this compares with other options. A couple of examples off the top of my head...
… better PPE is probably well-suited to a for-profit company, (presumably?) doesn’t count as part of the bioeconomy, but could well be a valuable step towards keeping the world safer from pandemics (as set out in the Apollo programme). Similarly for indoor air quality. I would expect these to be no worse than the bioeconomy, i.e. the bioeconomy may well be a legitimate focus area, but it doesn’t appear to have outsized value, as far as I can tell.
Your post focuses just on food security and biosecurity. It would be good to compare these with other cause areas. Some gut-feel thoughts (without having researched this much):
… creating aligned AI is suited to a for-profit company, and there are precedents for this.
… tackling global conflict seems quite difficult for a for-profit company. This might be an imagination failure on my part, but I suspect that this needs to be left to diplomats and governments.
… climate change is believed by many in EA to be less important than other risk areas, and it is also much less neglected.
I agree that investors have a broad toolbox. I think it would be useful for someone to cast a more critical eye on the tools in that toolbox.
… I agree that civilisational resilience could fit into existing ESG and impact investing frameworks. However, it’s useful to consider to what extent this actually leads to greater civilisational resilience. I.e. if I fund a company which does good work to keep humanity safe, would that work have been funded anyway? And if it would have been funded anyway, does flooding the field with funding actually attract more talent to work on it, or are so few people aware of how much money there is in the sector that they won’t redirect their careers on that basis?
… you mention governance options like an ethics advisory board. How effective are these measures? E.g. I could imagine that an ethics advisory board could spend a lot of time on various topics (D&I, slavery in the supply chain, climate change) without paying much attention to civilisational resilience (not that I have anything against D&I, etc.)