Lovisa, have you looked into Basic Needs? http://www.basicneeds.org/
When I spoke to Eric Gastfriend about the Harvard EA report a while ago I asked why Strong Minds and Basic Needs weren’t on the list. As far as I recall they just hadn’t looked at them, rather than that they’d looked at them and then decided they were bad options.
I’d also be really curious to have someone do a cost-effectiveness comparison for Action for Happiness. http://www.actionforhappiness.org/ The thought is that it might be more effective, if happiness is your goal, to fund broad but shallow happiness education programmes for the general public, rather than funding deep mental health interventions for a few people.
I have no idea how the numbers would come out and would probably be biased (disclaimer: I know some of the people at both orgs and might do some work for Action for Happiness at some point). Hence it would be great to get some fresh eyes on the topic.
I would deprioritise looking at BasicNeeds (in favour of StrongMinds). They use a franchised model and aren’t able to provide financials for all their franchisees. This makes it very difficult to estimate cost-effectiveness for the organisation as a whole.
The GWWC research page is out of date (it was written before StrongMinds’ internal RCT was released), and I would now recommend StrongMinds over BasicNeeds on the basis of its greater transparency and focus on cost-effectiveness.
Very interesting that you say this. I recently suggested to Basic Needs’ CEO that he get in contact with GW, and hopefully this will lead to BN focusing more on cost-effectiveness and transparency.
Did you and I not discuss the StrongMinds RCT ages ago? I thought we agreed it was too good to be true and that we really wanted to see something independent, but maybe I’m misremembering/was talking to someone else. If the best evidence for mental health in the developing world really is an internal RCT, that shows 1. how far behind mental health is and 2. the urgent need for a better evidence base.
Thanks for this. It was great, particularly hearing about how people think about things rather than just the outcomes reached.
A couple of comments.
Could you explain what you meant about beliefs? I’m unclear what you think a belief is and what would be a good account of forming, having or reporting beliefs. This isn’t a critical comment asking you to produce a full theory of mind; more that what you say sounds interesting but is unclear, and I’d like you to expand.
Reading this, I got a sense you were having to reinvent the philosophical wheel whilst trying to avoid doing so. You seem to be doing lots of what is straightforwardly, if implicitly, moral philosophy without making this explicit. Whilst that has an appeal (people don’t agree in moral philosophy, and educating people is not really what OxPrio is about), I think it might just be easier to get people’s assumptions on the table so you can see what follows from them.
As a couple of examples, if you’re comparing GiveDirectly to MIRI you have to make implicit assumptions about population axiology (i.e. how much future people matter). It’s not that your view on future people is one part of the calculation; it basically is the whole calculation. Alternatively, if you’re comparing AMF to GiveDirectly and only counting present people, the result is going to be very substantially determined by your view about the badness of death.
I wonder if it would help to run through some candidate theories in moral philosophy so that people can use that to form part of their model, rather than having to generate a new theory for themselves on the fly.
A further thought: it would be really nice to get a handle on which prioritisation questions were truly empirical and which philosophical.
Excellent stuff, look forward to reading more.
When Tom and I came up with that, I don’t think we meant “belief” to be imbued with the usual philosophical connotations. Rather, we intended it to mean something like “action-guiding, introspectively accessible representation of a state of affairs existing independently of whether it is queried”.
When people ask me what I think about the world, I can often come up with lots of intelligent sounding answers—but it is unfortunately more rare that my actual actions, plans and normative evaluations are somehow suitably hooked up to, and crucially depend upon, those answers.
Thanks for putting StrongMinds on my radar!
Oh, it’s also unclear whether you want us to discuss the blog posts you linked here on the EA Forum or on the OxPrio website where they are posted.
A great post