A lot of this post reads as an intro to HLI: what they do and why wellbeing matters. This is important and, I agree, neglected.
At the same time, you write that this post is about why HLI should receive a grant for a specific proposal of theirs: https://docs.google.com/document/d/1zANITg1HuKAn5uEe7nzepTZXxyMDy44vowsdVcMFiHo/edit
Yet it seems to me you do not really address the value or specifics of that proposal. Your post reads more as ‘we should fund HLI’s research’, but the proposal asks for funding for a grants specialist and seed money. It also strikes me as strange that you mostly recommend funding them based on prior work (which, again, I also see as work of quality and importance) rather than also evaluating the proposal at hand.
For instance, HLI are requesting $100,000 as a seed fund to, for example, ‘make some early-stage grants’. This would effectively be a regrant of a regrant. People in the comments have expressed skepticism about this (e.g. Nuño’s comment: “FTX which chooses regrantors which give money to Clearthinking which gives money to HLI which gives money to their projects. It’s possible I’m adding or forgetting a level, but it seems like too many levels of recursion, regardless of whether the grant is good or bad.”) That’s a lot of dilution, and I wonder what you think of it.
Other people on Manifold (John and Rina) have pointed out how non-specific this proposal is, how little of a plan it contains as currently written, and that there may be risks of harm that aren’t considered at all. I understand there might have been word limits, but other proposals are much more concrete.
It would be great if Clearer Thinking published more information on how they evaluated all of these final proposals.
Thanks for highlighting these concerns! Here is what I think about these topics:
1. I focused on giving an overview of HLI and the problem area because, compared to the other teams, it seemed like one of the most established and highest-quality orgs in the Clearer Thinking regranting round. I thought this might be missed by some readers, and it is a good predictor of the outcomes.
2. I focused on the big-picture lens because the project they are seeking funding for is pretty open-ended. As HLI put it:
So far, we’ve looked quite narrowly at GiveWell-style ‘micro-interventions’ in low-income countries to see how taking a happiness approach changes the priorities. This sort of analysis is quite straightforward—it’s standard quantitative economic cost-effectiveness - but we’re not convinced that these sorts of interventions are going to be the best way to improve global wellbeing. We’ve hired Lily to expand our analysis more broadly: Are there systemic changes that would move the world in the right direction, not just benefit one group? What should be done to improve wellbeing in high-income countries? A world without poverty isn’t a world of maximum wellbeing, so how could moving towards a more flourishing society today impact the long-term? These are harder, more qualitative analyses, but no one has tried to tackle them before and we think this could be extremely valuable.
I think their prior performance and the quality of the methodology they are using are good predictors of the expected value of this grant.
3. I didn’t get the impression that the application lacks specific examples, though perhaps it could be improved. They listed three specific projects whose impact they want to investigate:
For example, the World Happiness Report has only been running for ten years but its annual league table of national wellbeing is now well known and sparks discussion amongst policymakers. Further funding to promote the report could substantially raise the profile of wellbeing. Other examples include the World Wellbeing Movement which aims to incorporate employee wellbeing into ESG investing scores and Action for Happiness which promotes societal change in attitudes towards happiness.
That said, I wish they had listed a couple more organizations/projects/policies they would like to investigate, or else communicated something along the lines of: ‘We don’t have more specifics this time, as the nature of this project is to task Dr Lily Yu with identifying potential interventions worth funding. We therefore focus more on describing our methodology, direction, and relevant experience.’
4. I am not sure how much support HLI gets from the broader EA ecosystem, but it appears to be low; their EA Forum profile states: “As of July 2022, HLI has received $55,000 in funding from Effective Altruism Funds”. Because of that, I thought discussing this topic at a higher level might be helpful.
5. I also think the subjective wellbeing (SWB) framework aspect wasn’t highlighted enough in the application. I focused on it because I see very high expected value in supporting this grant application: it will help HLI stress-test the SWB methodology further.
6. As for Nuño’s comment: I don’t see a problem with money being passed along through a number of orgs. I sympathize with this fragment of Austin’s comment (please read the whole comment, as the fragment alone is a little misleading about what Austin meant there):
I’m wondering, why doesn’t this logic apply for regular capitalism? It seems like money when you buy eg a pencil goes through many more layers than here, but that seems to be generally good in getting firms to specialize and create competitive products. The world is very complex, each individual/firm can only hold so much know-how, so each abstraction layer allows for much more complex and better production.
FTX decided on the regranting dynamic in the first place, perhaps to distribute intelligence and responsibility across more actors. What if adding more steps actually adds quality to the grants? I think the main question here is whether this particular step adds value.