Can you say more about how this / your future plans solve the adverse selection problems? (I imagine you’re already familiar with this post, but in case other readers aren’t, I recommend it!)
Hey Trevor! One of the neat things about Manival is that you can create custom criteria to surface the supporting information that you, as a grantmaker, want to weigh heavily, including signals of adverse selection. For example, you could build your own scoring system that includes a data fetcher node or a synthesizer node which looks for signals like “OpenPhil funded this two years ago, but has declined to fund this now”.
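For concreteness, here’s a rough sketch of what such a config might look like. This is purely illustrative: the “data fetcher” and “synthesizer” node types are the ones mentioned above, but the type shapes, field names, and weights are invented for the example and aren’t Manival’s actual config format.

```typescript
// Hypothetical sketch of a custom Manival-style scoring config.
// The node types ("data fetcher", "synthesizer") come from the comment
// above; everything else (shapes, field names, weights) is made up
// for illustration and is NOT Manival's real config schema.

type NodeKind = "dataFetcher" | "synthesizer";

interface CriterionNode {
  kind: NodeKind;
  id: string;
  prompt: string;     // instruction given to the node's LLM call
  weight: number;     // how heavily this signal counts toward the score
  sources?: string[]; // where a data fetcher should look (hypothetical)
}

// A config targeting adverse-selection signals: fetch the applicant's
// funding history, then have a synthesizer flag "previously funded,
// since declined" patterns like the OpenPhil example above.
const adverseSelectionConfig: CriterionNode[] = [
  {
    kind: "dataFetcher",
    id: "funding-history",
    prompt:
      "Find past grants to this org/person, and note whether major " +
      "funders have since declined to renew their support.",
    weight: 0, // raw data only; scored by the synthesizer below
    sources: ["grant databases", "funder websites", "the application itself"],
  },
  {
    kind: "synthesizer",
    id: "adverse-selection-flag",
    prompt:
      "Given the funding history, estimate how likely it is that " +
      "better-informed funders have already passed on this project.",
    weight: -3, // penalize likely adverse-selection cases
  },
];
```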
Re: adverse selection in particular, I still believe what I wrote a couple years ago: adverse selection seems like a relevant consideration for longtermist/x-risk grantmaking, but not one of the most important problems to tackle (which, off the top of my head, I might identify as “not enough great projects”, “not enough activated money”, and “long and unclear feedback loops”). Put differently: my intuition is that the amount of money wasted, or impact lost, due to adverse selection is pretty negligible compared to the upside potential of growing the field. I’m not super confident in this, though, and I’m curious if you have different takes!
Yeah, interesting. To be clear, I’m not saying that e.g. Manifund/Manival are net negative because of adverse selection. I do think additional grant evaluation capacity seems useful, and the AI tooling here seems at least more useful than feeding grants into ChatGPT. I suppose I agree that adverse selection is a smaller problem in general than those issues, though once you factor in tractability, it seems deserving of some attention.
Cases where I’d be more worried about adverse selection, and where I’d therefore more strongly encourage potential donors to do extra diligence:
- The amount you’re planning to give is big. Downside risks from funding one person to do a project are usually pretty low; empowering them to run an org is a different story. (Also, smaller grants are more likely to have totally flown under the radar of the big funders.)
- The org/person has been around for a while.
- The project is risky.
In those cases, especially for six-figure-and-up donations, people should feel free to supplement their own evaluation (via Manival or otherwise!) by checking in with professional grantmakers; Open Phil now has a donor advisory function that you can contact at donoradvisory@openphilanthropy.org.
(For some random feedback: I picked an applicant I was familiar with, was surprised by its low score, and ran it through the “Austin config”; it turned out it was losing a bunch of points for not having any information about the team’s background. The only problem is, it had plenty of information about the team’s background! Not sure what’s going on there. Also, weakly held, but I think running a config should probably open a new tab rather than taking you away from the main page?)