I think it could be good to put these numbers on our site. I liked your past suggestion of having live data, though it’s a bit technically challenging to implement—but the obvious MVP (as you point out) is to have a bunch of stats on our site. I’ll make a note to add some stats (though maintaining this kind of information can be quite costly, so I don’t want to commit to doing this).
In the meantime, here are a few numbers that I quickly put together (across all of our funds).
Grant decision turnaround times (mean, median):
applied in the last 30 days = 14 days, 15 days
this is pretty volatile as it includes applications that haven’t yet closed
applied in the last 60 days = 23 days, 20 days
applied in the last 90 days = 25 days, 20 days
When I last checked our (anonymous) feedback form, the average score for [satisfaction of evaluation process] (I can’t quite remember the exact question) was ~4.5/5.
(edit: just found the stats—these are all out of 5)
Overall satisfaction with application process: 4.67
Overall satisfaction with processing time: 4.58
Evaluation time: 4.3
Communications with evaluators: 4.7
I’m not sure that these stats tell the whole story. There are cases where we (or applicants) miss emails or miscommunicate; the frequency of events like this is difficult to report quickly, but such events account for the majority of negative experiences (according to our feedback form and my own analysis).
On (b), I really would like us to be quicker—and more importantly, more reliable. A few very long-tail applications make the general grantee experience much worse. The general stages in our application process are:
Applicant submits application → application is assigned to a fund manager → fund manager evaluates the application (which often involves back and forth with the applicant, checking references etc.) → other fund managers vote on the application → fund chair reviews evaluation → application is reviewed by external advisors → fund chair gives decision to grantee (pending legal review)
There’s also a really high volume of grants and increasingly few “obvious” rejections. E.g. the LTFF right now has over 100 applications in its pipeline, and in the last 30 days fewer than 10% of applications were obvious rejections.
Thanks for engaging with my criticism in a positive way.
Regarding how timely the data ought to be, I don’t think live data is necessary at all—it would be sufficient in my view to post updated information every year or two.
I don’t think “applied in the last 30 days” is quite the right reference class, however, because, by definition, the averages will ignore all applications that have been waiting for over one month. I think the most useful kind of statistics would:
Restrict to applications from n to n+m months ago, where n>=3
Make a note of what percentage of these applicants haven’t received a response
Give a few different percentiles for decision-timelines, e.g. 20th, 50th, 80th, 95th percentiles.
Include a clear explanation of which applications are being included or excluded. For example, are you including applications that were not at all realistic and so were rejected as soon as they landed on your desk?
With such statistics on the website, applicants would have a much better sense of what they can expect from the process.
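To make this concrete, here is a rough sketch (in Python, with hypothetical `applied_at`/`decided_at` fields rather than anything from your actual tracker) of the kind of summary I have in mind:

```python
from datetime import timedelta
import numpy as np

def cohort_stats(applications, now, start_months_ago=6, end_months_ago=3):
    """Summarise decision timelines for applications submitted in a past window.

    Each application is a dict with `applied_at` (datetime) and `decided_at`
    (datetime, or None if the applicant is still waiting).
    """
    window_start = now - timedelta(days=30 * start_months_ago)
    window_end = now - timedelta(days=30 * end_months_ago)
    cohort = [a for a in applications if window_start <= a["applied_at"] < window_end]

    decided_days = [(a["decided_at"] - a["applied_at"]).days
                    for a in cohort if a["decided_at"] is not None]
    n_undecided = len(cohort) - len(decided_days)

    return {
        "cohort_size": len(cohort),
        "pct_still_waiting": 100 * n_undecided / len(cohort) if cohort else None,
        "decision_days_percentiles": dict(zip(
            (20, 50, 80, 95),
            np.percentile(decided_days, [20, 50, 80, 95]).tolist()
            if decided_days else [None] * 4,
        )),
    }
```

Ending the window at least ~3 months ago means recent, still-open applications can’t drag the percentiles down, and the “still waiting” percentage makes any remaining censoring visible.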
Oh, I thought you might have suggested the live thing before; my mistake. Maybe I should have just given the 90-day figure above.
(That approach seems reasonable to me)
Do you know what proportion of applicants fill out the feedback form?
I’m not sure, sorry. I don’t have that stat in front of me, but I may be able to find it in a few days.
Is there (or might it be worthwhile for there to be) a business process to identify aged applications and review them at intervals, to make sure they are not “stuck” and that the applicant is being kept up to date? Perhaps “aged” in this context would operationalize as roughly 2x the median decision time and/or above the ~90th-95th percentile of wait times? Maybe someone looks at the aged list every ~2 weeks, makes sure each application isn’t “stuck” in a reasonably fixable way, and reviews the last correspondence to/from the applicant to make sure their information about timeframes is not outdated?
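To illustrate, a minimal sketch of what that periodic check might look like (field names and thresholds are placeholders, not anything from your actual tracker):

```python
from datetime import timedelta

def aged_applications(open_applications, median_decision_days, p95_decision_days, now):
    """Return open applications that have been waiting unusually long.

    "Aged" here means roughly 2x the median decision time, or past the
    ~95th percentile of historical wait times, whichever comes first.
    """
    threshold = timedelta(days=min(2 * median_decision_days, p95_decision_days))
    return [a for a in open_applications if now - a["applied_at"] > threshold]

# Someone could run this every ~2 weeks, check whether each flagged application
# is "stuck" in a reasonably fixable way, and confirm that the applicant's
# latest information about timeframes is still accurate.
```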
We do have a few processes that are designed to do this (some of which are doing some of the things you mentioned above). Most of the long delays are fairly uncorrelated (e.g. complicated legal issue, a bug in our application tracker …).
How are these included? Is it that you count ones that haven’t closed as if they had closed today?
(A really rough way of dealing with this would be to count ones that haven’t closed as if they will close in as many days from now as they’ve been open so far, on the assumption that you’re on average halfway through their open lifetime.)
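In code, that rough adjustment would look something like the following (again with hypothetical `applied_at`/`decided_at` fields):

```python
def estimated_turnaround_days(application, now):
    """Turnaround in days, doubling the elapsed time for still-open applications
    (on the assumption that, on average, they are halfway through)."""
    if application["decided_at"] is not None:
        return (application["decided_at"] - application["applied_at"]).days
    return 2 * (now - application["applied_at"]).days
```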
Is the repetition of “applied in the last 30 days” possibly a typo?
Oops, fixed—thank you.
Are you factoring in people who withdraw their application because of how long the process was taking?
Empirically, I don’t think that this has happened very much. We have a “withdrawn by applicant” status, which would include this, but the status is very rarely used.
In any case, the numbers above will factor those applications in, but I would guess that if we didn’t, the numbers would decrease by less than a day.
My point is more that if a person withdraws their application, then they never received a decision, so the time till decision is unknown/infinite; it’s not the time until they withdrew.
Oh, right—I was counting “never receiving a decision but letting us know” as a decision. In this case, the number we’d give is days until the application was withdrawn.
We don’t track the reason for withdrawals in our KPIs, but I am pretty sure that process length is a reason for a withdrawal 0-5% of the time.
I might be missing why this is important; I would have thought that if we were making an error, it would overestimate those times—not underestimate them.
My point was that if someone withdraws their application because you were taking so long to get back to them, and you count that as the date you gave them your decision, you’re artificially lowering the average time-till-decision metric.
Actually, the reason I asked whether you’d factored in withdrawn applications (not how) was to make sure my criticism was relevant before bringing it up—but that probably made the criticism less clear.
What would you consider the non-artificial “average time-till-decision metric” in this case?
Hmm, so I currently think the default should be that withdrawals without a decision aren’t included in the time-till-_decision_ metric, as otherwise you’re reporting a time-till-closure metric. (I weakly think that if the withdrawal is due to the decision taking too long, and that time is above the average (as an attempt to exclude cases where the applicant is just unusually impatient), then it should be incorporated in some capacity, though this has obvious issues.)
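Concretely, I mean something like the sketch below (with a hypothetical `status` field), where withdrawals without a decision simply don’t contribute to the decision metric:

```python
def time_till_decision_days(applications):
    """Decision times only for applications that actually received a decision.

    Applications withdrawn before a decision are excluded; counting the
    withdrawal date as a decision would turn this into a time-till-closure
    metric instead.
    """
    return [(a["decided_at"] - a["applied_at"]).days
            for a in applications
            if a["status"] == "decided"]
```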
What does 30/60/90 days mean? Grants applied for in the last N days? Grants decided on in the last N?
How do the numbers differ for acceptances and rejections?
What percent of decisions (especially acceptances) were made within the timeline given on the website?
Can you share more about the anonymous survey? How has the satisfaction varied over time?
The question relating to website timelines would be hard to answer, as the timeline given on the website was changed a few times, I believe.
I answered the first questions above in an edit of the original comment. I’m pretty sure that when I re-ran the analysis with “decided in the last 30 days” it didn’t change the results significantly (though I’ll try to recheck this later this week—in our current setup it’s a bit more complicated to work out than the stats I gave above).
I also checked to make sure that only looking at resolved applications and only looking at open applications didn’t make a large difference to the numbers I gave above (in general, the differences were 0-10 days).
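For what it’s worth, the comparison was roughly along these lines (a simplified sketch rather than our actual tracker code; it assumes, for illustration, that open applications are counted by their days open so far):

```python
import numpy as np

def turnaround_days(a, now):
    # Days to decision, or days open so far for applications still in the pipeline.
    end = a["decided_at"] if a["decided_at"] is not None else now
    return (end - a["applied_at"]).days

def compare_reference_classes(applications, now):
    """Mean and median turnaround for all, resolved-only, and open-only applications."""
    groups = {
        "all": applications,
        "resolved_only": [a for a in applications if a["decided_at"] is not None],
        "open_only": [a for a in applications if a["decided_at"] is None],
    }
    return {name: {"mean": float(np.mean([turnaround_days(a, now) for a in apps])),
                   "median": float(np.median([turnaround_days(a, now) for a in apps]))}
            for name, apps in groups.items() if apps}
```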
I’m not following: what does it mean to say you’ve calculated a resolution time for applications that haven’t been resolved?