This is entirely consistent with two other applications I know of from 2023, both of which were funded but experienced severe delays and poor/absent/straightforwardly unprofessional communication.
I had a similar experience with the LTFF: a four-month wait (against miscalibrated grant decision timelines on the website) and unresponsiveness to email, and I know a couple of people who had similar problems. I also found it pretty “disrespectful”.
It’s hard to understand a) why they wouldn’t list the empirical grant timelines on their website, and b) why those timelines would have to be so long.
I think it could be good to put these numbers on our site. I liked your past suggestion of having live data, though it’s a bit technically challenging to implement—but the obvious MVP (as you point out) is to have a bunch of stats on our site. I’ll make a note to add some stats (though maintaining this kind of information can be quite costly, so I don’t want to commit to doing this).
In the meantime, here are a few numbers that I quickly put together (across all of our funds).
Grant decision turnaround times (mean, median):
applied in the last 30 days = 14 days, 15 days (this is pretty volatile, as it includes applications that haven’t yet closed)
applied in the last 60 days = 23 days, 20 days
applied in the last 90 days = 25 days, 20 days
When I last checked our (anonymous) feedback form, the average score for [satisfaction of evaluation process] (I can’t quite remember the exact question) was ~4.5/5.
(edit: just found the stats—these are all out of 5)
Overall satisfaction with application process: 4.67
Overall satisfaction with processing time: 4.58
Evaluation time: 4.3
Communications with evaluators: 4.7
I’m not sure that these stats tell the whole story. There are cases where we (or applicants) miss emails or miscommunicate—the frequency of events like this is difficult to report quickly, but such events account for the majority of negative experiences (according to our feedback form and my own analysis).
On (b), I really would like us to be quicker—and more importantly, more reliable. A few very long-tail applications make the general grantee experience much worse. The general stages in our application process are:
Applicant submits application → application is assigned to a fund manager → fund manager evaluates the application (which often involves back and forth with the applicant, checking references etc.) → other fund managers vote on the application → fund chair reviews evaluation → application is reviewed by external advisors → fund chair gives decision to grantee (pending legal review)
There’s also a really high volume of grants and increasingly few “obvious” rejections. E.g. the LTFF right now has over 100 applications in its pipeline, and in the last 30 days fewer than 10% of applications were obvious rejections.
Thanks for engaging with my criticism in a positive way.
Regarding how timely the data ought to be, I don’t think live data is necessary at all—it would be sufficient in my view to post updated information every year or two.
I don’t think “applied in the last 30 days” is quite the right reference class, however, because by definition the averages will ignore all applications that have been waiting for over one month. I think the most useful kind of statistics would:
Restrict to applications from n to n+m months ago, where n>=3
Make a note of what percentage of these applicants haven’t received a response
Give a few different percentiles for decision timelines, e.g. the 20th, 50th, 80th, and 95th percentiles.
Include a clear explanation of which applications are included or excluded. For example, are you including applications that were not at all realistic and so were rejected as soon as they landed on your desk?
With such statistics on the website, applicants would have a much better sense of what they can expect from the process.
Oh, I thought you might have suggested the live thing before, my mistake. Maybe I should have just given the 90-day figure above.
(That approach seems reasonable to me)
Do you know what proportion of applicants fill out the feedback form?
I’m not sure, sorry; I don’t have that stat in front of me. I may be able to find it in a few days.
Is there (or might it be worthwhile for there to be) a business process to identify aged applications and review them at intervals, to make sure they are not “stuck” and that the applicant is being kept up to date? Perhaps “aged” in this context could be operationalized as ~2x the median decision time and/or above the ~90-95th percentile of wait times. Maybe someone looks at the aged list every ~2 weeks, makes sure each application isn’t “stuck” in a reasonably fixable way, and reviews the last correspondence to/from the applicant to make sure their information about timeframes is not outdated?
We do have a few processes that are designed to do this (some of which cover some of the things you mentioned above). Most of the long delays are fairly uncorrelated (e.g. a complicated legal issue, a bug in our application tracker, …).
How are these included? Is it that you count ones that haven’t closed as if they had closed today?
(A really rough way of dealing with this would be to count ones that haven’t closed as if they will close in as many days from now as they’ve been open so far, on the assumption that you’re on average halfway through their open lifetime.)
Is the repetition of “applied in the last 30 days” possibly a typo?
oops, fixed—thank you
Are you factoring in people who withdraw their application because of how long the process was taking?
Empirically, I don’t think that this has happened very much. We have a “withdrawn by applicant” status, which would include this, but the status is very rarely used.
In any case, the numbers above will factor those applications in, but I would guess that if we didn’t, the numbers would decrease by less than a day.
My point is more that if a person withdraws their application, then they never received a decision, so the time till decision is unknown/infinite; it’s not the time until they withdrew.
Oh, right—I was counting “never receiving a decision but letting us know” as a decision. In this case, the number we’d give is days until the application was withdrawn.
We don’t track the reason for withdrawals in our KPIs, but I am pretty sure that process length is a reason for a withdrawal 0-5% of the time.
I might be missing why this is important; I would have thought that if we were making an error, it would overestimate those times—not underestimate them.
My point was that if someone withdraws their application because you were taking so long to get back to them, and you count that as the date you gave them your decision, you’re artificially lowering the average time-till-decision metric.
Actually, the reason I asked whether you’d factored in withdrawn applications, not how, was to make sure my criticism was relevant before bringing it up—but that probably made the criticism less clear.
What would you consider the non-artificial “average time-till-decision metric” in this case?
Hmm, so I currently think the default should be that withdrawals without a decision aren’t included in the time-till-_decision_ metric, as otherwise you’re reporting a time-till-closure metric. (I weakly think that if the withdrawal is due to the decision taking too long, and that time is above the average (as an attempt to exclude cases where the applicant is just unusually impatient), then it should be incorporated in some capacity, though this has obvious issues.)
What does 30/60/90 days mean? Grants applied for in the last N days? Grants decided on in the last N days?
How do the numbers differ for acceptances and rejections?
What percent of decisions (especially acceptances) were made within the timeline given on the website?
Can you share more about the anonymous survey? How has the satisfaction varied over time?
The question relating to website timelines would be hard to answer, as it was changed a few times, I believe.
I answered the first few questions above in an edit of the original comment. I’m pretty sure that when I re-ran the analysis with “decided in the last 30 days” it didn’t change the results significantly (though I’ll try to recheck this later this week—in our current setup it’s a bit more complicated to work out than the stats I gave above).
I also checked to make sure that only looking at resolved applications and only looking at open applications didn’t make a large difference to the numbers I gave above (in general, the differences were 0-10 days).
I’m not following: what does it mean to say you’ve calculated a resolution time for applications that haven’t been resolved?
I had a similar experience in spring 2023, with an application to EAIF. The fundamental issue was the very slow process from application to decision. This was made worse by poor communication.
Yes, this is consistent with my experience too. Bad calibration of expected timelines, unresponsiveness to (two) emails asking for updates or whether they needed anything (over one month), and something I would also describe as somewhat disrespectful: they asked for additional information that was already available in the initial application.
To me, this suggests they probably didn’t read the application through completely before asking for more, especially since the application was less than a dozen sentences long, one of them being “here are the relevant links”, which contained all the information the follow-up email was asking for. I agree that it was not obvious that the requested info was there in the application, but I would expect a grant manager to actually skim, or even read, everything before asking for additional details.
From my perspective, it felt like a disregard for my time, in an attempt to compensate for a longer turnaround than they would have liked.
(Opinions my own)
PS: We received a decision a bit less than 3 months after applying.
We also had feedback with very clear inconsistencies (e.g. saying we had closed accounting, even though it was publicly available and clearly linked, and saying our application had not changed from the last rejection, even though we applied with a completely different project). Disrespectful.
Me too
Same here
I was funded, though with long delays. I wouldn’t have described the communication as “straightforwardly unprofessional” in my case.
It was a fairly stressful experience, but seemed consistent with “overworked people dealing with a tough legal situation”, both for EVF in general and my specific grant.
I did suggest on their feedback form that misleading language about timeframes on the application form be removed. It looks like they’ve done that now, although I have no idea when the change was made. (In my case this was essentially the only issue; the turnaround wasn’t necessarily super slow in itself—a few months doesn’t seem unreasonable—it’s just that it was much slower than the form suggested it should be.)
I believe we changed the text a bunch in August/early September. I think there were a few places we didn’t catch the first time, and we made more updates in ~the following month (September). AFAIK we no longer have any (implicit or explicit) commitments for response times anywhere, we only mention predictions and aspirations.
E.g. here’s the text near the beginning of the application form:
The Animal Welfare Fund, Long-Term Future Fund and EA Infrastructure Fund aim to respond to all applications in 2 months and most applications in 3 weeks. However, due to an unprecedentedly high load, we are currently unable to achieve our desired speedy turnarounds. If you need to hear back sooner (e.g., within a few weeks), you can let us know in the application form, and we will see what we can do. Please note that: EA Funds is low on capacity and may not be able to get back to you by either your stated deadline or the above aims—we encourage you to apply to other funders as well if you have a time-sensitive ask.