Here is a rough summary of the process. It's hard to explain spreadsheets in words, so this might end up sounding a bit confusing:
We added all the applications to a big spreadsheet, with a column for each fund member and each advisor (Nick Beckstead and Jonas Vollmer), in which they were encouraged to assign each application a score from −5 to +5.
Then there was a period in which everyone individually and mostly independently reviewed each grant, abstaining if they had a conflict of interest, and otherwise voting positively or negatively depending on whether they thought the grant was a good or a bad idea.
We then had a number of video-chat meetings in which we tried to go through all the grants that at least one person thought were a good idea, and had pretty extensive discussions about those grants. During those meetings we also agreed on next actions for follow-ups, such as scheduling meetings with some of the potential grantees and reaching out to references, the results of which we would then discuss at the next all-hands meeting.
Interspersed with the all-hands meetings, I also had a lot of 1-on-1 meetings (with both other fund members and grantees) in which I worked through some of the grants in detail with the other person and hashed out deeper disagreements about some of the grants (like whether certain causes and approaches are likely to work at all, to what extent we should make grants to individuals, etc.).
As a result of these meetings, everyone's votes on each grant shifted significantly, with almost every grant we made ending up with at least two relatively strong supporters and an aggregate vote score above 3.
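To make the voting step a bit more concrete, here is a minimal sketch in Python of roughly how the spreadsheet aggregation worked. The reviewer names, grants, and scores are made up for illustration, and the exact cutoff for a "relatively strong" supporter is an assumption; only the −5 to +5 scale, the abstention rule for conflicts of interest, and the "aggregate score above 3" threshold come from the description above.

```python
# Illustrative sketch of the voting process; reviewers, grants, and scores are invented.
ABSTAIN = None  # a reviewer with a conflict of interest abstains

# Each reviewer assigns every application a score from -5 to +5 (or abstains).
votes = {
    "Grant A": {"reviewer_1": 4, "reviewer_2": 3, "reviewer_3": ABSTAIN, "reviewer_4": -1},
    "Grant B": {"reviewer_1": 1, "reviewer_2": 1, "reviewer_3": 0, "reviewer_4": 1},
}

def aggregate(grant_votes, strong_cutoff=3):
    """Sum the non-abstaining votes and count 'relatively strong' supporters.

    The strong_cutoff of 3 is an assumption; the post only says 'relatively strong'.
    """
    cast = [v for v in grant_votes.values() if v is not ABSTAIN]
    total = sum(cast)
    strong_supporters = sum(1 for v in cast if v >= strong_cutoff)
    return total, strong_supporters

for grant, grant_votes in votes.items():
    total, strong = aggregate(grant_votes)
    # Almost every grant we made had an aggregate score above 3
    # and at least two relatively strong supporters.
    likely_funded = total > 3 and strong >= 2
    print(f"{grant}: total={total}, strong supporters={strong}, funded={likely_funded}")
```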
However, some fund members weren't very happy with this process, and I also think it encouraged too much consensus-based decision-making: many of the grants with the highest vote scores were ones that everyone thought were vaguely a good idea, but that nobody was strongly excited about.
We then revamped our process towards the latter half of the one-month review period and experimented with a new spreadsheet that allowed each individual fund member to suggest grant allocations for 15% and 45% of our total available budget. In the absence of a veto from another fund member, grants in the 15% category would be made mostly at the discretion of the individual fund member, and we would add up grant allocations from the 45% budgets until we ran out of our allocated budget.
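Here is a similarly rough sketch of the second allocation method, again as illustrative Python rather than our actual spreadsheet. The budget figure, member names, grants, and amounts are invented; only the 15% and 45% fractions, the veto rule, and the "add up allocations until the budget runs out" step come from the description above.

```python
# Illustrative sketch of the second process; only the 15% / 45% split and the veto
# rule come from the post. The budget, members, grants, and amounts are invented.
TOTAL_BUDGET = 1_000_000  # hypothetical total available budget

def within_cap(proposals, cap):
    """Keep un-vetoed proposals, in order, until a member's cap is reached."""
    kept, spent = [], 0
    for grant, amount, vetoed in proposals:
        if not vetoed and spent + amount <= cap:
            kept.append((grant, amount))
            spent += amount
    return kept

# Each member suggests grants as (grant, amount, vetoed by another member?) tuples.
members = {
    "member_1": {
        "discretionary_15": [("Grant A", 50_000, False)],
        "pooled_45": [("Grant D", 200_000, False)],
    },
    "member_2": {
        "discretionary_15": [("Grant B", 120_000, True)],  # vetoed, so not made
        "pooled_45": [("Grant E", 300_000, False), ("Grant F", 250_000, False)],
    },
}

funded, committed = [], 0

# 15% tranche: made mostly at the individual member's discretion, absent a veto.
for member, tranches in members.items():
    for grant, amount in within_cap(tranches["discretionary_15"], 0.15 * TOTAL_BUDGET):
        funded.append(grant)
        committed += amount

# 45% tranche: add up each member's suggested allocations until the budget runs out.
for member, tranches in members.items():
    for grant, amount in within_cap(tranches["pooled_45"], 0.45 * TOTAL_BUDGET):
        if committed + amount <= TOTAL_BUDGET:
            funded.append(grant)
            committed += amount

print(f"funded: {funded}, total committed: {committed}")
```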
Both processes actually resulted in roughly the same grant allocation, with one additional grant being made under the second allocation method and one grant not making the cut. We ended up going with the second allocation method.