Hey Nick,
I’m excited to hear you’ve made a bunch of grants. Do you know when they’ll be publicly announced?
The grant payout reports are now up on the EA Funds site:
Long Term Future - $917,000.00
EA Community - $526,000.00
Note that the Grant Rationale text is essentially the same for both funds, as Nick has summarised his thinking in one document, but the payout totals reflect the amount disbursed from each fund.
It seems that Nick has not been able to leverage his position as EA Funds manager to outperform his Open Phil grants (or at least to meaningfully distinguish his EA Funds grants from his Open Phil grants). This means we can think of donating to the Long-Term Future and EA Community Funds as having similar cost-effectiveness to individual donations to Open Phil earmarked for those causes. This seems like a problem, since the best individual donations should be able to outperform Open Phil, at least once you account for the benefits of not centralizing donations on too few decision-makers. I don’t see anyone calling for Open Phil to accept/solicit money from small donors.
The case for finding another manager seems pretty strong. EA Funds is a fundamentally sound idea—we should be trying to consolidate donation decisions somewhat, to take advantage of different levels of expertise and save small donors’ time and mental energy—but this doesn’t seem like the best way to do it.
Below is a comment I was anonymously asked to post on behalf of an EA community member regarding these grant payout reports.
When I read the perfunctory grant rationale for the Long-Term Future Fund and Community Fund grants, I wondered whether this was a joke or a calculated insult to the EA community.
The single paragraph and four bullet points justifying the disbursement of over $1,000,000 to five organisations across the two funds read as though they could have been written in three minutes, with nothing more than a passing knowledge of some of the best-known EA(ish) orgs. This, coming after months of speculation about what the grant evaluator entrusted with the EA Funds for the long-term future and the EA community might actually be doing, gives the impression that they weren’t actually doing anything.
Perhaps most disappointing is the desultory explanation that all these funds are disbursed in the vague hope that the >$1 million might “subsidiz[e] electronics upgrades or childcare” for the charities’ staff, or pay them higher salaries to “increase staff satisfaction”, and that this might boost productivity. This seems a clear signal, among other things, that the funding space in this area is totally full and that the grant manager can’t even come up with plausible-sounding explanations for how the funds they are disbursing to EA insiders might increase impact.
Here is my own response to these comments.
All the organizations involved are definitely self-identified effective altruism organizations. Both MIRI and CFAR have origins in the rationality community and LessWrong, with a focus on AI safety/alignment predating their significant involvement with EA. But AI safety/alignment has been a priority focus area within EA since its inception, and as flagship organizations which have both benefited from an association with EA and stuck with it through trials, for the purposes of this discussion MIRI and CFAR should be thought of as EA organizations, unless a representative steps forward to clarify those organizations’ relationships to the EA movement.
That the $1 million might subsidize electronics upgrades or childcare, or pay staff higher salaries, doesn’t necessarily strike me as a bad thing. If someone prioritizes anti-malarial bednets, the number of children who could have been protected with that $1 million looks really bad when the money is being casually used to make office jobs comfier. But there are already outside critics of the EA movement who malign how global poverty interventions can look bad, and most of those criticisms fall flat. If we’re going to make mountains out of molehills on every issue, no progress will ever be made in EA.

Additionally, it’s unrealistic to think that ‘effectiveness’ at every organization and in every area of EA will or should look the same as it does at the Against Malaria Foundation, with its extremely low overhead. The AMF is essentially a supply- and distribution-chain management organization run full-time by a founder who doesn’t need to take a salary from running it. Most effective altruists come from more typical backgrounds and can’t do something like that. We’re also human. Staff at EA organizations are usually talented people who, by working in a small non-profit sector, are forgoing career and personal opportunities they otherwise would have had, working on what to the outside world are niche causes that don’t attract much attention. So it makes sense that salaries and benefits for staff at EA organizations go up over time, as they do at other NPOs and in other economic sectors. Additionally, because the work builds on a cumulative research base, retaining talent, whose value to EA organizations’ important work only grows as they advance the organization’s mission, is also important.

Whether expanding benefits and salaries for existing staff at well-established EA organizations, which could have fundraised more themselves, was a better use of the money than the many other projects in this space that might have received seed funding is another question entirely. But in asking it, I would stress that we remember we’re talking about real people with real needs, and that acting outraged at effective altruists for not living up to some image we had of them as ascetics would be unfair and damaging.
Regarding Nick’s response, it appears there was at least some miscommunication within CEA regarding how the EA Community and Long-Term Future Funds would be disbursed. While there was a public update from Nick in April regarding the funds, the discrepancy between how the funds were initially presented to the community and donors and how Nick has actually run them has not been addressed. I intend to follow up to find out more.
What several others have mentioned by now, and I agree, is that it’s not at all apparent that there is no room for more funding in these areas. Having worked on EA projects myself, I’m aware of a few organizations, working on EA project development and on the long-term future respectively, that are seeking to expand. As Nick Beckstead mentioned, he hopes EA Grants will be able to fill this role in the near future. It’s unclear when EA Grants might open for applications again.
Couldn’t agree more. What is worse (as I mention in another comment), university grants were disqualified for no clear reason. I don’t know which university projects were considered at all, but the underlying assumption seems to be that, irrespective of how good they would be, the other projects will perform more effectively and more efficiently, even though they are already funded, i.e. even if they just receive some additional cash.
I think this is a symptom of anti-academic tendencies that I’ve noticed on this forum and in this particular domain of research, which I think it would be healthy to discuss. The importance of the issue is easy to see if we think of any other domain of research: just imagine that we started arguing that non-academic climate research centers should be financed instead of the academic ones, or that research in medicine should be redirected from academic institutions towards non-academic ones. I’d be surprised if anyone here would defend such a policy. There are good reasons why academic institutions—with all their tedious procedures, peer-review processes, etc.—are important sources of reliable scientific knowledge production. Perhaps we are dealing here with an in-group bias, which needs an open and detailed discussion.
I’m Head of Operations for the Global Priorities Institute (GPI) at Oxford University. OpenPhil is GPI’s largest donor, and Nick Beckstead was the program officer who made that grant decision.
I can’t speak for other universities, but I agree with his assessment that Oxford’s regulations make it much more difficult to use donations to get productivity enhancements than it would be at other non-profits. For example, we would not be able to pay for the child care of our employees directly, nor raise their salaries so they could pay for more child care themselves (since there is a standard pay scale). I therefore believe that the reason he gave for ruling out university-based grantees is the true reason, and one which is justified in at least some cases.
But what about paying for teaching duties (i.e. using the funding to buy out a given researcher’s teaching load)? Teaching is one of the main constraints on time spent on research, and this would mean that Oxford can’t accept the funding framework of quite common ERC grants, which have this issue covered. This was my point all along.
Second, what about paying for better equipment? That was another issue mentioned in Nick’s post.
Finally, the underlying assumption of Nick’s explanation is that, within the given projects, the output of non-academic workers will be better than the output of the academic workers, which is a bold claim and insufficiently explicated in the text he provided. Again: I don’t know which projects we are assessing here, and without that knowledge we cannot make an adequate assessment; anything else would be mere speculation. I am just making a plea for greater transparency given the complexity of these issues.
Given that Nick has a PhD in Philosophy, and that OpenPhil has funded a large amount of academic research, this explanation seems unlikely.
Disclosure: I am working at OpenPhil over the summer. (I don’t have any particular private information; both of the above facts are publicly available.)
EDIT: I don’t intend to make any statement about whether EA as a whole has an anti-academic bias, just that this particular situation seems unlikely to reflect that.
Thanks for the input! But I didn’t claim that Nick is biased against academia—I just find the lack of clarity on this point and his explanation of why university grants were disqualified simply unsatisfactory.
As for your point that it is unlikely for people with PhDs to be biased, I think ex-academics can easily hold negative attitudes towards academia, especially after exiting the system.
Nevertheless, I am not concluding from this that Nick is biased (nor that he isn’t)—we just don’t have evidence for either claim, and at the end of the day, this shouldn’t matter. The procedure for awarding grants should be robust enough to prevent such biases from kicking in. I am not sure whether any such measures were undertaken in this case, though, which is why I am raising this point.
My guess would be: because EA is still a niche community favouring unpopular causes, effective altruists outside academia will be more willing to pursue effective ideas in the uncommon areas EAs favour, while university projects typically have more opportunities for funding outside EA, so it makes sense to prioritize funding non-academic projects. Of course, that’s only heuristic reasoning, and these aren’t the most solid assumptions for EA as a movement to make. I agree this should be addressed with more open and detailed discussion on this forum.
Arguably, life extension and anti-ageing research institutions are doing medical research outside academia. Indeed, most organizations in this space that I’ve heard effective altruists tout are either for-profit companies or NPOs to which they donate, such as SENS and the newly opened Longevity Research Institute. So while I don’t know about climate research centres, there are in fact a lot of people in EA who might defend a policy of redirecting resources for medical research towards non-academic institutions.
Nick Beckstead stated that the reason he didn’t pay as much attention to the EA Community and Long-Term Future Funds is that they were redundant with grants he would already have made through the Open Philanthropy Project. Of course, that still raises the question of why the EA Funds were presented differently to donors and the community, and why this wasn’t better addressed, which I intend to follow up on with CEA. Regarding the long-term future, Nick is correct that the Open Philanthropy Project has been making many smaller grants to small academic projects in AI safety/alignment, biosecurity and other areas, and I expect this trend will only increase in the near future. Having looked into it myself, and having talked to academics in EA who know the area from the inside better than I do, there are indeed fewer opportunities for academic research on effective ‘EA community-building’ than there will be for other areas EAs focus on. But projects run by effective altruists working at universities have received grants from EA Grants to build bridges into academia, such as the Effective Thesis Project, jointly run by the Czech EA Foundation and the Effective Altruists of Berkeley at the University of California, Berkeley.
Hi Evan, here’s my response to your comments (including another post of yours from above). By the way, that’s a nice example of industry-compatible research; I agree that such cases can indeed fall within what EAs wish to fund, as long as they are assessed as effective and efficient. I think this is an important debate, so let me challenge some of your points.
Your arguments seem to be based on the assumption that EAs can work on EA-related topics more effectively and efficiently than non-explicitly-EA-affiliated academics (but please correct me if I’ve misunderstood you!), and I think this is a prevalent assumption across this forum (at least when it comes to the topic of AI risks & safety). While I agree that being an EA can contribute to one’s motivation for the given research topic, I don’t see any rationale for the claim that EAs are more qualified to do scientific research relevant to EA than non-explicit-EAs. That would be like saying that, say, Christians are a priori more qualified to do research that advances certain Christian values. I think this is a non sequitur.
Whether a certain group of people can conduct a given project in an effective and efficient way shouldn’t primarily depend on their ethical and political mindset (though this may play a motivating role, as I’ve mentioned above), but on the methodological prospects of the given project, on its programmatic character, and on the capacity of the given scientific group to make an impact. I don’t see why EAs—as such—would satisfy these criteria any more than an expert in the given domain would, when placed within the framework of the given project. It is important to keep in mind that we are not talking here about the political activity of spreading EA ideas, but about scientific research, which has to be conducted with the necessary rigor in order to make an impact in the scientific community and beyond (otherwise nobody will care about the given researchers’ output). These are the kinds of criteria I wish were present in the assessment of the given grants, rather than who is an EA and who is not.
Second, prioritizing a certain type of group in a given domain of research increases the danger of confirmation bias. This is why feminist epistemologists have argued for diversity across the scientific community (rather than for the claim that only feminists should do feminist-compatible scientific research).
Finally, if there is a worry that academic projects focus too much on other issues, the call for funding can always be formulated in such a way that it specifies the desired topics. In this way, academic project proposals can be formulated having EA goals in mind.
I think it’s a common perception in EA that effective altruists can often do work as efficiently and effectively as academics not explicitly affiliated with EA. Often EAs also think academics can do some, if not most, EA work better than a random non-academic EA. AI safety stems from, and is more populated by, the rationality community, which on average is more ambivalent towards academia than EA is. It’s my personal opinion that EA may often have a comparative advantage in doing the research in-house. There are a number of reasons for this.
One is practical. Academics would often have to divide their time between EA-relevant research and teaching duties. EA tends to focus on unsexy research topics, so academics may be likelier to get grants for irrelevant research instead. And depending on the field, the politics of research can distort the epistemology of academia so that it won’t work for EA’s purposes. These are constraints that effective altruists working full-time at NPOs funded by other effective altruists don’t face, allowing them to dedicate all their attention to their organization’s mission.
Personally, my confidence in EA’s ability to make progress on research and other projects across a wide variety of goals is bolstered by original research in multiple causes being lauded by academics as some of the best on the subject they’ve seen. Of course, these are NPOs focused on addressing neglected problems in global poverty, animal advocacy campaigns, and other niche areas. Some of the biggest successes in EA have come from close collaborations with academia, and I think most EAs would encourage more cooperation between academia and EA. I’ve pushed in the past for EA to make more grants to academics doing sympathetic research. Attracting talent with an academic research background to EA can be difficult, and I agree with you that, overall, EA’s current approach doesn’t make sense.
I think you’ve got a lot of good points. I’d encourage you to make a post out of some of the comments made here. One reason your posts might be poorly received is that some causes in EA, especially AI safety/alignment, have received a lot of poor criticism in the past merely for trying to do formal research outside of academia. I could review a post before you publish it to the EA Forum and suggest edits so it would be better received. Either way, I think EA integrating more with academia is a great idea.
Hey Evan, thanks for the detailed reply and the encouragement! :) I’d love to write a longer post on this and I’ll try to do so as soon as I catch some more time! Let me just briefly reply to some of your worries concerning academia, which may be shared by others across the board.
Efficiency in terms of time—the idea that academics can’t do as much research as non-academics due to teaching duties is not necessarily true. I am speaking here for the EU, where in many cases both pre-docs and post-docs don’t have many (or any) teaching duties (e.g. I did my PhD in Belgium, where the agreement was that PhDs focus only on research). Moreover, even if you do have teaching duties, they may often inform your research, and as such they’re usually not “wasted time” (when it comes to research results). As for professors, this largely depends on the country, but there are many examples of academics with a professorial title whose productivity is extremely high in spite of their teaching duties.
Focusing on sexy topics—there is a misconception that sexy topics won’t pass through academia, while actually the opposite is the case: the sexier your topic, the more likely your project gets funded. The primary issue with any topic whatsoever is whether the project proposal shows how the topic will be investigated, i.e. the basic methodology. I don’t know where exactly this myth comes from, to be honest. I work in philosophy of science, and the more relevant your topic is to real-world problems, the more attractive your project proposal will be (at least in the current funding atmosphere). One reason this myth is so entrenched among EAs could be the experience of EAs within research projects which already had pre-determined goals, so that each researcher had to focus on whatever their boss asked of them. However, there are numerous possibilities across the EU to apply with one’s own project proposal, in which case you will do precisely what you propose. Another reason could be that EAs don’t have much experience with applications for funding, and have submitted project proposals that weren’t convincing in terms of methodology (writing proposals is a skill which needs to be learned like any other), leading them to conclude that academics don’t care about the given topics.
Using public funding for EA purposes—this relates to what you mention above, and I think it would be really great if this direction could be developed. For instance, academics within EA could form a sort of counseling body, helping EAs with their project proposals, choice of a PhD supervisor, etc. This would be a win-win for all kinds of reasons, from integrating EA-relevant research goals into academia to using public funding sources (rather than EA donations) for research. It could proceed via real-life workshops, online discussions, and so on. I’d be happy to participate in such a body, so maybe we should seriously consider this option.
Nick says these latest grants “disburse all the EA Funds under my management.” However, the grant amounts are ~10-15% less than the available cash the funds were reported as holding at the end of March, and the funds have presumably raised more money since then. Can Nick or someone from CEA please clarify?
Hi Jon, yes, this is because the numbers reported in March included accounts payable—money not yet held in cash but expected to come in. We later realised that some of the transactions we were expecting were not real donations, but rather several people making large ‘testing’ donations which then never got paid. We have resolved these issues and will be reporting fund balances in cash terms going forward; however, it did mean that the March numbers ended up being inflated.
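To make the reconciliation concrete, here is a minimal sketch of the arithmetic described above. Only the two payout totals come from the reports; the failed-pledge figure is an illustrative assumption, not actual EA Funds data.

```python
# Minimal sketch of the balance reconciliation described above.
# The failed-pledge amount is an illustrative assumption, not EA Funds data.

grants_disbursed = 917_000 + 526_000  # Long-Term Future + EA Community payouts

# March reporting included accounts payable: pledged donations not yet
# received, some of which turned out to be unpaid 'testing' donations.
failed_test_pledges = 200_000  # hypothetical amount that was never collected
march_reported_balance = grants_disbursed + failed_test_pledges

shortfall = 1 - grants_disbursed / march_reported_balance
print(f"Reported in March:  ${march_reported_balance:,}")
print(f"Disbursed in cash:  ${grants_disbursed:,}")
print(f"Apparent shortfall: {shortfall:.1%}")  # ~12%, within the 10-15% range
```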
We will be publishing a post in the coming weeks going into detail on the work we have been doing on the back end of EA Funds, and releasing an update to the site which automatically pulls the fund balances from our accounting system.
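As a rough illustration of what such an automatic update might involve, here is a hypothetical sketch; the endpoint URL and JSON field names are invented for this example, since CEA hasn’t published details of their accounting system’s interface.

```python
# Hypothetical sketch of pulling fund balances from an accounting system.
# The URL and response fields below are invented for illustration only.
import requests

ACCOUNTING_API = "https://accounting.example.org/api/funds"  # placeholder URL

def fetch_cash_balances() -> dict:
    """Return each fund's balance in cash terms (excluding accounts payable)."""
    resp = requests.get(ACCOUNTING_API, timeout=10)
    resp.raise_for_status()
    return {
        fund["name"]: fund["cash_balance"]  # cash only, per the new policy
        for fund in resp.json()["funds"]
    }

if __name__ == "__main__":
    for name, balance in fetch_cash_balances().items():
        print(f"{name}: ${balance:,.2f}")
```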
Thanks for clarifying!
Looking forward to seeing the upcoming post—it would be great if it could include a chart/table of donations (in cash terms) to each fund over time.
There are links missing from the EA Community Fund post to the OpenPhil writeups on 80k and CEA.
Fixed. Thanks, Markus!
Hi Peter, should be in the next few days, we’re just finalising the details on CEA side.
Perfect, thanks!