Ah, I see. Thanks for responding.
I notice that until now I’ve been conflating whether the OpenPhil grant-makers themselves should be a committee, versus whether they should bring in a committee to assess the researchers they fund. I realise you’re talking about the latter, while I was talking about the former. Regarding the latter (in this situation), here is what my model of a senior OpenPhil staff member thinks in this particular case of AI.
If they were attempting to make grants in a fairly mainstream area of research (e.g. transfer learning on racing games), then they would absolutely have wanted to use a panel when considering some research. However, OpenPhil is attempting to build a novel research field that is not very similar to existing fields. One of the big things OpenPhil has changed their mind about in the past few years is going from believing there was expert consensus in AI that AGI would not be a big problem, to believing that there is no relevant expert class on the topic of forecasting AGI capabilities and timelines; the expert class most people think of (ML researchers) is much better at assessing the near-term practicality of ML research.
As such, there was no relevant expert class in this case, and OpenPhil picked an unusual method of determining whether to give the grant (one that heavily weighted variables such as MIRI’s strong track record of thinking carefully about long-term AGI-related issues). I daresay MIRI and OpenPhil would not expect MIRI to pass the test you are proposing, because they are trying to do something qualitatively different from anything currently going on in the field.
Does that feel like it hits the core point you care about?
If that does resolve your confusion about OpenPhil’s decision, I will further add:
If your goal is to try to identify good funding opportunities, then we are in agreement: the fact that OpenPhil has funded an organisation (plus the associated write-up about why) is commonly not sufficient information to persuade me that it’s sufficiently cost-effective that I should donate to it over, say, a GiveWell top charity.
If your goal, however, is to figure out whether OpenPhil in general is an epistemically sound organisation, I would look to variables other than the specific grants where the reasoning is least transparent and looks the most wrong. The main reason I have an unusually high amount of trust in OpenPhil’s decisions is that I’ve seen other positive epistemic signs from its leadership and key research staff, not that I’ve assessed a single grant data point. My model of OpenPhil’s competence instead weights more:
Their hiring process
Their cause selection process
The research I’ve seen from their key researchers (e.g. Moral Patienthood, Crime Stats Replication)
Significant epistemic signs from the leadership (e.g. Three Key Things I’ve Changed My Mind About, building GiveWell)
When assessing the grant-making in a particular cause, I’d look to the particular program manager and see what their output has been like.
Personally, in the first four cases I’ve seen remarkably strong positive evidence. Regarding the last, I actually haven’t got much evidence, since the individual program managers do not tend to publish much. Overall I’m very impressed with OpenPhil as an org.
(I’m about to fly on a plane, can find more links to back up some claims later.)
Thanks, Benito, there are quite a few issues we agree on, I think. Let me give names to some points in this discussion :)
General work of OpenPhil. First, let me state clearly that my post in no way challenges (nor aimed to challenge) OpenPhil as an organization overall. To the contrary: I think this one hiccup is a rather bad example and poses a danger to the otherwise great stuff. Why? Because the explication is extremely poor and the money extremely large. So this is my general worry concerning their PR (taking into account their notes on not needing to justify their decision, etc. - in this case I think this should have been done, just as they did in the case of their previous, much smaller grant to MIRI).
Funding a novel research field. I do understand their idea was to fund a novel approach to this topic or even a novel research field. Nevertheless, I still don’t see why this was a good way to go about it, since less risky paths are easily available. Consider the following:
OpenPhil makes an open call for research projects targeting the novel domain: the call specifies precisely which questions the projects should tackle;
OpenPhil selects a panel of experts who can evaluate both the given projects and the competence of the applicants to carry out the project;
OpenPhil provides milestone criteria in view of which the grant would be extended: e.g. the grant may initially be for a period of 5 years (1.5 mil EUR is usually considered sufficient to fund a team of 5 members over the course of 5 years - see the rough arithmetic sketched below), after which the project participants have to show the effectiveness of their project and apply for additional funding.
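For a rough sense of scale, here is the per-researcher arithmetic implied by that figure (the even split across team members and years is purely illustrative, not something anyone in this thread has stated):

```python
# Back-of-the-envelope budget sketch for the milestone-based proposal above.
# The even split across researchers and years is an illustrative assumption.

total_budget_eur = 1_500_000   # the 1.5 mil EUR figure mentioned above
team_size = 5                  # researchers
duration_years = 5             # years

per_person_year = total_budget_eur / (team_size * duration_years)
print(f"Implied budget per researcher per year: {per_person_year:,.0f} EUR")
# -> Implied budget per researcher per year: 60,000 EUR
```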
The benefits of such a procedure would be numerous:
avoiding confirmation bias: as we all know very well here, confirmation bias can easily be present when it comes to controversial topics, which is why a second opinion is extremely important. This doesn’t mean we shouldn’t allow hard-headed researchers to pursue their provocative ideas, nor that only dominant-theory-compatible ideas should be considered worthy of pursuit. Instead, what needs to be assured is that prospective values, suggesting the promising character of the project, are satisfied. Take for instance Wegener’s hypothesis of continental drift, which he proposed in 1912. Wegener was way too confident of the acceptability of his theory, which is why many prematurely rejected his ideas (my coauthor and I argue that such a rejection was clearly unjustified). Nevertheless, his ideas were indeed worthy of pursuit, and the whole research program had clear paths that could have been pursued (despite the numerous problems and anomalies). So a novel, surprising idea challenging an established one isn’t the same as a junk-scientific hypothesis which shows no prospective value whatsoever. We can assess its promise, no matter how risky it is. For that we need experts who can check its methodology.
And since MIRI’s current work concerns decision theory and ML, it’s not as if their methodology can’t be checked in this way, in view of the goals of the project set in advance by OpenPhil (so the check-up would have to concern the question: how well does this method satisfy the required goals?).
Another benefit of the above procedure is ensuring that the most competent scholars lead the given project. MIRI may have good intentions, but how do we know that some other scholars wouldn’t perform the same job even better? There must be some kind of competence check-up, and a time-sensitive effectiveness measure. Number of publications is one possible measure, but not the most fortunate one (I agree on this with others here). But then we need something else, for example a single publication with a decent impact, or a few publications over the course of a few years, each of which exhibits a strong impact. Otherwise, how do we know there’ll be anything effective done within the project? How do we know these scholars rather than some others will do the job? Even if we like their enthusiasm, unless they reach the scientific community (or the community of science policy makers), how will they be effective? And unless they manage to publish in high-impact venues (say, conference proceedings), how will they reach these communities?
Financing more than one project and thus hedging one’s bets: why give all 3.75 mil USD to one project instead of awarding it to, say, two different groups (or, as suggested above, to one group, but in phases)?
While I agree that funding risky, potentially ground-breaking research is important and may follow different standards than the regular academic paths, we still need some standards, and those I just suggested seem to me strictly better than the ones employed by OpenPhil in the case of this particular grant. Right now, it all just seems like a buddy system: my buddies are working on this ground-breaking stuff and I trust them, so I’ll give them cash for that. Doesn’t sound very effective to me :p
Gotcha. I’ll probably wrap up with this comment; here are my last few thoughts (all on the topic of building a research field):
(I’m commenting from my phone, so sorry if paragraphs are unusually long; if they are, I’ll try to add more breaks later.)
Your list of things that OpenPhil could do (e.g. specify the exact questions this new field is trying to solve, or describe what a successful project should accomplish in this field in five years) sounds really excellent. I do not think they’re at all easy in this case, however.
I think one of the things that makes Alignment a difficult problem (and is the sort of thing you might predict if something were correctly in the reference class of ‘biggest problem for humanity’) is that there is no agreement on what research in the field should look like, or even a formal specification of the questions; it is in a pre-paradigmatic stage. It took Eliezer 3 years of writing to convey some of the core intuitions, and even then that only worked for a small set of people. I believe Paul Christiano has not written a broadly understandable description of his research plans for similar reasons.
However, I’m strongly in agreement that this would be awesome for the field. I recently realised how much effort MIRI themselves have put into trying to set up the basic questions of the field, even though it’s not been successful so far. I can imagine that doing so would be a significant success marker for any AI Alignment researcher group that OpenPhil funds, and it’s something I think about working on myself from time to time.
I have a different feeling from you regarding the funding/writing ratio. I feel that OpenPhil’s reasons for funding MIRI are basically all in the first write-up, and the subsequent (short) write-up contains just the variables that are now different.
In particular, they do say this typically wouldn’t be sufficient for funding a research org, but given the many other positive signs in the first write-up, it was sufficient to 2.5x the grant amount (500k/year to 1.25mil/year). I think this is similar to the grant amounts for various other grantees in this area, and also much smaller than the total amount OpenPhil is interested in funding this area with (so it doesn’t seem a surprising amount to me).
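As a quick sanity check on those numbers, here is the arithmetic using only figures mentioned in this thread (the three-year term is my inference from the 3.75 mil USD total discussed above, not something stated in the write-ups):

```python
# Grant arithmetic, reconstructed from the figures quoted in this thread.
# The three-year term is inferred from the 3.75m total, not stated explicitly.

previous_rate = 500_000      # USD/year, the earlier (much smaller) grant
new_rate = 1_250_000         # USD/year, the new grant
total_grant = 3_750_000      # USD, the total figure discussed in this thread

multiplier = new_rate / previous_rate        # increase in the yearly amount
implied_term = total_grant / new_rate        # years at the new rate

print(f"Yearly amount multiplier: {multiplier:.1f}x")   # 2.5x
print(f"Implied grant term: {implied_term:.0f} years")  # 3 years
```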
I see this as a similar problem for the other grants to more ‘mainstream’ AI Alignment researchers that OpenPhil funds; it’s not clear to me that they’re working on the correct technical problems either, because the technical problems have not been well specified, since they’re difficult to articulate.
My broad-strokes thought, again, is that when you choose to make grants that your models say have a chance of being massive hits, you will occasionally just look like you’re making silly mistakes, even once people take into account that this is how they should expect you to look. Having personally spent a bunch of time thinking about MIRI’s work, I have an idea of the models OpenPhil has built that are hard to convey, but it seems reasonable to me that from your epistemic position this looks like a blunder. I think that OpenPhil probably knew it would look like this to some, and decided to make the call anyway.
Final note: of your initial list of three things, the open call for research is the one I think is least useful for OpenPhil. When you’re funding at this scale in any field, the thought is not “what current ideas do people have that I should fund?”, but “what new incentives can I add to this field?” And when you’re adding new incentives that are not those that already exist, it’s useful to spend time initially talking a lot with the grantees to make sure they truly understand your models (and you theirs), so that the correct models and incentives are propagated.
For example, I think if OpenPhil had announced a $100 grant scheme for Alignment research, many existing teams would’ve explained why their research already fits, and started using these terms, and it would’ve impeded the ability to build the intended field. I think this is why, even in cause areas like criminal justice and farm animal welfare, OpenPhil has chosen to advertise less and instead open 1-1 lines of communication with orgs they think are promising.
Letting e.g. a criminal justice org truly understand what you care about, and what sorts of projects you are and aren’t willing to fund, helps them plan accordingly for the future (as opposed to going along as usual and then suddenly finding out you aren’t interested in funding them any more). I think the notion that they’d be able to succeed by announcing a call for grants to solve a problem X is too simplistic a view of how models propagate; in general, to cross significant inferential gaps you need (on the short end) several extensive 1-1 conversations, and (on the longer end) textbooks with exercises.
Added: More generally, how many people you can quickly fund to do work is a function of how inferentially far away you are from the work that the people you hope to fund are already doing.
(On the other hand, you want to fund them well to signal to the rest of the field that there is real funding here if they provide what you’re looking for. I’m not sure exactly how to make that tradeoff.)
Re: Pre-paradigmatic science: see the above example of Wegener. If you want to discuss pre-paradigmatic research, let’s discuss it seriously. Let’s go into historical examples (or contemporary ones, all the same to me) and analyze the relevant evaluative criteria. You haven’t given me a single reason why my proposed criteria wouldn’t work in the case of such research. Just because there is scientific disagreement in the given field doesn’t imply that no experts (beyond a single one) can be consulted to evaluate the promise of the given innovative idea. Moreover, you haven’t shown at all why MIRI should be taken as effective in this domain. Again, my question is very simple: in view of which criteria? Check again the explanation given by OpenPhil: they call upon the old explanation, from when they were hardly certain about giving them 0.5 mil USD, and one reviewer’s conviction that a non-peer-reviewed paper is great. And then they give them 7 times the same amount of money.
All that you’re telling me in your post is that we should trust them. Not a single standard has been offered as to why this should count as effective/efficient research funding.
But, let me go through your points in order:
Your list of things that OpenPhil could do (e.g. specify the exact questions this new field is trying to solve, or describe what a successful project should accomplish in this field in five years) sounds really excellent. I do not think they’re at all easy in this case, however.
Sorry, this is not an argument. Do explain why. If the next point is meant to be the why, see my response to it below.
I think one of the things that makes Alignment a difficult problem (and is the sort of thing you might predict if something were correctly in the reference class of ‘biggest problem for humanity’) is that there is no agreement on what research in the field should look like, or even a formal specification of the questions; it is in a pre-paradigmatic stage. It took Eliezer 3 years of writing to convey some of the core intuitions, and even then that only worked for a small set of people. I believe Paul Christiano has not written a broadly understandable description of his research plans for similar reasons.
So are you saying that because we are at a pre-paradigmatic stage there are no epistemic standards we can call upon? So, anything goes? Sorry, but not even Kuhn would agree with that. We still have shared epistemic values, even though we may interpret them differently. Again: communication breakdown is not necessary despite potential incommensurabilities between the approaches. The least that can be done is that, within the given novel proposal, the epistemic standards are explicated and justified. Otherwise, you are equating novel scientific research with any nonsense approach. No assessment means anything goes, and I don’t think you wanna go down that path (or next we’ll have pseudo-scientific crackpots running wild, arguing their research agenda is simply in a “pre-paradigmatic state”).
However, I’m strongly in agreement that this would be awesome for the field. I recently realised how much effort MIRI themselves have put into trying to set up the basic questions of the field, even though it’s not been successful so far. I can imagine that doing so would be a significant success marker for any AI Alignment researcher group that OpenPhil funds, and it’s something I think about working on myself from time to time.
This is just your personal opinion, hardly an argument (unless you’re an expert in the field of AI, in which case it could count as higher-order evidence, but then please provide some explanation as to why their research is promising, and why we can expect it to be effective).
I have a different feeling from you regarding the funding/writing ratio. I feel that OpenPhil’s reasons for funding MIRI are basically all in the first write-up, and the subsequent (short) write-up contains just the variables that are now different.
In particular, they do say this typically wouldn’t be sufficient for funding a research org, but given the many other positive signs in the first write-up, it was sufficient to 2.5x the grant amount (500k/year to 1.25mil/year). I think this is similar to the grant amounts for various other grantees in this area, and also much smaller than the total amount OpenPhil is interested in funding this area with (so it doesn’t seem a surprising amount to me).
Their grant is way higher than the most prestigious ERC grants, so no… it’s not a usual amount of money. And the justification given for their initial grant can hardly count for this one with no added explication.
I see this as a similar problem for the other grants to more ‘mainstream’ AI Alignment researchers that OpenPhil funds; it’s not clear to me that they’re working on the correct technical problems either, because the technical problems have not been well specified, since they’re difficult to articulate.
Precisely: which is why it may very well be the case that at this point there is hardly anything that can be done (the research program has no positive and negative heuristics, to use Lakatosian terms), which is why I wonder why it is worthy of pursuit to begin with. Again, we need criteria, and currently there is nothing - just hope that some research will result in something. And why assume others couldn’t do the same job? This is an extremely poor view of an extremely broad scientific community. It almost sounds as if you’re saying “the scientific community thinks X, but my buddies think X is not the case, so we need to fund my buddies.” I don’t think you wanna take that road, or we’ll again slip into junk science.