Re: Pre-paradigmatic science: see the above example of Wegener. If you want to discuss pre-paradigmatic research, let’s discuss it seriously. Let’s go into historical examples (or contemporary ones, all the same to me) and analyze the relevant evaluative criteria. You haven’t given me a single reason why my proposed criteria wouldn’t work in the case of such research. Just because there is scientific disagreement in a given field doesn’t imply that no experts (beyond a single one) can be consulted to evaluate the promise of a given innovative idea. Moreover, you haven’t shown at all why MIRI should be taken as effective in this domain. Again, my question is very simple: in view of which criteria? Check again the explanation given by OpenPhil: they appeal to their old write-up, from a point when they were hardly confident about giving MIRI 0.5 million USD, and to one reviewer’s conviction that a non-peer-reviewed paper is great. And then they give them seven times that amount of money.
All that you’re telling me in your post is that we should trust them. Not a single ‘standard’ has been offered as to *why* this should count as effective/efficient research funding.
But let me go through your points in order:
Your list of things that OpenPhil could do (e.g. specify the exact questions this new field is trying to solve, or describe what a successful project in this field should accomplish in five years) sounds really excellent. I do not think they’re at all easy in this case, however.
Sorry, but this is not an argument. Do explain why. If your next point is meant as that explanation, see my response to it below.
I think one of the things that makes Alignment a difficult problem (and the sort of thing you might predict if something were correctly in the reference class of ‘biggest problem for humanity’) is that there is no agreement on what research in the field should look like, nor even a formal specification of the questions; it is in a pre-paradigmatic stage. It took Eliezer three years of writing to convey some of the core intuitions, and even then that only worked for a small set of people. I believe Paul Christiano has not written a broadly understandable description of his research plans for similar reasons.
So are you saying that because we are at a pre-paradigmatic stage, there are no epistemic standards we can call upon? That anything goes? Sorry, but not even Kuhn would agree with that. We still have shared epistemic values, even though we may interpret them differently. Again: communication breakdown is not inevitable despite potential incommensurabilities between the approaches. The least that can be done is that, within the given novel proposal, the epistemic standards are explicated and justified. Otherwise you are equating novel scientific research with any nonsense approach. No assessment means anything goes, and I don’t think you want to go down that path (or next we’ll have pseudo-scientific crackpots running wild, arguing that their research agenda is simply in a “pre-paradigmatic stage”).
However, I’m strongly in agreement that this would be awesome for the field. I recently realised how much effort MIRI themselves have put into trying to set up the basic questions of the field, even though it’s not been successful so far. I can imagine that doing so would be a significant success marker for any AI Alignment research group that OpenPhil funds, and it’s something I think about working on myself from time to time.
This is just your personal opinion, hardly an argument (unless you’re an expert in the field of AI, in which case it could count as higher-order evidence, but then please provide some explanation as to why their research is promising and why we can expect it to be effective).
I have a different feeling to you regarding the funding/writing ratio. I feel that OpenPhil’s reasons for funding MIRI are basically all in the first write-up, and the subsequent (short) write-up contains just the variables that are now different.
In particular, they do say this typically wouldn’t be sufficient for funding a research org, but given the many other positive signs in the first write-up, it was sufficient to 2.5x the grant amount (500k/year to 1.25mil/year). I think this is similar to the grant amounts given to various other grantees in this area, and also much smaller than the total amount OpenPhil is interested in funding this area with (so it doesn’t seem a surprising amount to me).
Their grant is way higher than the most prestigious ERC grants (an ERC Advanced Grant is capped at roughly €2.5 million over five years, i.e. about €500k per year, against $1.25 million per year here), so no, it’s not a usual amount of money. And the justification given for their initial grant can hardly count for this one without added explication.
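And to be clear, your ‘2.5x’ and my ‘seven times’ need not contradict one another; here is a minimal arithmetic sketch, assuming (my assumption, not something stated in either write-up) a roughly three-year grant period:

$$
\frac{\$1.25\text{M/yr}}{\$0.5\text{M/yr}} = 2.5\times \;\;\text{(yearly rate)}, \qquad \frac{3 \times \$1.25\text{M}}{\$0.5\text{M}} = 7.5\times \;\;\text{(total vs. the original grant)}.
$$

The yearly rate grew 2.5-fold, while the total committed relative to the original grant grew roughly seven-fold, which is exactly why the size of the commitment calls for more justification, not less.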
I see this as a similar problem for the other grants to more ‘mainstream’ AI Alignment researchers OpenPhil funds; it’s not clear to me that they’re working on the correct technical problems either, because the technical problems have not been well specified, because they’re difficult to articulate.
Precisely: which is why it may very well be the case that, at this point, there is hardly anything that can be done (the research program has no positive or negative heuristics, to use Lakatosian terms), and why I wonder whether it is worthy of pursuit to begin with. Again, we need criteria, and currently there are none; there is just the hope that some research will result in something. And why assume others couldn’t do the same job? That is an extremely poor view of an extremely broad scientific community. It almost sounds as if you’re saying, “the scientific community thinks X, but my buddies think X is not the case, so we need to fund my buddies.” I don’t think you want to take that road, or we’ll again slip into junk science.