Minor points: (1) I think it is standard practice for peer review to be kept anonymous, (2) some of the things you are mentioning seem like norms about grants and writeups that will reasonably vary based on context, (3) you’re just looking at one grant out of all that Open Phil has done, (4) while you are looking at computer science, their first FDT paper was accepted at the Formal Epistemology Workshop, and a professional philosopher of decision theory who attended spoke positively about it.
More importantly, once MIRI’s publication record is treated with the appropriate nuance, your post doesn’t show why they should be viewed as inferior to any unfunded alternatives. Open Phil has funded other AI safety projects besides MIRI, and there is not much being done in this field, so the grants don’t commit them to the claim that MIRI is better than most AI safety projects. So we don’t have an empirical basis for doubting their loose, hits-based giving approach. One might presume that formal, traditional institutional funding policies would do better, but it is difficult to argue that point with the level of certainty that tells us the situation is “disturbing”. Those policies are costly: they take more time and people to implement.
(1) I think it is standard practice for peer review to be kept anonymous,
The problem wasn’t that the reviewer was anonymous, but that there was no access to the review report.
(2) some of the things you are mentioning seem like norms about grants and writeups that will reasonably vary based on context,
Sure, but that doesn’t mean no criteria should be available.
(3) you’re just looking at one grant out of all that Open Phil has done,
Indeed, I am concerned with one extremely large grant. I find the sum large enough to warrant concern, especially since the same can happen with future funding strategies.
(4) while you are looking at computer science, their first FDT paper was accepted at the Formal Epistemology Workshop, and a professional philosopher of decision theory who attended spoke positively about it.
I was raising an issue concerning journal articles, which are important even in computer science for solidifying research results. Conference proceedings matter for announcing novel results, but the real rigor of peer review comes through in journal publications (otherwise, journals would be pointless in this domain).
As for the rest of your post, I advise comparing the output of groups of similar or smaller size that have been funded via prestigious grants; you’ll notice a difference.
Open Phil gave $5.6MM to Berkeley for AI, even though Russell’s group is new and its staff and faculty are still fewer than MIRI’s. They gave $30MM to OpenAI, and $1-2MM to many other groups. Of course EAs can give more to particular groups; that’s because we’re EAs, and we’re willing to give a lot of money to wherever it will do the most good in expectation.
Again, you are missing the point: my argument concerns the criteria in view of which projects are assessed as worthy of funding. Such criteria exist and are employed by various funding institutions across academia. I haven’t seen any such criteria in this case (nor a justification that they are conducive to effective and efficient research), which is why I’ve raised the issue.
we’re willing to give a lot of money to wherever it will do the most good in expectation.
And my focus is on which criteria are used, or should be used, to decide which research projects will do the most good in expectation. Currently such criteria are lacking, as is their justification in terms of effectiveness and efficiency.
Open Phil has a more subjective approach; others have talked about their philosophy here. That means it’s not easily verifiable to outsiders, but that’s of no concern to Open Phil, because it is their own money.
Again: you are missing my point :) I don’t care if it’s their money or not; that’s beside my point. What I care about is: are their funding strategies rooted in the standards that are conducive to effective and efficient scientific research? Otherwise, it makes no sense to label them an organization that conforms to the standards of EA, at least in the case of such practices. Subjective, unverifiable, etc. has nothing to do with such standards (= conducive to effective and efficient scientific research).
are their funding strategies rooted in the standards that are conducive to effective and efficient scientific research?
As I stated already, “One might presume that formal, traditional institutional funding policies would do better, but it is difficult to argue that point with the level of certainty that tells us the situation is ‘disturbing’. Those policies are costly: they take more time and people to implement.” It is, in short, your conceptual argument about how to do EA. So, people disagree. Welcome to EA.
Subjective, unverifiable, etc. has nothing to do with such standards
It has something to do with the difficulty of showing that a group is not conforming to the standards of EA.
Oh no, this is not just a matter of opinion. There are numerous articles written in the field of philosophy of science aimed precisely at determining which criteria help us evaluate promising scientific research. So there is actually quite some scholarly work on this (and it is a topic of my own research, as a matter of fact).
So yes, I’d argue that the situation is disturbing, since an immense amount of money is going into research for which there is no good reason to suppose that it is effective or efficient.
Part of being in an intellectual community is being able to accept that you will think that other people are very wrong about things. It’s not a matter of opinion, but it is a matter of debate.
There are numerous articles written in the field of philosophy of science aimed precisely at determining which criteria help us evaluate promising scientific research
Oh, there have been numerous articles, in your field, claimed by you. That’s all well and good, but it should be clear why people will have reasons for doubts on the topic.
Part of being in an intellectual community is being able to accept that you will think that other people are very wrong about things. It’s not a matter of opinion, but it is a matter of debate.
Sure! Which is why I’ve been exchanging arguments with you.
Oh, there have been numerous articles, in your field, claimed by you.
Now what on earth is that supposed to mean?
What are you trying to say with this? You want references, is that it? I have no idea what this claim is supposed to stand for :-/
That’s all well and good, but it should be clear why people will have reasons for doubts on the topic.
Sure, and so far you haven’t given me a single good reason. The only thing you’ve done is reiterate the lack of transparency on OpenPhil’s side.
Sure! Which is why I’ve been exchanging arguments with you.
And, therefore, you would be wise to treat Open Phil in the same manner, i.e. as something to disagree with, not something to attack as not being Good Enough for EA.
Now what on earth is that supposed to mean? What are you trying to say with this? You want references, is that it? I have no idea what this claim is supposed to stand for :-/
It means that you haven’t argued your point with the rigor and comprehensiveness required to convince every reasonable person. (No, stating “experts in my field agree with me” does not count here, even though it’s a big part of it.)
Sure, and so far you haven’t given me a single good reason.
Other people have discussed and linked Open Phil’s philosophy; I see no point in rehashing it.
I don’t have the time to join the debate, but I’m pretty sure Dunja’s point isn’t “I know that OpenPhil’s strategy is bad” but “Why does everyone around here act as though it is knowable that their strategy is good, given their lack of transparency?” It seems like people act as though OpenPhil’s strategy is good, and aren’t explicitly clear that they don’t have the info required to assess the strategy.
Dunja, is that accurate?
(Small note: I’d been meaning to try to read the two papers you linked me to above a couple of months ago, about continental drift and whatnot, but I couldn’t get non-paywalled versions. If you have them, or could send them to me at gmail.com preceded by ‘benitopace’, I’d appreciate that.)
Thanks, Benito, that sums it up nicely!
It’s really about the transparency of the criteria, and that’s all I’m arguing for. I am also open to changing my views on the standard criteria, etc.; I just want us to start the discussion with some rigor concerning how best to assess effective research.
As for my papers: crap, it’s embarrassing that I’ve linked paywalled versions. I have them on my academia page too, but I guess those can also only be accessed within that website… I’ll have to think of a proper free solution here. In any case, please don’t feel obliged to read my papers; there’s for sure lots of other, more interesting stuff out there! If you are interested in the topic, it’s enough to scan them to check the criteria I use in these assessments :) I’ll email them in any case.
Yeah, that’s a worthy point, but people are not really making decisions on this basis. It’s not like GiveWell, which recommends where other people should give. Open Phil has always ultimately been Holden doing what he wants and not caring what other people think. It’s like those “where I donated this year” blog posts from the GiveWell staff. Yeah, people might well be giving too much credence to their views, but that’s a rather secondary thing to worry about.