My 2 cents, as a scientist currently in a PhD program:
Scientists will largely resist this. They don’t want all their data out in the open, mostly out of fear that they’ve made a mistake that will be picked up on. “Imposter syndrome” is very common in science (especially among new scientists, who run most of the actual experiments; more established scientists spend more time writing grants for more funding). It’s also just a pain in the ass to gather all your data and format it, etc.
That said, I think this would be a very good thing (for scientific progress, not for scientists themselves). In particular, I think it would be very useful for building off other work. There have been tons of times where I’ve wanted to know exactly how some group gathered some data, and their paper didn’t quite specify.
Since this seems like something very good that vested interests will likely oppose, I agree it is a great cause to push for: it likely won’t happen on its own, but if we can build the proper incentive structures, then we could, in theory, alter how the game is played.
Really enjoying the Oxford Prioritisation Project!
One of my favourite comments from the Anonymous EA comments was the wish that EAs would post “little 5-hour research overviews of the best causes within almost-random cause areas and preliminary bad suggested donation targets.” (http://effective-altruism.com/ea/16g/anonymous_comments/)
I expect the average OPP post takes more than 5 hours, and 5 hours might be an underestimate of how long a useful overview would take without prior subject knowledge. But both that comment and the OPP seem to be in the same spirit, and it’s great to see all this information shared through an EA lens.
I don’t follow the argument that only one funder implies little room for more funding (RFMF):
“I agree with gwern that it is concerning that the small number of Open Science orgs are mostly all funded by LJAF, despite the general awareness of the problems. For our purposes, this probably means that OS has little RFMF right now, because the opportunities are already filled by LJAF.”
Could you say more about this?
I think Catherio’s response to Gwern on that LessWrong thread is also worth reading. The gist is that the replication effort was already underway but unfunded when Arnold decided to fund it. So if you’re looking for funding opportunities, you might want to ask around about other attempts within high-value professions to coordinate on better standards.
I’d second that—it’s not the most wieldy text editor. Not sure how easy it would be to remedy. Going into the HTML gets you what you want in the end, but it’s undue effort.
Always pleased to see people collating information like this!
Thanks for doing this!
Formatting seems readable to me. It would be nice to not include the entire article in a blockquote, though.
Would it make sense to donate to the LJAF for promoting open science?
If you were trying to mimic them, I’d give more to some of their grantees, like METRICS or COS.