Regardless, I think that there would be a lot of value in these sorts of reports getting peer reviewed by academics/experts, especially where they are influential in the EA community.
I agree, but I don’t think this is what the Unjournal should handle right now. It should be done, but maybe with a different vehicle and approach.
I’d prefer maybe 10x as much research, at .1x the quality.
I tend to disagree with this. My concern is that much (perhaps most) research is ‘vomited out’ to satisfy tenure requirements and other pressures to “publish something”. There is just so much research and writing out there to wade through.
I typically question...
Are the empirical results trustworthy? Can we have confidence in their validity and generalizability … and the interpretation the authors give?
Is anyone reading these papers (and if so, are they understanding them or just skimming and getting the wrong impressions)?
Are these being combined with other work, and with replication, to give a general picture of ‘what we know and with what confidence’? Is the work contributing to a cumulative body of knowledge that improves our ability to learn more and make use of it?
Thanks for replying.

When I say I’d prefer maybe 10x as much research at .1x the quality, I don’t want to lose quality overall. Instead, I’d like more small-scale, incremental, and iterative research, where the rigour and the length increase in proportion to the expected ROI. For instance, this could involve a series of small studies that increase in quality as they show promising evidence, followed by a rigorous review and replication process.

I also think that a lot of the current research vomit exists because we don’t let people publish short and simple articles. If you took most articles and pulled out just their methods, results, and conclusions, you would give the reader about 95% of the value in maybe a tenth of the words of the full article.

If a researcher only had to write those sections and a brief wrapper, rather than plan and coordinate a whole document, they might produce and disseminate their insights in 2-5% of the time it currently takes.
I agree on most of your counts.