Executive summary: The Unjournal’s evaluations of a meta-analysis on reducing meat/animal-product consumption found the project ambitious but methodologically limited; the author argues meta-analysis can still be valuable in this heterogeneous area if future work builds on the shared dataset with more systematic protocols, robustness checks, and clearer bias handling—while noting open cruxes and incentive barriers to actually doing that follow-up (exploratory, cautiously optimistic).
Key points:
The original meta-analysis reports consistently small effects and no well-validated intervention class for reducing meat/animal-product consumption, but Unjournal evaluators judged the methods insufficiently rigorous to support strong conclusions.
Substantive critiques include: biased missing-data imputation (e.g., fixed near-zero effects), discarding multiple outcomes per study despite multilevel capacity, inadequate risk-of-bias assessment (e.g., selective reporting, attrition), and a non-reproducible or only partially systematic search strategy.
One author’s response defends pragmatic choices in a vast, heterogeneous literature (prior-reviews-first search; strict inclusion criteria in lieu of formal RoB; many transparent judgment calls) and invites others to re-analyze—though this stance was itself critiqued as treating “innovation” as self-justifying without validating reliability.
The post’s author is sympathetic to pragmatism but calls for explicit engagement with the critiques and a more systematic, buildable approach (clear protocols, reproducible searches, formal bias assessment alongside strict inclusion, and robustness/multiverse analyses).
Core cruxes: whether meta-analysis is useful amid high heterogeneity; whether to follow academic standards or a distinct, decision-focused paradigm; and whether there are incentives/funding to sustain rigorous, iterative synthesis beyond the first publication.
Recommendation/implication: pursue follow-up work using the shared database, improve transparency and methods, and consider alternative incentive structures (e.g., Unjournal’s continuous evaluation) so the animal-welfare/EA community can progressively refine answers to a few pivotal questions.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.