I think “an unsolved problem” could indicate several things. It could be:
We have evidence that all of the commonly tried approaches are ineffective, i.e., we have measured all of their effects and they are tightly bounded as being very small
We have a lack of evidence, thus very wide credible intervals over the impact of each of the common approaches.
To me, the distinction is important. Do you agree?
You say above
meaningful reductions either have not been discovered yet or do not have substantial evidence in support
But even “do not have substantial evidence in support” could mean either of the above … a lack of evidence, or strong evidence that the effects are close to zero. At least to my ears.
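To make the distinction concrete, here is a toy sketch (entirely made-up numbers, not anyone's actual data) of why the same point estimate can mean either thing: a fixed-effect summary of many precise studies gives a narrow interval around zero (evidence of absence), while the same summary of two noisy studies gives an interval wide enough to contain meaningful effects in both directions (absence of evidence).

```python
# Toy illustration of EoA vs. AoE with invented numbers.
# Effects are in percentage-point reductions; standard errors are assumed.

def fixed_effect_summary(effects, standard_errors):
    """Inverse-variance-weighted mean effect and its approximate 95% interval."""
    weights = [1 / se**2 for se in standard_errors]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Scenario A (EoA-like): four precise studies, all effects near zero.
eoa = fixed_effect_summary([0.01, -0.02, 0.00, 0.01], [0.01] * 4)

# Scenario B (AoE-like): two noisy studies, little learned either way.
aoe = fixed_effect_summary([0.05, -0.03], [0.20, 0.20])

print(eoa)  # narrow interval hugging zero
print(aoe)  # wide interval spanning both harmful and helpful effects
```

Both scenarios are consistent with “no substantial evidence in support,” but only the first licenses the conclusion that the approach doesn't work.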
As for ‘hedge this’, I was referring to the paper not to the response, but I can check this again.
For what it’s worth, I read that abstract as saying something like, “within the class of interventions studied so far, the literature has yet to settle onto any intervention that can reliably reduce animal product consumption by a meaningful amount, where meaningful amount might be a 1% reduction at Costco scale or long-term 10% reduction at a single cafeteria. The class of interventions being studied tends to be informational and nudge-style interventions like advertising, menu design, and media pamphlets. When effect sizes differ for a given type of intervention, the literature has not offered a convincing reason why a menu-design choice works in one setting versus another.”
Okay, now that I’ve typed that up, I can see why “unsolved problem” is unclear.
And I’m probably taking a lot of leaps of faith in interpretation here.
From the POV of our core contention (that we don’t currently have a validated, reliable intervention to deploy at scale), whether this is because of absence of evidence (AoE) or evidence of absence (EoA) is hard to say. I don’t have an overall answer, and ultimately both roads lead to “unsolved problem.”
We can cite good arguments for EoA (these studies are stronger than the norm in the field but show weaker effects, and that relationship should be troubling for advocates) or AoE (we’re not talking about very many studies at all), and ultimately I think the line between the two is in the eye of the beholder.
Going approach by approach, my personal answers are:
choice architecture is probably AoE: it might work better than expected, but we just don’t learn very much from two studies (I am working on something about this separately)
the animal welfare appeals are more EoA, especially those from animal advocacy orgs
social psych approaches I’m skeptical of, but there weren’t many high-quality papers, so I’m not so sure (see here for a subsequent meta-analysis of dynamic-norms approaches).
I would recommend health appeals for older folks and environmental appeals for Gen Z. So there I’d say we have evidence of efficacy, but expect effects on the order of a few percentage points.
Were I discussing this specifically with a funder, I would say: if you’re going to do one of the meta-analyzed approaches (psych, nudge, environment, health, animal welfare, or some hybrid thereof), you should expect small effect sizes unless you have a strong reason to believe your intervention is meaningfully better than the category average. For instance, animal welfare appeals might not work in general, but maybe watching Dominion is unusually effective. However, as we say in our paper, there are a lot of cool ideas that haven’t been tested rigorously yet, and from the point of view of knowledge, I’d like to see those get funded first.