[Previous comment deleted because I accidentally sent an unfinished one.]
Thanks for the example! That makes sense, and it makes me wonder if part of the disagreement came from thinking about different reference classes. I agree that, in general, the research we did in our first year of operations (2018/2019) is well below the quality standard we expect of ourselves now, or even what we expected of ourselves in 2020. I agree it is easy to find many errors (though not decision-relevant ones) in our research from that year. That is part of the reason those reports are no longer on the website.
That being said, I still broadly support our decision not to spend more time on research that year, because doing so would have come with significant tradeoffs. At the time, there was no other organization whose research we could have relied on, and the alternative to the assessment you mention was either to not compare interventions across species (or to reduce the comparison to a simplistic metric like "the number of animals affected"), or to spend more time on research and run the Incubation Program a year later, in which case we would have lost a year of impact and might not have started the charities we did. That would have been a big loss. For example, that year we incubated Suvita, whose impact and promise were recently recognized by GiveWell, which provided Suvita with $3.3M to scale up. We also incubated Fish Welfare Initiative (FWI) and Animal Advocacy Careers, a decision I still consider a good one (FWI is an ACE Recommended Charity, and even though I agree with its co-founders that their impact could be higher, I'm glad they exist). Nor could we simply have hired more staff to do things more in-depth: it was our first year of operation, and there was not enough funding or other resources available for what was, at the time, an unproven project.
I wouldn't want to have spent more time on that, especially because one of the main principles of our research is "decision-relevance," and the "wild bug" one-pager you mention, and similar ones, were not decision-relevant. If they had been, we would not have settled for that level of quality, and we would have put more time into them.
For what it is worth, I think there are things we could have done better. Specifically, we could have put more effort into communicating how little weight others should put on some of that research. We did that by stating at the top (for example, in the wild bug one-pager you link) that "these reports were 1-5 hours time-limited, depending on the animal, and thus are not fully comprehensive," and at the time we thought that was sufficient. But we could have stressed the epistemic status even more strongly and in more places, so that it was clear to others that we put very little weight on it. For full transparency, we also made another mistake: we didn't recommend working on banning/reducing bait fish as an idea at the time because, from our shallow research, it looked less promising; later, upon researching it more in-depth, we decided to recommend it. It wouldn't have made a difference then, because there were not enough potential co-founders in year 1 to start more charities, but it was a mistake nevertheless.