Doctor from NZ, independent researcher (grand futures / macrostrategy) collaborating with FHI / Anders Sandberg. Previously: Global Health & Development research @ Rethink Priorities.
Feel free to reach out if you think there’s anything I can do to help you or your work, or if you have any Qs about Rethink Priorities! If you’re a medical student / junior doctor reconsidering your clinical future, or if you’re quite new to EA / feel uncertain about how you fit in the EA space, have an especially low bar for reaching out.
Outside of EA, I do a bit of end of life care research and climate change advocacy, and outside of work I enjoy some casual basketball, board games and good indie films. (Very) washed up classical violinist and Oly-lifter.
All comments in personal capacity unless otherwise stated.
bruce
The claim isn’t that your answers don’t fit your definitions/methodologies, but that given highly unintuitive conclusions, one should more strongly consider questioning the methodology / definitions you use.
For example, the worst death imaginable for a human is, to a first approximation, capped at a couple of minutes of excruciating pain (or a small multiple of this), since you value excruciating pain as 10,000 times as bad as the next category, and say that by definition excruciating pain can’t exist for more than a few minutes. But this methodology will be unlikely to accurately capture a lot of extremely bad states of suffering that humans can have. On the other hand, it is much easier to scale even short periods of excruciating suffering with high numbers of animals, especially when you’re happy to consider ~8 million mosquitos killed per human life saved by a bednet—I don’t have empirical evidence to the contrary, but this seems rather high.
Here’s another sense check to illustrate this (please check if I’ve got the maths right here!):
- GiveWell estimates “5.53 deaths averted per 1000 children protected per year”, or 0.00553 lives saved per year of protection for a child, or 1 life saved per 180.8 children protected per year.
- They model 1.8 children under each bednet, on average. This means it requires approximately 100 bednets over the course of 1 year to save 1 life/~50 DALYs.
At your preferred rate of 1 mosquito death per hour per net[1] this comes to approximately 880,000 mosquito deaths per life saved,[2] which is
~1 OOM lower than the ~8 million you would reach if you do the “excruciating pain” calculation, assuming your 763x claim is correct[3]
(I may not continue engaging on this thread due to capacity constraints, but appreciate the responses!)
- ^
Here I make no claims about the reasonableness of 1 mosquito per hour killed by the net, as I don’t have any empirical data on this / I’m more uncertain than Nick is, but also note that he has more relevant experience than I do here.
- ^
180.8/1.8 * 24* 365 = 879,893
- ^
Assuming 763x GiveWell is correct, a tradeoff of 14.3 days of mosquito excruciating pain (MEP) for 1 happy human life, and 2 minutes of MEP per mosquito, this requires a tradeoff of 7.9 million mosquitos killed for one human life saved.
763*(14.3*24*60)/2 = 7,855,848
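For anyone wanting to re-run these footnote calculations, here’s a quick Python sketch (variable names are mine; the inputs are just the assumptions stated above, not independently verified figures):

```python
# Bednet route (footnote 2): GiveWell-style inputs
deaths_averted_per_1000_child_years = 5.53
children_per_net = 1.8

child_years_per_life = 1000 / deaths_averted_per_1000_child_years  # ~180.8
net_years_per_life = child_years_per_life / children_per_net       # ~100 nets for 1 year
# At 1 mosquito killed per net per hour:
mosquito_deaths_per_life = net_years_per_life * 24 * 365
print(round(mosquito_deaths_per_life))  # ~880,000

# "Excruciating pain" route (footnote 3)
ratio_vs_givewell = 763
mep_days_per_life = 14.3       # mosquito-days of excruciating pain per happy human life
mep_minutes_per_mosquito = 2
mosquitos_per_life = ratio_vs_givewell * (mep_days_per_life * 24 * 60) / mep_minutes_per_mosquito
print(round(mosquitos_per_life))  # 7,855,848
```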
- ^
Don’t have a lot of details to share right now but there are a bunch of folks coordinating on things to this effect—though if you have ideas or suggestions or people to put forward feel free to DM!
The values I provide are not my personal best guesses for point estimates, but conservative estimates that are sufficient to meaningfully weaken your topline conclusions. In practice, even the assumptions I just listed would be unintuitive to most if used as the bar!
I agree “what fits intuition” is often a bad way of evaluating claims, but this is in context of me saying “I don’t know where exactly to draw the line here, but 14.3 mosquito days of excruciating suffering for one happy human life seems clearly beyond it.”
It seems entirely plausible that a human might take a tradeoff of 100x less duration (3.5 hours * 100 is ~14.5 days), and also value the human:mosquito tradeoff at >100x. It wouldn’t be difficult to suggest another OOM in both directions for the same conclusion.

The main thing I’m gesturing at is that for a conclusion as unintuitive as “2 mosquito weeks of excruciating suffering cancels out 1 happy human life”, I think it’s reasonable to consider that there might be other explanations, including e.g. underlying methodological flaws (and in retrospect perhaps inconsistent isn’t the right word; maybe ‘inaccurate’ is better).
For example, by your preferred working definition of excruciating pain, it definitionally can’t exist for more than a few minutes at a time before neurological shutdown. I think this isn’t necessarily unreasonable, but there might be failure modes in your approach when basically all of your BOTECs come down to “which organisms have more aggregate seconds of species-adjusted excruciating pain”.
I estimate 14.3 mosquito-days of excruciating pain neutralise the benefits of the additional human welfare from saving 1 life under GW’s moral weights.
Makes sense—just to clarify:
My previous (mis)interpretation of you suggesting 11 minutes of MEP trading off 1 day of fully healthy human life would indicate a tradeoff of 11 / (24*60) = 0.0076.

Your clarification is that 14.3 mosquito-days trades off against 1 life:
assuming 1 life as 50 DALYs this is 14.3 / (50*365.25) = 0.00078
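A quick sanity check on these two implied ratios (the numbers are just the ones above):

```python
misread = 11 / (24 * 60)             # 11 MEP-minutes per human day, ~0.0076
clarified = 14.3 / (50 * 365.25)     # 14.3 MEP-days per ~50-DALY life, ~0.00078
print(round(misread / clarified, 1)) # roughly 10x apart
```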
So it seems like my misinterpretation was ~10x overvaluing the human side compared to your true view?

I understand that may seem very little time, but I do not think it can be dismissed just on the basis of seeming surprising. I would say one should focus on checking whether the results mechanistically follow from the inputs, and criticising these:
My view is probably something like:
I think on the margin most people should be more willing to entertain radical-seeming ideas rather than intuitions, given unknown unknowns about moral catastrophes we might be contributing to. But I also think the implicit claim[1] I’m happy to back here is that if your BOTEC spits out a result of “14.3 mosquito days of excruciating pain trades off with 50 human years of fully healthy life”, then I do expect on priors that some combination of your inputs / assumptions / interpretation of the evidence etc have led to a result that is likely many factors (if not OOMs) off the true value, if we magically found out what it was (and I think such a surprising result should also prompt similar kinds of thoughts on your end!). I’ll admit I don’t have a strong sense of how to draw a hard line here, but I can imagine for this specific case that I might expect the tradeoff for humans is closer to 3.5 hours of excruciating pain vs a life, and that I value / expect the human capacity for welfare to be >100x that of a mosquito. If you believe both of those to be true then you’d reject your conclusion.
Another thing to consider might be something like “the way you count/value excruciating pain in humans vs in animals is inconsistent in a way that systematically gives results in favour of animals”
I don’t have too much to offer here in terms of this—I just wanted to know what the implied trade-off actually was and have it spelled out.
Gotcha RE: 23.9secs / 11mins, thanks for the clarification!
Looking at this figure you are trading off 7,910,000 * 2 minutes of MEP for a human death averted, which is 15,820,000 minutes, which is ~30 mosquito years[1] of excruciating pain trading off for 50 human years of a practically maximally happy life.
Is this a correct representation of your views?
(Btw just flagging that I think I edited my comment as you were responding to it RE: 1.3~37 trillion figures, I realised I divided by 2 instead of by 120 (minutes instead of seconds).)
- ^
7910000 * 2 / 60 / 24 / 365.25 = 30.08
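The same conversion in Python, for anyone checking (inputs are the figures above):

```python
mosquitos_per_life = 7_910_000
mep_minutes_each = 2  # assumed minutes of excruciating pain per mosquito killed
mep_years = mosquitos_per_life * mep_minutes_each / 60 / 24 / 365.25
print(round(mep_years, 2))  # 30.08
```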
- ^
TL;DR
I think you are probably at least a few OOMs off with these figures, even granting most of your assumptions, as this implies (iiuc) ~8 million mosquito deaths per human death averted.
At 763x GiveWell, a tradeoff of 14.3 days of mosquito excruciating pain (MEP) for 1 happy human life, 2 minutes of MEP per mosquito, a $41 million grant in DRC, and $5100 per life saved, this implies 7.9 million mosquitos killed per human life saved, and that the grant will kill ~63 billion mosquitos.[1]
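A minimal Python sketch of this BOTEC (the inputs are the assumptions just listed; nothing here is independently verified):

```python
ratio_vs_givewell = 763
mep_days_per_life = 14.3            # mosquito-days of excruciating pain per happy human life
mep_minutes_per_mosquito = 2
grant_usd = 41_000_000
usd_per_life_saved = 5_100

mosquitos_per_life = ratio_vs_givewell * (mep_days_per_life * 24 * 60) / mep_minutes_per_mosquito
lives_saved = grant_usd / usd_per_life_saved
mosquitos_killed = lives_saved * mosquitos_per_life

print(round(mosquitos_per_life))         # 7,855,848 (~7.9 million)
print(round(mosquitos_killed / 1e9, 1))  # ~63.2 billion
```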
EDIT: my estimates were initially based on 11 mosquito minutes of excruciating pain neutralising 1 day of human life, as stated in the text. This was incorrect because I misinterpreted the text. The true value that this post endorses is approximately a factor of 10 further in the direction of the mosquito side of the tradeoff (i.e. the equivalent of ~68 mosquito seconds of excruciating pain neutralising 1 day of human life, and ~8 million mosquito deaths per human death averted by bednets). I have edited the topline claim accordingly.
============
[text below is no longer accurate / worth reading; see above]

A quick sense check using your assumptions and numbers (I typed this up quickly so might have screwed up the maths somewhere!)

When you say:
”1 day of [a practically maximally happy life] would be neutralised with 23.9s of excruciating pain.”
and
”As a result, 11.0 min of excruciating pain would neutralise 1 day of a practically maximally happy life”
I’m assuming you mean “23.9 mosquito seconds of excruciating pain” and “11.0 mosquito minutes of excruciating pain” trading off against 1 human day of a practically maximally happy life (please correct me if I’m misunderstanding you!)

At 763 times as much harm to mosquitos as to humans, ~50 DALYs per life saved, and 11 min (or 23.9 seconds) of MEP, this implies you are suggesting bednets are causing something like 333 million ~ 9 billion seconds of MEP per human death averted.[2]

Using your figure of 2 minutes of excruciating pain per mosquito killed, this gives a range of 3 million ~ 77 million mosquito deaths per human death averted in order for your 763x claim to be correct.[3]

Using your stated figures of $41 million and $5100 per life for the GW grant, this implies you think the grant will lead to somewhere between 22~616 billion mosquito deaths in DRC alone.[4]

For context, this source estimates global mosquito population as between 110 trillion and ‘in the quadrillions’.
- ^
763*(14.3*24*60)/2 = 7,855,848
41 million / 5100 * 763*(14.3*24*60)/2 = 63,154,856,470.6
- ^
365.25*50*11*60*763 = 9,196,629,750
365.25*50*23.9*763 = 333,029,471.25
- ^
333029471 / 120 = 2,775,245.59
9196629750 / 120 = 76,638,581.25
- ^
41 million / 5100 * 2,775,245.59 = 22,310,797,880.4
41 million / 5100 * 76,638,581.25 = 616,114,084,559
- ^
I didn’t catch this post until I saw this comment, and it prompted a response. I’m not well calibrated on how much upvotes different posts should get,[1] but personally I didn’t feel disappointed that this post wasn’t on the front page of the EA Forum, and I don’t expect this is a post I’d share with e.g., non-vegans who I’d discuss the meat eater problem with.[2]
- ^
I’m assuming you’re talking about the downvotes, rather than the comments? I may be mistaken though.
- ^
This isn’t something I’d usually comment because I do think the EA Forum should be more welcoming on the margin and I think there are a lot of barriers to people posting. But just providing one data point given your disappointment/surprise.
- ^
The ethical vegan must therefore decide whether their objection is to animals dying or to animals living.
One might object to animal suffering, rather than living/dying. So a utilitarian might say factory farming is bad because of the significantly net-negative states that animals endure while alive, while being OK with eating meat from a cow that is raised in a way such that it is living a robustly net positive life, for example.[1]
If you’re really worried about reducing the number of animal life years, focus on habitat destruction—it obviously kills wildlife on net, while farming is about increasing lives.
This isn’t an obvious comparison to me; there are clear potential downsides of habitat destruction (loss of ecosystem services) that don’t apply to reducing factory farming. There are also a lot of uncertainties around the impacts of destroying habitats—it is much harder to recreate an ecosystem and its benefits than to re-introduce factory farming if we are wrong in either case. One might also argue that we have a special obligation to reduce the harms we cause (via factory farming) rather than to attempt habitat destruction, which would reduce suffering that exists ~independently of humans.
...the instrumentalization of animals as things to eat is morally repugnant, so we should make sure it’s not perpetuated. This seems to reflect a profound lack of empathy with the perspective of a domesticate that might want to go on existing. Declaring a group’s existence repugnant and acting to end it is unambiguously a form of intergroup aggression.
I’m not sure I’m understanding this correctly. Are you saying animals in factory farms have to be able to indicate to you that they don’t want to go on existing in order for you to consider taking action on factory farming? What bar do you think is appropriate here?
If factory farming seems like a bad thing, you should do something about the version happening to you first.
If there were 100 billion humans being killed for meat / other products every year and living in the conditions of modern factory farms, I would most definitely prioritise and advocate for that as a priority over factory farming.
The domestication of humans is particularly urgent precisely because, unlike selectively bred farm animals, humans are increasingly expressing their discontent with these conditions, and—more like wild animals in captivity than like proper domesticates—increasingly failing even to reproduce at replacement rates.
Can you say more about what you mean by “the domestication of humans”? It seems like you’re trying to draw a parallel between domesticated animals and domesticated humans, or modern humans and wild animals in captivity, but I’m not sure what the parallel you are trying to draw is. Could you make this more explicit?
This suggests our priorities have become oddly inverted—we focus intense moral concern on animals successfully bred to tolerate their conditions, while ignoring similar dynamics affecting creatures capable of articulating their objections...
This seems like a confusing argument. Most vegans I know aren’t against factory farming because it affects animal replacement rates. It also seems unlikely to me that reduced fertility rates in humans are a good proxy/correlate for the amount of suffering that exists (it’s possible that the relationship isn’t entirely linear, but if anything, historically the opposite is more true—countries have reduced fertility rates as they develop and standards of living improve). It’s weird that you use fertility rates as evidence for human suffering but seem to have an extremely high bar for animal suffering! Most of the evidence I’m aware of would strongly point to factory farmed animals in fact not tolerating their conditions well.
...who are moreover the only ones known to have the capacity and willingness to try to solve problems faced by other species.
This is a good argument to work on things that might end humanity or severely diminish its ability to meaningfully + positively affect the world. Of all the options that might do this, where would you rank reduced fertility rates?
- ^
Though (as you note) one might also object to farming animals for food for rights-based rather than welfare-based reasons.
- ^
Screwworm Free Future is hiring for a Director
Reposting from LessWrong, for people who might be less active there:[1]
TL;DR
FrontierMath was funded by OpenAI[2]
This was not publicly disclosed until December 20th, the date of OpenAI’s o3 announcement, including in earlier versions of the arXiv paper where this was eventually made public.
There was allegedly no active communication about this funding to the mathematicians contributing to the project before December 20th, due to the NDAs Epoch signed, but also no communication after the 20th, once the NDAs had expired.
OP claims that “I have heard second-hand that OpenAI does have access to exercises and answers and that they use them for validation. I am not aware of an agreement between Epoch AI and OpenAI that prohibits using this dataset for training if they wanted to, and have slight evidence against such an agreement existing.”
Seems to have confirmed the OpenAI funding + NDA restrictions
Claims OpenAI has “access to a large fraction of FrontierMath problems and solutions, with the exception of an unseen-by-OpenAI hold-out set that enables us to independently verify model capabilities.”
They also have “a verbal agreement that these materials will not be used in model training.”
Edit (19/01): Elliot (the project lead) points out that the holdout set does not yet exist (emphasis added):
As for where the o3 score on FM stands: yes I believe OAI has been accurate with their reporting on it, but Epoch can’t vouch for it until we independently evaluate the model using the holdout set we are developing.[3]
Edit (24/01):
Tamay tweets an apology (possibly including the timeline drafted by Elliot). It’s pretty succinct so I won’t summarise it here! Blog post version for people without twitter. Perhaps the most relevant point:
OpenAI commissioned Epoch AI to produce 300 advanced math problems for AI evaluation that form the core of the FrontierMath benchmark. As is typical of commissioned work, OpenAI retains ownership of these questions and has access to the problems and solutions.
Nat from OpenAI with an update from their side:
We did not use FrontierMath data to guide the development of o1 or o3, at all.
We didn’t train on any FM derived data, any inspired data, or any data targeting FrontierMath in particular
I’m extremely confident, because we only downloaded frontiermath for our evals *long* after the training data was frozen, and only looked at o3 FrontierMath results after the final announcement checkpoint was already picked.
============
Some quick uncertainties I had:
What does this mean for OpenAI’s 25% score on the benchmark?
What steps did Epoch take or consider taking to improve transparency between the time they were offered the NDA and the time of signing the NDA?
What is Epoch’s level of confidence that OpenAI will keep to their verbal agreement to not use these materials in model training, both in some technically true sense, and in a broader interpretation of the agreement? (see e.g. the bottom paragraph of Ozzie’s comment).
In light of the confirmation that OpenAI not only has access to the problems and solutions but has ownership of them, what steps did Epoch consider before signing the relevant agreement to get something stronger than a verbal agreement that this won’t be used in training, now or in the future?
- ^
Epistemic status: quickly summarised + liberally copy pasted with ~0 additional fact checking given Tamay’s replies in the comment section
- ^
arXiv v5 (Dec 20th version) “We gratefully acknowledge OpenAI for their support in creating the benchmark.”
- ^
See clarification in case you interpreted Tamay’s comments (e.g. that OpenAI “do not have access to a separate holdout set that serves as an additional safeguard for independent verification”) to mean that the holdout set already exists
I’ll say up front that I definitely agree that we should look into the impacts on worms a nonzero amount! The main reason for the comment is that I don’t think the appropriate bar for whether or not the project should warrant more investigation is whether or not it passes a BOTEC under your set of assumptions (which I am grateful for you sharing—I respect your willingness to share this and your consistency).
Again, not speaking on behalf of the team—but I’m happy to bite the bullet and say that I’m much more willing to defer to some deontological constraints in the face of uncertainty, rather than follow impartiality and maximising expected value all the way to its conclusion, whatever those conclusions are. This isn’t an argument against the end goal that you are aiming for, but more my best guess in terms of how to get there in practice.
Impartiality and hedonism often recommend actions widely considered bad in super remote thought experiments, but, as far as I am aware, none in real life.
I suspect this might be driven by it not being considered to be bad under your own worldview? Like it’s unsurprising that your preferred worldview doesn’t recommend actions that you consider bad, but actually my guess is that not working on global poverty and development for the meat eater problem is in fact an action that might be widely considered bad in real life for many reasonable operationalisations (though I don’t have empirical evidence to support this).[1]
I do agree with you on the word choices under this technical conception of excruciating pain / extreme torture,[2] though I think the idea that it ‘definitionally’ can’t be sustained beyond minutes does have some potential failure modes.
That being said, I wasn’t actually using torture as a descriptor for the screwworm situation, more just illustrating what I might consider a point of difference between our views, i.e. that I would not be in favour of allowing humans to be tortured by AIs even if you created a BOTEC that showed this caused net positive utils in expectation; and I would not be in favour of an intervention to spread the new world screwworm around the world, even if you created a BOTEC that showed it was the best way of creating utils—I would reject these at least on deontological grounds in the current state of the world.
- ^
This is not to suggest that I think “widely considered bad” is a good bar here! A lot of moral progress came from ideas that initially were “widely considered bad”. Just suggesting this particular defence of impartiality + hedonism; namely that it “does not recommend actions widely considered bad in real life” seems unlikely to be correct—simply because most people are not impartial hedonists to the extent you are.
- ^
Neither of which were my wording!
- ^
Speaking for myself / not for anyone else here:
My (highly uncertain + subjective) guess is that each lethal infection is probably worse than 0.5 host-years equivalents, but the number of worms per host animal probably could vary significantly.
That being said, personally I am fine with the assumption of modelling ~0 additional counterfactual suffering for screwworms that are never brought into existence, rather than e.g. an eradication campaign that involves killing existing animals.
I’m unsure how to think about the possibility that screwworms might be living significantly net positive lives, such that this trumps the benefit of reduced suffering from screwworm deaths, but I’d personally prefer stronger evidence for wellbeing or harms on the worm’s end to justify inaction here (ie not looking into the possibility/feasibility of this).

Again, speaking only for myself—I’m not personally fixated on either gene drives or sterile insect approaches! I am also very interested in finding out reasons to not proceed with the project, or to find alternative approaches, which doesn’t preclude the possibility that the net welfare of screwworms should be more heavily weighed as a consideration. That being said, I would be surprised if something like “we should do nothing to alleviate host animal suffering because their suffering can provide more utils for the screwworm” was a sufficiently convincing reason to not do more work / investigation in this area (for nonutilitarian reasons), though I understand there are a set of assumptions / views one might hold that could drive disagreement here.[1]
- ^
If a highly uncertain BOTEC showed you that torturing humans would bring more utility to digital beings than the suffering incurred on the humans, would you endorse allowing this? At what ratio would you change your mind, and how many OOMs of uncertainty on the BOTEC would you be OK with?
Or—would you be in favour of taking this further and spreading the screwworm globally simply because it provides more utils, rather than just not eradicating the screwworm?
- ^
Launching Screwworm-Free Future – Funding and Support Request
It’s fine to apply regardless; there’s one application form for all 2025 in-person EAGs. You’ll likely be sent an email separately closer to the time reminding you that you can register for the East Coast EAG, and be directed to a separate portal where you can do this without needing to apply again.
Hey team—are you happy to share a bit more about who would be involved in these projects, and their track record (or Whylome’s more broadly)? I only spent a minute or so on this but I can’t find any information online beyond your website and these links, related to SMTM’s “exposure to subclinical doses of lithium is responsible for the obesity epidemic” hypothesis (1, 2).
More info on how much money you’re looking for the above projects would also be useful.
Ah my bad, I meant extreme pain above there as well, edited to clarify! I agree it’s not a super important assumption for the BOTEC in the grand scheme of things though.
However, if one wants to argue that I overestimated the cost-effectiveness of SWP, one has to provide reasons for my guess overestimating the intensity of excruciating pain.
I don’t actually argue for this in either of my comments.[1] I’m just saying that it sounds like if I duplicated your BOTEC, and changed this one speculative parameter to 2 OOMs lower, an observer would have no strong reason to choose one BOTEC over another just by looking at the BOTEC alone. Expressing skepticism of an unproven claim doesn’t produce a symmetrical burden of proof on my end!
Mainly just from a reasoning transparency point of view I think it’s worth fleshing out what these assumptions imply and what is grounding these best guesses[2] - in part because I personally want to know how much I should update based on your BOTEC, in part because knowing your reasoning might help me better argue why you might (or might not) have overestimated the intensity of excruciating pain if I knew where your ratio came from (and this is why I was checking the maths and seeing if these were correct, and asking if there’s stronger evidence if so, before critiquing the 100k figure), and because I think other EAF readers, as well as broader, lower-context audience of EA bloggers would benefit from this too.
If you did that, SWP would still be 434 (= 43.4*10^3*10^3/(100*10^3)) times as cost-effective as GiveWell’s top charities.
Yeah, I wasn’t making any inter-charity comparisons or claiming that SWP is less cost-effective than GW top charities![3] But since you mention it, it wouldn’t be surprising to me if losing 2 OOMs might make some donors favour other animal welfare charities over SWP for example—but again, the primary purpose of these comments is not to litigate which charity is the best, or whether this is better or worse than GW top charities, but mainly just to explore a bit more around what is grounding the BOTEC, so observers have a good sense on how much they should update based on how compelling they find the assumptions / reasoning etc.
I think it is also worth wondering about whether you truly believe that updated intensity. Do you think 1 day of fully healthy life plus 86.4 s (= 0.864*100*10^3/100) of scalding or severe burning events in large parts of the body, dismemberment, or extreme torture would be neutral?
Nope! I would rather give up 1 day of healthy life than 86 seconds of this description. But this varies depending on the timeframe in question.
For example, I’d probably be willing to endure 0.86 seconds of this for 14 minutes of healthy life, and I would definitely endure 0.086 seconds of this rather than give up 86 seconds of healthy life.
And using your assumptions (ratio of 100k), I would easily rather have 0.8 seconds of this than give up 1 day of healthy life, but if I had to endure many hours of this I could imagine my tradeoffs approaching, or even exceeding 100k.
I do want to mention that I think it’s useful that someone is trying to quantify these comparisons, I’m grateful for this work, and I want to emphasise that these comments are about making the underlying reasoning more transparent / understanding the methodology that leads to the assumptions in the BOTEC, rather than any kind of personal criticism!
So I suppose I would be wary of saying that GiveDirectly now have 3–4x the WELLBY impact relative to Vida Plena—or even to say that GiveDirectly have any more WELLBY impact relative to Vida Plena
Ah right—yeah I’m not making either of these claims. I’m just saying that if the previous claim (from VP’s predictive CEA) was that “Vida Plena...is 8 times more cost-effective than GiveDirectly”, and GD’s estimated cost-effectiveness has since been updated to 3-4x what it was at the time the predictive CEA was published, then we should discount the 8x claim downwards somewhat (but not necessarily by 3-4x).
I think one could probably push back on whether 7.5 minutes of [extreme] pain is a reasonable estimate for a person who dies from malaria, but I think the bigger potential issue is still that the result of the BOTEC seems highly sensitive to the “excruciating pain is 100,000 times worse than fully healthy life is good” assumption—for both air asphyxiation and ice slurry, the time spent under excruciating pain make up more than 99.96% of the total equivalent loss of healthy life.[1]
I alluded to this on your post, but I think your results imply you would prefer to avert 10 shrimp days of excruciating pain (e.g. air asphyxiation / ice slurry) over saving 1 human life (51 DALYs).[2]
If I use your assumption and also value human excruciating pain as 100,000 times as bad as healthy life is good,[3] then this means you would prefer to save 10 shrimp days of excruciating pain (using your air asphyxiation figures) over 4.5 human hours of excruciating pain,[4] and your shrimp to human ratio is less than 50:1 - that is, you would rather avert 50 shrimp minutes of excruciating pain than 1 human minute of excruciating pain.
To be clear, this isn’t a claim that one shouldn’t donate to SWP, but just that if you do bite the bullet on those numbers above then I’d be keen to see some stronger justification beyond “my guess” for a BOTEC that leads to results that are so counterintuitive (like I’m kind of assuming that I’ve missed a step or OOMs in the maths here!), and is so highly sensitive to this assumption.[5]
- ^
Air asphyxiation: 1- (5.01 / 12,605.01) = 0.9996
Ice slurry: 1 - (0.24 / 604.57) = 0.9996
- ^
1770 * 7.5 = 13275 shrimp minutes
13275 / 60 / 24 = 9.21875 shrimp days
- ^
There are arguments in either direction, but that’s probably not a super productive line of discussion.
- ^
51 * 365.25 * 24 * 60 = 26,823,960 human minutes
26,823,960 / 100,000 = 268.2396 human minutes of excruciating pain
268.2396 / 60 = 4.47 human hours of excruciating pain
13275 / 268.2396 = 49.49 (shrimp : human ratio)
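These footnote steps can be chained in a few lines of Python (all inputs are the assumptions above):

```python
shrimp_ep_minutes = 1_770 * 7.5             # 13,275 shrimp-minutes of excruciating pain
shrimp_ep_days = shrimp_ep_minutes / 60 / 24
human_minutes = 51 * 365.25 * 24 * 60       # minutes in 51 DALYs
human_ep_minutes = human_minutes / 100_000  # applying the 100k intensity assumption

print(round(shrimp_ep_days, 2))                        # 9.22 shrimp days
print(round(human_ep_minutes / 60, 2))                 # 4.47 human hours
print(round(shrimp_ep_minutes / human_ep_minutes, 1))  # ~49.5 shrimp : human ratio
```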
- ^
Otherwise I could just copy your entire BOTEC, and change the bottom figure to 1000 instead of 100k, and change your topline results by 2 OOMs.
Annoying pain is 10% as intense as fully healthy life.
Hurtful pain is as intense as fully healthy life.
Disabling pain is 10 times as intense as fully healthy life.
Excruciating pain is 100 k times as intense as fully healthy life.
- ^
I might be misunderstanding you, but are you saying that after reading the GD updates, we should update on VP equally to GD / that we should expect the relative cost-effectiveness ratio between the two to remain the same?
Appreciate this! There are a decent amount happening; can you DM me with a bit more info about yourself / what you’d be willing to help with?