Good post, and this also seems to be a very opportune time to be promoting wild animal vaccination. A few thoughts:
To start with, programs of this kind would only be implemented after a vaccine is developed and distributed among human beings.
In relation to the current pandemic, the media often mentions that there are 7 coronaviruses that can infect humans and we don’t have an effective vaccine for any of them. However, I was recently surprised to learn that there are several commercially available veterinary vaccines against coronaviruses—this raised my expectation that a human coronavirus vaccine could be successfully developed and seems promising for animal vaccination as well.
I think it’s worth thinking more about what level of safety testing goes into developing animal vaccines. The Hendra virus vaccine for horses might be an interesting case study for this. Hendra virus was relatively recently discovered in Australia, and can be transmitted from flying foxes (a megabat species), via horses, to humans, where it has a 60%+ case fatality rate. Fruit bat culling was very widely called for after a series of outbreaks in 2011, but the government decided to fund development of a horse vaccine instead (by unfortunate coincidence, a heat-wave killed a third of the flying fox population a few years later). A vaccine was developed within a year and widely administered soon after. However, some owners (particularly those of racing horses) reported severe side-effects (including death) and eventually started a class action against the vaccine manufacturer. I don’t know if the anecdotal reports of side-effects stood up to further scrutiny (there could have been some motivated reasoning going on similar to that used by human anti-vaxxers), but it seems plausible that veterinary vaccine development accepts, or does not even attempt to consider, much worse side-effects than would be accepted in a vaccine developed for humans. Given animals’ inability to self-report, some classes of minor side-effects may only be noticed by owners of companion animals who are very familiar with their behaviour. While I don’t think animal side-effects would be a consideration in developing vaccines for pandemic control or economic purposes, it seems more relevant in the context of vaccinating animals to increase their own welfare.
This may be the case especially for bats, because they have one of the highest disease burdens among wild mammals. Among other conditions, they are harmed by a number of different coronavirus-caused diseases. In fact, they harbor more than half of all known coronaviruses.
Why do bats have so many diseases (lots of which humans seem to catch)? This comment (which I found in an SSC article) frames the question in another way:
There are over 1,250 bat species in existence. This is about one fifth of all mammal species. Just to get a sense of this, let me ask a modified version of the question in the title:
“Why do human beings keep getting viruses from cows, sheep, horses, pigs, deer, bears, dogs, seals, cats, foxes, weasels, chimpanzees, monkeys, hares, and rabbits?”
This re-framing doesn’t really change the problem, but it suggests that viewing ‘bats’ as a single animal group comparable to ‘cows’ or ‘deer’ conceals the scope of species diversity involved.
I heard Jonathan Epstein talk at a panel discussion on biosecurity last year. He was in favour of disease monitoring and management in wild animal populations, and also seemed sympathetic to the idea of doing this from both a human health and an animal welfare standpoint. He might be interested in discussing this further, and is in a position where he could advocate for or implement these ideas.
Thanks for asking the questions I suggested. I found Aubrey’s response to this question the most informative:
Has any effort been made to see if the effects of multiple treatments are additive, in terms of improved lifespan, in a pre-clinical study?
No, and indeed we would not expect them to be additive, because we would not expect any one of them to make a significant difference to lifespan. That’s because until we are fixing them all, the ones we are not yet fixing would be predicted to kill the organism more-or-less on schedule. Only more-or-less, because there is definitely cross-talk between different damage types, but still we would not expect that lifespan would be a good assay of efficacy until we’re fixing pretty much everything.
I don’t have a background in anti-aging biology and my intuition was that the treatments would have more of an additive effect. However, I agree with his view that there won’t be much effect on total life-span until everything is fixed.
My feeling is that this may make the expected value of life-extension research lower (by decreasing the probability of success), given that all hallmarks need to be effectively treated in parallel to realize any benefit. If one proves much harder to treat in humans, or if all the treatments don’t work together, then that reduces the benefit gained from treating the other hallmarks, at least as far as LEV is concerned. This makes SRF’s approach of focusing on the most difficult problems seem quite reasonable and probably the most effective way to make a marginal contribution to life-extension research at the moment. Once all hallmarks are treatable pre-clinically in-vivo, then it seems like research into treatment interactions may become the most effective way to contribute (as noted, this will probably also be hard to get mainstream funding for).
Biosecurity researchers are often better-educated and/or more creative than most bad actors.
I generally agree with the above statement and that the risks of openly discussing some topics outweigh the benefits of doing so. But I recently realised there are some people outside of EA who I think are generally well educated, probably more creative than many biosecurity researchers, and who often write openly about topics the EA community may consider bioinfohazards: authors of near-future science fiction.
Many of the authors in this genre have STEM backgrounds, often write about malicious-use GCR scenarios (thankfully, the risk is usually averted), and I’ve read several interviews where authors mention taking pains to do research so they can depict a scenario that represents a possible, if sometimes ambitious, future risk. While these novels don’t provide implementation details, the ‘attack strategies’ are often described clearly and the accompanying narrative may well be more inspiring to a poorly educated bad actor looking for ideas than a technical discussion would be.
I haven’t seen (realistic) fiction discussed in the context of infohazards before and would be interested to know what others think of this. In the spirit of the post, I’ll refrain from creating an ‘attention hazard’ (or just advertising?) by mentioning any authors who I think describe GCRs particularly well.
Ignoring accidental deflection, which might occur when an asteroid is moved to an Earth or Lunar orbit for research or mining purposes
I haven’t seen this mentioned in other discussions of asteroid risk (i.e. I don’t think Ord mentions it in The Precipice) but I don’t think it should be ignored so quickly. If states/corporations develop technology to transfer asteroids to Earth orbit then this seems like it would represent an equivalent dual-use concern. Indeed, it may be even riskier than just developing tools for deflection, as activities like mining could provide ‘cover’ for maliciously aiming an asteroid at Earth. On the positive side, similar tools can probably be used for both orbital transfer and deflection, so the risky technology may also be its own counter-technology.
At the start of Chapter 6 in The Precipice, Ord writes:
To do so, we need to quantify the risks. People are often reluctant to put numbers on catastrophic risks, preferring qualitative language, such as “improbable” or “highly unlikely.” But this brings serious problems that prevent clear communication and understanding. Most importantly, these phrases are extremely ambiguous, triggering different impressions in different readers. For instance, “highly unlikely” is interpreted by some as one in four, but by others as one in 50. So much of one’s work in accurately assessing the size of each risk is thus immediately wasted. Furthermore, the meanings of these phrases shift with the stakes: “highly unlikely” suggests “small enough that we can set it aside,” rather than neutrally referring to a level of probability. This causes problems when talking about high-stakes risks, where even small probabilities can be very important. And finally, numbers are indispensable if we are to reason clearly about the comparative sizes of different risks, or classes of risks.
This made me recall hearing about Matsés, a language spoken by an indigenous tribe in the Peruvian Amazon, that has the (apparently) unusual feature of using verb conjugations to indicate the certainty of information being provided in a sentence. From an article on Nautilus:
In Nuevo San Juan, Peru, the Matsés people speak with what seems to be great care, making sure that every single piece of information they communicate is true as far as they know at the time of speaking. Each uttered sentence follows a different verb form depending on how you know the information you are imparting, and when you last knew it to be true.
The language has a huge array of specific terms for information such as facts that have been inferred in the recent and distant past, conjectures about different points in the past, and information that is being recounted as a memory. Linguist David Fleck, at Rice University, wrote his doctoral thesis on the grammar of Matsés. He says that what distinguishes Matsés from other languages that require speakers to give evidence for what they are saying is that Matsés has one set of verb endings for the source of the knowledge and another, separate way of conveying how true, or valid the information is, and how certain they are about it. Interestingly, there is no way of denoting that a piece of information is hearsay, myth, or history. Instead, speakers impart this kind of information as a quote, or else as being information that was inferred within the recent past.
I doubt the Matsés spend much time talking about existential risk, but their language could provide an interesting example of how to more effectively convey aspects of certainty, probability and evidence in natural language.
I think people who are using this type of work as a living should get paid a salary with benefits and severance. A project to project lifestyle doesn’t seem conducive to focusing on impact.
Agreed. In my brief experience with academic consulting one thing I’ve realised is that it is really quite reasonable for contracted consultants to charge a 50-100% premium (on top of their utilisation ratio—usually 50%, so another x2 markup) to account for their lack of benefits.
So if somebody is expecting to earn a ‘fair’ salary from impact purchases compared to employment (or from any other type of short-term contract work), they should expect a funder to pay a premium compared to employing them (or funding another organisation to do so). This doesn’t seem like a good use of funds in the long term if it is possible to employ that person.
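To make the compounding of those multipliers concrete, here is a minimal sketch using illustrative numbers (the 50% utilisation ratio and 50–100% premium mentioned above; the function name and defaults are my own):

```python
def contractor_rate_multiplier(utilisation=0.5, premium=0.75):
    """Effective hourly-rate multiplier over an equivalent salaried rate.

    utilisation: fraction of working time that is billable (~50% is common)
    premium: extra margin to cover lost benefits/severance (50-100%)
    """
    return (1 / utilisation) * (1 + premium)

# With 50% utilisation and a 75% premium, a contractor needs to charge
# about 3.5x the salaried hourly rate to come out even:
print(contractor_rate_multiplier())              # 3.5
print(contractor_rate_multiplier(premium=0.5))   # 3.0
print(contractor_rate_multiplier(premium=1.0))   # 4.0
```

So a funder paying ‘fair’ contract rates should expect to pay roughly 3–4x the equivalent salaried hourly rate, which is why employment looks cheaper in the long run.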
I’m interested in seeing a second post on impact purchases and would personally consider selling impact in the future. I have a few general comments about this:
Impact purchases seem similar to value-based fees that are sometimes used in commercial consulting (instead of time- or project-based fees) and may be able to provide a complementary perspective. Although in business the ‘impact’ would usually be something easy to track (like additional revenue) and the return the consultant gets (like percentage of revenue up to a capped value) would be agreed on in advance. I wonder if a similar pre-arrangement for impact purchase could work for EA projects that have quantifiable impact outcomes, such as through a funder agreeing to pay some amount per intervention distributed, student educated, etc. Of course, the tracked outcome should reflect the funders true goals to prevent gaming the metric.
It seems like impact purchases would be particularly helpful for people coming into the EA community who don’t yet have good EA references/prestige/track-record but are confident they can complete an impactful project, or who want to work on unorthodox ideas that the community doesn’t have the expertise to evaluate. If they try something out and it works, then they can get funds to continue and preliminary results for a grant; if not, it’s feedback to go more mainstream. For this dynamic to work, people should probably be advised to plan relatively short projects (say, up to a few months), otherwise they could spend a lot of time on something nobody values.
This could be a particularly interesting time to trial impact purchases used in conjunction with government UBI (if that ends up being fully brought in anywhere). UBI then removes the barrier of requiring a secure salary before taking on a project.
From my experience applying to a handful of early-career academic grants and a few EA grants, I agree that almost none provide any useful feedback (beyond accepted or declined), either for the initial application or for progress or completion reports. However, worse than having no feedback is that I once heard from a European Research Council (ERC) grant reviewer that their review committees are required to provide feedback on rejected applications, but are also instructed to make sure the feedback is vague and obfuscated so the applicant will have no grounds to ask for an appeal, which means the applicant gets feedback the reviewers know won’t be useful for improving their project… Why do they bother?
With regards to implementation, I think one point to consider is the demand from impacters relative to the funds of purchasers. At least in academia, funding is constrained and grant success rates are often <20%, so grantees know that it is unlikely they’ll get a grant to do their project (academic granters often say they turn away a lot of great projects they want to fund). If impact purchasers were similarly funding-constrained relative to the number of good projects, I think the whole scheme would be less appealing, as then even if I complete a great project, getting its impact bought would still involve a good deal of luck.
These posts about impact prizes and altruistic equity may also be of interest to consider.
Have a particular strength? Already an expert in a field? Here are the socially impactful careers 80,000 Hours suggests you consider first.
In the BBC today: Coronavirus: Robots use light beams to zap hospital viruses
Sure, I think the key questions would be:
-Of the treatments currently being developed (in reference to the list on lifespan.io), is it likely that treatments for multiple hallmarks can be used in parallel?
--Are there currently any observed or expected interactions between different treatments?
--Has any effort been made to see if the effects of multiple treatments are additive, in terms of improved lifespan, in a pre-clinical study?
-What side effects have been observed for the treatments currently in clinical trials?
It’s interesting to know that recurring and more frequent treatments are going to be needed. That point hadn’t been obvious to me before, but it could be important to consider in relation to the economics of scaling up mass anti-aging treatment—it’s not like a one-off vaccination against a specific type of ageing damage, but rather a ‘condition’ that requires ongoing, and perhaps increasing, care.
I was happy to see that I’m apparently not the only person who touches their face a lot and the BBC noted that many people even touch their face while giving official advice not to:
The main tips for how to avoid face touching were:
-Wear glasses on your face so you touch them instead.
-Make an effort to keep your hands clasped most of the time, so that touching your face is more of a conscious act that you’ll notice and can choose to stop.
Nice piece Emanuele, I felt that I actually got what LEV was, and why we should aim to get there, more after reading this post than after reading your previous ones. A general comment is that, from what the Lifespan.io roadmap shows, it really seems like anti-aging research has progressed quite far (i.e. quite a few ongoing and some late-stage clinical trials) relative to the field’s fringe nature and apparently limited funding.
In terms of questions, there is one thing that I think is fairly critical—how well do multiple interventions combine?
What SRF claims is that solving all the seven categories will probably lead to lifespans longer than the current maximum.
As I understand this, treatments for all of the categories are being developed independently. Is anybody looking to see if they can all be used in parallel? Could there be interactions between treatments that prevent this? It seems that the expected value of anti-aging research is only realised if it will, at some point, be possible to treat all the categories in parallel. Research into a treatment for one category that wouldn’t be compatible with other treatments seems like it should receive much lower priority.
It seems like there could be ways to test this already. For instance, the roadmap shows many treatments are already at the pre-clinical in-vivo stage. If we start applying multiple therapies in-vivo, we can start to test how compatible they are. Do you know if that has been done?
Starting to test multiple therapies in-vivo could also provide some fundamental evidence about how the benefits of multiple therapies combine. At the moment the assumption seems to be that if, say, individually treating mitochondrial mutations and extracellular aggregates prolongs expected life by X and Y years respectively, then treating them both in combination will prolong life by X + Y years, but either negative or positive returns on the combination could occur. To be honest, I have some general scepticism about anti-aging research because ageing is very widely conserved in the animal kingdom (there are only a few animals with negligible senescence). It could be that there is some evolutionary pathway that negligibly senescent animals went down that is hard to cross over to even if we treat all the categories, so I have a weak prior that senescent animals will get diminishing returns from multiple therapies.
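The additive-vs-diminishing-returns question above can be made concrete with a toy model. This is purely illustrative (the function, the interaction parameter, and the life-year figures are all my own invention, not anything from SRF or the roadmap):

```python
def combined_gain(gains, interaction=1.0):
    """Toy model of life-years gained from combining therapies.

    gains: life-years each therapy adds when applied alone (illustrative)
    interaction: scales each additional therapy's marginal benefit;
      1.0 -> purely additive (X + Y), <1 -> diminishing, >1 -> synergistic
    """
    total = 0.0
    # Apply therapies from largest to smallest standalone benefit;
    # the k-th therapy's benefit is discounted by interaction**k.
    for k, g in enumerate(sorted(gains, reverse=True)):
        total += g * interaction ** k
    return total

# Two hypothetical therapies worth 5 and 3 years alone:
print(combined_gain([5, 3]))       # 8.0 (additive: X + Y)
print(combined_gain([5, 3], 0.5))  # 6.5 (diminishing returns)
print(combined_gain([5, 3], 1.2))  # ~8.6 (synergy)
```

The in-vivo combination experiments suggested above would, in effect, be estimating something like this interaction parameter, which is currently just assumed to be 1.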
Another point that I think is worth discussing is how the damage repair approach affects the metabolic processes causing the damage.
Dr. de Grey always stresses how the damage repair approach, which he also calls “the maintenance approach”, has a big advantage over geriatrics and the kind of biogerontology aimed at targeting the metabolic processes that are causing this damage.
For instance, if we treat an 80-year-old’s telomere attrition, are we going to need to treat them again in the future? Are consecutive treatments going to need to occur at more regular intervals? I don’t know much about how treatments affect the underlying metabolic processes (as noted, metabolism is very complicated), but it could be that these continue picking up pace even as the damage they cause is repaired. Knowing about this could also be important in assessing the value of LEV as a whole, particularly if treatments have dose-dependent side-effects. For instance, it may be that we can treat ageing out to 200 or so, but then the rate of damage is so high that the treatment dose required is too strong to tolerate. This is probably an issue for SENS 2.0, but it also seems like an area where some in-vivo testing can provide useful information. If nothing else, finding that the frequency of therapy is expected to increase suggests that treatments with more tolerable side-effects might be preferred (where there is a choice).
These are both fairly technical issues compared to the other questions you proposed in the post, but I think they point towards some fairly crucial considerations about how the additivity and repeatability of therapies will affect the goal of LEV.
In terms of hand sanitiser—in Brazil I’ve also found hand sanitiser is sold out or very expensive. However, here it is common to use 70% ethanol for household cleaning, and it is possible to buy this in gel form as well, which is still well stocked and at normal prices. I expect this will work just as well for sanitisation. Would it be worth considering as an alternative if proper hand sanitiser is unavailable or for people on a budget (maybe it would leave your hands a bit drier)?
I don’t recall seeing this product while living in Australia or Sweden, so I’m not sure how widely available it is. Here is a link to the last pack I bought, although there are many brands available in Brazil.
Further work from the authors of the original article:
Claims and statistical inference in animal physical cognition research.
Overall, our analysis provides a cautiously optimistic analysis of reliability and bias in animal physical cognition research, however it is nevertheless likely that a non-negligible proportion of results will be difficult to replicate.
and practicing not touching your face.
How important is it to avoid touching your face if you are also washing your hands regularly?
As a practical point, I think this is somewhat hard to avoid for some people. I feel I touch my face more than I’d like, and even though this occurs in social situations where it may be mildly unacceptable, I have problems breaking the habit (I do have weak symptoms of body-focussed repetitive behaviour disorder and it’s probably related to this). I don’t think the somewhat abstract threat of reducing infection risk will be enough to stop me touching my face much, as I mostly do this without thinking about it, although that may change when the virus spreads to my region and I feel under more personal threat.
This made me recall the Pavlok, which is a wrist-band that uses aversion therapy (vibrations and electric shocks) to break bad habits like nail biting. Although I can’t find this described as a use case on their website, I suspect it could also be used to break a face-touching habit quickly. Alternatively, you can probably get most of the aversion from snapping a rubber band on your wrist whenever you notice you’re touching your face.
Thanks for the discussion on this Tom and Will.
I originally posted this article because, although it presents a very strong opinion on the matter and admittedly uses shock tactics by taking many values out of context (as pointed out by Romeo and Will), I thought the sentiment matched both the direction I personally felt science was moving in and several other sources I’d read. I hadn’t looked into any of the author’s other work, and although his publication record seems reasonable, he has pushed some fairly fringe views on nutrition, and knowing this does reduce the weight I give to the views in this article (thanks for digging into it Tom).
For a more balanced critique of recent scientific practice I’d recommend the book Real Science by John Ziman (I have a pdf, PM if you’d like a copy). It’s a long but fairly interesting read on the sociology of science from a naturalistic perspective, and claims that university research has moved from an ‘academic’ to a ‘post-academic’ phase, characterised as the transition from the rigorous pursuit of knowledge to a focus on applications, which represents a convergence between academic and industrial research traditions. Although this may lead to more applications diffusing out of academia in the short term, the ‘post-academic’ system is claimed to lose some important features of traditional research, like disinterestedness, organised skepticism, and universality, and tends to trade quality for quantity. The influence of societal interests (including corporate goals) would be expected to have much influence on the work done by ‘post-academic’ researchers.
Agreed with both Will and Tom that there are certainly still a lot of people doing good academic research, and how strongly you weight the balance will depend on which scientists you interact with. Personally, I ended up leaving academia without pursuing a faculty position (in part) because I felt the push to use excessive spin and hype in order to publish my work and attract funding was making it quite substanceless. Of course, this may have been specific to the field I was working in (invertebrate sensory neuroscience) and I’m glad to hear that you both have more positive outlooks.
Thanks for elaborating Will.
Agreed that the increase in funding for science will generally just increase the size of science, and the base assumption should be that the retraction rate will stay the same, which would lead to a roughly proportionate increase in the number of retractions with science funding. The 700% vs. 900% roughly agrees with that assumption (although it could still be that the reasons for retraction change over time).
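As a quick check of that assumption, using the two growth figures above (and assuming “700%” and “900%” mean percentage increases, i.e. 8x and 10x; the function is just my own illustration):

```python
def rate_change(funding_growth_pct, retraction_growth_pct):
    """Change in retractions per unit of funding, given percentage
    growth in each quantity (e.g. 700 -> an 8x increase)."""
    funding_factor = 1 + funding_growth_pct / 100
    retraction_factor = 1 + retraction_growth_pct / 100
    return retraction_factor / funding_factor

# 700% funding growth vs 900% retraction growth:
print(rate_change(700, 900))  # 1.25
```

So retractions per dollar rose by roughly 25%—close enough to proportional that the base assumption seems reasonable, though not exactly flat.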
The idea of increasing retractions being a beneficial sign of better epistemic standards is interesting. My observation is that papers are usually only retracted if scientific fraud or misconduct was committed (e.g. falsifying or manipulating research data)—questionable research practices (e.g. p-hacking, optional stopping or HARKing), failure to replicate, or even technical errors don’t usually lead to a retraction (Wikipedia also notes that plagiarism is a common cause of retractions). It is a pity there is no ground truth for scientific misconduct to reference the retraction rate against.
As an aside, this summary of the influence of retractions and failure to replicate on later citations may be of interest. Thankfully, retraction usually leads to a strong reduction in the number of citations the retracted paper receives.
I agree that it’s an extreme stance and probably overly-general (although the specificity to public health and biomedical research is noted in the article).
Still, my feeling is that this is closer to the truth than we’d want. For instance, from working in three research groups (robotics, neuroscience, basic biology), I’ve seen that the topic (e.g. to round out somebody’s profile) and participants (e.g. re-doing experiments somebody else did so they don’t have to be included as an author, instead of just using their results directly) of a paper are often selected mainly on perceived career benefits rather than scientific merit. This is particularly true when the research is driven by junior researchers rather than established professors, as the value of papers to the former is much more about whether they will help get grants and a faculty position than about their scientific merit. For example, it’s very common that a group of post-docs and PhDs will collaborate to produce a paper without a professor to ‘demonstrate’ their independence, but these collaborations often just end up describing an orphan finding or obscure method that will never really be followed up on, and the junior researchers’ time could arguably have produced more scientifically meaningful results if they had focused on their main projects. Of course, it’s hard to evaluate how such practices influence academic progress in the long run, but they seem inefficient in the short term and stem from a perverse incentive of careerism.
My impression is that questionable research practices probably vary a lot by research field, and the fields most susceptible to using poor practices are probably ones where the value of the findings won’t really be known for a long time, like basic biology research. My experience in neuroscience and biology is that much more ‘spin’, speculation, and storytelling goes into presenting biological findings than there was in robotics (where results are usually clearer steps along a path towards a goal). While a certain amount of storytelling is required to present a research finding convincingly, it has become a bit of a one-up game in biology where your work really has to be presented as a critical step towards an applied outcome (like curing a disease, or inspiring a new type of material) for anybody to take it seriously, even when it’s clearly blue-sky research that hasn’t yet found an application.
As for the author, it looks like he is no longer working in academia. From his publication record it looks like he was quite productive for a mid-career researcher, and although he may have an axe to grind (presumably he applied for many faculty positions but didn’t get any—a common story), being outside the Ivory Tower can provide a lot more perspective about its failings than what you get from inside it.