I am an experienced interdisciplinary researcher and have focused on using computational methods to derive insights into biological systems. My academic research took me from collecting insects in tropical rainforests to imaging them in synchrotrons. I have become progressively more involved with the Effective Altruism community over several years and I am now aiming to apply my expertise in areas that more directly benefit society. To that end, I have recently redirected my research towards exploring novel technical countermeasures against viral pandemics.
gavintaylor
Re 3/3.1: When discussing the marginal returns on a human life, a quantitative way of modelling human capability could be as the product of sigmoidal curves with positive and negative slopes to represent the scaling up of capability during development and scaling down of capability during natural aging. As long as aging doesn’t kick in before development is finished then there is a plateau phase during which a person can perform at maximum capability and should produce constant returns on extra years in this phase.
Treating human capability as a single curve might be too simplistic. One could further break this down into intellectual and physical capability and intrinsic and extrinsic factors:
-Physical capability is simplest as humans probably reach peak intrinsic physical capability around 20 (sharp increase) and start to decline after 40 (slow decline). I’m not sure there are extrinsic factors related to physical capability that will change as a function of a person’s life span.
-Intrinsic intellectual capability could probably continue to scale up for a long time with a slow increase (some luminaries may currently get close to peak intellectual capacity, but I suspect that most people alive at the moment don’t) and this does not necessarily decline much during aging unless somebody gets an age-related neurological disorder (which can cause a very sharp decline). While some might argue that people will keep increasing intellectual capability with age, I’d argue that there probably is an upper limit to intrinsic intellectual capability given the brain’s capacity to store and process information (although neurotechnology may extend this). However, extrinsic intellectual factors like professional network size, strength, and value generally do continue to increase over time and could be modelled as a curve with a slow increase. While social network size currently tends to decline in old age, this seems to be related to declining physical capability (reduced stamina limiting the ability to socialize and forcing retirement), and so improving physical health during old age may also prevent decline in some extrinsic intellectual areas.
Productivity could then be judged as weighted sums and/or products of intrinsic and extrinsic intellectual and physical capability. The weighting will probably depend on the state of the society an individual lives in and would change over time—subsistence farming would weight physical capability strongly, developed society initially favoured intrinsic intellectual capability, but increased digital connectivity is increasing the value of extrinsic intellectual factors.
The reason I focus on a model composed of weighted sums/products of sigmoidal curves with positive/negative slopes is that these can actually create fairly interesting results. The sum or the product of two sigmoidal curves with opposing slopes will be something like a bell curve (although it can be flat-topped and have asymmetric sides), which probably agrees quite well with how people would judge the productivity of a current human life-span. However, having three sigmoid curves with the result depending on the product of two of them can create a local maximum before a later plateau, which could be used to represent an early peak in productivity due to physical capacity that will later be exceeded by intellectual capability (see this figure for an example of such a model I used: https://www.nature.com/articles/srep02614/figures/5 ). Also, sigmoidal curves are quite good at describing many biological processes.
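To make this concrete, here is a small sketch of the sigmoid-product idea in Python (my own illustration—the midpoints, slopes, and weights are made-up parameters, not fitted values):

```python
import math

def sigmoid(x, midpoint, slope):
    """Logistic curve rising from 0 to 1 around `midpoint`; a negative slope gives a decline."""
    return 1.0 / (1.0 + math.exp(-slope * (x - midpoint)))

def capability(age,
               dev_mid=20, dev_slope=0.5,          # development: sharp increase to ~20
               decline_mid=70, decline_slope=0.15):  # aging: slow decline from midlife
    """Product of a rising and a falling sigmoid: a (possibly flat-topped) bell curve."""
    return sigmoid(age, dev_mid, dev_slope) * (1 - sigmoid(age, decline_mid, decline_slope))

def productivity(age, w_phys=0.4, w_int=0.6):
    """Weighted sum of physical and intellectual capability curves."""
    # Physical: peaks around 20, slow decline after 40.
    physical = capability(age, dev_mid=18, dev_slope=0.6,
                          decline_mid=40, decline_slope=0.1)
    # Intrinsic/extrinsic intellectual: slow rise, no decline absent neurological disorder.
    intellectual = sigmoid(age, 35, 0.08)
    return w_phys * physical + w_int * intellectual
```

Scanning `productivity` over a range of ages shows the shape described above: an early bump driven by physical capability that is later exceeded by the slowly rising intellectual curve.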
In summary, the point I’m getting at is there could be a good biological/psychological framework to discount life years based on both development and aging.
*Note that I don’t have much experience in population ethics and am implicitly equating productivity to value and this may not be a good ethical framework (although I assume it will probably be agreeable to economists!).
Re 3.2: People also often do riskier things earlier in their lives. You don’t see many 50 year old startup founders, maybe because they are more likely to need guaranteed income to support their kids and/or for retirement savings. But their greater knowledge and connections may give them a greater chance of success at high-risk/high-reward type endeavours, and so LEV may allow people to undertake such promising activities later in life when they are better prepared for them.
A thought about how LEV relates to the economics of supporting an aged population. If LEV makes older people healthy and productive this also benefits younger people, as less of their resources (either taken as government taxes or spent in person with relatives) are required to support the elderly. So I think this further supports the benefits of LEV for younger people (under the person-affecting view) and goes against the idea that it’s better to package life into shorter units.
Another good article. I certainly agree that curing age-related diseases will prevent a lot of end-of-life suffering.
To judge the impact of age-related diseases on life satisfaction, it would be good to compare life satisfaction between age-matched groups of the elderly (within a few countries across the GDP range) that are: in good health, have physical disabilities, or have mental disabilities. The reason I suggest this is that, although life satisfaction is positively correlated with life expectancy, most respondents from each country were probably relatively young (although I didn’t check the study methodology), and they may report an increase in life satisfaction from knowing that they will live longer, or from having their grandparents around. This would be valuable data, as getting a life satisfaction curve from 20 to 90 year olds that don’t have age-related disabilities could indicate how to extrapolate life satisfaction to life spans that are only possible through LEV (and let you know how much life satisfaction is gained by removing age-related disease).
This also presents an interesting issue with self-reported life satisfaction—people with dementia (or another neurological disorder) could report high life satisfaction while an immediate carer might perceive that they have low life satisfaction. Who are we to believe?
You mention that part of the longevity dividend could also be to allow people to make discoveries that require a large amount of experience to work on (also touched on in Part 2 with the intellectual luminaries). If longer-lived people also care more about longer term issues, this could be of particular benefit for EA-related work on mid- to far-future X-risks if vastly more experienced people are able to make substantially more research progress than people who usually stop working at 65. Although, this cuts both ways, as the extra experience might also allow progress on hard problems that create X-risks, like AGI.
Also, I commented on Part 2 of the series that reducing the economic burden of supporting an aged population would also be positive for younger people (before they have had their lives saved by LEV) under person-affecting population ethics.
Some recent work on how life history relates to trade-offs between traits and performance:
A lot of the initial life history work was only qualitative, but it is definitely moving in a more quantitative direction.
That’s true, many physical/mental aspects naturally decline with age, and summing up many small improvements (appearance, neuroplasticity) could add up to a substantial extra benefit from LEV.
Still, because aging tends to come with age-related diseases, age and health are covarying predictors of life satisfaction. Another good comparison would be the relative reduction in life satisfaction between healthy and disabled people across different age groups. I would go out on a limb and say that an elderly person is less bothered by being disabled than a younger person, but I may be wrong. Combined with a healthy life satisfaction curve across age, this could then be helpful in making the case for treating aging vs. treating age-related diseases. The first piece of information extrapolates to the (tentative) gain in life satisfaction just from living longer; the second predicts the life satisfaction gained from curing the age-related diseases (which could also be done without curing aging).
This would be useful in prioritising LEV research towards the hallmarks of aging that are most likely to result in the largest reduction in age-related diseases (if the hallmarks do not uniformly affect disease burden) rather than those that extend life the most. All the hallmarks should be addressed, but if likely gains in satisfaction from disease alleviation outweigh satisfaction from extended life (that still has a high probability of disease), the former should be our focus.
Very interesting! In terms of requiring a shared interaction space, Shannon Labs was recently trying to set up something similar to Bell Labs in a remote context. I don’t think that project even got started, but it would be interesting to know if there are any best practices that can be used to create interactions between remote team members. There are quite a few remote-only tech companies that have done well, so they might provide some inspiration for cases when having the whole group onsite isn’t feasible.
Oops, commented my own post.
Good point, it does seem best just to work on the most life-extending therapy when phrased that way. Then the trade-off between living longer and suffering from diseases less would probably just be considered by somebody looking to rank LEV relative to short-term causes.
Interesting post. I was wondering if you could clarify what you’d include in the terrestrial insect herbivore grouping a bit more precisely. It’s not defined in your post and might be a bit ambiguous to non-biologists, for instance:
-Are freshwater aquatic insects (or those with aquatic juvenile forms) included?
-Are species with winged adult forms included?
-Are nectarivores included?
-Are insects that switch between being carnivorous and herbivorous between juvenile and adult stages included (I’m not sure there are many examples of this, maybe some ants)?
Also, you mention parental care in the context of fecundity but not juvenile mortality. I assume that parental care would drastically reduce juvenile mortality, is that correct?
Great post, I really like this series of posts from RP and look forward to the rest. I have a few comments about this one:
-Should a distinction be made between operant and classical (associative) conditioning in the requirement for valence to facilitate learning? I agree that learning to associate positive or negative experiences with an environmental state (such as proboscis or sting extension reflexes in honeybees) requires a valence cue.
However, the role of valence is less clear during operant conditioning, which is often used to tune how a reflexive sensorimotor action is executed. For example, fixation paradigms have been used to study insect flight behaviour (an insect tries to position a visual object frontally) - if the coupling between their turning behaviour and the world is artificially reversed (such that by turning right the object also appears to move right) an insect can learn to reverse the usual direction of its motor output to regain control of the object’s visual position. This is definitely (sensorimotor) learning, but doesn’t require an extrinsic valence cue to achieve (although the insect has an internal preferred world state it is comparing its sensory experience to, I’m not sure that the state error is analogous to an internal valence cue).
-The correlation between life span and the potential for learning is not entirely clear, as I think that most relatively long-lived insects still rely on a lot of reflexive behavioural cues (perhaps tuned slightly with operant conditioning as above). Eusocial central place foragers are a clear exception in that they are well known to have excellent capability for navigation and associative learning.
But in the example of long distance insect migrations (e.g. Monarch butterflies or Bogong moths), most seem to simply follow genetically programmed instincts about where to go. Intuitively one might think that these insects will need to learn which flowers to forage on at different stages of their migration, although it could be that they have genetically programmed innate preferences for flowers that work well along the entire migration route. There is a bit of work on associative learning for Monarch butterflies—it seems they are capable of associative learning over a time scale of days (very slow compared to bees, which can manage single-trial learning), but they also have strong innate preferences for flower colours.
-How would you class an animal releasing a warning pheromone as a reaction to noxious stimuli? Lots of eusocial species do this to summon extra soldiers to attack a threat so I would probably call that case a defensive behaviour (in addition to being a physiological response).
Yet, in other cases warning pheromones are released to warn other conspecifics (for instance, aphid alarm pheromone causes dispersal) - for the insect being attacked this isn’t really defensive as it doesn’t benefit from the other aphids avoiding the threat (which are themselves moving away from the signal of a noxious stimulus); maybe it is almost analogous to a chemical vocalization?
Aside, even some plants issue chemicals that warn other plants or summon protective insects when they are attacked by herbivores.
Thanks! Ok terrestrial is pretty clear but herbivorous still throws up a lot of edge cases (although I do appreciate the focus of this article is on classic herbivores).
For instance, I thought of parasitic wasps/flies and social wasps with regards to the last point in my previous post. These often have nectivorous adults that prey on other insects as food for their larvae.
Some other questionable herbivores are:
-Opportunistic carrion foraging by tropical bees.
-Consumers of animal waste products (I saw you included dung beetles as herbivores). There are also moths that drink the tears of sleeping birds, and dust mites eat shed skin cells.
-Insects that steal the stored plant products of other animals. Many bee species are known to raid honey from other bee colonies, and while some tropical species are known to do this frequently, I’m not sure any do so exclusively. Bees on both sides are usually killed during the raids.
-Carnivorism that is primarily for aggressive reasons rather than to fulfill a dietary need. For instance, queens and workers in bee colonies will eat worker-laid eggs they find.
A few thoughts about the categories in this article:
-Deception: There are some species of cuckoo bees that will sneak into the hive of another (in this case a solitary) bee, eat the owner’s eggs and then lay their own. As is the case with cuckoo birds, the owner then happily raises them as her own.
More extreme cases of nest parasitism occur in bumblebees when a cuckoo bumblebee invades a newly established hive of a true bumblebee, kills its queen, and then uses the original queen’s workers to raise her own offspring (the cuckoo bumblebee can only lay fertilised eggs, not workers). The latter is more complex than a passive act of deception, although it’s also not clear to what extent the original workers are completely deceived or just being dominated by the invader.
-Self-control: I’m not sure that comparing self-control between feeding and reproductive contexts is really appropriate. Maybe a better choice would be fungus gardening or aphid herding by ants: in the former case the ants don’t eat the leaves they collect in order to grow fungus on them (although I am not sure the ants could actually digest the raw leaves), in the latter case they don’t eat the aphids so they can milk them (this needs a video).
The self-control of bees is kind of imposed by most workers being sterile and the queen dominating them. This is also not universally true, and the weird relatedness between bee colony members and the occasional presence of workers with developed ovaries mean that it is advantageous for workers to lay male eggs if they have the opportunity (unfertilized honeybee eggs produce male clones of their mother; so a bee is most related to her sons, potentially more related to her sisters (if they have the same father) and their sons than to her mother, and least related to her brothers—I’m not sure this is true for all social bee species). In bumblebees this can result in worker revolts where the workers in an established colony kill their queen and all start laying male eggs.
-Paying a cost to receive a reward: Aphid herding ants defend their aphids from predators/competitors and it seems that they make a cost-benefit type decision about if they will defend them.
-Tool use: I think that prolonged nest construction kind of fits in here. External resources need to be collected over time (different bees use combinations of mud, resin, cotton, flower petals, small rocks, and other items to build their nests) in specific sequences, the cost is lost time foraging for food, and the benefit of the nest might not be realized until it is finished (or gained progressively during construction, it’s not useful straight away like the hermit-crab’s shell).
No worries Jason, happy to keep posting the examples that come to mind (finally my knowledge of obscure insect behaviours is useful in EA!). This is a recent review of bumblebee cuckoos that could be useful. I also found another study indicating bumblebee cuckoos actively change their odor profiles to maintain control over the hive’s workers.
I agree, bumblebees look amazingly cute when rolling balls around! The string pulling experiment done by the same lab also has a nice video.
Another comment about uncertainty monitoring: central place foragers sometimes spend extra time memorizing the visual landmarks around their nest, and there is a recent paper on ants describing in some detail how this correlates with uncertainty. As an insect moves further from its nest the accuracy of its knowledge of the nest’s position decreases (errors accumulate in its path integration), and there is evidence that the magnitude of the accumulated error influences which search strategy an ant will use if it gets lost.
I also have a feeling that insects will start to ignore a sensorimotor cue that starts to provide unreliable information. For instance, airflow and visual motion are usually correlated with movement direction and used to control parameters like flight speed. If wind is artificially manipulated such that it is no longer correlated with visual motion or flight speed (it should be random, not negatively correlated), then I think the insect would stop using it as a cue to control flight speed. I can’t find a reference for this quickly, but I can look further if it’s of interest. I recall something similar also occurs in the case where two cues are initially paired with a reward during associative conditioning but only one turns out to be consistently rewarded (the distractor is called a confound) - after a while a bee can learn to ignore the confound and increase its accuracy. Again I don’t have a reference at hand for this but could look later.
To extend the tool use point a bit, I recall that primates have been found to have extra neurons in sensorimotor brain regions that are most active when the animal is using a tool, and essentially provide extra capacity for the brain to extend its sensory and motor mappings/homunculus to include external artifacts (apparently also quite useful when learning to control things with BCIs). I’m not sure if this type of latent neural capacity has been found in rodents, and I strongly suspect it wouldn’t be present in insects (they tend to be quite frugal with their neurons!), although tool-using birds like crows may have been studied as a comparison. Having neural circuitry for tool use should be a sufficient (but perhaps not necessary) criterion for flexible tool use, and it’s quite an objective (if difficult) test.
I read this in Beyond Boundaries by Miguel Nicolelis (good book although a bit long winded and fanciful) which should have some academic references.
Actually, Nicolelis’s BCI work also has some relevance to self-recognition. You can put electrodes into a monkey’s motor cortex, measure the neural activation associated with, say, arm movement and then decode those signals to control the motion of a robot arm (that the monkey is not aware of) pretty well. However, if you show the monkey the arm and it is rewarded for moving the robotic arm, it often stops moving its own arm while continuing to use the disembodied arm (with pretty much the same motor cortex activity). I’d never thought of this in the context of awareness before, but it suggests this is somewhat analogous to a mirror test and overcomes some of the limitations you mentioned. A fair bit of work has been done around insect neural interfaces (probably more invasive and extreme than anything an ethics board would let you do to a mammal, to be honest) and you might find that similar tests have been performed but not labeled as self-recognition tests.
Great post Jason! I have a good background in insect conditioning and navigation from my PhD so I hope that I can provide a useful contribution here.
I would have previously placed a higher weighting on classical conditioning as an indicator of valenced experience, but I wasn’t aware of the spinal cord conditioning studies on rodents. The spinal cord is classed as part of the central nervous system, so we shouldn’t really be surprised that it has some capacity for learning. Brains evolved from simpler nervous structures which would also have benefited from some learning capability (so I’d expect some learning capacity in jellyfish nerve nets, although I’m not sure they’ve been tested), so it makes sense that peripheral nervous circuits have maintained some capacity for learning, and this is probably evolutionarily advantageous as it doesn’t put extra cognitive load on the brain.
A headless insect might even have quite a high relative learning capacity compared to a headless rodent (relative in the sense of what can be learnt by the body compared to the intact animal) - the ventral nerve cord (VNC) is quite complex, large relative to the brain (I don’t know the neuron ratio between VNC/spinal cord and brain for either vertebrates or insects, could be interesting to find this out), and contains the central pattern generator that coordinates locomotion. I’ve dissected quite a few insect heads and seen a lot of the bodies get up and walk away while headless. Diptera (flies) can fly while headless as the halteres provide gyroscopic feedback that stabilizes their attitude—once one of my headless hoverflies surprised my colleague when it flew into her hair while she was dissecting moth brains on the other side of the laboratory (true story). So it might be worth checking for studies in insect locomotory conditioning looking at the role of the VNC to see what is possible.
Aside, it’s fairly well known that if you do a bad job of cutting off a chicken’s head and leave its brain stem intact then it can live quite a long life if it’s fed carefully. Would headless chickens fed through straws in a matrix-like factory farm suffer? The fact that this feels repulsive while also seeming ethically preferable to factory farming intact chickens means something is wrong with this line of reasoning, right?
Anyway, back to conditioning. Allen et al. 2009 states:
spinal neurons belonging to the nociceptive system are sensitive to both Pavlovian and instrumental relations, and they exhibit a number of phenomena that when studied in normal, intact organisms, including human beings, are frequently described in cognitive or attentional terms. These phenomena include a distractor effect, latent inhibition and overshadowing, and learned helplessness effects.
...
We have indicated ways we think spinal mechanisms are much more restricted in their capacities than brain mechanisms.
I didn’t read the paper in sufficient detail to determine which conditioning phenomena were not present in spinal cords but were possible with intact brains, but I would suggest that those would be better indicators of complex learning that implies valenced experience (likewise, pick the conditioning phenomena that don’t occur in sleeping people). For instance, a distinction can be made between elemental learning (where a stimulus is always reinforced, e.g. A+ B-) and non-elemental learning (where stimuli are not always reinforced, e.g. A+, B+, AB-); the latter is usually taken to imply higher cognitive demands and I would assume that non-elemental learning paradigms cannot be learnt without an intact brain. There are still more complicated associative conditioning tasks like transfer and rule learning that I think would also provide quite a strong indication of complex thought. Honeybees are indeed able to learn all of these in visual and olfactory conditioning tasks (Martin Giurfa has a great review on this; the 2nd section also discusses elemental vs. non-elemental learning and I took the examples from there. Also, see Randolf Menzel (more olfactory) and Mandyam Srinivasan (more visual) for other honeybee learning and memory reviews). Most learning paradigms from honeybees have probably also been tested on Drosophila, but I’m less familiar with that literature.
Likewise, multimodal conditioning that is outside of an organism’s usual input-output relationships suggests some cognitive flexibility. For instance, I think the conditioning studies with spinal cords worked on nociceptive reflex circuits that were already present, but I wouldn’t expect a spinal cord to learn to associate a smell with a motor action (aside from the fact that the neurons from the nose to the spinal cord were cut—imagine you kept all the olfactory neural connections and removed the rest of the rodent brain). However, intact organisms are able to learn to associate, say, mechanosensory or visual cues (as a conditioned stimulus) with a food reward (the unconditioned stimulus that induces proboscis extension or salivation), despite the fact that the CS isn’t closely linked to a gustatory reflex (whereas smell/taste interacts closely with gustatory circuits).
Ok, I went into a bit more detail on this than planned, but I will come back to operant conditioning and navigation!
Edited a bit for clarity and grammar.
Ok, I’ll discuss operant conditioning a bit here. I may have discounted this too easily in my comment on part 1 (which was related to sensorimotor control) - I don’t think all aspects of operant conditioning necessarily require valenced experience, but it probably does at least require a predictive world model (or efference copy) which, in itself, seems to be quite cognitively sophisticated.
I’ve thought a fair bit about operant conditioning in the context of adaptive sensorimotor control that ‘fine-tunes’ reflexive behaviours (see chapters 3 and 4 of my thesis). I mentioned fixation in my other comment, which is where an insect (say a bee) centers a visual object frontally—the bee has an intrinsically desired world state (object in front) and acts to realize that state, but I do not believe that positioning the object frontally really counts as positive valence for her. However, the bee is able to learn to change how she responds to discrepancies in the desired world state (say the polarity coupling her yaw torque to the world is inverted; she will learn to turn right to make the object turn right, instead of the normal situation of turning left to make an object on her left move rightwards). One hypothesis is that the bee can make this adaptation because she not only has a desired world state and a motor control program she would normally use to achieve that state, but also makes a prediction of how her actions will affect the world. If the bee observes that the results of her actions no longer match her predictions, then before she reverses the polarity of her yaw control she may first update her world model to reflect that she should now expect the world to turn in the opposite direction, and the new predictions can then be used to update her motor control. Predictive models are an old idea from psychology that were explored in Drosophila using behavioral experiments before a neural circuit implementing an efference copy was identified in them. The whole world model thing sounds rather abstract but has some real world examples, such as a growing animal adapting its gait to longer legs, or an insect adapting its flight muscle output to compensate for wing wear, and it also seems to have a relatively simple neural implementation in Drosophila that doesn’t really code for much information about the world.
Of course the appearance of adaptive sensory motor control may not necessarily require a predictive world model, and it is possible that a robust motor control scheme could pre-code responses to enough conditions to appear adaptive, but given that insects have small brains I’d suggest a basic adaptive process is involved. See Section 6.2.4 in my thesis for a more in depth discussion on adaptive control and extra literature references.
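As a toy illustration of this hypothesis (my own sketch, not a model of any identified neural circuit), here is an agent that maintains an internal estimate of its motor-to-world coupling and recovers fixation even when the coupling is experimentally inverted, simply by updating the estimate whenever prediction and observation disagree:

```python
def simulate(world_gain, steps=50, lr=0.6):
    """Toy agent trying to center an object (drive `pos` to 0) while learning
    the gain coupling its motor commands to the world.
    world_gain = +1: normal coupling; world_gain = -1: experimentally inverted.
    """
    model_gain = 1.0   # agent's internal model of the motor-to-world coupling
    pos = 30.0         # object's offset from frontal, in degrees
    for _ in range(steps):
        command = -0.5 * pos / model_gain   # turn intended to halve the offset
        predicted = model_gain * command    # what the agent expects to happen
        actual = world_gain * command       # what the world actually does
        pos += actual
        if abs(command) > 1e-6:
            # prediction error pulls the internal model toward the true coupling
            model_gain += lr * (actual / command - model_gain)
    return pos, model_gain
```

With `world_gain = +1` the internal model is already correct and the offset simply decays; with `world_gain = -1` the agent initially makes things worse, but a few prediction errors flip the sign of its internal model and fixation is recovered.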
I agree that the classic case of operant conditioning (like learning to do something for a reward) does imply valenced experience (I don’t think that showing a conditioned reflex like salivating necessarily reduces the strength of this evidence). However, I don’t entirely agree with how you are phrasing learning new or unfamiliar actions—it would help to be more specific. In most cases what is being learnt is the use of known actions in new contexts or novel combinations of known actions (in known or novel contexts) - I think that learning a genuinely new action is quite rare. Let me elaborate—an adult rat (post development) probably knows how to push things in general (and indeed make most other motor actions its body is capable of) but it needs to form the association that applying the known pushing action to a lever gives it a reward. Likewise, the soccer-playing bumblebees knew how to walk and probably more or less knew how to push things, but they had to learn to do this in sequence to get the ball to the reward point. Teach a rat (or bumblebee) to do a handstand and I will agree you’ve taught it a new action. Why make this distinction? The cognitive flexibility associated with learning to use known actions in new contexts seems different from learning new motor skills, and what should be assessed is the novelty of the context in the former case and the novelty of the action in the latter case. Learning new combinations of known movements seems somewhere in between contextual and motor learning. Most organisms do motor learning during development when they have an intrinsic motivation to learn how to use their bodies (and the learning probably involves changing spinal cord type circuits), so I’d suggest that contextual learning provides stronger evidence for cognitive flexibility (I don’t know if any literature supports this; the distinction just became apparent to me when reading this post).
As an aside, I don’t think you’ve mentioned novelty seeking behaviour yet? I was peripherally involved in a study that shows honeybees choose to look at a novel stimulus over a recently experienced stimulus in the absence of any specific reward. I’m not really sure how this fits into the framework of this study, but learning could be a good place to consider it.
I hope to get to navigation in the next of my mini-series of comments..
Edited a bit for clarity and grammar (without breaking the formatting as I did in my other comment).
My comments are certainly biased towards bees because of my background. I hope there are relevant examples available for other invertebrate groups, although it may be that a lot of these concepts have mostly been tested in Drosophila or eusocial insects.
Are studies on the capabilities of people with impaired consciousness (vegetative or minimally conscious states, maybe dementia or delirium) considered by studies looking at the limits of human consciousness? I assume doing something like learning and memory research with such patients isn’t a high priority for their carers, but I also assume that, for instance, tasks a person in a vegetative state can do are unlikely to require consciousness.
Interesting post.
One extra point I thought of: the analysis calculates the value of LEV based on it eventually spreading to the entire population. But if LEV tech is very expensive and/or restricted/proprietary, then it may only ever be adopted by a small elite. This consideration should reduce the value of research that achieves LEV but has limited adoption. I don’t know enough about population ethics to know how this would be considered, could creating more inequality be considered negative? Or just a small positive for the population overall.
In terms of probability of reaching LEV, I think that it is also worthwhile considering that even if all of the hallmarks of ageing are addressed rapidly and people start living longer, people might not make it out to 1000 year average life spans. My intuitive feel as a biologist is that these hallmarks may just be the first signs of ageing, and that treating them might then allow other hallmarks to arise that tend to lead to deaths at say, 200 or 300 years. We obviously don’t know what these are yet and it might be that they can be more easily addressed than the original hallmarks—or maybe not.