I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).
Vasco Grilo🔸
Hi Vince. Do you have any plans to estimate the marginal cost-effectiveness of charities? I agree with Giving What We Can (GWWC) that this would be a major improvement to ACE’s evaluations.
We believe that ACE’s Charity Evaluation Program’s current approach does not sufficiently emphasise marginal cost-effectiveness as the most decision-relevant factor in evaluation decisions. For example, ACE primarily evaluates on the basis of organisations’ existing programs, rather than explicitly focusing their evaluation on those programs that are most likely to be funded on the margin. This is in direct contrast to ACE’s Movement Grants, which explicitly evaluates programs that ACE would be influencing funding to on the margin.
The focus would be on capacity for welfare.
Would the goal be getting welfare ranges conditional on sentience for each theory of consciousness? If so, I wonder whether funders of consciousness research would be interested in funding that work.
Here is a related funny essay from Spencer Greenberg, “The Mental Models You Don’t Know About”.
Hi. You may be interested in the PhD thesis Consciousness in Functionally and Spatially Distributed Systems by Duygu Aktaş. It was published this month.
Suppose this was all that existed of you, and your real brain never had existed. Would that mean that you never existed as a conscious being, despite all your thoughts and utterances still being a part of the world?
I think whether my thoughts and utterances would come together with consciousness would strictly depend on how they are produced. I agree they could be reproduced at the computational (input-to-output) level with an arbitrarily high precision with an infinitely powerful digital computer (see Marr’s levels of analysis). However, I do not see that as sufficient (or necessary) for consciousness. An infinitely large lookup table can also reproduce human behaviour at the computational level with an arbitrarily high precision, and I consider it to have the least consciousness possible (practically 0). I believe consciousness depends on algorithms and implementation, not on the input-to-output mapping. This matters to me because simple logical operations written out by hand with pen and paper can only reproduce the behaviour of humans at the input-to-output level, not at the algorithmic or implementation level. In contrast, they can reproduce the behaviour of digital computers at the computational and algorithmic level. So my belief that they cannot be conscious makes me very sceptical about digital consciousness without causing a conflict with my belief in human consciousness.
Hi Toby. Thanks for the comment.
If the human brain operates according to the known laws of physics, then in principle your brain could be simulated with a pen and paper (at least given unlimited time, ink, and paper), and it would behave identically to the real thing (it would talk and think like you and have all your opinions).
One would need infinite resources to fully reproduce the behaviour of the brain assuming the universe is continuous. Even if the universe is discrete, one would need an unfeasibly large amount of resources. The human brain has a volume of around 0.00120 m^3 (= (1.13 + 1.26)*10^-3/2). The Planck volume is 4.22*10^-105 m^3. So the volume of a human brain corresponds to 2.84*10^101 (= 0.00120/(4.22*10^-105)) times the Planck volume. Even assuming all the information in a volume equal to the Planck volume can be represented by a single bit, one would need 2.84*10^101 bits to fully represent the state of a human brain. This is more bits than the 10^80 or so atoms in the universe, and one needs more than 1 atom per bit in a digital computer.
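The arithmetic above can be checked with a quick script. All figures are the order-of-magnitude estimates from the comment itself, not precise physical data:

```python
# Rough check: how many bits would it take to represent a human brain
# at Planck-volume resolution, at one bit per Planck volume?

brain_volume = (1.13e-3 + 1.26e-3) / 2   # m^3, midpoint of the quoted range
planck_volume = 4.22e-105                # m^3

bits_needed = brain_volume / planck_volume   # one bit per Planck volume
atoms_in_universe = 1e80                     # common order-of-magnitude estimate

print(f"brain volume: {brain_volume:.5f} m^3")   # ~0.00120 m^3
print(f"bits needed: {bits_needed:.2e}")         # ~2.83e+101
print(f"bits per atom in the universe: {bits_needed / atoms_in_universe:.2e}")
```

The last ratio is around 10^21 bits per atom, which is the sense in which the representation is unfeasible even on a discrete-universe assumption.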
I don’t get why the “moment of experience taking a thousand years” thing is supposed to be so weird? If we slowed down all the processes in your brain then moments of experience would take longer in physical time. That’s not an argument against your consciousness being real. And this isn’t a hypothetical. We can literally do that by sending you on a spaceship close to the speed of light, and that’s exactly what would happen!
This is not what would happen under special relativity. If I was sent on a spaceship close to the speed of light, I would continue ageing normally in my frame of reference. If I travelled for N years in the frame of reference of the spaceship, I would become N years older biologically speaking (neglecting the effects of microgravity). If I then returned to Earth, more than N years would have passed there. So I would have effectively time-travelled into the future on Earth.
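As a minimal sketch of the dilation factor involved, assuming an illustrative speed of 99 % of the speed of light:

```python
# Special-relativistic time dilation: a traveller at speed v ages by the
# proper time tau, while t = gamma * tau of coordinate time passes on Earth.
import math

def earth_years(traveller_years: float, v_fraction_of_c: float) -> float:
    """Years elapsed on Earth while the traveller ages traveller_years."""
    gamma = 1.0 / math.sqrt(1.0 - v_fraction_of_c**2)
    return gamma * traveller_years

# At 99 % of the speed of light, ageing N = 10 years on board corresponds
# to roughly 70.9 years on Earth (gamma is about 7.09).
print(earth_years(10, 0.99))
```

The traveller's own processes run normally in the spaceship's frame, which is the point made above: the slowdown is only relative to Earth's frame.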
CF (computational functionalism) predicts that some sets of AND, OR, and NOT operations are conscious even if run at an arbitrarily low speed in their local frame of reference. So all my brain processes would have to slow down in the frame of reference of the brain for the analogy to hold. I guess one can get the closest to this slowdown with cryopreserved brains, and I do not think these are conscious.
The pen and paper argument against computational functionalism
This kind of sounds like you’re thinking of pain as a sort of physical magnitude, like weight or charge.
I liked this chat I had with Gemini 3.1 Pro about the relationship between the intensity of subjective experiences, and the size of neuronal avalanches. You and @Bob Fischer may also find it interesting.
Hi Clara.
I don’t care if TechnoBro 3000 celebrates his birthday in the asteroid belt with his 10^30 gold-plated robot friends, but I do care if he can buy the elections of Democratistan.
You care about whether he can buy the elections intrinsically or instrumentally (in particular, because of its impact on the welfare of the people in Democratistan)? The latter is still very much compatible with rejecting egalitarianism and prioritarianism. Buying elections may decrease total welfare.
Hi Itsi. Thanks for the post.
I think it is difficult to assess AI x animals as a whole given its very broad scope. Likewise, it would have been difficult to evaluate “AI x steam engines”, “AI x electricity”, or “AI x internet”.
I believe there are large differences in the cost-effectiveness of projects covered by AI x animals. So I find it more useful to discuss particular interventions.
Thanks for the quick thoughts, Guillaume.
I would not base my estimates on their number of neurons (although it might be a good enough proxy for larger animals).
The graph below illustrates that “individual number of neurons”^0.188 explains pretty well the estimates for the sentience-adjusted welfare ranges presented in Bob’s book. I also do not think the specific proxy matters that much. In allometry, “the study of the relationship of body size to shape, anatomy, physiology and behaviour”, “the relationship between the two measured quantities is often expressed as a power law equation (allometric equation)”. If the sentience-adjusted welfare range is proportional to “proxy 1”^“exponent 1”, and “proxy 1” is proportional to “proxy 2”^“exponent 2”, the sentience-adjusted welfare range is proportional to “proxy 2”^(“exponent 1”*“exponent 2”). So the results for “proxy 2” and exponent “exponent 1”*“exponent 2” are the same as those for “proxy 1” and “exponent 1”.
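The power-law composition can be verified numerically. The exponents below are illustrative (0.188 from the fit mentioned above, and an arbitrary 1.5 for the proxy-to-proxy relationship), not empirical estimates:

```python
# If welfare_range is proportional to proxy1**e1, and proxy1 is proportional
# to proxy2**e2, then welfare_range is proportional to proxy2**(e1 * e2).
e1, e2 = 0.188, 1.5   # illustrative exponents

def proxy1(proxy2: float) -> float:
    return proxy2 ** e2          # proxy1 as a power law of proxy2

def welfare_range(proxy2: float) -> float:
    return proxy1(proxy2) ** e1  # proportional to proxy2 ** (e1 * e2)

# The two parameterisations agree for any value of proxy2.
for p2 in (10.0, 1e6, 1e9):
    assert abs(welfare_range(p2) - p2 ** (e1 * e2)) < 1e-9 * welfare_range(p2)
```

This is why choosing a different allometric proxy only rescales the exponent rather than changing the shape of the conclusion.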
whatever our current “place-holder” estimates are for sentience or welfare in shrimps, more research will most likely answer both
I very much agree. On the other hand, I think research on sentience criteria mostly decreases the uncertainty about anatomy and behaviour, and I believe there is way more uncertainty in how to go from those to quantitative comparisons of welfare across species.
RE welfare comparisons: I could imagine a difference between us being relative confidence that empirical research will improve our understanding?
I am not confident (empirical or philosophical) research on welfare comparisons across species will significantly decrease their uncertainty. However, the alternative for me is never identifying interventions that robustly increase welfare in expectation.
Would you expect the most useful work for reducing your own uncertainty to be philosophical or empirical?
I do not have a strong view either way. I think it is much easier to decrease i) the empirical uncertainty about anatomy and behaviour than ii) the philosophical uncertainty about how to go from those to quantitative comparisons of welfare across species. On the other hand, I believe ii) is much larger than i).
RE nematodes: I agree that this isn’t clear cut in some sense, but I feel fairly confident that they should be bracketed out unless we significantly advance in our understanding of animal consciousness
Would medium confidence that nematodes engage in motivational trade-offs be enough for you to consider effects on them?
This report was entirely and carefully crafted by Guillaume Reho, with recurrent reviews and discussions with Aaron Boddy [co-founder of and chief strategy officer at the Shrimp Welfare Project (SWP)], whom I deeply thank for his time and help on this project.
I am glad @Aaron Boddy🔸 is interested in this. I think funders have been assuming that all species of shrimp have a similar sentience-adjusted welfare range. So bringing attention to the weaker evidence for the sentience of Penaeidae shrimps may decrease funding for helping them, and they are the ones SWP has been targeting.
Thanks for this great research, Guillaume.
in his Welfare Range Estimates, (Fischer, 2023) argues that all invertebrates probably have welfare ranges “within two orders of magnitude of the vertebrates nonhuman animals [presented in his report]”
Do you have any thoughts on this? I read the whole book about welfare comparisons across species from @Bob Fischer, and I really liked it. However, I think the above vastly underestimates uncertainty. Here are my estimates for sentience-adjusted welfare ranges proportional to “individual number of neurons”^“exponent”, for “exponent” from 0 to 2, which covers the best guesses I consider reasonable.
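As a rough illustration of how wide that range is, here is a sketch using order-of-magnitude neuron counts (around 10^5 for a shrimp and 8.6*10^10 for a human; both figures are illustrative assumptions, not from the original comment):

```python
# How the shrimp/human sentience-adjusted welfare-range ratio varies with the
# exponent on "individual number of neurons", over the 0-to-2 range above.
shrimp_neurons, human_neurons = 1e5, 8.6e10   # illustrative neuron counts

for exponent in (0.0, 0.5, 1.0, 1.5, 2.0):
    ratio = (shrimp_neurons / human_neurons) ** exponent
    print(f"exponent {exponent}: shrimp/human welfare-range ratio = {ratio:.2e}")
```

The ratio spans from 1 (exponent 0) down to roughly 10^-12 (exponent 2), i.e. about 12 orders of magnitude, which is the sense in which a two-orders-of-magnitude band understates the uncertainty.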
Here are a few other grantmakers that might be interested in funding such research or welfare interventions: Animal Charity Evaluators, Animal Welfare Fund from EA Funds, Animal Welfare Fund from Founders Pledge, and Farm Animal Welfare fund from Coefficient Giving. Also feel free to comment or tag other grantmakers or funds that would be interested in shrimp sentience research.
There is also the Strategic Animal Funding Circle (SAFC), and maybe Falcon Fund (“We also expect to place some bets on non-AI opportunities that are unusually strong”).
Hi Abraham. Thanks for the great post.
This science alone won’t solve every issue in wild animal welfare. Even with the scientific knowledge necessary to make progress, there might be tricky philosophical questions that can’t be answered empirically (When is a life worth living? How do we make decisions about tradeoffs between different species of animals?).
Have you considered reliable welfare comparisons across species as another necessary pillar for robustly increasing welfare? I do not think perfect welfare measures, remote monitoring, and ecological modelling would be enough. I am very uncertain about how to compare welfare across species. Here are my estimates for sentience-adjusted welfare ranges proportional to “individual number of neurons”^“exponent”, for “exponent” from 0 to 2, which covers the best guesses I consider reasonable.
Putting aside nematodes (which I believe we should do), to a first approximation
Are you confident that nematodes can be neglected? I am not. I can see the welfare of nematodes being much smaller or larger than that of arthropods. Research on the sentience of nematodes is one of the “Four Investigation Priorities” mentioned in section 13.4 of chapter 13 of the book The Edge of Sentience by Jonathan Birch.
So, our best models are basically at the level of: “we can sort of say what will happen to 9 varieties of quasi-organisms at ~100-square-kilometer resolution,” an area that contains approximately 10 quadrillion insects.
Do you mean 10 trillion arthropods? 100 km^2 are 10^8 m^2 (= 100*(10^3)^2). Tropical and subtropical forests have 10^5 soil arthropods per m^2 based on Table S4 of Rosenberg et al. (2023). So I think 100 km^2 of tropical and subtropical forests have around 10^13 soil arthropods (= 10^8*10^5), 10 trillion.
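The calculation can be reproduced directly:

```python
# Reproducing the arithmetic in the comment above.
area_m2 = 100 * (10**3) ** 2   # 100 km^2 in m^2, i.e. 10^8
arthropods_per_m2 = 10**5      # soil arthropods per m^2 in tropical and
                               # subtropical forests (the comment's figure
                               # from Rosenberg et al. 2023)
total = area_m2 * arthropods_per_m2
print(total)   # 10^13, i.e. 10 trillion rather than 10 quadrillion
```

A quadrillion (10^15) would require roughly 100 times the stated density, hence the suggested correction.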
For context, a community of just 500 species has 250,000 possible pairwise interactions.
Nitpick. 125 k (= 500*499/2) possible pairwise interactions, because you are only counting interactions between different species, and the interaction between species A and B is the same as that between B and A?
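A quick check of the count:

```python
# Unordered pairs of distinct species among n = 500 is C(n, 2) = n*(n-1)/2.
from math import comb

n = 500
pairs = comb(n, 2)
print(pairs)   # 124750, i.e. ~125 k rather than 250,000
```

The 250,000 figure corresponds to ordered pairs (n^2, counting self-interactions and both directions), which is double-counting under the symmetric reading.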
And I think we should make a giant risky bet on cage-free eggs.
Despite potentially dominant effects on ants and termites?
Ok. I will remind you about this in 1.5 months (June 22).
I think the probability of human extinction this century is much lower than 1 %. I guess the probability of you not paying me back for reasons that do not have to do with transformative AI (TAI), which I speculate would be around 25 % for a bet resolving at the end of 2034, is much higher than the probability of human extinction, or additional income no longer being relevant.