Vasco Grilo
Pigou's Dial
Thanks for the post!
Conditional on metastability, the mean credence that vacuum decay is inducible with arbitrarily advanced technology was 19%, with a slim majority finding its likelihood negligible, but a substantial minority asserting high likelihood.
For context, the expected lifetime of the universe based on the natural rate of vacuum decay is estimated to be 10^790 years.
Thanks for the post, Tyler!
There are a lot of ways to arrange 86 billion neurons. You could give them to one human, to 430 rats, or to 86 billion nematodes.
The above implies nematodes have 1 neuron, but they have around 300 neurons. So 86 billion neurons correspond to around 300 M nematodes.
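The corrected arithmetic can be checked in a couple of lines (the ~300 neurons per nematode is the figure cited above; C. elegans in particular has 302):

```python
human_neurons = 86e9    # neurons in one human brain
nematode_neurons = 300  # approximate neurons in one nematode

# How many nematodes' worth of neurons fit in one human brain.
print(human_neurons / nematode_neurons)  # ~2.9e8, i.e. around 300 M nematodes
```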
For classical utilitarians, "hedonium" is likely many orders of magnitude more valuable than human brains (or the equivalent instantiated in silico).
I estimated the welfare range per calorie consumption of bees is 4.88 k times that of humans, which suggests bees produce welfare 4.88 k times as efficiently if welfare is proportional to the welfare range.
Hi Tom,
It depends on the organisations which would receive the additional donations. If the person quitting their job donates 10 % of their gross annual salary to an organisation 10 times as cost-effective as their initial organisation, their donations doubled as a result of quitting, there was no impact from direct work in the new organisation, and they were not replaced in their original organisation, their annual impact after quitting would become 1.82 (= (0 + 0.1*10*2)/(0.1 + 0.1*10)) times as large as their initial annual impact.
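A short sketch reproduces the ratio, with the values taken from the example above (the 0.1 for initial direct work is read off the formula's denominator):

```python
donation_fraction = 0.1  # 10 % of gross salary donated
multiplier = 10          # recipient is 10x as cost-effective as the initial org
direct_work = 0.1        # initial direct-work impact, as in the denominator

before = direct_work + donation_fraction * multiplier
after = 0 + donation_fraction * multiplier * 2  # donations double, no direct work
print(round(after / before, 2))  # 1.82
```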
Cage-free egg production and real gross domestic product per capita
Thanks, Bob! Based on that, my understanding is that the welfare ranges refer to differences between the welfare per unit time of the best and worst moments that could be realistically experienced (that are "realistic biological possibilities").
What is the period of time to which "most intense" refers? Any period of time, or the typical lifespan of the species? If the former, the welfare ranges practically refer to the intensities of very short experiences (for example, the worst possible second is worse than a random second of the worst possible minute).
The Selfish Machine
Thanks for clarifying, Steven! I am happy to think about advanced AI agents as a new species too. However, in this case, I would model them as mind children of humanity evolved through intelligent design, not through Darwinian natural selection, which would lead to a very adversarial relationship with humans.
Thanks, David. I estimate the annual conflict deaths as a fraction of the global population decreased 0.121 OOM/century from 1400 to 2000 (R^2 of 8.45 %). In other words, I got a slight downwards trend despite lots of technological progress since 1400.
Even if historical data clearly pointed towards an increasing risk of conflict, the benefits could be worth it. Life expectancy at birth accounts for all sources of death, and it increases with real GDP per capita across countries.
The historical tail distribution of annual conflict deaths also suggests a very low chance of conflicts killing more than 1 % of the human population in 1 year.
Interesting points, Steven.
So what if it's 30 years away?
I would say the median AI expert in 2023 thought the median date of full automation was 2073, 48 years (= 2073 - 2025) away, with a 20 % chance before 2048, and a 20 % chance after 2103.
Or as Stuart Russell says, if there were a fleet of alien spacecraft, and we can see them in the telescopes, approaching closer each year, with an estimated arrival date of 2060, would you respond with the attitude of dismissal? Would you write "I am skeptical of alien risk" in your profile? I hope not! That would just be a crazy way to describe the situation viz. aliens!
Automation would increase economic output, and this has historically increased human welfare. I would say one needs strong evidence to overcome that prior. In contrast, it is hard to tell whether aliens would be friendly to humans, and there is no past evidence on which one could establish a strong pessimistic or optimistic prior.
I can imagine someone in 2000 making an argument: "Take some future date where we have AIs solving FrontierMath problems, getting superhuman scores on every professional-level test in every field, autonomously doing most SWE-bench problems, etc. Then travel back in time 10 years. Surely there would already be AI doing much much more basic things like solving Winograd schemas, passing 8th-grade science tests, etc., at least in the hands of enthusiastic experts who are eager to work with bleeding-edge technology." That would have sounded like a very reasonable prediction, at the time, right? But it would have been wrong!
I could also easily imagine the same person predicting large-scale unemployment, and a high chance of AI catastrophes once AI could do all the tasks you mentioned, but such risks have not materialised. I think the median person in the general population has historically underestimated the rate of future progress, but vastly overestimated future risk.
I feel like you overestimated Sinergia's role in achieving their listed cage-free commitments. Among the 5 very big or giant ones driving their cost-effectiveness, you attributed 20 % of the impact to Sinergia in 2 cases, and 50 % in 1 case where they did not run a campaign or pre-campaign, and did not send a campaign notice.
We do not recommend charities if there is a large enough gap between their expected marginal cost-effectiveness and that of our other charities
Your lower bound for the cost-effectiveness of Sinergia is 1.87 (= 217/116) times your upper bound for the cost-effectiveness of ÇHKD, which again points towards only Sinergia being recommended.
We think it's reasonable to support both a charity that we are more certain is highly cost-effective (such as ÇHKD) as well as one that we are more uncertain is extremely cost-effective (such as Sinergia).
Your CEAs suggest the cost-effectiveness of ÇHKD is slightly more uncertain than that of Sinergia, which is in tension with the above. Your upper bound for the cost-effectiveness of:
ÇHKD is 18.1 (= 116/6.4) times your lower bound.
Sinergia is 9.45 (= 2.05*10^3/217) times your lower bound.
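The bound ratios quoted in this thread can be checked together (all cost-effectiveness figures in the same units as the CEAs cited):

```python
# Cost-effectiveness bounds from the CEAs discussed above.
chkd_lower, chkd_upper = 6.4, 116             # CHKD
sinergia_lower, sinergia_upper = 217, 2.05e3  # Sinergia

print(round(chkd_upper / chkd_lower, 1))          # 18.1: CHKD's spread
print(round(sinergia_upper / sinergia_lower, 2))  # 9.45: Sinergia's spread
print(round(sinergia_lower / chkd_upper, 2))      # 1.87: Sinergia floor vs CHKD ceiling
```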
[Question] Donating more and better is the best strategy to maximise impact for the vast majority of people working in impact-focussed organisations?
Thanks for the additional clarifications, Vince!
For this reason, we tend to create backward-looking CEAs and then assess whether there are any reasons to expect diminishing returns in the next two years (the duration of an ACE recommendation).
Makes sense. I very much agree the CEAs of past work are valuable. However, I suspect it would be good to be more quantitative/explicit about how that is used to inform your views about the cost-effectiveness of the additional funds caused by your recommendations. For example, you could determine the marginal cost-effectiveness of each organisation by adding the contributions of their programs, determining each contribution by multiplying:
The fraction of additional funds (which would be caused by your recommendation) going to program i. You could ask the organisation about this.
The cost-effectiveness of additional funds going to the program as a fraction of its past cost-effectiveness. You currently consider this qualitatively.
The past cost-effectiveness of the program. You currently consider this quantitatively sometimes via backward-looking CEAs.
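The proposal above amounts to a weighted sum over programs. A minimal sketch, with hypothetical program numbers purely for illustration:

```python
# Each program: (fraction of additional funds going to it,
#               marginal cost-effectiveness as a fraction of past cost-effectiveness,
#               past cost-effectiveness from the backward-looking CEA).
programs = [
    (0.6, 0.8, 50.0),  # hypothetical program A
    (0.4, 1.0, 20.0),  # hypothetical program B
]

# Marginal cost-effectiveness of the organisation's additional funds.
marginal_ce = sum(frac * adj * past for frac, adj, past in programs)
print(round(marginal_ce, 2))  # 32.0
```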
We do not recommend charities if there is a large enough gap between their expected marginal cost-effectiveness and that of our other charities, and we do use the framing that you suggest when considering adding the next marginal charity.
Great!
However, since we are unable to always fully quantify the impact on animals of charities' work, this is partially based on qualitative arguments and judgments, so our decisions may not always appear consistent with the results of our CEAs.
Have you described such judgements somewhere?
Thanks, Arturo.
All arguments based on behavioral similarity only prove we all come from evolution
Shrimps have a welfare range of 0.430 (= 0.426*1.01) under Rethink Priorities's (RP's) quantitative model, which does not rely on behavioral proxies.
This model aggregates several quantifiably characterizable physiological measurements related to activity in the pain processing system. Many lacked data for certain species, and our numbers reflect the averages of those measures that had data for different taxa. In some cases, we used surrogates when specific species data was not available. The advantage of this approach is that all of the results lend themselves to model construction and plausibly have some connection to welfare. However, many of these features are found in the peripheral nervous system, and as such aren't necessarily related to conscious experiences which presumably take place in the central nervous system. This approach also could be thought to reflect the flaws of only focusing on those things that are easily measurable at the expense of looking at features which are likely to be more directly relevant.
The quantitative model relies on:
Nociceptive just noticeable differences (JNDs).
Maximum nociceptor response (spikes/s).
Nociceptor density (NF/mm or NF/mm^2).
Substance P concentration (ng/g).
Change in stress hormone (% from control).
Change in heart rate (% from control).
Change in respiration rate (% from control).
Brain mass to body mass ratio.
Encephalization quotient.
Neuron packing density.
To arrive at a super low welfare for shrimp, one has to be not only super confident that behavioral proxies do not matter for welfare, but also that a very specific structural criterion (e.g. the number of neurons, which has major flaws) is practically all that matters.
neuron count ratios (Shrimp=0.01% of human)
RP estimated shrimp have 1*10^-6 as many neurons as humans, not 0.01 % (10^-4).
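To make the two-orders-of-magnitude gap concrete, here are the shrimp neuron counts implied by each ratio (human neuron count of 86 billion, as in the thread above):

```python
human_neurons = 86e9

claimed_ratio = 1e-4  # the "0.01 %" figure from the quoted comment
rp_ratio = 1e-6       # RP's estimate cited above

# Implied shrimp neuron counts under each ratio.
print(human_neurons * claimed_ratio)  # ~8.6 million neurons
print(human_neurons * rp_ratio)       # ~86 thousand neurons
```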
Thanks, Alex! I very much agree with treating others as we would want to be treated by them (Golden Rule). On the other hand, I would want to increase the welfare of shrimp and other less powerful beings even if I was sure humans and their descendants would forever remain the most powerful beings. I just think suffering is bad, and happiness is good no matter where the beings experiencing them fall in the universal distribution of power.
Love it.