I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).
Vasco Grilo 🔸
Thanks for sharing, Ben. I like the concept. Do you have a target total (time and financial) cost? I wonder what the ideal ratio between the total amount granted and the total cost is for grants of "$670 to $3300".
Nice post, Alex.
Sometimes when there's a lot of self-doubt it's not really feasible for me to carefully dismantle all of my inaccurate thoughts. This is where I find cognitive defusion helpful: just separating myself from my thoughts, so rather than saying "I don't know enough" I say "I'm having the thought that I don't know enough." I don't have to believe or argue with the thought, I can just acknowledge it and return to what I'm doing.
Clearer Thinking has launched a program to learn cognitive defusion.
Thanks for the nice point, Thomas. Generalising, if the impact is 0 for a productivity of P_0, and P_av is the productivity of random employees, an employee N times as productive as random employees would be (N*P_av - P_0)/(P_av - P_0) times as impactful as random employees. Assuming the cost of employing someone is proportional to their productivity, their cost-effectiveness as a fraction of that of random employees would be (P_av - P_0/N)/(P_av - P_0). So the cost-effectiveness of an infinitely productive employee as a fraction of that of random employees would be P_av/(P_av - P_0) = 1/(1 - P_0/P_av). In this model, super productive employees becoming more productive would not increase their cost-effectiveness. It would just make them more impactful. For your parameters, employing an infinitely productive employee would be 3 (= 1/(1 - 100/150)) times as cost-effective as employing random employees.
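Here is a minimal Python sketch of the model above (the function names are mine; P_0 = 100 and P_av = 150 are meant to be your parameters):

```python
# Minimal sketch of the model above. Impact is proportional to
# productivity minus P_0, and cost is proportional to productivity.

def relative_impact(n, p_av, p_0):
    """Impact of an employee N times as productive as random
    employees, as a fraction of that of random employees."""
    return (n*p_av - p_0)/(p_av - p_0)

def relative_cost_effectiveness(n, p_av, p_0):
    """Cost-effectiveness as a fraction of that of random employees,
    assuming cost is proportional to productivity."""
    return (p_av - p_0/n)/(p_av - p_0)

p_0, p_av = 100, 150
for n in (1, 2, 10, 10**9):
    print(n, relative_impact(n, p_av, p_0),
          relative_cost_effectiveness(n, p_av, p_0))
# The relative cost-effectiveness tends to 1/(1 - P_0/P_av) = 3 as N
# grows, whereas the relative impact grows without bound.
```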
Thanks for the relevant comment, Nick.
2. I've been generally unimpressed by responses to criticisms of animal sentience. I've rarely seen an animal welfare advocate make even a small concession. This makes me even more skeptical about the neutrality of thought processes and research done by animal welfare folks.
I concede there is huge uncertainty about welfare comparisons across species. For individual welfare per fully-healthy-animal-year proportional to "individual number of neurons"^"exponent", and "exponent" ranging from 0.5 to 1.5, which I believe covers reasonable best guesses, I estimate the absolute value of the total welfare of farmed shrimps ranges from 2.82*10^-7 to 0.282 times that of humans. In addition, I calculate the Shrimp Welfare Project's (SWP's) Humane Slaughter Initiative (HSI) has increased the welfare of shrimps 0.00167 (= 2.06*10^-5/0.0123) to 1.67 k (= 20.6/0.0123) times as cost-effectively as GiveWell's top charities increase the welfare of humans. Moreover, I have no idea whether HSI or GiveWell's top charities increase or decrease welfare accounting for effects on soil animals and microorganisms.
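A minimal Python sketch of the numbers above (the constant 282 is not an input of the original calculation; it is just the value implied by the 2 bounds, and 10^-6 is the ratio between the number of neurons of shrimps and humans):

```python
# Minimal sketch of the numbers above.
neuron_ratio = 10**-6  # neurons of a shrimp / neurons of a human

for exponent in (0.5, 1.5):
    # Absolute value of the total welfare of farmed shrimps as a
    # fraction of that of humans; 282 is implied by the 2 bounds.
    print(exponent, 282*neuron_ratio**exponent)
# 0.5 -> 0.282; 1.5 -> 2.82e-07.

# Cost-effectiveness of HSI as a fraction of that of GiveWell's top
# charities, from the figures above.
print(2.06e-5/0.0123, 20.6/0.0123)  # 0.00167 and 1.67e3 = 1.67 k.
```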
Yes, it is unlikely these cause extinction, but if they do, no humans means no AI (after all the power-plants fail). Seems to imply moving forward with a lot of caution.
Toby and Matthew, what is your guess for the probability of human extinction over the next 10 years? I personally guess 10^-7. I think disagreements are often driven by different assessments of the risk.
Hi David!
Some people argue that the value of the universe would be higher if AIs took over, and the vast majority of people argue that it would be lower.
Would the dinosaurs have argued their extinction would be bad, although it may well have contributed to the emergence of mammals, and ultimately humans? Would the vast majority of non-human primates have argued that humans taking over would be bad?
But it is extremely unlikely to have exactly the same value. Therefore, in all likelihood, whether AI takes over or not does have long-term and enormous implications.
Why? It could be that future value is not exactly the same whether or not AI takes over by a given date, but that the long-term difference in value is negligible.
Thanks for the great post, Matthew. I broadly agree.
If we struggle to forecast impacts over mere decades in a data-rich field, then claiming to know what effects a policy will have over billions of years is simply not credible.
I very much agree. I also think what ultimately matters for the uncertainty at a given time in the future is not the time from now until then, but the amount of change from now until then. As a 1st approximation, I would say the horizon of predictability is inversely proportional to the annual growth rate of gross world product (GWP). If this becomes 10 times as fast as some predict, I would expect the horizon of predictability (regarding a given topic) to shorten, for instance, from a few decades to a few years.
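As an illustration, here is a minimal Python sketch of that 1st approximation (the baseline horizon of 30 years and growth rate of 3 % are illustrative values, not estimates from the comment above):

```python
# Minimal sketch of the 1st approximation above.
def predictability_horizon(baseline_horizon_years, baseline_growth,
                           growth):
    """Horizon of predictability inversely proportional to the annual
    growth rate of gross world product."""
    return baseline_horizon_years*baseline_growth/growth

# If the horizon is 30 years at 3 % annual GWP growth, growth 10
# times as fast (30 %) shortens it to 3 years.
print(predictability_horizon(30, 0.03, 0.30))  # 3.0
```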
To demonstrate that delaying AI would have predictable and meaningful consequences on an astronomical scale, you would need to show that those consequences will not simply wash out and become irrelevant over the long run.
Right. I would just say "after significant change (regardless of when it happens)" instead of "over the long run" in light of my point above.
Thanks for crossposting this, Joey.
This is a linkpost for My Career Plan: Launching Elevate Philanthropy!
The above does not link to the original post. You are supposed to type out the URL in the field above.
Despite not even having publicly launched, I have back-to-back monthly promising projects lined up, each with significant estimated impact, each with higher impact than my upper bound estimates of my ability to earn via for-profit founding (my next highest career option).
How did you determine this? Did you explicitly quantify the impact of the promising projects in terms of money donated to GiveWell's top charities or similar?
Another example is when AIM [Ambitious Impact] created the metric of SADs [suffering-adjusted days], which is now used not only by AIM but also across the animal welfare space.
Could you elaborate on which organisations use SADs? I am only aware of Animal Charity Evaluators (ACE) using them in their charity evaluations.
I am particularly excited about time-bound projects that take between 30 and 300 hours, especially projects that create a common good. By this, I mean outcomes that benefit multiple philanthropic actors in the ecosystem. One example might be creating an external evaluation system for a single foundation but publishing the methods and strategies so that multiple other foundations can also use them.
What do you think about decreasing the uncertainty in welfare comparisons across species as a common good project? I think much more research on that is needed to conclude which interventions robustly increase welfare. I do not know of any intervention that robustly increases welfare due to potentially dominant uncertain effects on soil animals and microorganisms. Even neglecting these, I believe there is lots of room to change funding decisions as a result of more research on that. I understand AIM, ACE, maybe the Animal Welfare Fund (AWF), and Coefficient Giving (CG) sometimes use, for robustness checks, the (expected) welfare ranges Rethink Priorities (RP) initially presented, or the ones in Bob Fischer's book, as if they are within a factor of 10 of the right estimates (such that these could be 10 % to 10 times as large). However, I can easily see much larger differences. For example, the estimate in Bob's book for the welfare range of shrimps is 8.0 % that of humans, but I would say one reasonable best guess (though not the only one) is 10^-6, the ratio between the number of neurons of shrimps and humans.
Thanks, Noah!
Dwarkesh Patel's thoughts on AI progress (Dec 2025)
Thanks for the good point, Paul. I tend to agree.
Thanks for the post. I strongly upvoted it.
I have described my views on AI risk previously in this post, which I think is still relevant. I have also laid down a basic argument against AI risk interventions in this comment where I argue that AI risk is neither important, neglected nor tractable.
The 2nd link does not seem right.
Thanks for the comment, Mikhail. Gemini 3 estimates a total annualised compensation of the people working at Meta Superintelligence Labs (MSL) of 4.4 billion $. If an endorsement from Yudkowsky and Soares were as beneficial (including via bringing in new people) as making 10 % of the people there 10 % more impactful over 10 years, it would be worth 440 M$ (= 0.10*0.10*10*4.4*10^9).
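Spelling the calculation out in Python (all inputs are the guesses above):

```python
# Value of the endorsement under the assumptions above.
annualised_compensation = 4.4e9  # $/year, Gemini 3's estimate for MSL
fraction_of_people = 0.10        # 10 % of the people there...
extra_impact = 0.10              # ...10 % more impactful...
years = 10                       # ...over 10 years.

value = fraction_of_people*extra_impact*years*annualised_compensation
print(value)  # 4.4e8 $ = 440 M$.
```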
Thanks for the good point, Nick. I still suspect Anthropic would not pay e.g. 3 billion $ for Yudkowsky and Soares to endorse their latest model as good if they were hypothetically being honest. I understand this is difficult to operationalise, but people outside Anthropic could still be asked about it.
@eleanor mcaree, to what extent is ACE's Movement Grants program open to funding research decreasing the uncertainty in interspecies welfare comparisons? @Jesse Marks, how about The Navigation Fund (TNF)? @Zoë Sigle 🔹, how about Senterra Funders? @JamesÖz 🔸, how about Mobius and the Strategic Animal Funding Circle (SAFC)? You can check my comment above for context about why I think such research would be valuable.
It is unclear to me whether all humans together are more powerful than all other organisms on Earth together. It depends on what is meant by powerful. The power consumption of humans is 19.6 TW (= 1.07 + 18.5), only 0.700 % (= 19.6/(2.8*10^3)) of that of all organisms. In any case, all humans together being more powerful than all other organisms on Earth together is still way more likely than the most powerful human being much more powerful than all other organisms on Earth together.
My upper bound of 0.001 % is just a guess, but I do endorse it. You can have a best guess that an event is very unlikely, but still be super uncertain about its probability. For example, one could believe an event has a probability of 10^-100 to 10^-10, which would imply it is super unlikely despite 90 (= -10 - (-100)) orders of magnitude (OOMs) of uncertainty in the probability.
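To make the example concrete in Python:

```python
import math

# Best-guess range for the probability of the event.
p_low, p_high = 10**-100, 10**-10

ooms = math.log10(p_high) - math.log10(p_low)
print(ooms)  # 90.0 = -10 - (-100).
# Even the upper bound of 10^-10 implies the event is super unlikely.
```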
If I had Eliezer's views about AI risk, I would simply be transparent upfront with the donor, and say I would donate the additional earnings. I think this would ensure fairness. If the donor insisted I had to spend the money on personal consumption, I would turn down the offer if I thought this would result in the donor supporting projects that would decrease AI risk more cost-effectively than my personal consumption. I believe this would be very likely to be the case.
Thanks for the comment, Tristan.
I have no doubt that if one human became superintelligent that would also have a high risk of disaster, precisely because they would have preferences that I don't share (probably selfish ones)
I would worry if a single human had much more power than all other humans combined. Likewise, I would worry if an AI agent had more power than all other AI agents and humans combined. However, I think the probability of either of these scenarios becoming true in the next 10 years is lower than 0.001 %. Elon Musk has a net worth of 765 billion $, 0.543 % (= 765*10^9/(141*10^12)) of the market cap of all publicly listed companies of 141 T$.
The post The shrimp bet by Rob Velzeboer illustrates why the case for the sentience of Litopenaeus vannamei (whiteleg shrimp), the species accounting for the most production of farmed shrimp, is far from strong.