I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).
Vasco Grilo
Estimation of the benefits of accelerating welfare reforms
Thanks for the great post, Stefan.
Risk aversion. Risk aversion is normally a reason not to make a hard-to-reverse decision. Since reversibility gives you the option to switch course if your strategy underperforms, it normally reduces the risk of a truly bad outcome. Note, though, that the standard view within the effective altruism movement seems to be that altruists should not be risk-averse.
It could similarly be argued that reversibility gives one the option to switch course even when one's strategy is performing well, thus increasing the risk of missing a truly great outcome? The takeaway is that one should make certainly bad options less available, and certainly good options more unavoidable?
Hi Ajeya.
But for the first time, I don't see any solid trend we can extrapolate to say it won't happen soon.[11] AI R&D really could be automated this year.
What are your predictions for the unemployment rate of software engineers? What do you think about these reasons for potentially overestimating the pace of automation based on AI benchmarks?
But there's a big problem here: if AIs are actually able to perform most tasks on 1-hour task horizons, why don't we see more real-world task automation? For example, most emails take less than an hour to write, but crafting emails remains an important part of the lives of billions of people every day.
Some of this could be due to people underusing AI systems,[2] but in this post I want to focus on reasons that are more fundamental to the capabilities of AI systems. In particular, I think there are three such reasons that are the most important:
Time-horizon estimates are very domain-specific
Task reliability strongly influences task horizons
Tasks are very bundled together and hard to separate out.
Welcome to the EA Forum, Max. Thanks for the clarification, and additional context. I am rooting for your (GWWC's) success.
Thanks for asking, Vince. Here are some suggestions listed alphabetically which are not in your sheet, and have not yet been mentioned in other answers to your post:
Rethink Priorities' (RP's) animal welfare department.
Welfare Footprint Institute (WFI).
Thanks for the post, Michael.
However, any specific function or set of coefficients would (to me) require justification, and it's unclear that there can be any good justification.
I also worry about the arbitrariness of the weights (coefficients) of the models. In Bob Fischer's book about comparing welfare across species, there seems to be only 1 line about the weights used to aggregate the tentative estimates for the welfare range, the difference between the maximum and minimum hedonistic welfare per unit time: "We assigned 30 percent credence to the neurophysiological model, 10 percent to the equality model, and 60 percent to the simple additive model". People usually give weights of at least 0.1/"number of models", which is at least 3.33 % (= 0.1/3) for 3 models, even when it is quite hard to estimate the weights. However, giving weights which are not much smaller than the uniform weight of 1/"number of models" could easily lead to huge mistakes.

As a silly example, if I asked random people with age 7 whether the gravitational force between 2 objects is proportional to "distance"^-2 (correct answer), "distance"^-20, or "distance"^-200, I imagine a significant fraction would pick the exponents of -20 and -200. Assuming 60 % picked -2, 20 % picked -20, and 20 % picked -200, one may naively conclude the mean exponent of -45.2 (= 0.6*(-2) + 0.2*(-20) + 0.2*(-200)) is reasonable. Yet, there is lots of empirical evidence against this which the respondents are not aware of. The right conclusion would be that the respondents have no idea about the right exponent because they would not be able to adequately justify their picks. I think we are in a similar situation with respect to comparing hedonistic welfare across species.
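The naive weighted mean in the gravity example above can be checked with a couple of lines of Python (the credences are the illustrative ones from my comment, not survey data):

```python
# Naive weighted mean of the candidate exponents of "distance",
# using the illustrative credences from the example above.
weights = [0.6, 0.2, 0.2]      # fraction of respondents picking each exponent
exponents = [-2, -20, -200]    # candidate exponents (only -2 is correct)
mean_exponent = sum(w * e for w, e in zip(weights, exponents))
print(round(mean_exponent, 1))  # -45.2
```

The point of the example is that this mean is wildly off precisely because the weights were elicited from people with no basis for justifying them.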
Thanks for the post, Michael.
The more or larger such changes are necessary to get from one brain to another, the less tight the bounds on the comparisons could become, the further they may go both negative and positive overall,[2] and the less reasonable it seems to make such comparisons at all.
I agree comparisons become increasingly uncertain as the difference between the states of the organisms increases. However, I do not think there is a point where comparisons go from possible, but extremely difficult, to not possible at all. I would say there is just a progressive widening of the distribution representing the hedonistic welfare per unit time of a given state of an organism as it moves away from typical human states. As an example, I could say my hedonistic welfare right now is 0.5 to 1.5 times that of a random human who is awake, whereas that of a random nematode might be 10^-17 to 1 times that of a random human who is awake. I estimate the ratio between the individual number of neurons of nematodes and humans is 2.79*10^-9, whose square is 7.78*10^-18, roughly 10^-17.
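The lower bound above is just the square of the neuron-count ratio; a quick sketch of that arithmetic (taking the 2.79*10^-9 ratio from my comment as given):

```python
# Squared ratio of individual neuron counts (nematode / human),
# used as a rough lower bound on the relative welfare range.
neuron_ratio = 2.79e-9          # nematode neurons / human neurons (estimate)
squared_ratio = neuron_ratio ** 2
print(f"{squared_ratio:.2e}")   # 7.78e-18, roughly 10^-17
```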
Here is a post illustrating this.