I agree that the impact of this decision is likely to be very small, but trying to analyze a complicated phenomenon can be personally beneficial for improving your skills at analyzing the impact of other phenomena. In general, it seems good for EAs to practice analyzing the impact of various interventions, as long as they keep in mind that both the impact of the intervention and the direct value of the analysis might be small.
This might be the case, though if someone has the time to analyze a complicated phenomenon and wants the practice, I think they should spend a bit more of that time choosing the phenomenon, so that they end up with one that has other useful characteristics. For example, they might look for something with a larger expected magnitude of impact, positive or negative, or choose a question of direct relevance to the EA community (e.g. something that is an active topic of debate, or that involves something many people in EA commonly do).
Along those lines, I like Gwern’s study of melatonin, which involves a bit of self-experimentation but also expected-value calculations. Various other productivity tools/strategies could also be solid candidates.
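For anyone unsure what this kind of expected-value calculation looks like in practice, here is a minimal sketch in the spirit of Gwern’s melatonin analysis. Every number below is a hypothetical placeholder for illustration, not a figure from the actual study:

```python
# A minimal expected-value sketch in the spirit of Gwern's melatonin
# write-up: value of sleep-onset time saved vs. cost of the supplement.
# All numbers are hypothetical placeholders, not Gwern's actual figures.

MINUTES_SAVED_PER_NIGHT = 15   # assumed reduction in time to fall asleep
VALUE_PER_HOUR = 10.0          # assumed dollar value of an hour of waking time
COST_PER_DOSE = 0.05           # assumed dollar cost of one dose
NIGHTS_PER_YEAR = 365

yearly_benefit = (MINUTES_SAVED_PER_NIGHT / 60) * VALUE_PER_HOUR * NIGHTS_PER_YEAR
yearly_cost = COST_PER_DOSE * NIGHTS_PER_YEAR
net_value = yearly_benefit - yearly_cost

print(f"Yearly benefit: ${yearly_benefit:.2f}")
print(f"Yearly cost:    ${yearly_cost:.2f}")
print(f"Net value:      ${net_value:.2f}")
```

The point of the exercise is less the bottom-line number than the habit of making each assumption explicit, so it can be argued with or updated.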
Sometimes I do blatantly useless things so I can flaunt my rejection of the often unhealthy “always optimize” pressures within the effective altruism community. So today, I’m going to write about rock music criticism.
I certainly don’t endorse “always optimize”! I spend far too much time reading manga and trying to win Magic: the Gathering tournaments for that. I fully endorse analyzing things that are interesting/entertaining. But it seems bad to get stuck with something that is both low-expected-impact and low-interest. Someone who really likes Folding@Home should totally give the analysis a go; someone who doesn’t care and just wants evaluation practice has many other options.
cf. Gwern’s study of catnip.
Also, Luke’s post on Scaruffi.