thank machine doggo
Not entirely sure if I interpreted your intentions right when I tried to write an answer. In particular, I’m confused by the line “I could create just a little more hedonium”. My understanding is that hedonium refers to the arrangement of matter that produces utility most efficiently. Is the narrator deciding whether to convert themselves into hedonium?
I ended up interpreting things as if “hedonium” was meant to mean “utility”, and the narrator is deciding what their last thought should be—how to produce just a little more utility with their last few computations before the universe winds down. Hopefully I interpreted correctly—or if I was incorrect, I hope this feedback is helpful :)
...it was beautiful. And that is good.
~fin
Bro this is really scary. Well done.
Observation: prion-catalysis or not, any vaccine-evasion measures at all seem extraordinarily dangerous. For a highly infectious threat, the fastest response we have right now is mass vaccine manufacture, and that seems just barely fast enough. But our vaccine tech is public knowledge, and an apocalyptic actor can take all the time they want to design a countermeasure.
Once a threat with any sort of countermeasure is released, we first have to go through a vaccine development cycle just to discover that the countermeasure exists, then a research cycle to figure out how to beat it, then a development/deployment cycle to put those research results to use and actually beat it. The latter two phases seem quite slow and notably hard to speed up, since we’d have to find ways to prepare for fast research, manufacturing, and deployment in a very general sense in order to respond to any plausible anti-vaccine measure.
I agree that relatively small improvements in public health could potentially be highly beneficial. Research on this might be totally tractable.
What I am concerned might be intractable is deploying results. Public health (and health-relevant products generally) is a massive industry, with a lot of strong interests pushing in different directions. It seems entirely possible that all the answers are already out there, just drowned out by the food, exercise, sexual health, self-help, and other industries.
There’s so much noise out there that it seems unlikely a few EAs will be able to get a word in edgewise.
Thank you for posting! Many kudos for contributing to the frontpage discussion rather than lurking for years like many people (including me).
I agree with most of your assessment here. But I think rather than “simple altruism”, it would be better to focus on “altruistic intent”. Making this substitution doesn’t change much; the main differences are that it includes EA itself and excludes cynically motivated giving. The thing I think we care about is people trying to do good, not people doing non-EA things specifically.
That said, increasing altruistic intent is, I think, included under the heading of broad longtermism. I don’t have a source for this, but my impression is that not much work goes towards broad longtermism because it seems really hard, it doesn’t seem that urgent, and EAs tend to be bad at the key skills involved, like persuasion and politics.
I think this definition of “cause area” is roughly how the EA community uses the term in practice, and it explains a lot of why and how the term is useful. It helps facilitate good discussion by pointing towards the best people to talk to, since others in my cause area will share common knowledge and interests with me and with each other. On this view, “cause area” is just EA-speak for a subcommunity.
That makes it a bit hard to justify the common EA practice of “cause prioritization”, though, since causes aren’t particularly homogeneous with respect to their impact. I think doing “intervention prioritization” would be a lot more useful, even though there are far more interventions than causes.
Is there some kind of up-to-date dashboard or central source for GiveWell’s main “cost-per-expected-life” figure?
The Metaculus question mentioned in this post cites values like $890 in 2016, $823 in 2017, $617 in 2018, and $592 in 2019, and I can’t find the field they refer to in the resolution criteria (?!)
This 80K article lists the value as $2,300 in 2020.
This GiveWell summary sheet from 2016 has a minimum value of $901.
GiveWell’s Top Charities page lists $3,000–$5,000 to save a life for Malaria Consortium, Against Malaria Foundation, New Incentives, and Helen Keller International.
If such a thing does not exist, I’ll probably reach out to GiveWell and see what they think about implementing one. There are so many numbers floating around that are hard to verify and differ dramatically.
I am pretty excited about the potential of this idea, but I am a bit concerned about the incentives it would create. For example, I’m not sure how much I would trust a bibliography, summary, or investigation produced via bounty. I would worry about the omission of material that conflicts with the work’s conclusions, since it would be quite hard for even a paid arbitrator to check for such omissions without putting in a large amount of work. I think the reason this is not currently much of a concern is precisely because there is no external incentive to produce such works; as a result, you can pretty much assume that research on the Forum is done in good faith and is complete to the best of the author’s ability.
Potential ways around this that come to mind:
Maybe linking user profiles on this platform to the EA Forum (kind of like the Alignment Forum and LessWrong sharing accounts) would provide sufficient trust in good intentions?
Maybe even without that, there’s still such a strong self-selection effect anyway that we can still mostly rely on trust in good intentions?
Maybe this only slightly limits the scope of what the platform can be used for, and preserves most of its usefulness?
If it costs $4000 to prevent a death from malaria, malaria deaths happen at age 20 on average, and life expectancy in Africa is 62 years, then each death prevented saves about 42 years of life, and the cost per hour of life saved is $0.0109.
If you make the average US income of $15.35/hour, this means that every marginal hour you work to donate can be expected to save 1,412 hours of life, if you take the very thoroughly researched, very scalable, low-risk baseline option. If you can only donate 10% of your income, then your leverage is reduced to a mere 141.2. Just by virtue of having been born in a developed country, every hour of your time can be converted to days or weeks of additional life for someone else.
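For anyone who wants to check the arithmetic, here is a minimal sketch of the same calculation in Python, using 365-day years and the exact figures quoted above; all inputs are the comment’s own assumptions, not independently sourced numbers:

```python
# Back-of-the-envelope check of the figures above.
# All inputs are the assumptions quoted in the comment, not sourced estimates.
cost_per_death_averted = 4000   # dollars to prevent one malaria death
average_age_at_death = 20       # years
life_expectancy = 62            # years
us_hourly_income = 15.35        # dollars/hour, the income figure used above

# 42 years of life saved, converted to hours (365-day years)
hours_of_life_saved = (life_expectancy - average_age_at_death) * 365 * 24
cost_per_hour_of_life = cost_per_death_averted / hours_of_life_saved
leverage = us_hourly_income / cost_per_hour_of_life

print(f"cost per hour of life saved: ${cost_per_hour_of_life:.4f}")  # -> $0.0109
print(f"hours of life saved per hour worked: {leverage:.0f}")        # -> 1412
print(f"if donating 10% of income: {leverage * 0.10:.1f}")           # -> 141.2
```

Using 365.25-day years instead shifts the leverage figure only slightly, to about 1,413, so the rounding convention doesn’t change the conclusion.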
While not as insanely huge as some of the figures making the argument for longtermism, I find this figure more shocking on a psychological level because it’s so simple to calculate and yields such an unexpected result. This type of calculation is what first got me interested in Singer-style EA.
Agree that this is worth a shot; it would be huge if it worked. But it seems like Mr Beast and Mark Rober might be selecting causes specifically to avoid controversy, which would make it hard to get EA through. Both of their platforms are built primarily on mass appeal. Planting trees and cleaning up the oceans are extremely uncontroversial causes; nobody is out there arguing that they do net harm. That is not the case with EA.
That said, if any of you folks went to high school with Mark Rober or something, I would still be extremely excited to try this. I have a 3rd or 4th degree connection to him, but that seems a bit too far to do much of anything.