I follow Crocker’s rules.
niplav
You’re right. I had been thinking only about the mean on the distribution over discount rates, not the number of affected beings. Thanks :-)
Also relevant: The Cost of Kids:
The present-value cost of having a child may be at least $300K (measured in US dollars as of roughly 2012) when both direct expenditures and opportunity costs are considered. This shows the value of using the most effective birth-control methods, like the implant and vasectomy. That said, some people may find having children very important to their wellbeing, and in such cases, having children may be worth the cost.
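A back-of-the-envelope present-value calculation (with purely hypothetical cost figures, not taken from the linked essay) might look like:

```python
# Rough present-value sketch of the cost of a child: annual direct costs
# plus forgone earnings, discounted back to today. All numbers are
# illustrative assumptions, not from the original essay.
def present_value(cashflows, rate):
    """Discount a list of (year, cost) pairs back to the present."""
    return sum(cost / (1 + rate) ** year for year, cost in cashflows)

# Assume $13k/year direct expenditures plus $5k/year opportunity cost
# for 18 years, starting now.
costs = [(year, 13_000 + 5_000) for year in range(18)]
total = present_value(costs, rate=0.03)
print(round(total))  # on the order of $250k at a 3% discount rate
```

Higher assumed opportunity costs (e.g. a parent reducing work hours) push the total well past the $300K figure quoted above.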
I think at least Brian Tomasik cares about this.
If your suspicion is correct, then that’s pretty damning for the WAW movement, unless climate change prevention is not the highest-leverage influence against WAS (a priori, it seems unlikely that climate change prevention would have the highest positive influence).
I really hope that humanity and its descendants think, a very long time before it actually happens, about what they want their last experience to be. I personally would like the last experience in the universe to be something akin to being wrapped in a warm blanket and very slowly falling asleep, an experience of comfort (and not much discursive thought), but I really hope we’ll talk about this beforehand!
Minor nitpick:
And so we have ever less energy to simulate our brains. It gets ever harder. With less energy, we are forced to slowly shut systems down.
I think that the problem is not lack of energy, but lack of energy gradients or negentropy to get any useful work out of.
This is a very good point. I think the current sentiment comes from two sides:
Not wanting to alienate or make enemies with AI researchers, because alienating them from safety work would be even more catastrophic (this is a good reason)
Being intellectually fascinated by AI, and finding it really cool in a nerdy way (this is a bad reason, and I remember someone remarking that Bostrom’s book might have been hugely net-negative because it made many people more interested in AGI)
I agree that the current level of disincentives for working on capabilities is too low, and I resolve to tell AI capabilities people that I think their work is very harmful, while staying cordial with them.
I like this idea.
Yeah, this is an unfortunate gradient, you have to decide not to follow it :-/
But there is more long-term glory in it.
I strongly agree with you, and would add that long content like Gwern’s (or Essays on Reducing Suffering or PredictionBook or Wikipedia etc.) is important as epistemic infrastructure: it has the added value of constant maintenance, which allows it to achieve depth and scope that are usually not found in blogs. I think this kind of maintenance is really really important, especially when considering long-term content. I mourn the times when people would put a serious effort into putting together an FAQ for things—truly weapons from a more civilized age.
I have read blogs for many years and most blog posts are the triumph of the hare over the tortoise. They are meant to be read by a few people on a weekday in 2004 and never again, and are quickly abandoned—and perhaps as Assange says, not a moment too soon. (But isn’t that sad? Isn’t it a terrible ROI for one’s time?) On the other hand, the best blogs always seem to be building something: they are rough drafts—works in progress.
—Gwern, “About This Website”, 2021
On the other hand, most blogs to me seem to be epistemic fireworks (or, maybe more nicely, epistemic tinder that sparks a conversation): read mostly when released, and then slowly bit-rotting away until the link goes stale. (Why don’t people care more about their content when they put so much effort into producing it‽)
I find it ironic that the FTX Long Term Future Fund is giving out a prize for a medium that is so often so ephemeral, so much not long-term, as blogs (what value can I gain from reading the whole archive of Marginal Revolution? A lot, probably, but extremely little value per post; I’m likely better off reading Wikipedia). What’s next? The $10k prize for the best Discord message about longtermism? The best tweet? (”It’s about the outreach! Many more people read tweets and Discord messages!”)
I agree with this; the success rate for wikis appears to be fairly low, at least in my anecdotal experience: who has read articles on the Cause Prioritization wiki or the LessDead wiki or the LessWrong wiki? Even the EA forum wiki or the LessWrong tags are barely read or updated (perhaps a merge of the two would be helpful?).
Unfortunately, the wildest inclusionists have lost, so we can’t just put everything onto Wikipedia, which would be the best option.
Good content gathers dust on those isolated wikis, sadly.
[UNENDORSED] Reward Long Content
Thanks, I’ll take a closer look at impact certificates.
As an avid user of Kiwix, I’d be very interested in any of those.
Thanks for the detailed answer :-)
I list this ageing as a drawback of blogs because it is one in practice, if not in theory (and the structure of blogs encourages this). Save for some exceptions (SSC for minor stylistic edits, Nintil), blogs are usually not maintained in any way. I guess my complaint is that it would be entirely possible for this maintenance to occur (unlike with tweets/newspapers/books), but it usually doesn’t happen.
The structure of blogs doesn’t encourage this either: usually arranged chronologically, not by topic, with a focus on novelty.
As for reading archives, there is certainly a style of blog that is in practice not linked very often in the long term (I’m thinking of Overcoming Bias or Marginal Revolution or Econlog).
I don’t object to the posts being unfinished! That would be quite hypocritical of me :-) My argument is that here, the incentives are structured such that it’s much more likely that people will start a blog because of the prize, and once it’s over, they abandon it. I concede that the prize will probably push marginal not-yet-bloggers over the edge.
One way to improve the prize might be to reward blogs that stay maintained years from now, but I don’t think you make that point.
Interesting! I thought I was absolutely making that point, especially in this section and especially especially in this section.
Maybe I wasn’t as clear here as I could have been, but I didn’t want to discourage the bloggers on your site. But to be absolutely clear:
You list 1 person who doesn’t have a public blog at all (!), 2 people whose blogs contain 0 content (!!), 2 more people who have written 3 blogposts each, and 3 more bloggers who actually have presentable blogs. That’s 3⁄8 blogs that could be in the category of “Flagship blog”. Not, I think, a good ratio.
Yeah, you’re probably right about this one.
Maybe it’s that I think that a prize is not the perfect way to approach this: Prizes seem to be useful for very discrete problems that have a very clear solution criterion, and less useful for very long-term, open-ended endeavours (where something like certificates of impact or retroactive funding are more suited).
I should have been clearer about this: I think blogs are in an odd position in the discourse—there is much more discourse going on on Twitter/YouTube/Discord (?) than on blogs, and I believe that this will not change much (newsletters notwithstanding). On the Pareto-frontier of “produces long-term value” and “encourages discourse” I think blogs are at best in an odd spot, and encouraging good YouTube videos about effective altruism would be a much better way to enter the discourse (admittedly, this may be already happening, with the OpenPhil grant to Kurzgesagt).
In the case of Scott Alexander as an example, it seems noteworthy that he was writing online from at least 2006 on his LiveJournal. This seems to be a common thread with many well-known bloggers: they build up a following over a long time, and hone their skills with repeated practice. My intuition is that the marginal Scott Alexander is more likely to already be writing online, and might have done so for a couple of years already, than to be on the fence whether to do any writing at all.
I’m not sure how easy it is to monetize a blog/newsletter today—afaik, most bloggers don’t make enough money for it to be worth the additional hassle with taxes. But I might be wrong, I’ve never tried it.
Minor point: Hands and Cities/The Fitzwilliam were created before the prize, so I would put them in the “found” category, but I don’t think Hands and Cities would technically qualify for the prize—it’s older than 12 months (having been created October 2020). (The Fitzwilliam does, but only because Sam Enright switched platforms—his blog was started in November 2020, which would disqualify it for the prize; but that’s where all the good content is!)
I definitely don’t underestimate the pedagogical value of blogs! I’ve read a fair share of them over the years, and learned nearly everything I value knowing from them. My complaint is that blogs often capture the outer loop of a community in a way that is far from optimal, and that most blogs are just really inefficient (such as Marginal Revolution or Econlog or Overcoming Bias) because their information is just scattered over many posts and not fully organized anywhere, as opposed to sites such as Gwern’s (other positive examples are Essays on Reducing Suffering, An Anarchist FAQ, Ethan Morse’s site, Metaculus, XXIIVV, Nintil, FAQs and of course Wikipedia).
Perhaps we’d need a third outer loop to distill findings from blogs into long content, then?
As a point of clarification, this project was funded by FTX Future Fund, but isn’t connected beyond that.
Thanks, corrected
Another advantage of not posting under your real name is that one can more easily criticize parts of effective altruism without incurring some reputational risk for the real-life EA community (c.f. 80000 hours anonymous interviews). Not posting under one’s real name can make it easier to not conform or have higher-variance opinions.
While I was not brought to EA/rationality through HPMoR, I strongly endorse this proposal.
I thought Yudkowsky’s Engines of Cognition was beautifully done.
Note that the Engines of Cognition books were mostly neither written nor compiled nor designed by Yudkowsky, but by members of the LessWrong community and the LessWrong team, respectively (there is one essay by Yudkowsky in there).
Argument against longtermism:
Longtermism seems to rely on zero discount rates for the value of future lives. But per moral uncertainty, we probably have a probability distribution over discount rates. This probability distribution is very likely skewed towards positive discount rates (there are many more plausible reasons why future lives are worth less than current lives, but very few (none?) why they should be worth more, ceteris paribus).
Therefore, the expected discount rate is positive, and longtermism loses some of its bite.
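A toy numerical illustration of this (with a made-up credence distribution over discount rates, skewed towards positive rates as described above):

```python
# Made-up credence distribution over annual discount rates (rate -> credence),
# skewed towards positive rates; not an empirical estimate.
rates = {0.00: 0.4, 0.01: 0.3, 0.03: 0.2, 0.05: 0.1}

def expected_discounted_value(t, rates):
    """Average the discount factor (1+r)^-t over the credence distribution,
    giving the expected present value of one unit of value t years out."""
    return sum(p * (1 + r) ** -t for r, p in rates.items())

print(expected_discounted_value(100, rates))  # ~0.52 at a 100-year horizon
```

One can also check that at very long horizons the expected discount factor converges to the credence placed on a zero rate rather than decaying to zero, so the effective discount rate declines over time towards the lowest rate one assigns credence to.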
Possible counterarguments
Discount rates are not part of moral uncertainty, but a different kind of normativity (decision-theoretic?), over which we ought not to have uncertainty
There are equally plausible reasons for negative as for positive discount rates (though I don’t know which ones these would be?)
Complete certainty in 0 discount rate (seems way overconfident imho)
Main inspiration from the chapter on practical implications of moral uncertainty from MacAskill, Bykvist & Ord 2020. I remember them discussing very similar implications, but not this one – why?