I want to write a post saying why Aaron and I* think the Forum is valuable, which technical features currently enable it to produce that value, and what other features I’m planning on building to achieve that value. However, I’ve wanted to write that post for a long time and the muse of public transparency and openness (you remember that one, right?) hasn’t visited.
Here’s a more mundane but still informative post, about how we relate to the codebase we forked off of. I promise the space metaphor is necessary. I don’t know whether to apologize for it or hype it.
You can think of the LessWrong codebase as a planet-sized spaceship traveling through the galaxy of forum-space, and of us as a smaller spacecraft following along. We spend some energy keeping up with them, but benefit from their gravitational pull.
(The real-world correlate of their gravity pulling us along is that they make features which we benefit from.)
We have less developer-power than they do (1 dev vs. 2.5-3.5, depending on how you count). So they can move faster than we can, and they generally go in directions we want to go. We can steer further away from the LW planet-ship (by writing our own features), but this weakens their gravitational pull on us, and we have to spend more fuel to keep up with them (more time adapting their changes to our codebase).
I view the best strategy as making features that LW also wants (moving both ships in directions I want), and then, when necessary, making changes that only I want.
One implication of this is that feature requests are more likely to be implemented, and implemented quickly, if they are compelling to both the EA Forum and LessWrong. These features keep the spaceships close together, helping them burn less fuel in the process.**
*(and Max and Ben)
** I was going to write something about how this could be a promising climate-change reduction strategy, until I remembered that carbon emissions don’t matter in outer space.
Good noticing. One facet of the LessWrong feature is that users gain privileges as they pass certain karma thresholds. I believe that high-karma users such as Eliezer (on LW) or Peter Hurford (here) can moderate their own posts, even on the frontpage, while very low karma users may not be able to moderate even posts that remain on their personal blog. Given the EA Forum's differences in how we treat the personal blog / frontpage distinction, we may want to diverge from LW's feature-set here. I haven't touched it since I was initially setting up the Forum, and I'm not sure how I left it, or whether all of the features are there. Certainly we'd want to write up a user's guide for the feature.

I appreciate the comment. When we were setting up the Forum this wasn't a top priority, but very plausibly the landscape has changed. Without making a public commitment (😛), I wouldn't be surprised if that fix got prioritized; it does seem useful for encouraging people to post.
Ah, like, if EAs have already discovered it, then you expect it to quickly reach saturation, efficient market style?
Then you’d be looking for something with no donors. (Or nearly none.) The percent of them who were EAs wouldn’t be relevant.
Appreciation post for Saulius
I realized recently that the same author who wrote the corporate commitments post and the misleading cost effectiveness post also wrote all three of these excellent posts on neglected animal welfare concerns that I remembered reading:
Fish used as live bait by recreational fishermen
Rodents farmed for pet snake food
35-150 billion fish are raised in captivity to be released into the wild every year
For the first, he got this notable comment from OpenPhil's Lewis Bollard. An honorable mention goes to this post, which I also remembered; it does good epistemic work fact-checking a commonly cited comparison.
Posting this on shortform rather than as a comment because I feel like it’s more personal musings than a contribution to the audience of the original post —
Things I’m confused about after reading Will’s post, Are we living at the most influential time in history?:
What should my prior be about the likelihood of being at the hinge of history? I feel really interested in this question, but haven’t even fully read the comments on the subject. TODO.
How much evidence do I have for the Yudkowsky-Bostrom framework? I’d like to get better at comparing the strength of an argument to the power of a study.
Suppose I think that this argument holds. Then it seems like I can make claims about the likelihood of AI occurring, because I've reasoned about the prior probability that I have a lot of influence. I keep going back and forth on whether this is a valid move. I think it just is, but I assign some credence that I'd reject it if I thought about it more.
What should my estimate of the likelihood that we're at the HoH be, if I'm 90% confident in the arguments presented in the post? (One simple framing is sketched below.)
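For my own later reference, here's a minimal sketch of one way that last question could be framed, assuming a simple mixture model. The function name and every number below are mine and purely illustrative, not anything from Will's post:

```ts
// Hedged sketch: treat 90% confidence in the post's arguments as a
// mixture weight between two conditional estimates of P(HoH).
// All numbers are made up for illustration.

function posteriorHoH(
  confidenceInArguments: number, // e.g. 0.9
  pHoHIfArgumentsHold: number,   // P(HoH | arguments are sound)
  pHoHOtherwise: number          // P(HoH | arguments fail), e.g. a low outside-view prior
): number {
  return (
    confidenceInArguments * pHoHIfArgumentsHold +
    (1 - confidenceInArguments) * pHoHOtherwise
  );
}

// Illustrative inputs: 0.9 * 0.3 + 0.1 * 0.001 ≈ 0.27
console.log(posteriorHoH(0.9, 0.3, 0.001));
```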
I feel like, before pulling out "blog posts won't convince me", you could have first provided some links to support your view.
I believe #3 isn't showing up because that line contains non-bold text (the footnote). This is kinda awkwardly unexpected behavior, sorry about that. But I'm not sure what I'd rather the behavior be. The simple rule of "lines containing only bold text are counted as h4; otherwise the line is treated as a paragraph" probably leads to less surprise than some attempt at a threshold.
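To make that rule concrete, here's a minimal sketch of the logic as I understand it, assuming a simplified span-based representation of a paragraph. The types and function are hypothetical, not the Forum's actual implementation:

```ts
// Sketch of the heading rule: a paragraph is promoted to an h4 in the
// table of contents only if every non-whitespace span in it is bold.

interface Span {
  text: string;
  bold: boolean;
}

function headingLevel(paragraph: Span[]): "h4" | "p" {
  const meaningful = paragraph.filter((span) => span.text.trim().length > 0);
  if (meaningful.length === 0) return "p"; // empty lines stay plain paragraphs
  return meaningful.every((span) => span.bold) ? "h4" : "p";
}

// A trailing non-bold footnote demotes the whole line, which is what
// happened to #3 here:
console.log(headingLevel([
  { text: "3. Some heading", bold: true },
  { text: " [1]", bold: false },
])); // "p"
console.log(headingLevel([{ text: "3. Some heading", bold: true }])); // "h4"
```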
Agree that there's a different incentive for cooperative writing than for clickbait-y news in particular. And I agree with your recommendations. That said, I think many community writers may undervalue making their content more goddamn readable. Scott Alexander is verbose and often spends paragraphs getting to the start of his point, but I end up with a better understanding of what he's saying by virtue of being fully interested.
All in all though, I’d recommend people try to write like Paul Graham more than either Scott Alexander or an internal memo. He is in general more concise than Scott and more interesting than a memo.
He has several essays about how he writes.
Writing, Briefly — Laundry list of tips
Write like you talk
The Age of the Essay — History of the essays we write in school versus the essays that are useful
A Version 1.0 — "The Age of the Essay" in rough-draft form, with color coding showing which parts were kept
Tip: if you want a way to view Will’s AMA answers despite the long thread, you can see all his comments on his user profile.
I agree with this, but I add a factor for well-written-ness and the cleverness of the idea.
Alright, the title sounds super conspiratorial, but I hope the content is just boring. Epistemic status: speculation; somewhat confident that the dynamic exists.
Climate science as published by the IPCC tends to
1) Be pretty rigorous
2) Not spend much effort on the tail risks
I have a model that they do this because of the incentives created by what they're trying to accomplish.
They’re in a politicized field, where the methodology is combed over and mistakes are harshly criticized. Also, they want to show enough damage from climate change to make it clear that it’s a good idea to institute policies reducing greenhouse gas emissions.
Thus they only need to show some significant damage, not a globally catastrophic one. They also want to maintain as much rigor as possible to prevent the discovery of mistakes, and it's easier to be rigorous about likely outcomes than about tail risks.
Yet I think longtermist EAs should be more interested in the tail risks. If I’m right, then the questions we’re most interested in are underrepresented in the literature.
What’s one piece of research / writing that you think is missing from the public internet, but you think a Forum writer could create?
There’s a delay between when something gets posted and when a moderator categorizes it. That said, this seems like a classic community post to my eyes, but I’m not a moderator.
This first shortform comment on the EA Forum will be both a seed for the page and a description.
Shortform is an experimental feature brought in from LessWrong to give posters a place to put down quickly written thoughts, with less pressure to reach the length / quality of a full post.
(Also you gave me Strangers Drowning as I recall)
Feedback on the books you have: I liked Superintelligence, though it wasn’t a big deal for me, and was lukewarm on the 80k career guide (sorry 80k).
My level of moral ambition was seriously raised by reading these two books at a time when I was just getting exposed to EA:
Strangers Drowning by Larissa MacFarquhar
Famine, Affluence, and Morality by Peter Singer
Glad to see this writeup! I really like that you compare yourself directly to your estimate of your counterfactual work. And it comes up positive! Great work. Especially given that I think entrepreneurship is really hard.
Some comments after half-skimming half-reading, sorry if I’m asking dumb questions:
1. You're basically using a net promoter question at one point, but it seems like most experts on the subject would say that getting a 7+ is way too easy a bar; Wikipedia says that 7-8 is considered "passive". Typically a score gets calculated from the responses, and I'd be interested in what you got here (see the sketch after this list).
2. Can you report the increase in hours as an effect size as well as in absolute hours?
3. I would say it’s worth noting what the clients who didn’t complete 4 weeks thought.
4. Maybe consider writing up some of your best advice? I've heard (but cannot recall the source) that for-profit consulting firms will post their best advice because it acts as a beacon, drawing in those who find it useful. And it seems extra pro-social in an EA context.
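On point 1, for reference, here's a sketch of the standard NPS calculation as I understand it. The function is mine; the promoter/detractor cutoffs are the usual ones from the 0-10 definition:

```ts
// Standard net promoter score: promoters rate 9-10, passives 7-8,
// detractors 0-6, and NPS = %promoters - %detractors,
// ranging from -100 to +100.

function netPromoterScore(ratings: number[]): number {
  const promoters = ratings.filter((r) => r >= 9).length;
  const detractors = ratings.filter((r) => r <= 6).length;
  return ((promoters - detractors) / ratings.length) * 100;
}

// Illustrative example: 2 promoters, 2 passives, 1 detractor
// => (2 - 1) / 5 * 100 = 20
console.log(netPromoterScore([10, 9, 8, 7, 6])); // 20
```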