Appreciation post for Saulius
I realized recently that the same author who wrote the corporate commitments post and the misleading cost-effectiveness post also wrote all three of these excellent posts on neglected animal welfare concerns that I remembered reading:
Fish used as live bait by recreational fishermen
Rodents farmed for pet snake food
35-150 billion fish are raised in captivity to be released into the wild every year
For the first, he got this notable comment from OpenPhil's Lewis Bollard. An honorable mention goes to this post, which I also remembered, for doing good epistemic work fact-checking a commonly cited comparison.
Also, I feel that as the author, I get more credit than is due, it’s more of a team effort. Other staff members of Rethink Charity review my posts, help me to select topics, and make sure that I have to worry about nothing else but writing. And in some cases posts get a lot of input from other people. E.g., Kieran Greig was the one who pointed out the problem of fish stocking to me and then he gave extensive feedback on the post. My CEE of corporate campaigns benefited tremendously from talking with many experts on the subject who generously shared their knowledge and ideas.
Thanks JP! I feel I should point out that it's now basically my full time job to write for the EA forum, which is why there are quite a few posts by me :)
The new Forum turns 1 year old today.
🎵Happy Birthday to us 🎶
Posting this on shortform rather than as a comment because I feel like it’s more personal musings than a contribution to the audience of the original post —
Things I’m confused about after reading Will’s post, Are we living at the most influential time in history?:
What should my prior be about the likelihood of being at the hinge of history? I feel really interested in this question, but haven’t even fully read the comments on the subject. TODO.
How much evidence do I have for the Yudkowsky-Bostrom framework? I’d like to get better at comparing the strength of an argument to the power of a study.
Suppose I think that this argument holds. Then it seems like I can make claims about AI occurring because I’ve thought about the prior that I have a lot of influence. I keep going back and forth about whether this is a valid move. I think it just is, but I assign some credence that I’d reject it if I thought more about it.
What should my estimate of the likelihood we’re at the HoH if I’m 90% confident in the arguments presented in the post?
This first shortform comment on the EA Forum will be both a seed for the page and a description.
Shortform is an experimental feature brought in from LessWrong to give posters a place to put down quickly written thoughts, with less pressure to meet the length and quality bar of a full post.
Thus starts the most embarrassing post-mortem I've ever written. The EA Forum went down for 5 minutes today. My sincere apologies to anyone whose Forum activity was interrupted.

I was first alerted by Pingdom, which I am very glad we set up. I immediately knew what was wrong. I had just hit "Stop" on the (long unused and just archived) CEA Staff Forum, which we built as a test of the technology. Except I actually hit stop on the EA Forum itself. I turned it back on; it took a long minute or two, but it was soon back up.

Lessons learned:
* I've seen sites that, after you press the big red button that says "Delete", make you type the name of the service / repository / etc. you want to delete. I like those, but had not thought of applying the idea to sites without that feature. I think I should install a TAP so that whenever I hit a big red button, I confirm the name of the service I am stopping.
* The speed of the fix leaned heavily on the fact that Pingdom was set up. But it doesn't catch everything. In case it misses something, I just changed things so that anyone can email me with "urgent" in the subject line and I will get notified on my phone, even if it is on silent. My email is jp at organizationwebsite.
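The confirm-by-name pattern described in the first lesson can be sketched in a few lines. This is just an illustration (the function name is hypothetical, and Python is used only for the sketch):

```python
def confirm_destructive_action(service_name: str, typed_name: str) -> bool:
    """Gate a stop/delete behind retyping the exact service name,
    in the style of GitHub's repository-deletion dialog.
    Returns True only on an exact match."""
    # Trim surrounding whitespace, but keep the comparison case-sensitive,
    # so a near-miss like the wrong forum's name does not pass.
    return typed_name.strip() == service_name

# Example: trying to stop the CEA Staff Forum but typing the EA Forum's
# name fails the check, which is exactly the mistake this guards against.
confirm_destructive_action("cea-staff-forum", "ea-forum")  # False
```

The point of the pattern is that the confirmation requires recalling and typing the target's identity, rather than clicking through a generic "Are you sure?" prompt.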
Alright, the title sounds super conspiratorial, but I hope the content is just boring. Epistemic status: speculating, somewhat confident in the dynamic existing.
Climate science as published by the IPCC tends to
1) Be pretty rigorous
2) Not spend much effort on the tail risks
I have a model on which they do this because of the incentives created by what they're trying to accomplish.
They’re in a politicized field, where the methodology is combed over and mistakes are harshly criticized. Also, they want to show enough damage from climate change to make it clear that it’s a good idea to institute policies reducing greenhouse gas emissions.
Thus they only need to show some significant damage, not a globally catastrophic amount. And they want to maintain as much rigor as possible to prevent the discovery of mistakes, and it's easier to be rigorous about likely outcomes than about tail risks.
Yet I think longtermist EAs should be more interested in the tail risks. If I’m right, then the questions we’re most interested in are underrepresented in the literature.
We’re planning Q4 goals for the Forum.
Do you use the Forum? (Probably, considering.) Do you have feelings about the Forum?
If you send me a PM, one of the CEA staffers running the Forum (myself or Aaron) will set up a call where you can tell us all the things you think we should do.
Please fix the EA forum search engine and/or make it easier to find forum posts through Google.
On the whole, I really like the search engine. But one small bug you may want to fix is that occasionally the wrong results appear under ‘Users’. For example, if you type ‘Will MacAskill’, the three results that show up are posts where the name ‘Will MacAskill’ appears in the title, rather than the user Will MacAskill.
EDIT: Mmh, this appears to happen because a trackback to Luke Muehlhauser’s post, ‘Will MacAskill on Normative Uncertainty’, is being categorized as the name of a user. So, not a bug with the search engine as such, but still something that the EA Forum tech team may want to fix.
Oh the joys of a long legacy of weird code. I’ve deleted those accounts, although I’m sad to report that our search engine is not smart enough to figure out that “Will MacAskill” should return “William_MacAskill”
Is there a way to give Algolia additional information from the user’s profile so that it can fuzzy search it?
We could probably add a nickname field that we set manually.
Yeah, you can add lots of additional fields. It also has like 100 options for changing the algorithm (including things like changing the importance of spelling errors in search, and its eagerness to correct them), so playing around with that might make sense.
With a configuration change, the search engine now understands that karma is important in ranking posts and comments. (It unfortunately doesn’t have access to karma for users.)
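For the curious, this kind of change maps onto Algolia's `customRanking` setting, which breaks ties between equally relevant search results using a numeric attribute. A minimal sketch of the settings payload follows; the `baseScore` field name is an assumption for illustration, not necessarily the Forum's actual schema:

```python
def karma_ranking_settings(karma_field: str = "baseScore") -> dict:
    """Build an Algolia index-settings payload that orders records of
    equal textual relevance by descending karma."""
    return {
        # customRanking is applied after Algolia's built-in relevance
        # criteria (typo count, word proximity, exactness, ...), so it
        # acts as a tiebreaker rather than overriding relevance.
        "customRanking": [f"desc({karma_field})"],
    }
```

Such a payload would be passed to the index's set-settings call; the user index can't get the same treatment because, as noted above, it has no karma attribute to rank on.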
This doesn’t fix the example I put forward, but it does make the search function more understandable and less frustrating. Thanks!
Oh, interesting. LessWrong always had that, and I never even thought about that maybe being a configuration difference between the two sites.
Curious what the problem with the current search engine is? I agree that it's important to be able to find Forum posts via Google, which is currently an EA Forum-specific issue, but improvements to the search likely also affect LessWrong, so I'm curious to get more detail on that.
Posts are not listed in order of relevance. You need to know exact words from the post you’re searching for in order to find it—preferably exact words from the title.
For example, if I wanted to find your post from four days ago on long term future grants and typed in “grants”, your post wouldn’t appear, because your post uses the word “grant” in the title instead.
FYI, this was a very helpful concrete example.
On reflection, your reasoning doesn't hold, though: it's not because the post uses the word 'grant'. If I search 'grant' I get almost identical results; certainly the first 6 are the same. If I search 'ltf grants' I get the right thing even though neither 'ltf' nor 'grants' is in the title. I also think it's not as though there aren't a lot of other posts you could be searching for with the word 'grant'. It isn't just random other posts: there are *many* posts within ~2x karma that have that word in the title.
Still, I share a vague sense that something about search is not quite right, though I can’t put my finger on it.
(Edit: This was written before Khorton edited a concrete example into their comment)
Interesting. I haven’t had many issues with the search. I mostly just wanted it to have more options that I can tweak (like restricting it to a specific time period and author). If you know of any site (that isn’t a major search engine provider) that has search that does better here, I would be curious to look into what technology they use (we use Algolia, which seems to be one of the most popular search providers out there, and people seem to generally be happy with it). It might also be an issue of configuration.
Speaking to the Google search results: it's pretty hard to just rise up the Google rankings. We've done the basic advice: the crawled page contains the post titles and keywords, and we've made sure Google finds the mobile view satisfactory. It's likely there's more we can do, but it's not straightforward. Complicating matters is that during the great spampocalypse in May, we were hit with a punitive action from Google, because we were polluting their ranking algorithm with spam links. You may remember a time when there were no results linking to posts at all. We fixed it, but it's possible (and I'd guess likely) that we're still getting dinged for that. Unfortunately, Google gives us no way of knowing.
NB: We're now done planning Q4. Suggestions are still valuable, but consider holding off on further comments for a bit; we have a final draft of a post that's about to give a lot more context. Of course, if you've got a useful comment you'd otherwise forget about, I don't mind continuing to answer.
I'm wondering about the possibility of up-voting one's own posts and comments. I find that a bit of an odd system. My guess would be that someone up-voting their own post is a much weaker signal of quality than someone up-voting someone else's post.
Also, it feels a bit entitled/boastful to give a strong up-vote to one’s own posts and comments. I’m therefore reluctant to vote on my own work.
Hence, I’d suggest that one shouldn’t be able to vote on one’s own posts and comments.
By default, your comments are posted with a regular upvote on them, and your posts with a strong upvote on them. The fact that it's the default seems to me to lower my concern about boastfulness. Although I do think it's possible the Forum shouldn't let you change away from those defaults. When I observed someone strong-upvoting their own comments on LW, I found it really crass.
As to why not change the default: I do think that you by default endorse your comments and posts. This provides useful info to people, because if you're a user with strong upvote power, your posts and comments enter more highly rated. This provides a small signal to new users about who the Forum has decided to trust. And it makes it less likely that you'll see a dispiriting "0" next to your comment. OTOH, we don't count self-votes for the purposes of calculating user karma, so maybe, for consistency, we shouldn't show it.
Although I do think it’s possible the Forum shouldn’t let you change away from those defaults.
I am in favor of these defaults and also in favor of disallowing people to change them. I know of two people on LW who have admitted to strong-upvoting their comments, and my sense is that this behavior isn't that uncommon (to give a concrete estimate: I'd guess about 10% of active users do this on a regular basis). Moreover, some of the people who may be initially disinclined to upvote themselves might start to do so if they suspect others are, both because the perception that a type of behavior is normal makes people more willing to engage in it, and because the norm to exercise restraint in using the upvote option may seem unfair when others are believed to not be abiding by it. This dynamic may eventually cause a much larger fraction of users to regularly self-upvote.
So I think these are pretty strong reasons for disallowing that option. And I don’t see any strong reasons for the opposite view.
I guess there are two different issues:
1) Should comments and posts by default start out with positive karma, or should it be 0?
2) Should it be possible for the author to change the default level of karma their post/comment starts out with?
This yields at least four combinations:
a) Zero initial karma, and that’s unchangeable.
b) Zero initial karma by default, but you could give up-votes (including strong up-votes) to your own posts, if you wanted to.
c) A default positive karma (which is a function of your total level of karma), which can’t be changed.
d) A default positive karma, which can be increased (strong up-vote) or decreased (remove the default up-vote). (This is the system we have now.)
My comments only pertained to 2), whether you should be able to change the default level of karma, e.g. to give strong up-votes to your own posts and comments. On that, you found it "crass" when someone did that. You also made this comment:
This provides useful info to people, because if you’re a user with strong upvote power, your posts and comments enter more highly rated. This provides a small signal to new users about who the Forum has decided to trust. And it makes it less likely that you’ll see a dispiriting “0” next to your comment.
This rather seems to relate to 1).
As stated, I don’t think one should be able to change the default level of karma. This would rule out b) and d), and leave a) and c). I have a less strong view on how to decide between those two systems, but probably support a).
I agree with you and Pablo that I’d rather see it unchangeable. My prioritization basically hinges on how common it is. If Pablo’s right and it’s 10%, that seems concerning. I’ve asked the LW team.
Making it unchangeable also seems reasonable to me, or at least making it so that you can no longer strong-upvote your own comments.
Strong-upvoting your own posts seems reasonable to me (and is also the current default behavior)
I clicked on ‘go to Permalink’ for this post, because I was going to send it to a friend, but I don’t think it did anything.
What I actually wanted to do was find a link to just this post (not the whole shortform) that wasn’t going to change.
What happens when you do that is that your browser's URL bar now points to the permalink, with a fancy standalone version of the comment above the post. Unfortunately, because the page doesn't actually change, you aren't navigating to a new page and your scroll position stays where it is. It's a new feature from LessWrong; I've filed a bug report with them.
I’d be interested in seeing views/ hits counters on every post and general data on traffic.
Also quadratic voting for upvotes.
This is an interesting question. It would certainly prevent a bunch of bad behavior and force people to be more intentional in their voting. Here are, I think, the main reasons we / LW have talked about it but not implemented it:
a) Some people just read way more of the Forum than others. Should their votes have less weight because they must be spread over many comments?
b) I don't want users to have to think about conserving their voting resources. If they like something, I want them to vote and move on. Karma is fun, but the purpose of the site is the content.
We could a) put that data at the start of every post, or b) put it under an option in the … menu. I think (a) wouldn't provide enough value to balance the cost of busying the UI, which is currently very sparse and the more valuable for it. I don't expect (b) would be used much. I don't have the data to back this up (yet! I really want to be able to easily check all of these), but I'd guess most people don't click on those menu buttons very often.
Mandatory field 200 characters summarizing the blogpost.
Mandatory keywords box.
Better Google Docs integration.
Better Google Docs integration
My guess is that it’ll be hard to beat copy and pasting. Copy and pasting of styling works fairly well and is a pretty simple C-c,C-v. It works fairly well right now, with the main complaints (images, tables) being limitations of our current editor. I’m optimistic that a forthcoming upgrade to use CKEditor will improve the situation a lot.
It works fairly well right now, with the main complaints (images, tables) being limitations of our current editor.
Copying images from public Gdocs to the non-markdown editor works fine.
See an upcoming post for how I feel about tagging.
Mandatory field 200 characters summarizing the blogpost
This one's been requested a few times. My thought is that a well-written post has a summary or hook in the first paragraph. Aaron is more optimistic, though.
With this one and the keywords box, I’d tend heavily towards leaving it optional but encouraged. I want to keep posting easy, and lean towards trusting the authors to know what will work with their post.
I want to write a post saying why Aaron and I* think the Forum is valuable, which technical features currently enable it to produce that value, and what other features I’m planning on building to achieve that value. However, I’ve wanted to write that post for a long time and the muse of public transparency and openness (you remember that one, right?) hasn’t visited.
Here’s a more mundane but still informative post, about how we relate to the codebase we forked off of. I promise the space metaphor is necessary. I don’t know whether to apologize for it or hype it.
You can think of the LessWrong codebase as a planet-sized spaceship. They’re traveling through the galaxy of forum-space, and we’re a smaller spacecraft following along. We spend some energy following them, but benefit from their gravitational pull.
(The real-world correlate of their gravity pulling us along is that they make features which we benefit from.)
We have less developer-power than they do (1 dev vs 2.5-3.5, depending on how you count). So they can move faster than we can, and generally go in directions we want to go. We can go further away from the LW planet-ship (by writing our own features), but this causes their gravitational pull to be weaker and we have to spend more fuel to keep up with them (more time adapting their changes for our codebase).
I view the best strategy as making features that LW also wants (moving both ships in directions I want), and then, when necessary, making changes that only I want.
One implication of this is that feature requests are more likely to be implemented, and implemented quickly, if they are compelling to both the EA Forum and LessWrong. These features keep the spaceships close together, helping them burn less fuel in the process.**
*(and Max and Ben)
** I was going to write something about how this could be a promising climate-change reduction strategy, until I remembered that carbon emissions don’t matter in outer space.
Tip: if you want a way to view Will’s AMA answers despite the long thread, you can see all his comments on his user profile.