Is that an intentional policy, or just a feature that hasn’t been implemented yet?
If intentional, could you say why? Obviously it could be confusing, but there are some substantial downsides to preventing it.
I’m not sure how public the hiring methodology is, but if it’s fully public then I’d expect the candidates to be ‘lost’ before the point of sending in a CV.
If it’s less public that would be less likely, though perhaps the best candidates (assuming they consider applying for jobs at all, and aren’t always just headhunted) would only apply to jobs that had a transparent methodology that revealed a short hiring process.
I think this will make the forum far more useful. Could you add some kind of taglist (or prominent link to one) to the home page?
I wonder if there’s a case for carrying heavier loads on your front if you can’t easily use hands only. It seems counterintuitive, since that would pull you forward into a hunch, but maybe what matters would be working your posterior chain rather than the actual posture it temporarily puts you in.
I’ve got a very slowly in-progress multipart essay attempting to definitively answer this question without resort to (what we normally mean by) intuition: http://www.valence-utilitarianism.com/posts/choose-your-preference-utilitarianism-carefully-part-1
Kudos to 80K for both asking and publishing this. I think I literally agree with every single one of these (quite strongly with most). In particular, the hiring practices criticism—I think there was a tendency, especially with early EA orgs, to hire for EAness first and competence/experience second, and that this has led to a sort of hiring-practice lock-in, where they value those characteristics, if not to the same degree as before, then still with a greater bias than a lean, efficiency-minded org should have.
A related concern is overinterviewing—I read somewhere (unfortunately I can’t remember the source) the claim that the longer and more thorough your interview process, the more you select for people with the willingness and lack of competition for their time to go through all those steps.
This (if I’m right) would have the quadruple effect of wasting EAs’ time (which you’d hope would be counterfactually valuable), wasting the organisations’ time (ditto), potentially reducing the fidelity of the hiring process, and increasing the aforementioned bias towards willingness.
Re: searching for great posts, there is also an archive page where you can order by top and other things in the gear menu.
Ok, that’s quite a lot more helpful than I’d realised—why not make it more prominent though? I didn’t see these options even when actively looking for them, and even knowing they’re there, unless I deep link to the page as someone above suggested, it’s several clicks to reach where I want to be. Though (more on this below), the ‘top’ option is the only one I can see myself ever using.
Can you say more about how you used the old forum? I’m hearing something like “A couple of times per year I’d look at the top-posts list and read new things there”. (I infer a couple of times per year because once you’ve done it once or twice I’d guess you’ve read all the top posts.) I think that’s still very doable using the archive feature.
I mainly used the ‘top posts in <various time periods>’ option (typically the 1- or 3-month options, IIRC); my median time between visits was probably something like 1-3 months, so that fit pretty well. That said, even on the old forum I strongly wished for a way to filter by subject. Honestly, my favourite forums for UX were probably the old phpBB-style ones, where you’d have forums devoted to arbitrarily many subtopics. I don’t think they’re anywhere near the pinnacle of forum design, but ‘subtopic’ is such an important divider that I feel much less clear on how I can get value from a forum without it (which is part of why I’ve never spent a huge amount of time on the EA forums—though a bigger part is just not having much time to spare).
To a lesser degree, I found the metadata on who’d been active recently useful. It let me pseudo-follow certain users (though I suspect an actual follow function would be more helpful).
Am also surprised that you lose posts. My sense is that for a post to leave the frontpage takes a couple of days to a week. Do you keep tabs open that long? Or are you finding the posts somewhere else?
Often a friend would link me to a post that had already been around for a week or two when I read it.
My impression, incidentally, is that the search functionality is decidedly better than it was on the old forum: the search results seem to be more related to what I’m looking for, and to be easier to sort through (eg separating ‘comments’ and ‘posts’).
For what it’s worth, my main concerns are the visual navigation (esp filtering and sorting) rather than a search feature—the latter I find Google invariably better for, as long as you can persuade the bots to index frequently.
(also worth noting that for me it’d be really helpful to have a user-categorisation or tagging system, so we could easily filter by subject matter. Even just old-school subforums would be swell, but the ideal might be allowing non-authors to tag posts as well)
A less drastic option would be for OpenPhil to just hire more research staff. I think there’s some argument for this given that they’re apparently struggling to find ways to distribute their money:
1) a new researcher doesn’t need to be as valuable as Holden to have positive EV against the counterfactual of the money sitting around waiting for Holden to find somewhere to donate it to in 5 years
2) the more researchers are hired, even (or especially) ones Holden doesn’t agree with, the more they guard against the risk of any blind spots, particular passions etc of Holden’s coming to dominate and causing missed opportunities, since ultimately, as far as I can tell, there aren’t really any strong feedback mechanisms on the grants he ends up making other than internal peer review.
(I wouldn’t argue strongly for this, but I haven’t seen a counterpoint to these arguments that I find compelling)
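The arithmetic behind point 1 can be made concrete with a toy sketch (all the numbers and the decay model here are my own illustrative assumptions, nothing from OpenPhil): granting now at lower effectiveness beats waiting, provided the new hire clears the best researcher's effectiveness discounted by the years the money would otherwise sit idle.

```python
# Illustrative sketch of argument (1), with entirely made-up numbers:
# a marginal researcher deploying money now only needs to beat the
# best researcher's effectiveness *discounted* for the years the money
# would otherwise sit idle (assuming the best giving opportunities
# decay at some rate as they go unfunded).

def breakeven_effectiveness(best=1.0, decay=0.10, years_idle=5):
    """Minimum relative effectiveness a new hire needs so that
    granting now beats waiting `years_idle` years for the best
    researcher to place the money."""
    return best * (1 - decay) ** years_idle

# Under these hypothetical assumptions, a new researcher ~59% as
# effective as the best one still has positive marginal EV:
print(round(breakeven_effectiveness(), 2))  # prints 0.59
```

The qualitative conclusion (a new hire need not match the best researcher) survives most reasonable parameter choices, though the exact breakeven obviously depends on the assumed decay rate and delay.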
The PA view doesn’t need to assign disvalue to death to make increasing lifespans valuable. It just needs to assign to death a smaller value than being alive.
It depends how you interpret PA. I don’t think there is a standard view—it could be ‘maximise the aggregate lifetime utility of everyone currently existing’, in which case what you say would be true, or ‘maximise the happiness of everyone currently existing while they continue to do so’, which I think would turn out to be a form of averaging utilitarianism, and on which what you say would be false.
If we make LEV nearer we don’t increase the distress anti-aging therapies will cause to people at first. We just anticipate the distress.
Yes, but this was a comment about the desirability of public advocacy of longevity therapies rather than the desirability of longevity therapies themselves. It’s quite plausible that the latter is desirable and the former undesirable—perhaps enough so to outweigh the latter.
This doesn’t matter though, since, as I wrote, impact under the neutral view is actually bigger.
Your argument was that it’s bigger subject to its not reducing the birthrate, and to adding net population in the near future being good in the long run. Both are claims for which I think there’s a reasonable case; neither is a claim that seems to have a 0.75 probability (I would go lower for at least the second one, but YMMV). With a 0.44+ probability that at least one assumption is false, I think it matters a lot.
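For concreteness, the 0.44+ figure follows if the two assumptions are treated as independent, each with probability 0.75 of being true (a simplifying assumption on my part):

```python
# Sketch of the arithmetic above, assuming the two claims are
# independent and each has a 0.75 probability of being true.
p_each = 0.75
p_both_true = p_each * p_each            # 0.5625
p_at_least_one_false = 1 - p_both_true   # 0.4375, i.e. the ".44+"
print(p_at_least_one_false)  # prints 0.4375
```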
Financing aging research has only the effect of hastening it, so moving the date of LEV closer. The ripple effect that defeating aging would cause on the far future would remain the same. People living 5000 years from now wouldn’t care if we hit LEV now or in 2040. So this isn’t even a measure of impact.
Again this is totally wrong. Technologies don’t just come along and make some predetermined set of changes then leave the world otherwise unchanged—they have hugely divergent effects based on the culture of the time and countless other factors. You might as well argue that if humanity hadn’t developed the atomic bomb until last year, the world would look identical to today’s except that Japan would have two fewer cities (and that in a few years, after they’d been rebuilt, it would look identical again).
Also, my next post is exactly on the shorter-term impact. I think it’ll be published in a couple of weeks. It will cover DALYs averted at the end of life, impact on life satisfaction, the economic and societal benefits, and impact on non-human animals.
Looking forward to it :)
I think it’s an interesting cause area (upvoted for investigating something new), though I have three important quibbles with this analysis (in ascending order of importance):
1) The person-affecting (PA) view doesn’t make this a slam-dunk. PAness doesn’t signify that death in itself has negative value, so given your assumption ‘that there isn’t suffering at the end of life and people get replaced immediately’, on the base PA view, increasing lifespans wouldn’t in itself generate value. No doubt there are flavours of PA that would claim death *does* have disvalue, but those would need to be argued for separately.
Obviously there often *is* profound suffering at the end of life, which IMO is a much stronger argument for longevity research—on both PA and totalising views. Though I would also be very wary of writing articles arguing on those grounds, since most people very sensibly try to come to terms with the process of ageing to reduce its subjective harm to them, and undoing that for the sake of moving LEV forward a few years might cause more psychological harm than it prevented.
2) My impression is that the PA view is held by a fairly small minority of EAs and consequentialist moral philosophers (for advocates of nonconsequentialist moral views, I’m not sure the question would even make sense—and it would make a lot less sense to argue for longevity research based on its consequences), and if so, treating it as having equal evidential weight as totalising views is misleading.
It’s obviously too large a topic to give much of an inside view on here, but if your view of ethics is basically monist (as opposed to dualist—ie queer-sort-of-moral-fact-ist) I don’t think there’s any convincing way you could map real-world processes onto a PA view, such that the PA view would make any sense. There’s too much vagueness about what would qualify as the ‘same’ or a ‘different’ person, and no scientific basis for drawing lines in one place rather than another (and hence, none for drawing any lines at all).
3) ‘Reminder: most of the impact of aging research comes from making the date of LEV come closer and saving the people who wouldn’t otherwise have hit LEV.’
This is almost entirely wrong. Unless we a) wipe ourselves out shortly after hitting it (which would be an odd notion of longevity), or b) reach it within the lifespans of most existing people *and* take a death-averse PA view, the vast majority of LEV’s impact will come from its ripple effect on the far future, and the vast majority of its expected impact will be our best guess as to that.
EAs tend to give near-term poverty/animal welfare causes a pass on that estimation: perhaps due to some PA intuitions; perhaps because they’re doing good and (almost) immediate work, which if nothing else gives them a good baseline for comparison; perhaps because the immediate measurable value might be as good a proxy as any for far-future expectation in the absence of good alternative ways to think about the latter. (Plenty of people would argue that these are all wrong, and hence that we should focus more directly on the far future. But I doubt many of the people who disagree with *them* would claim on reflection that ‘most of the impact of poverty reduction comes from the individuals you’ve pulled out of poverty’.)
Longevity research doesn’t really share these properties, though, and certainly doesn’t have them to the same degree, so it’s unlikely to have the same intuitive appeal, in which case it’s hard to argue that it *should*. Figuring out the short-term effects is probably the best first step towards doing this, but we shouldn’t confuse it with the end goal.
the focus on low rent, which seems like a popular meme among average and below average EAs in the bay area, yet the EAs whose judgment I most respect act as if rent is a relatively small issue.
This seems very wrong to me. I work at Founders Pledge in London, and I doubt a single one of the staff there would disagree with a proposition like ‘the magnitude of London rents has a profound effect on my lifestyle’.
They also now pay substantially closer to market-rate salaries than they did for their first 2-3 years of existence, during which people would no doubt have been far more sympathetic to the claim.
A couple of thoughts I’d add (as another trustee):
3. Demand for the hotel has been increasing more or less linearly (until we hit current funding difficulties). As long as that continues, the projects will tend to get better.
This seems like a standard trajectory for meta-charities: for eg I doubt 80k’s early career shifts looked anywhere near as high value as the average one does now. I should know—I *was* one of them, back when their ‘career consultation’ was ‘speculating in a pub about earning to give’ (and I was a far worse prospect than any 80k advisee or hotel resident today!)
Meanwhile it’s easy to scorn such projects as novel-writing, but have we forgotten this? For better or worse, if Eliezer hadn’t written that book the rationality and EA communities would look very different now.
6. This might be true as a psychological explanation, but, ceteris paribus, it’s actually a reason *to* donate, since it (by definition) makes the hotel a more neglected cause.
I would be wary of equivocating different forms of ‘inconvenience’. There are at least three being alluded to here:
1) Fighting the akrasia of craving animal products
2) The hassle of finding vegan premade food (else of having to prepare meals for yourself)
3) Reduced productivity gains from missing certain nutrients (else of having to carefully supplement constantly)
Of these, the first is basically irrelevant in the hotel—you can remove it as a factor by just not giving people the easy option to ingest animal products. The second is completely irrelevant, since the hotel serves or supplies 90% of the food people will be eating.
So that leaves only the third, which is much talked about but, so far as I know, little studied, so this ‘inconvenience’ could even have the wrong sign: the only study on the subject I found from a very quick search showed increased productivity from adopting veganism for health reasons; also, on certain models of willpower that treat it as analogous to a muscle, it could turn out that by depriving yourself (even by default, from the absence of offered foods) you improve your willpower and thus become more productive.
I’ve spoken to a number of people who eat meat/animal products for the third reason, but so far as I know they rarely seem to have reviewed any data on the question, and almost never to have actually done any controlled experiments on themselves. Honestly I suspect many of them are using the first two to justify a suspicion of the third (for eg, I know several EAs who eat meat with productivity justifications, but for whom it’s usually *processed* meat in the context of other dubious dietary choices, so they demonstrably aren’t optimising their diet for maximal productivity).
Also, if the third does turn out to be a real factor, it seems very unlikely that more than a tiny bit of meat every few days would be necessary to fix the problem for most people, and going to the shops to buy that for themselves seems unlikely to cause them any serious inconvenience.
I can’t help but appreciate the irony that 5 hours after having been posted this is still awaiting moderator approval.
Given that other organizations can raise large funds, an alternative explanation is that donors think that the expected impact of the organizations that cannot get funding is low.
It’s not entirely obvious how that looks different from EA being funding-constrained. No donors are perfectly rational, and they surely tend to be irrational in relatively consistent ways, which means that some orgs having surplus funds is totally consistent with there not being enough money to fund all worthwhile orgs. (This essentially seems like a microcosm of the world having enough money to fix all its problems with ease, and yet there ever having been a niche for EA funding.)
Also, if we take the estimates of the value of EA marginal hires on the survey from a couple of years back literally, EA orgs tend to massively underpay their staff compared to their value, and presumably suffer from a lower quality hiring pool as a result.
I agree with all of this, though I’d add that I think part of the problem is the recent denigration of earning to give, which is often all that someone realistically *can* do, at least in the short term.