Is the claim here that EA orgs focusing on GCRs didn’t think GoF research was a serious problem and consequently didn’t do enough to prevent it, even though they easily could have if they had just tried harder?
My impression is that many organisations and individual EAs were both concerned about risks from GoF research and working to prevent it. A postmortem on the strategies used seems plausibly useful, as does a retrospective on whether it should have been an even bigger focus, but the claim as stated above I think is false, and probably unhelpful.
Overall I liked this post, and in particular I very strongly endorse the view that it’s worth spending nontrivial time/energy/money to improve your health, energy, productivity etc. I don’t have a strong view about how useful the specific pieces of advice are; my impression is that the literature is fairly poor in many of these areas. Partly because of this, my favourite section was:
One thing people sometimes say when I tell them there is a small chance taking some pill will fix their problems is that this seems somehow like cheating because it doesn’t require any lifestyle changes. As if because it’s easy you don’t really deserve to have it fixed? I don’t get it but suffice to say that if for ~$20 you can trial something with a simply massive expected value (even if it’s unlikely to work) and usually with almost no downside (you can just stop taking it after two weeks if it doesn’t work) you should definitely try that thing. Think of it like buying a lottery ticket but with much better odds and a chance of actually making you consistently happier in the long-run.
It’s noteworthy that the above applies not just to “taking some pill”, but in fact to any low-cost-of-trying intervention which might prove substantially beneficial in the long run.
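To make the “buying a lottery ticket but with much better odds” framing concrete, here is a toy expected-value calculation. All the numbers (cost, success probability, value of the fix, time horizon) are purely illustrative assumptions, not figures from the post:

```python
# Hypothetical numbers, purely illustrative: a $20 two-week trial of an
# intervention with an assumed 5% chance of durably fixing a problem
# you'd value $10,000/year to have solved, over a 5-year horizon.
cost = 20.0                # price of the trial
p_success = 0.05           # assumed chance it works
annual_benefit = 10_000.0  # assumed value per year of the problem being fixed
years = 5                  # assumed horizon over which the benefit persists

expected_value = p_success * annual_benefit * years - cost
print(expected_value)  # → 2480.0
```

Even with a 95% chance of wasting the $20, the expected value under these assumptions is large and positive, which is the structure of the argument in the quoted section.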
To that end, I was surprised to see the following at the end (as I think its framing is contradicted by the above).
Less ideal solutions (but still definitely worth considering) include patching over the problem by trying things like nootropics, antidepressants, or other medication.
It seems straightforwardly wrong to characterise medically treating e.g. clinical depression or ADHD as a “less ideal solution” which is merely “patching over the problem”. For many, treatment will be necessary for at least some time even if lifestyle adjustments and therapy are sufficient management in the longer term. For many others, medicine is a necessary part of the long-term solution, and possibly also a sufficient long-term solution. I really liked this quote from Howie in a recent 80k podcast[1] about this.
[1] I’m linking to this because I think it makes the point well, but should probably disclose that I’ll be working at 80k from September. The opinions above are only intended to represent my views, including the interpretation of what Howie’s saying in the quote.
I agree: a single rejection is not close to conclusive evidence, but it is still evidence on which you should update (though, depending on the field, possibly not by very much).
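The “update, but not very much” point can be illustrated with a toy Bayesian calculation. All the probabilities below are illustrative assumptions, chosen only to show that when rejection is common even for good work, one rejection shifts your belief only modestly:

```python
# Toy Bayesian update: how much should one rejection shift your belief
# that your work is "good" (e.g. publishable)? All numbers are
# illustrative assumptions.
prior = 0.5             # assumed prior P(work is good)
p_reject_if_good = 0.6  # even good work is often rejected (assumed)
p_reject_if_bad = 0.9   # bad work is rejected more often (assumed)

# Bayes' rule: P(good | rejected)
numerator = prior * p_reject_if_good
posterior = numerator / (numerator + (1 - prior) * p_reject_if_bad)
print(round(posterior, 2))  # → 0.4
```

Under these assumptions a single rejection moves you from 0.5 to 0.4: real evidence, but far from conclusive. In a field with very high rejection rates the two likelihoods converge and the update shrinks further.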
Agree with this but would note that “The Signal and the Noise” should probably be your first intro or likely isn’t worth bothering with. It’s a reasonable intro but I got ~nothing out of it when I read it (while already familiar with Bayesian stats).
The “Metaculus” forecast weights users’ forecasts by their track records and corrects for calibration; I don’t think the details of how are public. Yes, you can only see the community forecast on open questions.
I’d recommend against drawing the conclusion you did from the second paragraph (or at least, against putting too much weight on it). Community predictions on different questions about the same topic on Metaculus can be fairly inconsistent, due to different users predicting on each.
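Since the actual Metaculus weighting is not public, here is one plausible sketch of what “weighting users’ forecasts by their track record” could look like. The function name, the scoring scheme, and all the numbers are my own illustrative assumptions, not the real algorithm:

```python
# Sketch of track-record-weighted forecast aggregation. The real
# Metaculus scheme is not public; this is one plausible, simplified form.
def weighted_forecast(probs, scores):
    """Average user forecasts, weighting each by an accuracy score.

    probs  -- each user's probability for the event resolving yes
    scores -- each user's track-record weight (higher = more reliable)
    """
    total = sum(scores)
    return sum(p * s for p, s in zip(probs, scores)) / total

# Three hypothetical users: the one with the stronger track record
# (weight 3.0) pulls the aggregate towards their forecast of 0.2.
print(weighted_forecast([0.2, 0.5, 0.8], [3.0, 1.0, 1.0]))  # → 0.38
```

This also illustrates the inconsistency point above: because each question attracts a different set of users (and hence different weights), aggregates on related questions need not cohere.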
I already believed it and had actually been talking to someone about it recently, so I was surprised and pleased to come across the post, but couldn’t find a way of saying this which didn’t just sound like “oh yeah, thanks for writing up my idea”. Sorry for the confusion!
Thanks for writing this, even accounting for suspicious convergence (which you were right to flag), it just seems really plausible that improving animal welfare now could turn out to be important from a longtermist perspective, and I’d be really excited to hear about more research in this field happening.
released his preliminary findings on the Social Science Research network as a preprint, meaning the study has yet to receive a formal peer review.
It’s worth noting that Campbell didn’t subject the homicide findings to the same battery of statistical tests as he did the police killings since they were not the main focus of his research.
I thought there had also been some cautionary tales learned in the last year about widely publicising and discussing headline conclusions from preprint data without appropriate caveats. Apparently not.
There’s the EA jobs facebook group, and I’ll pm you a discord link.
It’s worth noting that 80k has a lot of useful advice on how to think about career impact, and also the option to apply for advising, as well as the jobs board. There’s also Probably Good (search for their forum post) and Animal Advocacy careers.
I want to echo this. I think my own experience of debating has been useful to me in terms of my ability to intelligence-signal in person, but was pretty bad overall for my epistemics. One interesting thing about BP (which was the format I competed in most frequently at the highest level) was the importance, in the 4th speaker role, of identifying the cruxes of the debate (usually referred to as “clash”), which I think is really useful. Concluding that the side you’ve been told to favour has then “won” all of the cruxes is… less so.
All this advice seems really good, and I want to particularly echo this bit:
It might be worth reframing how you think about this as “how can I find a job that has the biggest impact”, rather than “how can I get an EA job”.
This post is already having a huge impact on some of the most influential philosophers alive today! Thanks so much for writing it.
Evidence Action are another great example of “stop if you are in the downside case” done really well.
I was under the impression CSER was pretty “core EA”! Certainly I’d expect most highly engaged EAs to have heard of them, and there aren’t that many people working on x-risk anywhere.
I’ve been much less successful than LivB but would endorse it, though I’d note that there are substantially better objective metrics than cash prizes for many kinds of online play, and I’d have a harder time arguing that those were less reliable than subjective judgements of other good players. It somewhat depends on sample size, though: at the highest stakes, the combination of a very small player pool and fairly small samples makes this quite believable.
Hi Jacob, I think you might really enjoy and benefit from reading this blog by Julia Wise. While it’s great that you have such a strong instinct to help people, we’re in this game for the long haul, and you won’t have a big impact by feeling terrible about yourself and feeling guilty if you don’t make sacrifices. In particular, it’s very likely that focusing on doing well in college and then university is going to make a much bigger difference to your lifetime impact than whether you can get a part-time job to donate right now.
I only heard of this idea relatively recently but have been extremely impressed so far. Looking forward to this episode!
Because the orgs in question have literally said so, because I think the people working there genuinely care about their impact and are competent enough to have heard of Goodhart’s law, and because in several cases there have been major strategy changes which cannot be explained by a model of “everyone working there has a massive blindspot and is focused on easy to meet targets”. As one concrete example, 80k’s focus has switched to be very explicitly longtermist, which it was not originally. They’ve also published several articles about areas of their thinking which were wrong or could have been improved, which again I would not expect for an organisation merely focused on gaming its own metrics.
Yeah to be clear I meant that the decision making processes are probably informed by these things even if the metrics presented to donors are not, and from the looks of Ben’s comment above this is indeed the case.