The sections “Biggest challenges with writing big reports” and “What it’s like working at Open Phil” were interesting and relatable
A lot of what was said in these sections aligned quite a bit with my own experiences from researching/writing about EA topics, both as part of EA orgs and independently.
For example, Ajeya said:
One thing that’s really tough is that academic fields that have been around for a while have an intuition or an aesthetic that they pass on to new members about, what’s a unit of publishable work? It’s sometimes called a ‘publon’. What kind of result is big enough? What kind of argument is compelling enough and complete enough that you can package it into a paper and publish it? And I think with the work that we’re trying to do — partly because it’s new, and partly because of the nature of the work itself — it’s much less clear what a publishable unit is, or when you’re done. And you almost always find yourself in a situation where there’s a lot more research you could do than you assumed naively, going in. And it’s not always a bad thing.
It’s not always you’re being inefficient or you’re going down rabbit holes, if you choose to do that research and just end up doing a much bigger project than you thought you were going to do. I think this was the case with all of the timelines work that we did at Open Phil. My report and then other reports. It was always the case that we came in, we thought, I thought I would do a more simple evaluation of arguments made by our technical advisors, but then complications came up. And then it just became a much longer project. And I don’t regret most of that. So it’s not as simple as saying, just really force yourself to guess at the outset how much time you want to spend on it and just spend that time. But at the same time, there definitely are rabbit holes, and there definitely are things you can do that eat up a bunch of time without giving you much epistemic value. So standards for that seemed like a big, difficult issue with this work.
I think most of the EA-related things I’ve started looking into and writing up, except those that I deprioritised very early on, ended up growing and spawning spinoff tangent docs/posts. And then those spinoffs often ended up spawning their own spinoffs, and so on. I think this was usually actually productive, and sometimes the spinoffs were more valuable than the original thing, but it definitely meant a lot of missed deadlines, changed plans, and uncertainty about when to just declare something finished and move on.
I don’t have much experience with research/writing on non-EA-related topics, so maybe this is just a matter of my own (perhaps flawed) approach, or maybe it’s fairly normal. (One thing that comes to mind here is that—if I recall correctly—Joe Henrich says in his newest book, The WEIRDest People in the World, that his previous book—The Secret of Our Success—was basically just meant to be the introductory chapters to WEIRDest People. And the prior book is itself quite long and quite fascinating!)
But I did do ~0.5 FTE-years of academic psychology research during my Honours year. There, I came up with the question and basic design before even starting, and the final product stuck pretty closely to that plan, on schedule and with no tangents. So there’s at least weak evidence that my more recent tangent-heavy approach (which I think I do endorse) is specific to newer fields, rather than just being how I’d work even in more established ones.
A few other things Ajeya said in those sections that resonated with me:
So a lot of the feeling of collaboration and teamyness and collegiality is partly driven by like, does each part of this super siloed organisation have its own critical mass.
[...]
And then [in terms of what I dislike about my job], it comes back to the thing I was saying about how it’s a pretty siloed organisation. So each particular team is quite small, and then within each team, people are spread thin. So there’s one person thinking about timelines and there’s one person thinking about biosecurity, and it means the collaboration you can get from your colleagues — and even the feeling of team and the encouragement you can get from your colleagues — is more limited. Because they don’t have their head in what you’re up to. And it’s very hard for them to get their head in what you’re up to. And so people often find that people don’t read their reports that they worked really hard on as much as they would like, except for their manager or a small set of decision makers who are looking to read that thing.
And so I think that can be disheartening.
It was interesting—and sort of nice, in a weird way!—to hear that even someone with a relatively senior role at one of the most prominent and well-resourced EA orgs has those experiences and perceptions.
(To be clear, I’ve overall been very happy with the EA-related roles I’ve worked in! Ajeya also talked about a bunch of stuff about her job that’s really positive and that also resonated with me.)
One other part of those sections that feels worth highlighting:
Rob Wiblin: Is there anything you can say to people who I guess either don’t think it’s possible they’ll get hired by Open Phil and maybe were a bit disappointed by that, or have applied and maybe didn’t manage to get a trial?
Ajeya Cotra: Yeah. I guess my first thought is that Open Phil is not people’s only opportunity to do good. Even doing generalist research of the kind that I think Open Phil does a lot of, especially for that kind of research, I think it’s a blessing and a curse, but you just need a desk and a computer to do it. I would love to see people giving it a shot more, and I think it’s a great way to get noticed. So when we write reports, all the reports we put out recently have long lists of open questions that I think people could work on. And I know of people doing work on them and that’s really exciting to me. So that’s one way to just get your foot in the door, both in terms of potentially being noticed at a place like Open Phil or a place like FHI or GPI, and also just get a sense of what does it feel like to do this? And do you like it? Or are the cons outweighing the pros for you?
I more or less followed similar advice myself, and have been very happy with the apparent results for my own career. And I definitely agree that there are a remarkable number of open questions (e.g., here and here) which it seems a variety of people could just independently have a crack at, thereby testing their fit and/or directly providing useful insights.