I’m not going to address the topic of the post itself, but this post indirectly highlights another reason, one I haven’t seen mentioned, to avoid posting under a burner account where possible.
When people post under burner accounts, it is harder to be confident in the information those posts contain, because it could be the same person posting repeatedly. To give one example (not the only one): if you see X burner accounts each posting “I observe Y”, that could represent anywhere from 1 to X independent observations of Y, so it’s hard to get a sense of the true frequency. Posting under burners therefore undermines the posters’ own message, because some of their information will be discounted.
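To make the counting argument concrete, here’s a toy sketch (the numbers are made up purely for illustration): the reader only ever sees the total number of posts, which is identical whether one person or five people made the observation.

```python
def observed_posts(num_observers, posts_per_observer):
    """Each distinct observer posts their observation some number of
    times under fresh burner accounts; the reader only sees the total."""
    return sum(posts_per_observer for _ in range(num_observers))

# Five posts saying "I observe Y" are consistent with very different realities:
one_person = observed_posts(1, 5)   # one person, five burner accounts
five_people = observed_posts(5, 1)  # five people, one burner account each

# From the outside, the two cases are indistinguishable.
assert one_person == five_people == 5
```

The same total is consistent with any number of distinct observers from 1 to 5, which is exactly the ambiguity that gets the information discounted.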
In this post, the poster writes “Therefore, I feel comfortable questioning these grants using burner accounts,” which suggests they do in fact have multiple burner accounts. I recognize that using the same burner account would, over time, aggregate information and lead to slightly less anonymity, but again, the tradeoff is that switching accounts significantly undermines the signal. I suspect it could become a vicious cycle for those posting, if they repeatedly feel their posts aren’t being taken seriously.
Thanks. A quick, non-exhaustive list:
Get feedback early on. Talking to people can save a lot of time.
You should have a very clear idea of why your project is needed. Good ideas sound obvious after the fact.
That’s not to say people won’t disagree with you. If your idea takes off, you will need a thick skin.
A super-easy way to have more impact is to collaborate with others. This doesn’t help for job market papers, where people tend to want solo-authored work, but you can get a lot more done collaborating with others, and the outputs will be higher-quality, too.
Apart from collaborating with people on the project itself, do what you can to get buy-in from other people who have no relationship to it. Other people can magnify the impact in ways big and small.
It can take a while before early-career researchers find a good idea. Have more ideas than you think you’ll need.
I would really like to read a summary of this book. The reviews posted here (edit: in the original post) do not actually give much insight into its contents. I’m hoping someone will post a detailed summary on the forum (and, as EAs love self-criticism, fully expect someone will!).
There’s another point I don’t quite know how to put but I’ll give it a go.
Despite the comments above about having many ideas and getting feedback early on one’s projects—which both point toward having and abandoning ideas quickly—there’s another sense in which what one actually needs is the ability to stick with things, along with the good taste to evaluate when to try something else and when to keep going. (This is less about specific projects and more about larger shifts, like whether to stay in academia or a certain line of work at all.)
I feel like sometimes people get too much advice to abandon things early. It’s advice that has intuitive appeal (if you can’t pick winners, at least cut your losses early), and it’s good advice in a lot of situations. But my impression is that while there are some people who would do better failing faster, there are also some people who would do better if they were more patient. At least for myself, I started having more success when sticking with things for longer. The longer you stick to a thing, the more expertise you have in it. That may not matter in some fields, but it matters in academia.
Now, obviously, you want to be very selective about what you stick with. That’s where good taste comes in. But I’d start by looking honestly at yourself and at the people near you whom you see doing well in your chosen field, and asking which side of the impatient-patient spectrum you fall on compared to them. Some people are too patient; some are too impatient. I was too impatient and improved with more patience, and for some people it’s the opposite. Which advice applies most to you depends on your starting point, your field, and of course your outside options.
For econ PhDs, I think it’s worth having a lot of ideas and discarding them quickly especially in grad school because a lot of them are bad at first, but I also think there are people who jump ship from an academic career too early, like when they are on the market or in the first few years after. I suspect this might be generally true in academia where expertise really, really matters and you need to make a long-term investment, but I can’t speak for certain about other academic fields beyond economics. And I’ve definitely met many academics who played it too safe for maximizing impact, and many people who didn’t leave quickly enough. What I’m trying to emphasize is that it’s possible to make mistakes in both directions and you should put effort into figuring out which type of error you personally are more likely to make.
As a small note, we might get more precise estimates of the effects of a program by predicting magnitudes rather than whether something will replicate (which is what we’re doing with the Social Science Prediction Platform). That said, I think a lot of work needs to be done before we can have trust in predictions, and there will always be a gap between how comfortable we are extrapolating to other things we could study vs. “unquantifiable” interventions.
(There’s an analogy to external validity here: you can do more if you can assume the study you’re predicting is drawn from the same set as those you have already studied, or from that set reweighted in some way. You could in principle construct an ordering of how feasible things are to study and regress your predictive ability on it, but that would be incredibly noisy and impractical as things stand, and past some threshold you no longer observe studies at all, so you have little to say without strong assumptions about generalizing beyond that threshold.)
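The regression-on-feasibility idea in that parenthetical can be sketched as a toy simulation. Everything here is a made-up assumption for illustration: the feasibility scores, the linear relationship between feasibility and prediction error, the noise level, and the threshold below which no study is ever run.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each candidate study has a "feasibility" score in
# [0, 1]; prediction error grows as feasibility falls (harder-to-study
# things are, by assumption, harder to predict).
feasibility = rng.uniform(0, 1, 500)
prediction_error = 1.0 - feasibility + rng.normal(0, 0.3, 500)

# Below this threshold, studies are never actually run, so we observe
# no (feasibility, error) pairs there at all.
threshold = 0.4
observed = feasibility > threshold

# Regress prediction error on feasibility using only the observed studies.
slope, intercept = np.polyfit(feasibility[observed],
                              prediction_error[observed], deg=1)

# Extrapolating below the threshold relies entirely on the fitted line:
# there is no data there, so the estimate rests on a strong assumption
# that the linear relationship continues past the threshold.
extrapolated_error = slope * 0.1 + intercept
```

The point of the toy is the last line: the fitted regression mechanically produces a number for unstudied, low-feasibility interventions, but nothing in the observed data supports it without the linearity assumption.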
Thanks for mentioning the Social Science Prediction Platform! We had some interest from other sciences as well.
With collaborators, we outlined some other reasons to forecast research results here: https://www.science.org/doi/10.1126/science.aaz1704. In short, forecasts can help evaluate the novelty of a result (a double-edged sword: very unexpected results are more likely to be suspect), mitigate publication bias against null results or provide an alternative null, and, over time, help improve forecasting accuracy. There are other reasons as well, like identifying which treatment to test or which outcome variables to focus on (which might have the highest VoI). In the long run, if forecasts are linked to RCT results, they could also help us say more about situations for which we don’t have RCTs—but that’s a longer-term goal. If this is an area of interest, I’ve got a podcast episode, an EA Global presentation, and some other things in this vein… this is probably the most detailed.
I agree that there’s a lot of work in this area and decision makers actively interested in it. I’ll also add that there’s a lot of interest on the researcher side, which is key.
P.S. The SSPP is hiring web developers, if you know anyone who might be a good fit.
I’ve stayed at a (non-EA) professional contact’s house before when they’d invited me to give a talk and later very apologetically realized they didn’t have the budget for a hotel. They likely felt obliged to offer; I felt like it would be awkward to decline. We were both at pains to be extremely, exceedingly, painstakingly polite given the circumstances and turn the formality up a notch.
I agree the org should have paid for a hotel; I’m only mentioning this because, if baseline formality is a 5, I would think it more normal to kick it up to a 10 under the circumstances. That makes this situation all the more bizarre.