I was disappointed GiveDirectly wasn’t mentioned, given that it seems closer to what he would favour. The closing anecdote about the surfer-philosopher donating money in Bali reads like a proto-GiveDirectly approach, though presumably a lot less efficient without the infrastructure to do it at scale.
OscarD
GiveWell Misuses Discount Rates
Flooding is not a promising cause area—shallow investigation
Hiring Retrospective: ERA Fellowship 2023
GiveWell should use shorter TAI timelines
Reflections on the Biological Weapons Convention
Australians are pessimistic about longterm future (n=1050)
[Question] What is the impact of animal agriculture on wild animal suffering?
Thanks for writing this! I agree that making this decision very carefully seems well warranted. One minor thing—you generously offer for people to contact you to ask about out-of-scope things (delivery research). I can well imagine someone finding this post in several years’ time and wanting to contact you, but I’m not sure this anonymous account will still exist or be monitored then; one possible solution is to link to a Google Form that forwards to your email (without revealing your name or email).
Cosmic NIMBYs and the Repugnant Conclusion
Thanks for this!
I think it would have been interesting if you had written out some predictions beforehand to compare to the actual lessons. Perhaps I should have done this myself too, as in hindsight it is now easy for me to think it is a priori straightforward that e.g. the size of the future (duration and expansion rate) dominates the EV calculation for x-risk interventions. I think a key value of a model like this could be comparing it to our intuitions/considered judgements to work out where they and the model disagree, and how we should change one or the other accordingly.
I am also confused as to why we need/want Monte Carlo simulations in the first place. My understanding of the model is that cost-effectiveness is essentially the product of several independent random variables: cost-effectiveness = X * Y * (1/Z), where X ~ Lognormal(a, b), Y ~ Normal(c, d), and Z ~ Beta(e, f). In this case, can’t we just analytically compute the exact final probability distribution? I am a bit rusty on the integration required, but in principle it seems quite doable (even if we need to integrate numerically rather than exactly), and this would give more accurate and perhaps faster results. Am I missing something? Why wouldn’t my approach work?
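To make the comparison concrete, here is a minimal sketch (with made-up parameter values, since the post doesn’t specify a, b, c, d, e, f) showing that for the *mean*, at least, the Monte Carlo estimate and the analytic answer agree: under independence, E[X·Y/Z] = E[X]·E[Y]·E[1/Z], and each factor has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, purely for illustration (not from the original model)
mu, sigma = 0.5, 0.3   # X ~ Lognormal(mu, sigma)
c, d = 2.0, 0.4        # Y ~ Normal(c, d)
e, f = 5.0, 2.0        # Z ~ Beta(e, f), with e > 2 so 1/Z has finite variance

# Monte Carlo estimate of E[X * Y / Z]
n = 1_000_000
X = rng.lognormal(mu, sigma, n)
Y = rng.normal(c, d, n)
Z = rng.beta(e, f, n)
mc_mean = np.mean(X * Y / Z)

# Analytic mean via independence:
#   E[X] = exp(mu + sigma^2 / 2) for a lognormal,
#   E[Y] = c,
#   E[1/Z] = (e + f - 1) / (e - 1) for a Beta(e, f) with e > 1.
analytic_mean = np.exp(mu + sigma**2 / 2) * c * (e + f - 1) / (e - 1)

print(mc_mean, analytic_mean)
```

The catch, of course, is that the mean is the easy part: recovering the full *distribution* of the product analytically requires convolving the densities (in log space for the product), which generally has no closed form for this mix of families, so in practice one either integrates numerically or just simulates — which may be why the model uses Monte Carlo.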
In a separate comment I describe lots of minor quibbles and possible errors.
This analysis seems fair to me. One mitigating feature, I think, is that precisely because the vacuous applause-light statements aren’t meant to be action-guiding, they generally are not action-guiding. Hopefully Bob makes his remark, people nod wisely, and then they go on selecting the best candidates as if ~nothing had happened. There is still a danger that something that should have been a mere applause light is misinterpreted as action-guiding (as Alice did here), this time by a Chloe who accepts the remark uncritically. Chloe may then go on to form real policies and plans based on Bob’s remark. So yes, this seems bad, but hopefully not really bad. In terms of what to do about it: perhaps if more people start engaging critically with applause-light statements, and those statements are shown to lack much thought or substance, making such statements becomes negative in social value and the problem fixes itself. But perhaps more likely is that the people engaging critically with applause lights are hounded for being insensitive. Tricky.
[Question] How Should Free Will Theories Impact Effective Altruism?
I appreciate your willingness to change your mind and make this difficult decision. I think this is a big part of what makes EA great, thank you. (I make no comment on whether this is the correct decision at the object level, I don’t really know enough to say.)
Record time to safe clinical development
we believe this success is a testament to what’s achievable when a talented, altruistic group of people come together to tackle a seemingly impossible task
Amazing that you got to a clinical trial so quickly! I find myself confused as to how this could happen, though: it would be quite surprising to me if other companies and people are just insufficiently clever or hard-working. Surely huge optimisation pressure is already being brought to bear on making R&D and clinical trials go quickly and efficiently, as this is a big chunk of how pharma companies make money, and so it seems strange if there were lots of big wins waiting to be taken in improving the process. What do you think?
Food Security: Pests and Diseases Report
Thanks, useful thoughts, I think I roughly agree with you and will change this. I suppose the tradeoff I was facing with the title (not that I spent any time weighing up different options consciously) is between brevity, accuracy, and interestingness. The more complete title would be something like ‘Updating weakly against the Biological Weapons Convention being as important to work on as I thought’. I think I will change the title to ‘Reflections on the BWC’ so that people who only see the title don’t get a negative vibe (I agree we want people overall to think good thoughts about the BWC). And then if people are interested enough to read the post, they will see that I raise, quite sloppily/intuitively, some drawbacks. Rather than arguing the BWC is −10 on some scale of goodness, what I was thinking is that it moved from +20 to +10 or so.
I haven’t thought about it lots, but I think I would endorse something like: ‘the BWC should continue to exist, and should be bigger and better, but it is less of a central priority than I thought, and so people who care about prioritisation and don’t have individual reasons to think the BWC is unusually good for them should strongly consider focusing more on something else’.
Good on you for being courageous and scout-minded enough to shut this down (and to start it in the first place)! I hope you find great projects to move onto.
I think I am a lot more on board with promoting idea pluralism (I realise I should have said this in my original comment, I was focusing there on what I found more controversial or difficult to think about well). I think science generally would go faster if funders took more risks on heterodox ideas (particularly given most research projects have far larger upside risks than downside risks, so ‘hits-based’ funding could work well). That’s a good point re things being cheaper to run in poorer countries, so more cost-effective all else equal.
I can’t imagine the applicants are any less good
At one level, yes intelligence and creativity are ~evenly distributed worldwide. But I think this gets to my earlier point about educational and other opportunities currently being very unequally distributed, so I think it would be the case (unfortunately) that applicants with access to loads of opportunities to develop their thinking and writing and research skills, disproportionately in the rich world, will be better able to contribute straight away. I think there could also be a strong case to run such fellowships elsewhere with fellows who have had fewer opportunities and are currently less capable, as this is more additional, but this seems like a notably different theory of change.
I am curious about the impact on allocating funding between worldviews. The substantial reduction in longtermist funding should raise the value of the marginal longtermist grant, and thus change the optimal allocation between longtermism, global health, and animals. But does the worldview-diversification type approach preclude this sort of reallocation as the funding situation in a cause area changes?