Arguably it was the philosophers who found the last few. Once the missing moral reasoning was shored up, the cause-area conclusion was pretty deductive.
haha—good question. And yes, from notes.
I hate April 1st so much.
A small gripe with the title—you don’t make any argument for this tech solving global poverty, just congestion in the wealthiest cities on earth. I know transportation has economic benefits elsewhere but your post makes no claims about this.
I want more posts about flying cars.
I’m still assuming the reliability requirement is too high. If a car stops working it rolls to a halt; a flying car crashes into a residential area. Planes don’t do this, but they have costly constant checks. Maybe a fleet-ownership model (no personal ownership) plus lots of sensors for automated checks could make the reliability feasible.
Similarly security seems like a daunting challenge.
Noise I hadn’t thought of.
Do we even need them, though? If a city goes full AV, you could theoretically have very-high-speed regular cars and no junctions or traffic. At even just 60mph, a 30 min commute encompasses an area significantly larger than Greater London. And commuting in an AV could be very comfortable with a desk and WiFi. Whilst it’s hard to work on trains, I could imagine even “going for an AV pomodoro” in the middle of the day just for the concentration benefits of reclusion and a fixed travel time.
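A quick back-of-the-envelope check on that commute claim. This is a rough sketch that assumes straight-line travel at a constant 60mph and uses Greater London’s approximate area (~607 square miles):

```python
import math

speed_mph = 60
commute_hours = 0.5
radius_miles = speed_mph * commute_hours  # 30 miles each way

# Area reachable within a 30-minute drive, idealised as a circle
reachable_sq_miles = math.pi * radius_miles ** 2

greater_london_sq_miles = 607  # approximate
print(round(reachable_sq_miles))  # ~2827 sq miles
print(round(reachable_sq_miles / greater_london_sq_miles, 1))  # ~4.7x Greater London
```

Real road networks and traffic would shrink that circle considerably, but even a large haircut leaves the reachable area bigger than Greater London.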
Assuming good automation is required for good flying cars, I’m also not sold on automation being net-good for employment. Life satisfaction—sure.
I drafted but didn’t publish a post yesterday titled “where are all the ideas?”. Really glad to see a contribution of this type.
I regularly simplify my evaluations into pros and cons lists and find them surprisingly good. Open Phil’s format essentially boils down into description, pros, cons, risks.
Check out Kialo. It allows really nice nested pro/con lists built of claims and supporting claims, plus discussion around those claims. It’s not perfect, but we use it for keeping track of the logic in our early-stage evals and for taking a step back on evals we have gotten too in the weeds with.
Really love how clearly you’ve communicated the relevance of the findings and how they fit in.
This is great—thanks for continuing to do these roundups, always things I’ve missed
Seconded on title, enjoyed content but title felt click-baity and misleading, especially given 90% of readers will only read the title.
I’m so glad to see a post from people working in the industry in question—thank you for taking the time, making the post and contributing to the discussion, I strong upvoted.
Impact investing comes up a lot in donor advisory, so have a few points to add:
1. I generally still advise donors not to use their philanthropic, impact-maximising allocation for impact investing. I still do not have a very thorough way of explaining how I came to this conclusion, and no existing materials I know of could be sent to a HNW donor.
2. Most large donors only give a small portion of their assets; >90% of their capital is generally invested, and impact investing (II) can come from that allocation. This changes the discussion considerably, e.g. from ‘II vs donating’ to ‘where are you spending your time’. We could also discuss lost returns, but it seems like most II does not have sufficiently lower returns on average to worry about it from an impact perspective. Giving better will make significantly more of a difference than 5-20% more or less profit from their investments. Not always going to be true, but it generally seems the right view.
3. Donor funds are a very different question to an individual’s career; the leverage available in some impact investing careers is so high that it requires a separate investigation that I haven’t seen. I would guess that an evaluator could move 10-100x more capital for the same effort in II than in donation evaluation.
4. Stepping back, effectiveness-minded II might be an important consideration in designing the models that orgs within top cause areas consider. Imagine comparing two models for your org: one assumes limited philanthropic capital and so maximises impact per dollar; the other assumes huge swathes of II funding, accepts a much lower impact per dollar, and tries to design a model that would be copied, build IP, be acquired by an industry giant, etc.
5. Bio / AI safety have many opportunities for doing far more damage than good; at face value they do not seem a good fit for larger, IP-driven investment models. However, a big worry is control and trust. If top researchers and strategists in the area felt there was capacity for responsible, cautious impact investing in the area, that might speed up how quickly market-driven approaches emerged.
I ran my first hiring process to hire someone for an EA role last year and was amazed how long it took me. I’ve hired around 20 times in the past and only spent a couple weeks and 20-40h per role. Last year I spent 8 months and hundreds of hours. I reflected afterwards on why and can list a few hypotheses:
I normally rely heavily on gut feeling to build my shortlist. I did not feel comfortable doing this for this role, as it felt like there were so many failure modes for a bad hire: both the number of ways a hire could go badly and the severity of impact if one did.
Normally, relying heavily on intuition is highly reversible. Worst case, I have to fire the candidate after probation; I’ll never see them again and no one knows them. I’m open with candidates that this is my policy and that they should be careful accepting an offer. In EA I felt like everyone knows everyone, and a fired hire could cause significant reputational damage via a one-sided narrative. I don’t endorse this view as rational, but the fear was definitely a factor in why I took so long.
I was hiring for a role that defies regular role definition. No one applying to the role had applied to a similar position before let alone worked in one. Potentially this was the largest factor and my other points are moot.
I wasn’t hiring someone to have skills similar to mine; I was hiring for the skills I don’t have. Normally I would judge experience, passion, intelligence, lateral-thinking ability, ambition and team fit, then let a team lead judge specific ability.
Many candidates treated the process like a two-way application the whole way through. This threw off my intuitions; normally I would have dropped all candidates who weren’t signalling that they were specifically very excited about my role (first call excluded).
Many candidates’ conversations included career advice from me. This threw off my intuitions, but I consider it time well spent in all the cases where I spent over 2h.
I worried a lot about how much time of others I was using. Assuming a candidate spent 4x more time than I did, I used over a thousand hours of people’s time.
Ultimately I made offers to two candidates, both of whom I had had strong gut feelings about very early, which was rewarding but also highly frustrating.
The key thing I intend to change next time is being much faster. I didn’t feel like (for me) the extra process complexity and caution added that much insight and crucially, it threw off my intuition.
The main downside of reduced complexity seems to be the increased chance of a bad hire and the potential damage of firing them. I think next time I will return to my original method and be very transparent with the person I make an offer to that their 3-month probation is not just a formality, pointing them to this article as an explanation of why it’s not worth it, for them or for me, to run a long, drawn-out process that may only slightly reduce the risk of a bad hire.
** I do not advocate anyone else doing this unless they are confident in their hiring intuitions. I also haven’t tried it yet and it may go terribly. **
Thank you to the OP for posting. Illuminating!
Found Bridgespan’s 2018 report useful and interesting.
Nice list Saulius, thank you.
[idea]: Invite-only Google Sheet List of considerations relevant to funding a group (one group per tab) and then columns of donor’s weights for those considerations. I would find this really interesting.
Deal could be that you only get access if you’re willing to share your weights!
For instance, like other big non-profits, EA orgs might want to hire institutional fundraisers to tap into larger grants from big foundations other than the usual suspects
I’ve looked into this a few times, and it does seem like it will become a promising channel, in particular with the big donors that write very large checks (>$500k). At least one org I know is experimenting with hiring a full-time grant-writer. I currently think it won’t work well for most EA orgs for some time to come.
Worth noting that most big foundations carry large senior-management time overheads and often require designing bespoke projects just for that foundation. Grant-writers also generally have slow payback periods (>1-2 years is not rare, more if the first one doesn’t work out), are very tricky to evaluate during hiring, and expire once you run out of foundations / major donors to apply to (most don’t do much repeat funding). Not insurmountable challenges.
An alternative is to hire one-off fundraisers who approach lesser-known major donors for you. I think that may be promising, but it requires a large time investment to train that person to talk about your charity. They also still require a large amount of senior management’s time (non-foundation major donors will generally want to speak to the founders, and most of those conversations will end up being a no) and are more likely to generate one-off funding rather than repeat funding.
It may be that expanding philanthropic advisory within EA in general is more promising. Whilst not specifically focused on raising funds for EA orgs, an increased number of smart, best-arguments-aware donors in the space could well have a similar result for less senior-management time cost.
It could also be that a semi-centralised fundraising team, managing both generalist fundraisers shared between orgs and specialist fundraisers each working for a different large EA org, could work really well. Train them all up in tandem, work out how to evaluate them, focus on >$300k checks from major donors but also have a grant-writer or two shared between them, hire most talent from mainstream pools, etc. We looked into something like this to function across all the GiveWell charities, but it ultimately looked like it wouldn’t work.
It may well be that the last option never makes sense: by the time you have orgs large enough to justify it ($5m-$10m pa), those orgs also organically start to hire their own internal fundraisers and grant-writers just to meet their large budgets.
Looking forward to putting more thought into this.
Social protection system coverage (helping more people access government benefits); CC estimates that this is less than one-fifth as valuable as cash transfers
That is surprising, they’ve done a lot of work in and around India where welfare budget utilisation has been infamously poor until only quite recently and where the huge rural population seems to make it particularly hard to get welfare to the poorest who need it most.
I wonder how their economists account for the counterfactual of unused government funds. I’ve seen quite a few calcs where unused welfare funds that go back into the central pot are discounted by only 1/4–1/2, which I still find very unintuitive, even granting that the average wealth of a recipient of general government spending is far higher than that of a welfare recipient.
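To make that discount concrete, here’s a toy sketch of the calculation as I understand it. All numbers and the `counterfactual_discount` parameter are illustrative assumptions of mine, not from any actual CC analysis:

```python
# Toy model of the counterfactual adjustment described above.
# Benchmark: $1 reaching a welfare recipient is worth 1 unit of value.

def net_value_of_unlocking(amount, counterfactual_value):
    """Net value of moving `amount` of welfare budget to intended
    recipients, when unspent funds would otherwise return to the
    central pot and be spent at `counterfactual_value` units per
    dollar (i.e. a small discount means a high counterfactual value)."""
    value_to_recipients = amount * 1.0
    value_if_returned = amount * counterfactual_value
    return value_to_recipients - value_if_returned

# Discounting returned funds by only 1/4 means they retain 3/4 of the
# value, so unlocking $100 of budget nets just 25 units...
print(net_value_of_unlocking(100, 0.75))  # 25.0
# ...whereas a steep discount makes access work look far more valuable.
print(net_value_of_unlocking(100, 0.25))  # 75.0
```

Under the small discounts I’ve seen, most of the value of unlocking welfare funds is cancelled by the counterfactual, which is why the discount choice dominates these evaluations.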
I’ve been keeping an eye out for a charity / org in India that is particularly good at increasing access to government welfare so this is relevant for that.
Really impressed by both how you’ve executed this as well as the write-ups. 🙌
Awesome think-piece, thank you
Note: I have a feeling that ‘policing tone’ is an annoying meme for a forum and something more appropriate for moderators than for readers so I’ll post this one and default to refraining from doing it again.
Quick few thoughts on the tone of this, feel free to ignore if it doesn’t change your mind:
Most of these articles have been good but this one is certainly the worst out of all that I have seen (n=25 or so, from multiple writers) and I believe it has negative expected value.
At this part, right at the top, and at a few other points, I felt a little uncomfortable. If I were the author reading this, I feel like I would feel more ‘attacked by my allies’ than ‘constructively critiqued’.
I feel like some quick changes to the tone, particularly early on (e.g., ‘I found this one distasteful’ rather than ‘this is the worst I’ve seen’), would feel less aggressive. Perhaps add an extra paragraph at the beginning saying a few positive things about their column in general (if you hold those views) and making clear you mean your comments constructively. Personally, that would be enough for me to take the feedback well. Maybe no one on their team reads it; maybe one person reads it and forwards it to the whole team. It seems worth assuming the latter, and it’s that scenario that prompted me to make this comment.
Given in particular that Future Perfect is not funded by donors who explicitly identify with EA ideas, and that it is run by Vox, my quick guess is that careful, constructive criticism is far more valuable / lower-risk than more assertive, slightly aggressive criticism (apologies if I’m already preaching to the choir here). I’m currently still really glad that Vox, Ezra etc. have chosen to do this column taking lots of EA ideas into account.
Funnily enough, I had a similar opinion about one of the mobile thumbnails for their anti-Mars piece. The thumbnail read “Elon wants to go to Mars, here’s why that’s a bad idea”, which didn’t seem worth it.