Note: This message came out of a conversation with u/AppliedDivinityStudies and therefore contains a mix of opinions from the two of us, even though I use “I” throughout. All mistakes can be attributed to me (An1lam) though.
Really appreciate you all running this program and writing this up! That said, I disagree with a number of the conclusions in the write-up and worry that if neither I nor anyone else speak up with our criticisms, people will get the (in my opinion) wrong idea about bottlenecks to more longtermist entrepreneurship.
At a high level, many of my criticisms stem from my sense that the program didn’t lean into the “entrepreneurship” component very hard and, as a result, ended up looking a lot like typical EA activities (nothing wrong with typical EA activities).
First, I strongly disagree with the implicit conclusion that fostering an LE ecosystem requires lots of existing LE entrepreneurs, specifically:
And also:
If there existed a large pool of LE entrepreneurs with the right skills, there’d be a less pressing need for this sort of program. I get that you’re wary of analogies to tech startups due to downside risk, but to the degree one wants to foster an ecosystem, taking a risk on at least some more junior people seems pretty necessary. Even within the EA ecosystem, my sense is that the people who founded successful orgs often hadn’t done this before. E.g., as far as I know, Nick Bostrom hadn’t founded an FHI 0.0 before founding the current instantiation of FHI. Same for GiveWell, CEA, etc. Given that, the notion that doing LE entrepreneurship requires “backgrounds in both longtermism and entrepreneurship” seems like too restrictive a filter.
Second, without examples it’s a little hard to discuss, but I feel like the concern about downside risk is real but overblown. It’s definitely an important difference between LE entrepreneurship and traditional startups to be mindful of, but I question whether it’s being used to justify an extreme form of the precautionary principle that says funders shouldn’t fund ideas with downside risks, instead of the (more reasonable, IMO) principle of funding +EV things, or at least trying to ensure the portfolio of projects is +EV.
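To make that funding principle concrete, here’s a toy expected-value calculation (a minimal sketch; all probabilities and payoffs are hypothetical numbers invented for illustration):

```python
# Toy illustration with made-up numbers: a project with nonzero downside
# risk can still be clearly +EV, which is the principle argued for above.
p_win, win_value = 0.10, 1000    # 10% chance of a large benefit (arbitrary units)
p_harm, harm_value = 0.05, 200   # 5% chance of a meaningful harm

expected_value = p_win * win_value - p_harm * harm_value
print(expected_value)  # 100 - 10 = 90 > 0: +EV despite the downside risk
```

The point is just that nonzero downside risk and positive expected value can coexist, so screening on the former alone would discard some +EV projects.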
Third, I think some of the assumptions about what types of activities should take precedence for LE entrepreneurship deserve re-examining. As I alluded to above, it seems like the activities you say matter most for LE entrepreneurship (“research, strategic thinking, forecasting long-run consequences, and introspection [rather] than finding product-market fit”) are suspiciously similar to “typical EA activities”. From my perspective, it could instead be interesting to take some of the startup gospel around iteration, getting things out into the wild sooner rather than later, etc. seriously and adapt it to LE entrepreneurship, rather than starting from the (appearance of the) assumption that you have very little to learn from that world. This isn’t fully charitable, but I have the sense that EA has a lot of people who gravitate towards talking/strategizing/coordinating and getting other people to do things but sometimes shy away from “actually doing things” themselves. I view an LE entrepreneurship incubator as an opportunity to reward or push more people towards the “actually doing things” part. Part of this may also be that I’m a bit confused about where the boundary between normal and LE entrepreneurship lies. In my mind, SpaceX, fusion startups, and psychedelics research would all qualify as examples of LE entrepreneurship with limited downside risk, or at least not existential downside risks. Would you agree that these qualify as good examples?
Fourth, you mention advisors but only name a few. I’m 1) curious whether any of these advisors were experienced entrepreneurs and 2) interested in whether you considered getting advisors who are only adjacent to EA but are very experienced entrepreneurs. As an example, at least one founder of Wave is an EA-aligned, successful entrepreneur who I can only imagine has wisdom to impart about entrepreneurship. I don’t live in the Bay Area, but I have the sense that there are quite a few other EA-adjacent founders there who might also be interested in advising a program like this.
Fifth, this is more low-level, but I still don’t really understand the skepticism of a YC-like incubator for LE entrepreneurship. It seems like your arguments boil down to 1) the current pool is small and 2) the requirements are different. But on 1, when YC started, the pool of entrepreneurs was smaller too! Such a program can help to increase the size of that pool. On 2, I agree that a literal copy of YC would have the issues you describe, but I’d imagine a YC-like program blending the two communities’ thinking styles in a way that gets most of the benefits of each while avoiding the downsides. As an aside, we are also very supportive of longtermists doing YC, but for slightly different reasons. This may also be related to the confusion about what qualifies as LEE.
To summarize, my goal in writing this comment is not just to criticize the program. Rather, I worry that by highlighting the need for experience and the overwhelming risk of harm, the write-up as-is might discourage would-be LE entrepreneurs from trying something. I hope that my comment can help provide a counterweight to that.
FWIW, as someone who previously warned about the risk of accidental harm, I personally mostly agree with this comment. I think what I care about more is “option value to shut projects down if they turn out to be harmful” rather than preventing damage in the first place (with the exception of projects that have very large negative effects from the very beginning).
I think offering funding & advice causes more people to work with you, and the closer they are working with you, the larger the influence your opinion is likely to have on the question of whether they should shut down their project.
Thanks for the comment An1lam, and apologies for the delay! Really appreciate the engagement.
Some thoughts in response:
On (1), I agree that fostering an ecosystem doesn’t require a load of experienced people; you’re very much right that that’s the whole problem we’re trying to solve in the first place! In pointing to the lack of requisite experience, what we mostly mean is that we found there weren’t that many individuals ready to hit the ground running with founding a substantial longtermist start-up right now; hence, a lot of the LE activities we would have been excited about pursuing instead (and which are mentioned in the “what we’d be excited about” section) are targeted at making up for this lack of experience by, e.g., fostering a community or bringing folks in-house in a foundry.
On (2), as you say, it’s difficult to talk about this in the abstract, and I certainly feel sympathetic to the worry that we might be being too cautious (that was one of the motivations for me starting this project in the first place). With that said, I do think that thinking thoroughly through downside risks, and treating them more seriously than is the norm among founders writ large, is something I want to encourage of all longtermist entrepreneurs, and given what’s at stake, I’d worry more about not doing that enough than about doing it too much. What I’m interested in advocating for here, though, is something like demonstration of thought, care, and consideration, rather than avoiding projects with downside risk altogether (I think the two are often conflated). For example, if it turns out after a reasonable amount of thinking that there is some downside risk, but the EV of the project is still high and the founder has identified reasonable things they could do to mitigate this downside risk, I (and I imagine most EA funders) would be supportive of that project going ahead.
On (3), it seems right that EAs in general, and particularly folks trying to start new projects, have a fair amount they could learn from ‘traditional’ start-up best practice (that was a big part of the framing we took in designing curriculum materials for the fellowship, for example). I’m not sure where we disagree here; I do also think that strategic thinking and introspection are important, perhaps uniquely so for start-ups that don’t follow the usual product-market fit dynamic (although I think several do, even something like a new research org) or where there are very slow feedback loops to use in the interim, hence why it might be unusually important for longtermist entrepreneurs to have this trait. But I don’t think this is mutually exclusive with thinking that learning from more mainstream start-up practice is useful.
On (4), advisors were a combination of experienced entrepreneurs (some of whom also happened to be quite engaged with EA, some less so) and domain experts.
On (5), basically Jonas’ comment captures the main thing—that given our epistemic state in longtermist cause areas at the moment, it just seems quite difficult to come up with start-up ideas that fit the typical YC model (find product-market fit, then scale). This means the idea generation process needs to be quite a bit more involved, advised by domain experts, etc., and the build-iterate cycle necessarily needs to be tweaked from the usual dynamic expected for, e.g., software products. I.e., we’re mostly leaning on (2), ‘the requirements are different’, rather than the pool being small.
Two additional high level things that come to mind in response:
1 - We definitely don’t want to discourage promising longtermist entrepreneurs who are excited about starting something in this space; the intent was to clarify for future potential incubator founders what we tried and learned.
2 - In terms of your overall concern that the program wasn’t entrepreneurial enough, it’s a bit hard to comment. At the end of the day, we do think that simply replicating traditional entrepreneurship in this space won’t quite work, but we also think that the EA community could learn a ton from the entrepreneurship community (and we designed our program with both of those in mind). It’s possible you disagree, or possible that the tone and seriousness with which we centered entrepreneurship didn’t come through in this post.
Hey Jade,
Thanks so much for your reply. This actually really helped clarify things for me. I think we may still have some different priors about (2), but overall your comment made me think we agree much more than we disagree (and more than I’d previously thought).
I again just want to note that I’m grateful you ran the program and engaged so productively with my comment.
Really glad to hear, and in turn, really appreciate your engagement with the post!
Regarding a YC incubator model, I think the main issue is just that people rarely generate sufficiently well-targeted and ambitious startup ideas. I really don’t think we need another dozen donation apps or fundraising orgs, but that’s what people often come up with. I think we’d want something that does more to help people develop better ideas. (Perhaps that’s what you had in mind as well.)
Honestly, I wasn’t too sure what the biggest issue was, but what you described seems reasonable to me!
A quick thought on having a YC-style programme and taking risks on more junior talent:
Domain expertise is important—I think YC would agree on this. If taking on a deep tech startup, they would look for someone on the team with domain expertise in the field.
I think early YC Internet startups like Dropbox or Airbnb make it look like domain expertise is less important and it’s more about just getting stuck in. The difference is that when Dropbox started there was no expert in “files on the internet” so the founders could basically become the world experts just by getting stuck in and working on it.
The difference with longtermist areas like AI and bio is that you can’t just become the expert by working on the problem, and (yes, you guessed it) the downside risk means we don’t want to take bets on people to just go for it and try it out (unlike Dropbox, where it doesn’t really matter if it fails catastrophically).
Very interesting, valuable, and thorough overview!
I notice you mentioned providing grants of $30k and $16k that were, or are likely to be, turned down. Do you think this might have been due to the amounts of funding? Might levels of funding an order of magnitude higher have caused a change in preferences?
Given the amount of funding in longtermist EA, if a project is valuable, I wonder if amounts closer to that level might be warranted. Obviously the project only had $300k in funding, so that level of funding might not have been practical here. However, from the perspective of EA longtermist funding as a whole, routinely giving away this level of funding for projects would be practical.
Hey Michael! I don’t know if more money would have changed their decisions, but I want to clarify that the funding panel wasn’t funding constrained (we actually had more than $300k set aside for this), and funders didn’t make the decision with that as a limitation.
The cases aren’t actually that similar—in one, the funding panel gave a low amount to discourage the individual from pursuing the idea and to support a career transition instead; in the other, they gave the individuals more than requested. But in both cases, the uncertainty about what the recipients would do was the key reason for giving a relatively small amount of money, not being funding constrained.
If you don’t want someone to do something, it makes sense not to offer a large amount of money. For the second case, I’m a bit confused by this statement:
“the uncertainty of what the people would do was the key cause in giving a relatively small amount of money”
What do you mean here? That you were uncertain about which path was best?
“the uncertainty of what the people would do” -->
Both groups were being funded for open-ended plans (in one case, a career transition, in the other “exploring EA field-building”), rather than a specific venture, hence the uncertainty.
“If you don’t want someone to do something” -->
This isn’t the case—if the funders hadn’t wanted the recipients to move forward, they wouldn’t have given funding. In the first case, the funder offered to support a different plan than the one originally pitched: instead of a venture, a career transition.
Given the constraints highlighted above, it seems like a venture builder model (focussed on a specific cause area) may be more effective, wherein the following process is repeated:
(1) Generate plausible venture ideas from existing research within EA orgs
(2) Analyze ideas on two dimensions: (a) cost-benefit analysis, (b) operational feasibility
(3) Incubate and recruit EA-aligned technical and non-technical co-founders (who then build their own team)
(4) Tie further funding and possibly bonuses to specific short-term milestones
It seems like EA orgs with existing research capabilities are best suited to support steps 1 and 2(a) above. This is essentially a high-level analysis of the expected value of building the company and a rough estimate of costs (in orders of magnitude), which helps us quickly come to a “Go”/“No go” decision.
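As a rough illustration of what that screen could look like (a minimal sketch; the function, threshold, and numbers are all invented for this example, not drawn from any actual org’s process):

```python
# Minimal sketch of a go/no-go screen over order-of-magnitude estimates.
# All names, numbers, and thresholds here are hypothetical.
def go_no_go(expected_benefit: float, estimated_cost: float, min_ratio: float = 10.0) -> str:
    """Pass an idea only if expected benefit exceeds estimated cost by a
    wide margin, since both inputs are only order-of-magnitude guesses."""
    return "Go" if expected_benefit >= min_ratio * estimated_cost else "No go"

print(go_no_go(expected_benefit=5e7, estimated_cost=1e6))  # Go
print(go_no_go(expected_benefit=5e6, estimated_cost=1e6))  # No go
```

Requiring a wide benefit-to-cost margin is one way to stay robust to the imprecision of both estimates.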
I think step 2(b) needs to be outsourced to technical or economic consultants (think Analysis Group, Brattle Group, etc.) who can conduct feasibility analyses or more accurately estimate lifetime costs. Let’s say we wanted to produce and distribute some state-of-the-art PPE. We’d need a rough understanding of things like the supply and demand dynamics of raw materials, regulations around PPE, legal risk, etc. Answers to these questions could determine whether it’s worth doing something even if the high-level cost-benefit analysis was positive. These also don’t seem like the type of questions orgs like Open Phil routinely answer.
Step 4 could potentially unlock some talent constraints and allow founders to recruit a management team or lead scientists who may not be completely EA-aligned or longtermist but are still incentivised economically to help move things along.
Everything above is of course contingent on the ability to actually generate actionable ideas that pass Step 2(a) in a particular cause area.
Very interesting read, thanks for publishing this!
I am curious what qualified as “having longtermist experience” for you?
Glad to hear!
Roughly, this would mean having worked in a relevant area (e.g. bio, AI safety) for at least 1-2 years and being able to contribute in some capacity to that field. To be clear, some ideas would require a lot more experience—this is just a rough proxy.
Is there a list of the ideas that the fellows were working on? I’d be curious.
It’s not surprising to me that there aren’t many “product focused” traditional startup style ideas in the longtermist space, but what does that leave? Are most of the potential organisations research focused? Or are there some other classes of organisation that could be founded? (Maybe this is a lack of imagination on my part!)
Hi Rory, thanks for the comment! We haven’t published those ideas. In terms of classes of organisation, one way to carve up the space is to think about Object-level and Meta-level approaches to generating ideas.
Object-level approaches focus on doing direct work to solve the problem at hand. For example:
developing and deploying technologies
conducting research
advocating for policy change
The main type of impact here comes in the form of tangible changes in actions taken in the real world, in whatever form that might take.
Meta-level approaches focus on improving the capacity for others to solve the problem. This can be done at the broad EA/longtermist level (building up the movements) or in a specific domain, e.g. building a talent pipeline specifically for bio policy experts. Concrete types of meta work include, for example:
community and field building
disseminating ideas, knowledge, and values
increasing the resources available to work on object-level approaches
The main type of impact here comes in the form of the change in likelihood that object-level approaches will be impactful.
Hope that’s useful!
I just discovered this thread, but figured I’d back up some data points! I’ve been in the EA community since 2016-ish and have done entrepreneurship and longtermist work for about 2 years. I won’t lean too much on my own experience; I’ll generally say things other entrepreneurs and impact-minded people would agree with.
Re: Founder pool
It is helpful to think of “longtermism” and “entrepreneurship” as two entirely distinct and very uncommon skillsets/mental frameworks. Currently, way less than 1% of the world would identify as longtermist, and less than 1% would identify as entrepreneurs. While these traits are slightly correlated, the vast majority of longtermists are not entrepreneurs, and the vast majority of entrepreneurs are not longtermist.
That’s totally fine, but makes things difficult for a longtermist incubator. I mean, if Y Combinator started prioritising longtermist involvement, I’m pretty sure they’d have similar challenges.
Re: Project pool
Again, overlap is hard. One thing Y Combinator constantly emphasises is that most startups never achieve product-market fit. I.e. it’s really, really hard to build something people want to pay for. So the reliable solution is to try and iterate a lot with fairly little information. A 10% success rate is really good if you try a dozen times a month.
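For intuition, the arithmetic behind “iterate a lot” checks out; a quick illustrative calculation (assuming independent attempts, which is a simplification):

```python
# With a 10% hit rate per attempt, a dozen independent tries give roughly
# a 72% chance of at least one success - hence the case for iterating a lot.
p_success, attempts = 0.10, 12
p_at_least_one = 1 - (1 - p_success) ** attempts
print(round(p_at_least_one, 2))  # ~0.72
```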
So just building anything successful is hard. Then you have to try and make something impactful, which is another, completely separate hard thing to do. Most successful, lucrative companies are not “effective” or “impactful” in the longtermist sense. I’ve tried this thought exercise myself; sometimes I would think of/test nine potentially lucrative startup ideas before finding one that’s lucrative and also possibly impactful.
Anyway, point is: This is a difficult thing to do, essentially trying to find a very narrow eligible pool and get them to work on a very small set of possible ideas. I really appreciate that this was tried!
I’d be interested to hear more about the thoughts behind this key lesson:
This makes sense to me in light of the different tasks and operational requirements that different purposes are likely to entail, but I noticed a theme of uncertainty running through the rest of the report. This included things like funders being uncertain of downside risk; uncertainty about what actions a person with funding should take; and an expertise/experience bottleneck for longtermism, and especially for the combination of longtermism and entrepreneurship.
What do you think of the relationship between the constraints and having multiple orgs in the space?
Hi Ryan, I may be misunderstanding the question so correct me if I’m wrong—are you saying something like: “given that there’s lots of uncertainty about what’s needed this seems in tension with starting an organisation that concentrates on only one user type (e.g. recent generalist graduate) or one domain (e.g. AI Safety)”?
Are the Slack or other community resources still being used / are they still available for additional people to join?
The Slack set up for the fellowship is no longer in use.
Do you have any reflections or recommendations about what people who meet one but not both of these criteria could be doing to become great potential LEs? I appreciate that there is an obvious answer along the lines of “try the other one out!” but I’m wondering if you have any specific suggestions beyond that.
I.e.
What could people with longtermist experience but negligible entrepreneurship experience be doing to bridge that gap? Are there any specific resources (books, articles, courses, internships, etc) you’d recommend for people to start testing their personal fit with this and building relevant skills?
And the same question again for people with entrepreneurship experience but negligible longtermist experience.
(Further to hrosspet’s question, I’d be interested in roughly how you were defining/conceptualising those two categories, and in any general comments about the ways in which people tended to be insufficiently developed in one or the other.)
Thanks for this post. It’s great to see the writeup to be able to learn from the experience, even though it didn’t work out for you guys in this iteration of the idea.
I sense a slight potential tension between the comment that “EA operations generalists, often with community-builder backgrounds, who would be interested in working on EA meta projects” seem like a promising group to work with and the comment that “There are very few people with longtermist & entrepreneurial experience (e.g., 2-3 years experience in both) that we trust to execute ambitious projects in specific areas of longtermism (bio, AI, etc.).” I would imagine that the former group would tend to not have much experience in “specific areas of longtermism”. I’d love any clarity you can shed on this:
Am I just wrong? I.e. do some/many of these people have substantial experience in specific areas?
Is it that you see this group as being promising specifically for various meta projects that don’t require deep expertise in any one area?
Is it that you think that this gap could potentially be bridged as part of a longtermist entrepreneurship incubator’s role, e.g. by getting promising-seeming potential future LEs placed into jobs where they can build some domain specific knowledge before revisiting the idea of LE, or some such?
Something else?
Hey Jamie—Ben Clifford here, thanks for flagging this.
I think your second bullet captures the idea well. I don’t think being good at EA community building and associated ideas requires deep domain expertise in areas like AI or Bio.
There would be an argument for thinking about bullet 3 as well, but it wasn’t what I was thinking.
Thanks for this post! Reading through these lessons has been really informative. I have a few more questions that I’d love to hear your thinking on:
1) Why did you choose to run the fellowship as a part-time rather than full-time program?
2) Are there any particular reasons why fellowship participants tended to pursue non-venture projects?
3) Throughout your efforts, were you optimizing for project success or project volume, or were you instead focused on gathering data on the incubator space?
4) Do you consider the longtermist incubation space to be distinct from the x-risk reduction incubation space?
5) Was there a reason you didn’t have a public online presence, or was it just not a priority?
Thanks, great questions! In response:
1) Why did you choose to run the fellowship as a part-time rather than full-time program?
We wanted to test some version of this quickly, and running it part-time meant:
It was easier to get a cohort of people to commit at short notice as they could participate alongside other commitments
We could deliver a reasonable-quality, stripped-back programme in a short space of time and had more capacity to test other ideas at the same time
With that said, if we were to run it again, we would almost certainly explore running a full-time program for the next iteration.
2) Are there any particular reasons why fellowship participants tended to pursue non-venture projects?
Do you mean non-profits rather than for-profits? If so, I think this is because non-profits present the most obvious neglected opportunities for doing good. Participants did consider some for-profit ideas.
3) Throughout your efforts, were you optimizing for project success or project volume, or were you instead focused on gathering data on the incubator space?
The latter—we were trying to learn rather than optimise for early success.
4) Do you consider the longtermist incubation space to be distinct from the x-risk reduction incubation space?
Yes, mostly insofar as the longtermist space is broader than the x-risk space—there are ideas that might help the long-term future or reduce s-risk without reducing x-risk.
5) Was there a reason you didn’t have a public online presence?
I think having an online presence that is careful about how this work is described (e.g. not overhyping entrepreneurship or encouraging any particular version of it) is important, and therefore quite a bit of work. We felt we could be productive without one for the time we were working on the project, so we decided to deprioritise it. If we had continued to work on the project, we would have spent time on this.