Awesome. Thanks for sharing.
Clifford
Interesting, thanks! I’d be curious to ask about the connections you’ve made on Slack etc. I’ll message you.
Relatedly, this uptick is kind of wild to me.
Thanks—I’ll correct that.
Good to see this question Garrison! I’m working on effectivealtruism.org and planning to add a section like this to the website.
This is a decent existing page for this but very tricky to find: https://www.effectivealtruism.org/impact
I think I agree with this. Two things that might make starting a startup a better learning opportunity than your alternative, in spite of it being a worse learning environment:
- You are undervalued by the job market (so you can get more opportunities to do cool things by starting your own thing)
- You work harder in your startup because you care about it more (so you get more productive hours of learning)
Cool—thanks for engaging in this! Excited to see what you do in future.
Thanks for the reply.
I agree no correlation would be surprising, but I wouldn’t be totally surprised if it were less predictive than, say, “openness to new ideas” or something.
I wonder if you could learn more by interviewing people who are just starting to get interested in EA and seeing how their responses change over, say, a year? Interviewing people who have just started an intro to EA fellowship/virtual program could work well for this.
Fascinating, thanks for doing this research—excited to see more work in this area.
Is it possible that being E and A correlates among EAs who have already been involved and absorbed EA ideas, but wouldn’t correlate if you were able to survey them before they got involved in EA?
I found myself agreeing with the statements that predicted E and A, but I’m not sure I would have done before getting into EA.
I could also imagine someone who is very open to reasonable arguments but isn’t particularly E or A but comes to agree with the statements over time.
[sorry if I’ve misrepresented what you’re saying—I read the post a couple of days ago and may be misremembering]
Nice, thanks!
Thanks for the suggestions everyone. I’ve now made a long list of people in a spreadsheet, but it doesn’t feel like a good idea to share it widely (I haven’t contacted these people and I don’t want to give the wrong impression about who has claimed to be affiliated). I’d be very happy to share on an individual basis—feel free to email me at ben.clifford@centreforeffectivealtruism.org
“I could take a bet on less trusted/proven people as grantmakers.”
I was thinking just yesterday that if I won the EA lottery this might be a cool thing to do—I think the value of giving a “future grantmaker” the opportunity would be high, and I would guess that their end decision wouldn’t be much worse than your punt/GiveWell charities.
To find this person and minimise time spent on it, I might ask local group organisers who the most promising “future grantmakers” are and then filter by underrepresented groups in EA or just pick one of the shortlist at random. Others could say if this is at all realistic!
A separate comment is that I get the impression you would have a different perspective to other grantmakers so I would be excited to see what you would fund if you did decide to put some time in.
Worth noting you don’t have to be a US citizen to do this—the $ made me hesitate…
This is awesome—thanks!
Hey Jessica, great to hear about this! I was thinking about doing something similar. Would you consider involving non-researchers working at EA-orgs remotely? I’ve spoken to a few interested people with this profile.
Rather than setting up a charity, a Donor Advised Fund (DAF) is a good option for this purpose. This post may be helpful: https://forum.effectivealtruism.org/posts/qYuehBsAe6Ri6PZvL/a-comparison-of-donor-advised-fund-providers
Hi Ryan, I may be misunderstanding the question so correct me if I’m wrong—are you saying something like: “given that there’s lots of uncertainty about what’s needed this seems in tension with starting an organisation that concentrates on only one user type (e.g. recent generalist graduate) or one domain (e.g. AI Safety)”?
Thanks, great questions! In response:
1) How come you choose to run the fellowship as a part-time rather than full-time program?
We wanted to test some version of this quickly, and running it part-time meant:
- It was easier to get a cohort of people to commit at short notice, as they could participate alongside other commitments
- We could deliver a reasonable-quality, stripped-back programme in a short space of time and had more capacity to test other ideas at the same time
With that said, if we ran it again, we would almost certainly explore a full-time program for the next iteration.
2) Are there any particular reasons why fellowship participants tended to pursue non-venture projects?
Do you mean non-profits rather than for-profits? If so, I think this is because non-profits present the most obvious neglected opportunities for doing good. Participants did consider some for-profit ideas.
3) Throughout your efforts, were you optimizing for project success or project volume, or were you instead focused on gathering data on the incubator space?
The latter—we were trying to learn rather than optimise for early success.
4) Do you consider the longtermist incubation space to be distinct from the x-risk reduction incubation space?
Yes, mostly insofar as the longtermist space is broader than the x-risk space—there are ideas that might help the long-term future or reduce s-risks without reducing x-risk.
5) Was there a reason you didn’t have a public online presence?
I think having an online presence that is careful about how this work is described (e.g. not overhyping entrepreneurship or encouraging any particular version of it) is important and therefore quite a bit of work. We felt we could be productive without one for the time we were working on the project so decided to deprioritise it. If we had continued to work on the project, we would have spent time on this.
Hi Rory, thanks for the comment! We haven’t published those ideas. In terms of classes of organisation, one way to carve up the space is to think about Object-level and Meta-level approaches to generating ideas.
Object-level approaches focus on doing direct work to solve the problem at hand. For example:
- developing and deploying technologies
- conducting research
- advocating for policy change
The main type of impact here comes in the form of tangible changes in actions taken in the real world, in whatever form that might take.
Meta-level approaches focus on improving the capacity of others to solve the problem. This can be done at the movement-wide level (building up the EA/longtermist movements) or in a specific domain, e.g. building a talent pipeline specifically for bio policy experts. Concrete types of meta work include, for example:
- community and field building
- disseminating ideas, knowledge, and values
- increasing the resources available for object-level approaches
The main type of impact here comes in the form of the change in likelihood that object-level approaches will be impactful.
Hope that’s useful!
Glad to hear!
Roughly, this would mean having worked in a relevant area (e.g. bio, AI safety) for at least 1–2 years and being able to contribute in some capacity to that field. To be clear, some ideas would require a lot more experience—this is just a rough proxy.
Very helpful—thanks a lot Ivy!