Ok, so this doubles as an open thread?
I would like some light from the EA hivemind. For a while now I have been mostly undecided about what to do with my 2016-2017 period.
Roxanne and I even created a spreadsheet mid-2015 so I could evaluate my potential projects and drop most of them. My goals are basically an oscillating mixture of:
1) Making the world better by the most effective means possible.
2) Continuing to live in Berkeley.
3) Receiving more funding.
4) Not stopping my PhD.
5) Using my knowledge and background to do (1).
This has proven an extremely hard decision to make. Here are the things I dropped because they were incompatible with my time constraints or with goals other than (1), but which I still think other EAs who share goal (1) should carry forward:
(1) Moral Economics: From the start, Moral Econ has been an attempt to instill a different mindset in individuals, and my goal has always been for other people to pick it up and take it forward. I currently expect this to happen, and will go back to it only if it seems like it will fall apart.
(2) Effective Giving Pledge: This is a simple idea I applied to EA Ventures with, though I actually want someone else to do it. The idea is simply to copy the Gates Giving Pledge website for an Effective Giving Pledge, in which wealthy benefactors pledge to donate according to impact, tractability, and neglectedness. If three or four signatories of the original pledge signed it, it would be the biggest shift in resource allocation from the non-EA money pool to the EA money pool in history.
(3) Stuart Russell AI-safety course: I was going to spend some time helping Stuart create an official Berkeley AI-safety course. His book is used in 1500+ universities, so if the trend caught on, this would be a substantial win for the AI safety community. There was a non-credit course offered last semester in which some MIRI researchers, Katja, Paul, I, and others were going to present. However, it was very poorly attended and was not official, and it seems to me that the relevant metric is the probability that this would become a trend.
(4) X-risk dominant paper: What are the things that would dominate our priority space on top of X-risk if they were true? Daniel Kokotajlo and I began examining that question, but considered it too socially costly to publish anything about it, since many of the scenarios are too weird and could put off non-philosophers.
These are the things I dropped for reasons other than the EA goal (1). If you are interested in carrying on any of them, let me know and I'll help you if I can.
Below, by contrast, are the things I am still undecided between; these are the ones I want help deciding:
1) Convergence Analysis: The idea here is to create a Berkeley-affiliated research institute that operates mainly on two fronts: 1) strategy on the long-term future, and 2) finding crucial considerations that have not yet been considered or researched. We have an interesting group of academics, and I would take a mixed position of CEO and researcher.
2) Altruism: past, present, propagation: This is a book whose table of contents I have already written; it would need further research and the spelling out of each of the 250 sections I have in mind. It is very different in nature from Will's book or Singer's book. The idea here is not to introduce EA, but to reason about the history of cooperation and altruism that led to us, and where this can be taken in the future, including by the EA movement. This would be a major intellectual undertaking, likely consuming my next three years and doubling as a PhD dissertation, and perhaps tripling as a series of blog posts for quick feedback loops and reliable writer motivation.
3) FLI grant proposal: Our proposal aimed to increase our understanding of psychological theories of human morality in order to facilitate later work on formalizing moral cognition for AIs, a subset of the value-loading and control problems of artificial general intelligence. We didn't win, so the plan here would be to try to find other funding sources for this research.
4) Accelerate the PhD: For that I need to do three field statements: one on the control problem in AI with Stuart, one on altruism with Deacon, and one to be determined. After that, only the dissertation would remain on the to-do list.
All these plans scored sufficiently high in my calculations that it is hard to decide between them. Accelerating the PhD has a major disadvantage because it does not increase my funding. The book (via blog posts or not) has a strong advantage: I think it will have enough new material that it satisfies goal (1) best of all, and it is probably the best thing for the world if I manage to get to the end of it and do it well. But again, it doesn't increase funding. Convergence has the advantage of working alongside very smart people, and if it takes off sufficiently well, it could solve the problem of continuing to live in Berkeley and that of financial constraints all at once, putting me in a stable position to continue doing research on relevant topics almost indefinitely, instead of having to make ends meet by downsizing the EA goal substantially among my priorities. So: very high stakes, but uncertain probabilities. If AI is (nearly) all that matters, then the FLI grant would be the highest impact, followed by Convergence, the book, and the acceleration.
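To make the shape of that calculation concrete, here is a minimal sketch of a weighted scoring matrix of the kind a spreadsheet like ours might implement. The weights and scores below are purely hypothetical placeholders for illustration, not the actual numbers from the spreadsheet.

```python
# Minimal sketch of a weighted scoring matrix (all numbers are hypothetical placeholders).
# Goals (1)-(5) from above, with illustrative weights that sum to 1.
weights = {
    "impact": 0.40,      # goal 1: making the world better most effectively
    "berkeley": 0.15,    # goal 2: continuing to live in Berkeley
    "funding": 0.15,     # goal 3: receiving more funding
    "phd": 0.15,         # goal 4: not stopping the PhD
    "background": 0.15,  # goal 5: using my knowledge and background for (1)
}

# Each remaining option scored 0-10 on each goal (illustrative only).
options = {
    "Convergence Analysis": {"impact": 7, "berkeley": 8, "funding": 7, "phd": 3, "background": 8},
    "Altruism book":        {"impact": 8, "berkeley": 5, "funding": 2, "phd": 7, "background": 9},
    "FLI grant proposal":   {"impact": 7, "berkeley": 6, "funding": 6, "phd": 4, "background": 7},
    "Accelerate the PhD":   {"impact": 4, "berkeley": 6, "funding": 1, "phd": 10, "background": 6},
}

for name, scores in options.items():
    total = sum(weights[goal] * scores[goal] for goal in weights)
    print(f"{name}: {total:.2f}")
```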
In any event, all of these are incredible opportunities that I feel lucky to even have in my consideration space. It is a privilege to be making this choice, but it is also very hard. So, to recap the goals I stated before: 1) making the world better by the most effective means possible; 2) continuing to live in Berkeley; 3) receiving more funding; 4) not stopping my PhD; and 5) using my knowledge and background to do (1).
I am looking for some light, some perspective from the outside that will make me lean one way or another. I have been uncomfortably indecisive for months, and maybe your analysis can help.