Unable to work. Was community director of EA Netherlands, had to quit due to long covid.
I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research.
Very exciting to read about this, especially the research agenda! I will definitely consult it when deciding on a topic for my master’s thesis in philosophy.
I have a few questions about the strategy (not sure if this is the best medium for these questions, but I didn't know where else to ask them):
a) Are you planning to be the central hub of EA-relevant academics?
b) What do you think about the Santa Fe Institute’s model of a core group of resident academics, and a larger group of affiliated researchers who regularly visit?
c) Are you planning on incorporating more fields in the future, such as behavioural economics or complexity theory, and how do you decide where to expand?
d) Where can I find more information about GPI’s strategy, and are you planning on publishing it to the EA Forum?
Btw, on p. 26 of the agenda there’s an unfinished sentence: “How important is the distinction between ‘sequence’ thinking and ‘cluster’ thinking? What’s ”
I take Greaves' distinction between simple and complex cluelessness to lie in the symmetry (just as you seem to do). However, I believe this symmetry consists in the fact that we are evaluating the same consequences following either from an act A or from refraining from A. For every story of long-term consequences C resulting from performing act A, there is a parallel story of those same consequences C resulting from refraining from A. Thus, we can invoke a specific Principle of Indifference and take the probabilities of the two options to be equal, reflecting our ignorance: P(C|A) = P(C|~A), where C is a story of some long-term consequences of either performing or refraining from A.
In complex cases, this symmetry does not exist, because we are trying to compare different consequences (C1, C2, ..., Cn) resulting from the same act.
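To make the cancellation this symmetry licenses explicit, here is a minimal sketch in my own notation (not Greaves'), where V_s is the difference in foreseeable short-term value and v(C_i) the value of long-term story C_i:

\begin{align*}
EV(A) - EV(\neg A) &= V_s + \sum_i v(C_i)\left[P(C_i \mid A) - P(C_i \mid \neg A)\right] \\
&= V_s, \quad \text{since } P(C_i \mid A) = P(C_i \mid \neg A) \text{ for every story } C_i.
\end{align*}

So under simple cluelessness the long-term terms cancel and acts can be compared on their foreseeable effects; under complex cluelessness no such cancellation is available.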
This is an interesting project! I am wondering how valuable you have found it, and whether there are any plans for further development. I can imagine that it would be valuable to:
Increase complexity to increase robustness of the model, but then find some balance between robustness and user-friendliness, perhaps by allowing users to view the model on different ‘levels’ of complexity.
Use some form of crowd-sourcing to get much more reliable estimates, ideally weighted by expertise or forecasting ability (a rough sketch of what this could look like follows below).
Incorporate some insights from the moral uncertainty literature, so that a low probability of something being very bad (e.g. wild animal suffering, or insect suffering) is given appropriate weight.
However, I have no idea how feasible this is, and imagine it would require substantial resources (lots of time, money, and capable researchers). Do you already have thoughts on this?
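On the crowd-sourcing point above, here is a minimal sketch of what expertise-weighted aggregation of a single parameter estimate could look like; the numbers, the weights, and the idea of deriving weights from e.g. Brier scores are purely hypothetical illustrations, not a proposal for your actual model:

    import numpy as np

    # Hypothetical crowd-sourced estimates of one model parameter,
    # each paired with a weight reflecting expertise or past forecasting accuracy.
    estimates = np.array([0.30, 0.45, 0.25, 0.60])  # individual estimates
    weights = np.array([1.0, 2.5, 0.8, 1.7])        # e.g. derived from Brier scores

    # Weighted mean as the pooled estimate to feed into the model.
    pooled = np.average(estimates, weights=weights)

    # A simple weighted spread, to flag parameters the crowd disagrees on.
    spread = np.sqrt(np.average((estimates - pooled) ** 2, weights=weights))

    print(f"pooled estimate: {pooled:.3f}, weighted spread: {spread:.3f}")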
P.S. The link is missing for part IV
I like this post, Milan; I think it's the best of your series. You rightly picked a very important topic to write about (cluelessness) that should receive more attention than it currently does. I do have some comments:
Although I admire new ways to think about prioritisation, I have two worries. The first is conceptual distinctness: wisdom and predictive power do not seem conceptually distinct, since both are about our ability to identify and predict the probability of good and bad outcomes. Intent also seems somewhat tangled up in wisdom, although I can see why we would want to separate those. Furthermore, intent influences coordination capability: the more the intentions within a population differ, the more difficult coordination becomes.
This leads to the second worry: the model adds only one dimension (Intent) to Bostrom's three-dimensional model of Technology [Capacity] - Insight [Wisdom] - Coordination. Do you think this increases the usefulness of the model enough? The advantage of Bostrom's model is that it allows for differential progress (wisdom > coordination > capacity), while you don't specify the interplay of the attributes. Are they supposed to be multiplied, are some combinations better than others, or do we want differential progress?
I was a bit confused that you write about things to prioritise but don't refer back to the 5 attributes of steering capacity. Some relate more strongly to specific attributes, and some attributes are not discussed much (coordination) or at all (capability).
Further our understanding of what matters
This seems to be Intent in your framework. I totally agree that this is valuable. I would call it moral (or, more precisely, axiological) uncertainty, and people work on this outside of EA as well. By the way, besides resolving uncertainty, another pathway is to improve our methods for dealing with moral uncertainty (as MacAskill argues for).
Improve governance
I am not sure which attribute this relates to, though I suppose it is Coordination. I find the discussion a bit shallow here, as it discusses only institutions and not the coordination of individuals in e.g. the EA community, or the coordination between nation states.
Improve prediction-making & foresight
This seems to be the attribute predictive power. I agree with you that this is very important. To a large extent, this is also what science in general aims to do: improve our understanding so that we can better predict and alter the future. However, straight-up forecasting seems more neglected. I think this could also just be called "reducing empirical uncertainty"? If we call it that, we can also consider other approaches, such as researching effects in complex systems.
Reduce existential risk
I’m not sure this was intended to relate to a specific attribute. Guess not.
Increase the number of well-intentioned, highly capable people
This seems to relate mostly to "Intent" as well. I wanted to remark that this can be done either by increasing the capability and knowledge of well-intentioned people, or by improving the intentions of capable (and knowledgeable) people. My observation is that so far the focus, in terms of growth and outreach, has been on the latter, and only some effort has been expended on developing the skills of effective altruists (although this is noted as a comparative advantage for EA groups).
Lastly, I wanted to remark that hits-based giving does not, in my opinion, imply a portfolio approach; it just implies being more or less risk-neutral in altruistic efforts. What drives the diversification in OPP's grants seems to be worldview diversification, option value, and the possibility that high-value opportunities are spread over cause areas rather than concentrated in one cause area. What would support the conclusion that we need to diversify is the possibility that a project fails unless it hits a certain value on each of the attributes (a bit like how power laws arise when success requires A·B·C instead of A+B+C).
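To illustrate that last point, here is a toy simulation (my own sketch, not anything from the post) of how requiring attributes to multiply rather than add concentrates value in the tail; the attribute names and distributions are arbitrary assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    # Three independent 'attribute' scores per project, e.g. intent, wisdom, capacity.
    a, b, c = rng.uniform(0.0, 1.0, size=(3, 100_000))

    additive = a + b + c        # success as a sum of attributes
    multiplicative = a * b * c  # success requires all attributes at once

    # Share of total 'value' captured by the top 1% of projects under each model.
    for name, x in [("additive", additive), ("multiplicative", multiplicative)]:
        top_share = np.sort(x)[-1_000:].sum() / x.sum()
        print(f"{name:15s} top-1% share: {top_share:.2%}")

The multiplicative case is much more skewed: a project that scores poorly on any one attribute contributes almost nothing.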
All in all, an important project, but I'm not sure how much novel insight it has brought (yet). This is quite similar to my own experience, in that I wrote a philosophy essay about cluelessness and arrived at a not-so-novel conclusion. Let me know if you'd like to read the essay :)
Which role would attractor states have in this thinking? Some thoughts:
If an attractor is strong/large, then many different starting points have the same end points. But if it is small, or if we are in between two (or more) attractors, our decisions could make all the difference in the world.
The technological completion conjecture posits an attractor that we end up in if we are not caught by x-risk attractors first.
Can we somehow affect the fragility of history so that we bring it into the center of the goldilocks zone?
Sure! Here it is.
Could you be a little more specific about the levels/traits you name? I’m interpreting them roughly as follows:
Values: “how close are they to the moral truth or our current understanding of it” (replace moral truth with whatever you want values to approximate).
Epistemology: how well do people respond to new and relevant information?
Causes: how effective are the causes in comparison to other causes?
Strategies: how well are strategies chosen within those causes?
Systems: how well are the actors embedded in a supportive and complementary system?
Actions: how well are the strategies executed?
I think a rough categorisation of these 6 traits would be Prioritisation (Values, Epistemology, Causes) & Execution (Strategies, Systems, Actions), and I suppose you'd expect a stronger correlation within these two branches than between them?
I think it would be better to include this in the OP.
Regarding Doing Good Better, is there any follow-up in the pipeline that is more up-to-date?
I find the book a great introduction to EA, but I have had multiple instances where I needed to point out to new members who'd just read it that on some points "that's not actually what's thought anymore".
I would therefore say that large-scale catastrophes related to biorisk or nuclear war are quite likely (~80–90%) to merely delay space colonization in expectation.[17] (With more uncertainty being not on the likelihood of recovery, but on whether some outlier-type catastrophes might directly lead to extinction.)
You seem to be highly certain that humans will recover from near-extinction. Is this based solely on the arguments in the text and footnote, or is there more? It seems to rest on the assumption that population growth/size is the only bottleneck, and that key technologies and infrastructure will be developed anyway.
This is a fascinating question! However, I think you are making a mistake in estimating the lower bound: using the fact that chimps are separated from us by 7 million years of evolution (Wikipedia says 4-13 million) as a lower bound rests on the assumptions that:
Chimpanzees needed these 7 million years to evolve to their current level of intelligence. Instead, their evolution could have contained multiple intervals of random length with no changes to intelligence. This implies that chimpanzees could have evolved from our common ancestor to their current level of intelligence much faster or much slower than 7 million years.
The time since our divergence from chimpanzees is indicative of how long it takes to get from their level of intelligence to ours. I am not quite sure what to think of this. I assume your reasoning is: "it took us 7 million years to evolve to our current level of intelligence from the common ancestor, and chimpanzees probably did not lose intelligence in those 7 million years, so the starting conditions are at least as favorable as they were 7 million years ago." This might be right. On the other hand, evolutionary paths are difficult to understand, and maybe chimps developed in some way that makes it unlikely for them to evolve into a technologically advanced society. That said, this doesn't seem to be the case, because they do show traits beneficial to the evolution of higher intelligence, e.g. tool use, social structure, and eating meat. All in all, thinking about this I keep coming back to the question: how contingent, rather than directional, is evolution when we look at intellectual and social capability? There seems to be disagreement on this in the field of evolutionary biology, even though there are many different evolutionary branches in which intelligence evolved and increased.
Also, you have given the time periods in which a next civilisation might arise if it arises at all, but how likely do you think it is that one arises?
Hi Holden, nice initiative.
I have a question about the Research Analyst role. How generalist will they be? I can imagine them concentrating on one or two focus areas besides more general issues such as how to implement moral uncertainty practically.
It seems that OpenPhil wants a more satisfactory answer to moral uncertainty than just worldview diversification before ramping up the number of grants per year. Is this part of why you are hiring new Research Analysts, and if so, how much will they work on this problem? (This seems like a very interesting but hard problem.)
Not sure if I'm interpreting Khorton correctly, but interested anyway: why focus on undergrad rather than postgrad (or the highest level achieved/pursued)?
What are the working hours like for a position like Research Analyst? Strict or flexible? 40 hours/week or something else? What is the overtime like on average, and what is it like at peak times?
I’m very curious about how that improved understanding would come about via grantmaking. Any write-up you have about this? I can see how you’d learn about tractability, and maybe about neglectedness, but I wonder how you incorporate this in your decision-making.
Anyway, this might go a little too off-topic so I’d understand if you replied to other questions first :)
A quick Google search gave me the impression that it isn't very easy to get a work visa in the US even if it is sponsored. Is this correct, and do you have stronger requirements for non-US applicants because they'll be less likely to actually be able to work for you? (I'm completely unfamiliar with the visa system.)
Is there a standard contract length you will offer RAs if there is no trial period?
There is a fair number of people who don’t work in their top pick cause area or even cause areas they are much less convinced of than their peers, but currently they don’t advertise this fact.
I think this is likely to be correct. However, I seriously wonder whether the distribution is uniform; i.e. are there as many people working on international development while it's not their top pick as on AI Safety? I would say not.
The next question is whether we should update towards the causes where everyone who works on them is convinced they're top priority, or whether there are other explanations for this observation. I'm not sure how to approach this problem.
Thanks for these findings! Especially looking forward to seeing the strategic analysis.
Some small remarks:
Surprised to see that not many people use Meetup.com, given that the average boost to attendance is 21%. In Groningen it's definitely helping us, especially in reaching more diverse participants than our personal networks do (although repeat attendance seems lower for people who come through Meetup.com).
The graph under 'Practical support and new ideas' is wrong; it's the same as the one above.
Percentages would have been a little easier to read than raw counts, as the number of respondents varied.
A random idea to help groups: a checklist of easy-to-do things that improve groups (low-hanging fruit) that groups can go through to evaluate themselves. The list could include things such as (or whatever else you think works best):
Automated sign-ups for the mailing list
Meetup.com group
Website
If it's being used, it could be supplemented with a short walkthrough or links to how-tos.