We did not consider discussion of specific research projects to be within the scope of this post. As mentioned at the beginning, we tried to cover as much as we could that would be relevant to other field builders and related audiences; the post primarily focuses on information we think might be useful to other people and initiatives in this space. For the same reason, we also do not go into specific research outputs produced by fellows.
There are a few reasons why this made sense.
As discussed in other parts of this post, a lot of the research output has not yet been published. Some teams did publicly share their work (for example, one of the two teams that worked on a “dynamical systems perspective on goal-oriented behavior and relativistic agency” posted their updates on the Alignment Forum: [1] and [2], which we hugely appreciate), some have submitted manuscripts to academic venues, and several others have not yet published. The reasons vary: some fellows are continuing the project and prefer to publish only once it reaches a further level of maturity; some are working through (info)hazard considerations and sanity checks; some have preferences over the format of research output they want to pursue and are working towards that; and in some cases the project was primarily directed at informing the mentor’s research, which may not involve an explicit public output.
From our end, while we might prefer certain insights to flow outwards more quickly, we also wanted to defer decisions about the form and content of research outputs to the shared judgement of fellows and their respective mentors.
Note that in some cases this absence of public communication so far is fairly justifiable, especially for promising projects that became long-term collaborations.
(Fwiw, as we mention elsewhere in this post, we have also gained a better understanding of how to facilitate outward communication without constraining research autonomy, which we will take into account in the future.)
There are also other reasons why a detailed evaluation of projects is difficult based on partial outputs and mentor-specific inside-view motivations. In light of all this, we decided to keep this reflections post at a high level of abstraction and not include either a Research Showcase or a detailed Portfolio Evaluation. Based on what we understand right now, this seems like a reasonable decision.
At the same time, if you are a project evaluator or in a related capacity and wish to take a look at a more detailed evaluation report, we’d be open to discussing that (under some info-sharing constraints) and would be happy to hear from you at contact@pibbss.ai.