My gratitude for the many wonders of life makes me highly engaged in preserving a prosperous long-term future for all sentient life.
Per Ivar Friborg
Thanks for the feedback and for sharing Yonadav Shavit’s paper!
Thank you for the examples! Could you elaborate on the technical example of breaking a large model down into sub-components, training each sub-component individually, and finally assembling them into a large model? Will such a method realistically be used to train AGI-level systems? I would think that the model needs to be sufficiently large during training to learn highly complex functions. Do you have any resources you could share that indicate that large models can be successfully trained this way?
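To make sure I’m picturing the proposal correctly, here is a minimal toy sketch of what I take “train sub-components individually, then assemble” to mean. This is only my own illustration, not anything from the paper: the modules, shapes, and proxy objectives are all made up.

```python
# Toy sketch (PyTorch) of modular training: each sub-component is trained in
# isolation on its own (here: made-up) proxy objective, and the pieces are
# then composed into one larger model. Purely illustrative, not a real recipe.
import torch
import torch.nn as nn

# Hypothetical sub-components of a larger model.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
head = nn.Linear(64, 10)

def train_component(module, make_batch, steps=100, lr=1e-3):
    """Train one sub-component on its own proxy objective, in isolation."""
    opt = torch.optim.Adam(module.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        x, target = make_batch()
        opt.zero_grad()
        loss_fn(module(x), target).backward()
        opt.step()

# Random proxy targets, standing in for whatever local objective each
# sub-component would actually be trained against.
train_component(encoder, lambda: (torch.randn(8, 32), torch.randn(8, 64)))
train_component(head, lambda: (torch.randn(8, 64), torch.randn(8, 10)))

# Assemble the separately trained pieces into one model.
assembled = nn.Sequential(encoder, head)
print(assembled(torch.randn(1, 32)).shape)  # torch.Size([1, 10])
```

My worry, as stated above, is exactly the step I had to fake here: where would good proxy objectives for each sub-component come from, if the capability we care about only emerges in the full assembled model?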
Thank you for this feedback, and well put! I’ve been having somewhat similar thoughts in the back of my mind, and this clarifies many of them.
Data Taxation: A Proposal for Slowing Down AGI Progress
The Whole Brain Emulation Workshop link takes me nowhere:
https://foresight.org/foresight-neurotech-workshop-2023?utm_source=Foresight+Newsletter+Subscribers&utm_campaign=9477bf4b79-EMAIL_CAMPAIGN_2022_11_11_06_12_COPY_01&utm_medium=email&utm_term=0_7c1b7f710b-9477bf4b79-
It says “Page not found”.
Seems like the correct link is: https://foresight.org/whole-brain-emulation-workshop-2023/
Personal progress update regarding the cause priority of alternative proteins (resulted from GHI talk):
Question: Is it worth the EA community trying to accelerate the growth of alt protein production, or should we just allow market forces to move it forward? What are the neglected areas of the alt protein landscape that an EA should get involved in, rather than leaving them to purely profit-motivated agents?
Answer: GFI thinks the market will not solve these problems on its own. A particular case of this seems to be fundamental scientific research, where markets need better products but are not willing to invest in the underlying research themselves.
Update: I initially thought profit-motivated agents would be sufficient to accelerate the growth of alt protein production, but I now doubt that stance, and realize that there likely are neglected areas within alt protein where EAs can have a high marginal impact.
Jonas Hallgren shared this Distillation Alignment Practicum with me, which answers all my questions and much more.
Thanks for sharing, Harry!
What are some bottlenecks in AI safety?
I’m looking for ways to utilize my combined aptitude for community building, research, operations, and entrepreneurship to contribute to AI safety.
Per Ivar Friborg’s Quick takes
From Practical Projects to Self Development: A Shift in Focus
Ahh yes, this is also a good question which I don’t have a good answer to, so I support your approach of revisiting it over time with new information. With very low confidence, I would expect more ways of aiding AGI alignment indirectly to emerge as the space grows. A broader variety of ways to contribute would then make it more likely that you find something within that space matching your personal fit. Generally speaking, indirect ways to contribute to a cause include operations, graphic design, project management, software development, and community building. My point is that there are likely many different ways to aid in solving AGI alignment, which increases the chances of finding something you have the proper skills for. Again, I place very low confidence on this, since I don’t think I have an accurate understanding of the work needed within the space of AGI alignment. This is more meant as an alternative way of thinking about your question.
Humans seem to be notoriously bad at predicting what will make us most happy, and we don’t realize how bad we are at it. The typical advice to “pursue your passion” seems like bad advice, since our passion often develops in parallel with other, more tangible factors being fulfilled. I think 80,000 Hours’ literature review on “What makes for a dream job” will help you tremendously in better assessing whether you would enjoy a career in AI alignment.
Great question! While expected tangible rewards (e.g. prizes) undermine autonomous motivation, unexpected rewards don’t, and verbal rewards generally enhance autonomous motivation (Deci et al., 2001). Let’s break it down into its components:
Our behavior is often controlled by the rewards we expect to obtain if we behave in certain desirable ways, such as engaging with work, performing well on a task, or completing an assignment. Conversely, we do not experience unexpected rewards as controlling, since we cannot foresee what behavior will lead to the unexpected outcome. Verbal rewards are often experienced as unexpected, and may enhance perceived competence, which in turn enhances autonomous motivation. That being said, if a verbal reward is given in a context where people feel pressured by it to think, feel, or behave in particular ways (e.g. controlling praise), it will typically undermine autonomous motivation.
I therefore think that thanking volunteers for the work they are doing is unproblematic, and if some informational value is included, it will enhance autonomous motivation via competence support (e.g. at an EAG event: “Thank you for doing a good job at welcoming the event speakers. We received feedback that they felt relaxed during their stay in the green room, and that they were impressed by the punctuality of you volunteers.”).

Assuming that engagement in writing competitions with financial incentives is driven by the expectation of a tangible external reward, I would expect such competitions to undermine autonomous motivation unless the rewards are well internalized. The same applies to gift cards and job certificates. Whether we need financial rewards or not is a tough question I do not have a good answer to. I believe it is a trade-off between short-term and long-term impact, where financial rewards may improve the outcome of a specific activity, such as a writing contest, but lead to lower-quality outcomes in the long run, because people no longer engage in those activities voluntarily due to low autonomous motivation.
How to Incubate Self-Driven Individuals (for Leaders and Community Builders)
Thanks for the post, Jonathan! I think this can be a good starting point for discussions around spreading longtermism. Personally, I like the use of “low-key longtermism” for internal use between people who are already familiar with longtermism, but I wouldn’t use it for mass outreach purposes. This is because the mentioned risk posed by info-hazards seems to outweigh the potential benefits of using the term longtermism. Also, since the term doesn’t add any information value for people who don’t already know what it is, I am even more certain that it’s best to leave the term behind when doing mass outreach. This post also shows some great examples of how the message of longtermism can be warped and misunderstood as a secular cult, adding another element of concern for longtermism outreach: How EA is perceived is crucial to its future trajectory (effectivealtruism.org).
My point is that I favor low-key longtermism outreach as long as the term longtermism is excluded.
This made me incredibly excited about distilling research! However, I don’t really know where to find research that would be worth distilling. Could you give me some general pointers to help me get started? Also, do you have examples of great distillations that I could use as a benchmark? I’m fairly new to technical AI since I’ve been majoring in Chemistry for the last three years, but I’m determined to upskill in AI quickly, and distilling seems like a great challenge to boost my learning process while being impactful.
EA NTNU’s Annual Report 2021/2022
Thanks for sharing, Akash! This will be helpful when I start getting in touch with AI safety researchers after upskilling in basic ML and neural networks.
I want to add that I think AI safety research aimed at mitigating existential risk has been severely neglected. This suggests that the space of ideas for how to solve the problem remains vastly unexplored, and I don’t think you need to be a genius to have a chance of coming up with a smart, low-hanging-fruit solution to the problem.