Co-Director of Equilibria Network: https://eq-network.org/
I try to write as if I were having a conversation with you in person.
I would like to claim that my current safety beliefs are a mix of Paul Christiano's, Andrew Critch's, and Def/Acc.
Jonas Hallgren 🔸
How will you address the conflict of interest allegations raised against your organisation? It feels like the two organisations are awfully intertwined. For God's sake, the CEOs are sleeping with each other! I bet they even do each other's taxes!
I'm joining the other EA.
This was a dig at interpretability research. I'm pro-interpretability research in general, so if you feel personally attacked by this, it wasn't meant to be too serious. Just be careful, ok? :)
It makes sense for the dynamics of EA to naturally go this way (not that I endorse it). It is just applying the intentional stance plus the free energy principle to the community as a whole. I find myself generally agreeing with the first post at least, and I notice the large regularization pressure being applied to individuals in the space.
I often feel the bad vibes associated with trying hard to get into an EA organisation. As a consequence, I'm doing for-profit entrepreneurship for AI safety adjacent to EA, and it is very enjoyable. (And more impactful in my view.)
I will however say that the community in general is very supportive and that it is easy to get help with things if one has a good case and asks for it, so maybe we should make our structures more focused on that? I echo some of the points about making it more community-focused, however that might look. Good stuff OP, peace.
I did enjoy the discussion here in general. I hadn't heard of the 'illusionist' stance before, and it does sound quite interesting, yet I do find it quite confusing as well.
I generally find there to be a big confusion about the relation of the self to what 'consciousness' is. I was in this rabbit hole of thinking about it a lot, and I realised I had to probe the edges of my 'self' to figure out how it truly manifested. A thousand hours into meditation, some of the existing barriers have fallen down.
The complex attractor state can actually be experienced in meditation, and it is what you would generally call a case of dependent origination or a self-sustaining loop (literally, lol). You can see through this by realising that the self-property of mind is co-created by your mind and that it is 'empty'. This is a big part of the meditation project. (Alongside loving-kindness practice; please don't skip the loving-kindness practice.)
Experience itself isn't mediated by this 'selfing' property; it is rather an artificial boundary we have created around our actions in the world for simplification reasons. (See Boundaries for a general account of how this occurs.)
So, the self cannot be the ground of consciousness; it is rather a computationally optimal structure for behaving in the world. Yet realizing this fully is easiest done through your own experience, or through n=1 science, meaning that to fully collect the evidence you will have to discover it through your own phenomenological experience. (Which makes it awkward to bring into Western philosophical contexts.)
So, the self cannot be the ground, and partly as a consequence of this and partly since consciousness is a very conflated term, I like thinking more about different levels of sentience instead. At a certain threshold of sentience, the 'selfing' loop is formed.

The claims and evidence he's talking about may be true, but I don't believe they justify the conclusions he draws from them.
Thank you for this post! I will make sure to read the 5/5 books that I haven't read yet. I'm especially excited about Joseph Henrich's book from 2020; I had read The Secret of Our Success before but not that one.
I actually come at moral progress from an AI Safety interest. For me the question is, to some extent, how we can set up AI systems so that they continuously improve 'moral progress', as we don't want to leave our fingerprints on the future.
In my opinion, the larger AI Safety dangers come from a 'big data hell' like the one described in Yuval Noah Harari's Homo Deus, or from Paul Christiano's slow take-off scenarios.
Therefore we want to figure out how to set up AIs in a way that automatically improves moral progress through the structure of their use. I also believe that AI will most likely go through a process similar to the one described in The Secret of Our Success, and that we should prepare appropriate optimisation functions for it.
So, if you ever feel like we might die from AI, I would love to see some work in that direction!
(happy to talk more about it if you're up for it.)
The number of applications will affect the counterfactual value of applying. Sharing your expected number might lower the number of people who apply, but I would still appreciate having a range of expected applicants for the AI Safety roles.
What is the expected number of people applying for the AI Safety roles?
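To make the counterfactual-value point concrete, here is a rough sketch under my own simplifying assumptions (roughly interchangeable applicants, one hire per role; n, V_you, and V_next are illustrative symbols of mine, not anything from your post):

\[
\mathbb{E}[\text{counterfactual impact of applying}] \approx \frac{1}{n}\left(V_{\text{you}} - V_{\text{next}}\right),
\]

so the expected number of applicants n scales the value of applying roughly linearly, which is why even a rough range would help people prioritise.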
I'm getting the vibe that your priors are, to some extent, on the world being in a multipolar scenario in the future. I'm interested more specifically in what your predictions are for multipolarity versus a singleton given shard-theory thinking, as it seems unlikely for recursive self-improvement to happen in the way described, given what I understand of your model?
Great post; I enjoyed it.
I've got two things to say. The first is that GPT is a very nice brainstorming tool, as it generates many more ideas than you could yourself, which you can then prune.
Secondly, I've been doing 'peer coaching' with some EA people, using reclaim.ai (not sponsored) to automatically book meetings each week where we take turns being the mentor and mentee, going through the following questions:
- What's on your mind?
- When would today's session be a success?
- Where are you right now?
- How do you get where you want to go?
- What are the actions/first steps to get there?
- Ask for feedback

I really like the framing of meetings with yourself; I'll definitely try that out.
Alright, that makes sense; thank you!
Isn't expected value calculated as probability times utility, and as a consequence isn't the 'higher risk' part wrong if one simply looks at it like this? (Going from 20% to 10% would be 10x the impact of going from 2% to 1%.)
(I could be missing something here, please correct me in that case)
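To spell out the arithmetic I have in mind (the symbol U for the fixed utility at stake is my own notation, just for illustration):

\[
\Delta\mathrm{EV}_{20\%\to10\%} = (0.20 - 0.10)\,U = 0.10\,U,
\qquad
\Delta\mathrm{EV}_{2\%\to1\%} = (0.02 - 0.01)\,U = 0.01\,U,
\]

so, assuming the same U is at stake in both cases, the first reduction is worth ten times as much in expectation.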
I didn't mean it in this sense. I think the lesson you drew from it is fair in general; I was just reacting to the things I felt you swept under the rug, if that makes sense.
Sorry, Pablo, I meant that I became a lot more epistemically humble; I should have thought more about how I phrased it. It was more that I went from the opinion that many worlds is probably true to: "Oh man, there are some weird answers to the Wigner's friend thought experiment, and I should not give major weight to any of them." So I'm now at maybe 20% on many worlds?
That being said, I am overconfident from time to time, and it's fair to point that out as well. Maybe you were being overconfident in saying that I was overconfident? :D
I will say that I thought the consciousness/p-zombie distinction was very interesting and a good example of overconfidence, as this didn't come across in my previous comment.
Generally, some good points across the board that I agree with. Talking with some physicist friends helped me debunk the many-worlds thing Yud has going. Similarly, his animal consciousness stuff seems a bit crazy as well. I will also say that I feel you're coming off way too confident and inflammatory when it comes to the general tone. The AI Safety argument you provided was just dismissal without much explanation. Also, when it comes to the consciousness stuff, I honestly just get kind of pissed reading it, as I feel you're to some extent hard-pandering to dualism.
I totally agree with you that Yudkowsky is way overconfident in the claims that he makes. Ironically enough, it also seems that you are, to some extent, as well in this post, since you're overgeneralizing from insufficient data. As a fellow young person, I recommend some more caution when making solid claims about topics where you have little knowledge (you cherry-picked data on multiple occasions in this post).
Overall you made some good points though, so still a thought-provoking read.
Maybe frame it more as if you're talking to a child. Yes, you can tell the child to follow something, but how are you certain that it will do it?
Similarly, how can we trust the AI to actually follow the prompt? To trust it, we would fundamentally have to understand the AI, or safeguard against problems if we don't understand it. The question then becomes how your prompt is represented in machine language, which is very hard to answer.
To reiterate, ask yourself, how do you know that the AI will do what you say?
(Leike responds to this here if anyone is interested)
John Wentworth has a post on Godzilla strategies where he claims that getting an AGI to solve the alignment problem is like asking Godzilla to make a larger Godzilla behave. How will you ensure you don't overshoot the intelligence of the agent you're using to solve alignment and fall into the 'Godzilla trap'?
I feel like this goes against the principle of not leaving your footprint on the future, no?
Like, a large part of what I believe to be the danger with AI is that we don't have any reflective framework for morality. I also don't believe the standard path to AGI is one of moral reflection. To me, this says that we leave the value of the future up to market dynamics, and that doesn't seem good given all the traps there are in such a situation (Moloch, for example).
If we want a shot at a long reflection or something similar, I don't think full-sending AGI is the best thing to do.