I did enjoy the discussion here in general. I hadn’t heard of the “illusionist” stance before; it sounds quite interesting, yet I also find it quite confusing.
I generally find there is a lot of confusion about the relation of the self to what “consciousness” is. I went down this rabbit hole of thinking about it a lot, and I realised I had to probe the edges of my “self” to figure out how it truly manifested. A thousand hours into meditation, some of the existing barriers have fallen down.
The complex attractor state can actually be experienced in meditation; it is what you would generally call a case of dependent origination, or a self-sustaining loop (literally, lol). You can see through this via the practice of realising that the self-property of mind is co-created by your mind and that it is “empty”. This is a big part of the meditation project (alongside loving-kindness practice — please don’t skip the loving-kindness practice).
Experience itself isn’t mediated by this “selfing” property; it is rather an artificial boundary we have created around our actions in the world for reasons of simplification. (See Boundaries as a general account of how this occurs.)
So, the self cannot be the ground of consciousness; it is rather a computationally efficient structure for behaving in the world. Yet realising this fully is easiest done through your own experience — n=1 science: to fully collect the evidence, you have to discover it through your own phenomenological experience. (Which makes it awkward to carry into Western philosophical contexts.)
Partly as a consequence of this, and partly because “consciousness” is such a conflated term, I prefer thinking in terms of different levels of sentience instead. At a certain threshold of sentience, the “selfing” loop is formed.
The claims and evidence he’s talking about may be true, but I don’t believe they justify the conclusions he draws from them.
It makes sense for the dynamics of EA to naturally go this way (not endorsing it); it is just applying the intentional stance plus the free energy principle to the community as a whole. I generally agree with the first post at least, and I notice the large regularisation pressure being applied to individuals in the space.
I often feel the bad vibes associated with trying hard to get into an EA organisation. As a consequence, I’m doing for-profit AI safety entrepreneurship adjacent to EA, and it is very enjoyable (and, in my view, more impactful).
I will say, however, that the community in general is very supportive, and it is easy to get help with things if one has a good case and asks for it — so maybe we should organise our structures more around that? I echo some of the points about making it more community-focused, however that might look. Good stuff OP, peace.