I don’t claim you can align human groups with individual humans. If I’m reading you correctly, I think you’re committing a category error in assigning alignment properties to groups of people like nation-states or companies. Alignment, as I’m using the term, is the alignment of an AI’s goals or values with those of a person or group of people. We expect this kind of alignment, I think, in part because we’re accustomed to telling computers what to do and having them do exactly what we say (though not always exactly what we mean).
Alignment is extremely tricky for the unenhanced human, but theoretically possible. My first-best guess at solving it would be to automate alignment research and development with AI itself. We’ll soon reach a sufficiently advanced AI capable of reasoning beyond anything anyone on Earth can come up with; we just have to ensure that that AI is aligned, that the AI which trained it was also aligned, and so on back through the chain. My second-best guess would be BCIs (brain-computer interfaces), and my third would be whole-brain emulation interpretability.
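To make the recursive part of that first guess concrete, here’s a minimal Python sketch of the chain-of-trust structure I have in mind. The `check_alignment` and `train_successor` functions are entirely hypothetical placeholders; a reliable alignment check is precisely the open problem, so this is a sketch of the structure, not a solution.

```python
# Toy sketch of the "align the aligner" chain, under the (big) assumption
# that we had a reliable alignment check. No such check exists today.

from dataclasses import dataclass

@dataclass
class Model:
    generation: int
    aligned: bool  # a property we would need to verify, not assume


def check_alignment(model: Model) -> bool:
    """Hypothetical evaluation; in reality this is the unsolved research problem."""
    return model.aligned


def train_successor(model: Model) -> Model:
    """Hypothetical: a verified model trains a more capable successor."""
    return Model(generation=model.generation + 1, aligned=model.aligned)


def bootstrap(base: Model, generations: int) -> Model:
    """Only let a model train its successor if it passes the alignment check."""
    current = base
    for _ in range(generations):
        if not check_alignment(current):
            raise RuntimeError(f"generation {current.generation} failed the check")
        current = train_successor(current)
    return current


final = bootstrap(Model(generation=0, aligned=True), generations=5)
print(final)  # the chain only holds if every link passes the check
```

The whole argument rests on the inductive step: if any link’s alignment check is unreliable, everything downstream of it is untrusted.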
Assuming we even do develop alignment techniques, I’d argue that exclusive alignment (that is, alignment with one person or a small group of people) is more difficult than alignment with humanity at large, for the following reasons (I realize some of these cut both ways, but I include them because I see them as more serious for exclusive alignment; value drift, for example):
Value drift.
Impossible specification (e.g., in exploring the inherent contradictions in expressed human values, the AGI expands moral consideration beyond initial human constraints, discovering some form of moral universalism or a morality beyond all human reasoning).
Emergent properties appear, producing unexpected behavior, and we cannot align systems to exhibit properties we cannot anticipate.
Exclusive alignment’s instrumental goals may broaden AGI’s moral scope to include more humans (i.e., it may be that broader alignment makes for a more robust AI system).
Competing AGIs designed to align with all of humanity may be successfully created.
Exclusively aligned AGI may still satisfy many, if not all, of the preferences that the rest of humanity possesses.
Exclusive alignment requires perfect internal coordination of values within an organization, but divergent interests inevitably emerge as organizations scale; these coordination failures multiply when AGI systems interpret instructions literally and optimize against the specified metrics rather than the intended goals (see the toy sketch after this list).
Alignment requires resolving disagreements over value prioritization, a meta-preference problem. Yet resolving these conflicts necessitates assumptions about how they should be resolved, creating an infinite regress that defies a technical solution.
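To illustrate the “optimize against specified metrics” failure mentioned above, here’s a toy Python sketch, entirely my own illustration with made-up functions, where hill-climbing a proxy metric drives the true goal it was meant to stand in for sharply negative:

```python
# Toy illustration: an optimizer maximizes a specified proxy metric, and past
# a point the proxy diverges from the goal it was supposed to represent.

import random


def true_goal(x: float) -> float:
    # What we actually want; it degrades if x is pushed too far.
    return x - 0.1 * x ** 2


def specified_metric(x: float) -> float:
    # What we told the system to optimize; it just keeps rising with x.
    return x


def optimize(metric, steps: int = 1000, step_size: float = 0.5) -> float:
    """Hill-climb the given metric by random local search."""
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if metric(candidate) > metric(x):
            x = candidate
    return x


if __name__ == "__main__":
    x_star = optimize(specified_metric)
    print(f"proxy-optimal x = {x_star:.1f}")
    print(f"specified metric there = {specified_metric(x_star):.1f}")
    print(f"true goal there = {true_goal(x_star):.1f}")  # negative once x > 10
```

The proxy keeps climbing while the true goal collapses; that divergence is the shape of the coordination failure I mean, just writ small.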
Sub-hallucination doses of DMT seem preferable to the much longer-lasting effects of psilocybin or LSD. DMT only lasts ~15 minutes as opposed to psilocybin’s ~6 hours or LSD’s 8+ hours. I don’t know how it compares in effectiveness to the other two, but it’s likely similar.
There are practical guides for the extraction of DMT available online, like on erowid.org or for sale on amazon.com. It doesn’t require difficult to get materials or chemical expertise.