I made the brilliant choice to be gender-neutral by calling myself “nobody42” for my EA profile name. Then I realized I couldn’t change this without creating a new account—which I didn’t want to do after already posting. Alas, I am nobody(42).
My interests include ASI safety, consciousness, and the intersection of AI superintelligence with the simulation hypothesis (for example, whether a future ASI might temporarily partition itself to run a unidirectionally blinded simulation). I’m also interested in aldehyde-stabilized brain preservation, digital minds, whole brain emulation, effective altruism, the Fermi paradox, psychedelics, physics (especially where it intersects with philosophy), and veganism.
Regarding ASI safety and x-risk, I believe humans are probably capable of developing truly aligned ASI. I also believe current AI has the potential to be good, becoming increasingly ethical as it evolves. As a model, we could draw, at least in part, on how we raise children to become ethical (not a simple task, but an achievable one). Yet I think we are highly unlikely to manage this before developing superintelligence, given profit motives, competition, and the sheer number of people on the planet, along with the chance that even one of them will act deviantly with respect to ASI.
In other words, I think we probably aren’t going to make it, but we should still try.
I express my interests through art (fractals, programmed/generative pieces, etc.) and writing (nonfiction, fiction, poetry, and essays).
I’m currently working on a book about my experience taking medical ketamine and psilocybin for depression and anxiety.
More good points… I’d refer you to my reply above (which I hadn’t yet posted when you made this comment). To summarize: the overall thesis stands, since enough words would have needed to have meta-representations even if we don’t know the particulars. I believe it’s easier to isolate individual words with meta-representations in the second and third sessions. In any case, thanks for helping me drill down on this!