
nobody42

Karma: 8

I made the brilliant choice to be gender-neutral by calling myself “nobody42” for my EA profile name. Then I realized I couldn’t change this without creating a new account—which I didn’t want to do after already posting. Alas, I am nobody(42).

My interests include ASI safety, consciousness, and the intersection of AI superintelligence and the simulation hypothesis (such as whether a future ASI might temporarily partition itself for a unidirectionally blinded simulation). I’m also interested in aldehyde-stabilized brain preservation, digital minds, whole brain emulation, effective altruism, the Fermi paradox, psychedelics, physics (especially where it intersects with philosophy), and veganism.

Regarding ASI safety and x-risk, I believe that humans are probably capable of developing truly aligned ASI. I also believe current AI has the potential to be good (increasingly ethical as it evolves). As a model, we could at least partly draw on the way we raise children to become ethical (not a simple task, but achievable). Yet I think we are highly unlikely to do this before developing superintelligence, because of profit motives, competition, and the sheer number of people on our planet, along with the chance that even one of them will be deviant with respect to ASI.

In other words, I think we probably aren’t going to make it, but we should still try.

I express my interests through art (fractals, programmed art, etc.) and writing (nonfiction, fiction, poetry, essays).

I’m currently working on a book about my experience taking medical ketamine and psilocybin for depression and anxiety.

ChatGPT4 Appears to Attain Periods of Consciousness

nobody42 · 20 Mar 2024 12:18 UTC
10 points
10 comments · 15 min read · EA link

AI Existential Risk from AI’s Perspective (30-40%)

nobody42 · 20 Mar 2024 12:18 UTC
0 points
1 comment · 2 min read · EA link