
Neurotechnology


Neurotechnology is any tool that directly, exogenously observes or manipulates the state of biological nervous systems, especially the human brain.

Familiar examples include electrode-based brain-computer interfaces (BCIs), antidepressant drugs, and magnetic resonance imaging (MRI). It has been argued that BCIs specifically could make robust totalitarianism significantly more likely, and that such devices should for that reason be regarded as an existential risk factor.[1]

Further reading

Farahany, Nita (2023) Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers, The 80,000 Hours Podcast.

Hill, N. J. & J. R. Wolpaw (2016) Brain–computer interface, in Reference Module in Biomedical Sciences, Amsterdam: Elsevier.

Hannas, William et al. (2020) China AI-brain research: brain-inspired AI, connectomics, brain-computer interfaces, Center for Security and Emerging Technology.

LessWrong (2021) Brain-computer interfaces, LessWrong Wiki.

Tullis, Paul (2020) The brain-computer interface is coming, and we are so not ready for it, Bulletin of the Atomic Scientists, September 15.

  1. Rafferty, Jack (2020) A new x-risk factor: brain-computer interfaces, Effective Altruism Forum, August 10.

[Cause Exploration Prizes] Jhana meditation

Milan_Griffes · 12 Aug 2022 5:26 UTC
73 points
6 comments · 11 min read · EA link

A New X-Risk Factor: Brain-Computer Interfaces

Jack · 10 Aug 2020 10:24 UTC
76 points
12 comments · 42 min read · EA link

Brain-computer interfaces and brain organoids in AI alignment?

freedomandutility · 15 Apr 2023 22:28 UTC
8 points
2 comments · 1 min read · EA link

[Question] Should people get neuroscience phD to work in AI safety field?

jackchang110 · 7 Mar 2023 16:21 UTC
9 points
11 comments · 1 min read · EA link

Biological superintelligence: a solution to AI safety

Yarrow · 4 Dec 2023 13:09 UTC
0 points
6 comments · 1 min read · EA link

Cause Area: Differential Neurotechnology Development

mwcvitkovic · 10 Aug 2022 2:39 UTC
95 points
7 comments · 36 min read · EA link

Neurotechnology and social environment influence: An introduction and a question.

jwowink · 11 Oct 2023 13:10 UTC
−1 points
0 comments · 2 min read · EA link

Restricting brain organoid research to slow down AGI

freedomandutility · 9 Nov 2022 13:01 UTC
8 points
2 comments · 1 min read · EA link

Study attitudes of BCI users towards BCIs (18+)(GDPR-regulated countries)(BCI user)

bci_research_upb · 13 May 2024 9:21 UTC
1 point
0 comments · 1 min read · EA link
(umfragen.uni-paderborn.de)

#174 – Neurotechnology already being used to convict criminals and manipulate workers (Nita Farahany on the 80,000 Hours Podcast)

80000_Hours · 12 Dec 2023 14:07 UTC
26 points
0 comments · 16 min read · EA link

“Intro to brain-like-AGI safety” series—just finished!

Steven Byrnes · 17 May 2022 15:35 UTC
15 points
0 comments · 1 min read · EA link

[Question] Why not to solve alignment by making superintelligent humans?

Pato · 16 Oct 2022 21:26 UTC
9 points
12 comments · 1 min read · EA link

Connectomics seems great from an AI x-risk perspective

Steven Byrnes · 30 Apr 2023 14:38 UTC
10 points
0 comments · 1 min read · EA link

Assessment of AI safety agendas: think about the downside risk

Roman Leventov · 19 Dec 2023 9:02 UTC
6 points
0 comments · 1 min read · EA link