Near-term AI ethics

Last edit: 22 Jul 2022 21:00 UTC by Leo

Near-term AI ethics is the branch of AI ethics that studies the moral questions arising from issues in AI that society is already facing or will likely face very soon. Examples include concerns about data privacy, algorithmic bias, self-driving cars, and autonomous weapons. Long-term AI ethics, by contrast, is the branch of AI ethics that studies the moral questions arising from issues expected to arise when AI is much more advanced than it is today. Examples include the implications of artificial general intelligence or transformative artificial intelligence.[1][2]

Further reading

Prunkl, Carina & Jess Whittlestone (2020) Beyond near- and long-term: towards a clearer account of research priorities in AI ethics and society, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 138–143.

Related entries

AI alignment | AI governance | ethics of artificial intelligence

1. Prunkl, Carina & Jess Whittlestone (2020) Beyond near- and long-term: towards a clearer account of research priorities in AI ethics and society, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 138–143.

2. Brundage, Miles (2017) Guide to working in AI policy and strategy, 80,000 Hours, June 7.

Is The YouTube Algorithm Radicalizing You? It’s Complicated.
Eevee🔹 · 1 Mar 2021 21:50 UTC
44 points · 3 comments · 1 min read · EA link
(www.youtube.com)

Aligning Recommender Systems as Cause Area
IvanVendrov · 8 May 2019 8:56 UTC
150 points · 48 comments · 13 min read · EA link

AI ethics: the case for including animals (my first published paper, Peter Singer’s first on AI)
Fai · 12 Jul 2022 4:14 UTC
82 points · 5 comments · 1 min read · EA link
(link.springer.com)

[Question] Are social media algorithms an existential risk?
Barry Grimes · 15 Sep 2020 8:52 UTC
24 points · 13 comments · 1 min read · EA link

[Question] Is there evidence that recommender systems are changing users’ preferences?
zdgroff · 12 Apr 2021 19:11 UTC
60 points · 15 comments · 1 min read · EA link

The Troubling Ethics of Writing (A Speech from Ancient Sumer)
Cullen 🔸 · 15 Feb 2021 20:17 UTC
43 points · 1 comment · 1 min read · EA link

Pile of Law and Law-Following AI
Cullen 🔸 · 13 Jul 2022 0:29 UTC
28 points · 2 comments · 3 min read · EA link

We are fighting a shared battle (a call for a different approach to AI Strategy)
GideonF · 16 Mar 2023 14:37 UTC
59 points · 10 comments · 15 min read · EA link

[Question] How could Twitter be tweaked to promote more rational conversations, now that Elon is on the board?
Jackson Wagner · 6 Apr 2022 16:34 UTC
8 points · 23 comments · 1 min read · EA link

[Question] What are the most pressing issues in short-term AI policy?
Eevee🔹 · 14 Jan 2020 22:05 UTC
9 points · 0 comments · 1 min read · EA link

A new proposal for regulating AI in the EU
EdoArad · 26 Apr 2021 17:25 UTC
37 points · 3 comments · 1 min read · EA link
(www.bbc.com)

Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021)
MichaelA🔸 · 1 Jun 2021 8:19 UTC
51 points · 3 comments · 4 min read · EA link
(www.nscai.gov)

Existential AI Safety is NOT separate from near-term applications
stecas · 13 Dec 2022 14:47 UTC
28 points · 9 comments · 3 min read · EA link

‘Surveillance Capitalism’ & AI Governance: Slippery Business Models, Securitisation, and Self-Regulation
Charlie Harrison · 29 Feb 2024 15:47 UTC
19 points · 2 comments · 12 min read · EA link

Public-facing Censorship Is Safety Theater, Causing Reputational Damage
Yitz · 23 Sep 2022 5:08 UTC
49 points · 7 comments · 5 min read · EA link

How we could stumble into AI catastrophe
Holden Karnofsky · 16 Jan 2023 14:52 UTC
83 points · 0 comments · 31 min read · EA link
(www.cold-takes.com)

Singapore AI Policy Career Guide
Yi-Yang · 21 Jan 2021 3:05 UTC
28 points · 0 comments · 5 min read · EA link

My preliminary research on the Adtech marketplace
Venkatesh · 30 Mar 2021 4:42 UTC
2 points · 3 comments · 7 min read · EA link

Some AI research areas and their relevance to existential safety
Andrew Critch · 15 Dec 2020 12:15 UTC
12 points · 1 comment · 56 min read · EA link
(alignmentforum.org)

There are two factions working to prevent AI dangers. Here’s why they’re deeply divided.
Sharmake · 10 Aug 2022 19:52 UTC
10 points · 0 comments · 4 min read · EA link
(www.vox.com)

From Coding to Legislation: An Analysis of Bias in the Use of AI for Recruitment and Existing Regulatory Frameworks
Priscilla Campos · 16 Sep 2024 18:21 UTC
4 points · 1 comment · 20 min read · EA link

Artificial Intelligence, Conscious Machines, and Animals: Broadening AI Ethics
Group Organizer · 21 Sep 2023 20:58 UTC
4 points · 0 comments · 1 min read · EA link

My cover story in Jacobin on AI capitalism and the x-risk debates
Garrison · 12 Feb 2024 23:34 UTC
154 points · 10 comments · 6 min read · EA link
(jacobin.com)

Attention on AI X-Risk Likely Hasn’t Distracted from Current Harms from AI
Erich_Grunewald 🔸 · 21 Dec 2023 17:24 UTC
194 points · 13 comments · 17 min read · EA link
(www.erichgrunewald.com)

Human Misalignment
Richard Y Chappell🔸 · 1 Oct 2025 14:01 UTC
15 points · 0 comments · 2 min read · EA link
(www.goodthoughts.blog)

AI Defaults: A Neglected Lever for Animal Welfare?
andiehansen · 30 May 2025 9:59 UTC
13 points · 0 comments · 10 min read · EA link

80k hrs #88 - Response to criticism
mark_ledwich · 11 Dec 2020 8:53 UTC
71 points · 18 comments · 4 min read · EA link

2019 AI Alignment Literature Review and Charity Comparison
Larks · 19 Dec 2019 2:58 UTC
147 points · 28 comments · 62 min read · EA link

Preparing Despite Uncertainty: The Grand Challenges of AI Progress
Andrew Knott · 7 Nov 2025 10:42 UTC
7 points · 0 comments · 7 min read · EA link

[Question] Donating against Short Term AI risks
Jan-Willem · 16 Nov 2020 12:23 UTC
6 points · 10 comments · 1 min read · EA link

Introducing the AI Objectives Institute’s Research: Differential Paths toward Safe and Beneficial AI
cmck · 5 May 2023 20:26 UTC
43 points · 1 comment · 8 min read · EA link

Evaluating LLMs for Suicide Risk Detection: Can AI Catch a Cry for Help?
Nanda · 14 Oct 2025 19:13 UTC
3 points · 1 comment · 17 min read · EA link

“This might be the first large-scale application of AI technology to geopolitics.. 4o, o3 high, Gemini 2.5 pro, Claude 3.7, Grok all give the same answer to the question on how to impose tariffs easily.”
Matrice Jacobine🔸🏳️‍⚧️ · 3 Apr 2025 10:50 UTC
3 points · 0 comments · 1 min read · EA link
(x.com)

Desensitizing Deepfakes
Phib · 29 Mar 2023 1:20 UTC
22 points · 11 comments · 1 min read · EA link

Hawley: AI Threatens the Working Man
Remmelt · 8 Sep 2025 3:59 UTC
17 points · 1 comment · 10 min read · EA link
(www.dailysignal.com)

Google’s ethics is alarming
len.hoang.lnh · 25 Feb 2021 5:57 UTC
6 points · 5 comments · 1 min read · EA link

Emotion Alignment as AI Safety: Introducing Emotion Firewall 1.0
DongHun Lee · 12 May 2025 18:05 UTC
1 point · 0 comments · 2 min read · EA link

[Question] Looking for collaborators after last 80k podcast with Tristan Harris
Jan-Willem · 7 Dec 2020 22:23 UTC
19 points · 7 comments · 2 min read · EA link

[Question] Would people on this site be interested in hearing about efforts to make an “ethics calculator” for an AGI?
Sean Sweeney · 5 Mar 2024 9:28 UTC
1 point · 0 comments · 1 min read · EA link

AI, Animals, & Digital Minds 2025: apply to speak by Wednesday!
Alistair Stewart · 5 May 2025 0:45 UTC
8 points · 0 comments · 1 min read · EA link

Mapping the Landscape of Digital Sentience Research
Kayode Adekoya · 19 Jun 2025 13:45 UTC
5 points · 0 comments · 3 min read · EA link

Published: NYU CMEP and NYU CEAP 2024 Annual Reports
Sofia_Fogel · 1 Feb 2025 15:28 UTC
16 points · 0 comments · 1 min read · EA link

The impacts of AI on animal advocacy
Animal Charity Evaluators · 11 Sep 2025 14:21 UTC
62 points · 2 comments · 10 min read · EA link

No ghost in the machine
finm · 10 Dec 2025 18:34 UTC
15 points · 1 comment · 45 min read · EA link
(finmoorhouse.com)

Should YouTube make recommendations for the climate?
Matrice Jacobine🔸🏳️‍⚧️ · 5 Sep 2024 15:22 UTC
1 point · 0 comments · 1 min read · EA link
(link.springer.com)

Agentic Alignment: Navigating between Harm and Illegitimacy
LennardZ · 26 Nov 2024 21:27 UTC
2 points · 1 comment · 9 min read · EA link

Short-Term AI Alignment as a Priority Cause
len.hoang.lnh · 11 Feb 2020 16:22 UTC
17 points · 11 comments · 7 min read · EA link

[Question] How strong is the evidence of unaligned AI systems causing harm?
Eevee🔹 · 21 Jul 2020 4:08 UTC
31 points · 1 comment · 1 min read · EA link

The current AI strategic landscape: one bear’s perspective
Matrice Jacobine🔸🏳️‍⚧️ · 15 Feb 2025 9:49 UTC
6 points · 0 comments · 2 min read · EA link
(philosophybear.substack.com)

Questions about AI ethics, environmental impact, and personal responsibility
satelliteprocess · 3 Nov 2025 7:19 UTC
2 points · 0 comments · 2 min read · EA link

Solving The Human Alignment Problem (The Launch of EA Social Media Application)
maximizealtruism · 18 Oct 2022 4:08 UTC
5 points · 1 comment · 15 min read · EA link

AI and Animal Welfare: A Policy Case Study from Aotearoa New Zealand
Karen Singleton · 20 Oct 2025 20:57 UTC
45 points · 5 comments · 6 min read · EA link

Surveillance and free expression | Sunyshore
Eevee🔹 · 23 Feb 2021 2:14 UTC
10 points · 0 comments · 9 min read · EA link
(sunyshore.substack.com)

Seeking Feedback: An Initiative on AI, Mental Health, and Alignment
Gina Hafez · 30 Sep 2025 16:14 UTC
16 points · 6 comments · 6 min read · EA link

Aligning AI with Humans by Leveraging Legal Informatics
johnjnay · 18 Sep 2022 7:43 UTC
20 points · 11 comments · 3 min read · EA link

Bias In, Bias Out: Artificial Intelligence Reflects Real Discrimination
Nanda · 16 Oct 2025 23:36 UTC
10 points · 0 comments · 7 min read · EA link