Near-term AI ethics

Near-term AI ethics is the branch of AI ethics that studies the moral questions arising from issues in AI that society is already facing or will likely face very soon. Examples include concerns about data privacy, algorithmic bias, self-driving cars, and autonomous weapons. Long-term AI ethics, by contrast, is the branch of AI ethics that studies the moral questions expected to arise when AI is much more advanced than it is today, such as the implications of artificial general intelligence or transformative artificial intelligence.[1][2]

Further reading

Prunkl, Carina & Jess Whittlestone (2020) Beyond near- and long-term: towards a clearer account of research priorities in AI ethics and society, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 138–143.

Related entries

AI alignment | AI governance | ethics of artificial intelligence

1. Prunkl, Carina & Jess Whittlestone (2020) Beyond near- and long-term: towards a clearer account of research priorities in AI ethics and society, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 138–143.

2. Brundage, Miles (2017) Guide to working in AI policy and strategy, 80,000 Hours, June 7.

Is The YouTube Algorithm Radicalizing You? It’s Complicated.

Eevee🔹 · Mar 1, 2021, 9:50 PM
44 points
3 comments · 1 min read · EA link
(www.youtube.com)

AI ethics: the case for including animals (my first published paper, Peter Singer’s first on AI)

Fai · Jul 12, 2022, 4:14 AM
78 points
5 comments · 1 min read · EA link
(link.springer.com)

Aligning Recommender Systems as Cause Area

IvanVendrov · May 8, 2019, 8:56 AM
150 points
48 comments · 13 min read · EA link

[Question] Are social media algorithms an existential risk?

Barry Grimes · Sep 15, 2020, 8:52 AM
24 points
13 comments · 1 min read · EA link

[Question] Is there evidence that recommender systems are changing users’ preferences?

zdgroff · Apr 12, 2021, 7:11 PM
60 points
15 comments · 1 min read · EA link

The Troubling Ethics of Writing (A Speech from Ancient Sumer)

Cullen 🔸 · Feb 15, 2021, 8:17 PM
43 points
1 comment · 1 min read · EA link

Pile of Law and Law-Following AI

Cullen 🔸 · Jul 13, 2022, 12:29 AM
28 points
2 comments · 3 min read · EA link

We are fighting a shared battle (a call for a different approach to AI Strategy)

Gideon Futerman · Mar 16, 2023, 2:37 PM
59 points
11 comments · 15 min read · EA link

All Tech is Human <-> EA

tae 🔸 · Dec 3, 2023, 9:01 PM
29 points
0 comments · 2 min read · EA link

Artificial Intelligence, Conscious Machines, and Animals: Broadening AI Ethics

Group Organizer · Sep 21, 2023, 8:58 PM
4 points
0 comments · 1 min read · EA link

There are two factions working to prevent AI dangers. Here’s why they’re deeply divided.

Sharmake · Aug 10, 2022, 7:52 PM
10 points
0 comments · 4 min read · EA link
(www.vox.com)

Public-facing Censorship Is Safety Theater, Causing Reputational Damage

Yitz · Sep 23, 2022, 5:08 AM
49 points
7 comments · 1 min read · EA link

Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021)

MichaelA🔸 · Jun 1, 2021, 8:19 AM
51 points
3 comments · 4 min read · EA link
(www.nscai.gov)

Some AI research areas and their relevance to existential safety

Andrew Critch · Dec 15, 2020, 12:15 PM
12 points
1 comment · 56 min read · EA link
(alignmentforum.org)

My preliminary research on the Adtech marketplace

Venkatesh · Mar 30, 2021, 4:42 AM
2 points
3 comments · 7 min read · EA link

Singapore AI Policy Career Guide

Yi-Yang · Jan 21, 2021, 3:05 AM
28 points
0 comments · 5 min read · EA link

Existential AI Safety is NOT separate from near-term applications

stecas · Dec 13, 2022, 2:47 PM
28 points
9 comments · 1 min read · EA link

A new proposal for regulating AI in the EU

EdoArad · Apr 26, 2021, 5:25 PM
37 points
3 comments · 1 min read · EA link
(www.bbc.com)

[Question] What are the most pressing issues in short-term AI policy?

Eevee🔹 · Jan 14, 2020, 10:05 PM
9 points
0 comments · 1 min read · EA link

[Question] How could Twitter be tweaked to promote more rational conversations, now that Elon is on the board?

Jackson Wagner · Apr 6, 2022, 4:34 PM
8 points
23 comments · 1 min read · EA link

How we could stumble into AI catastrophe

Holden Karnofsky · Jan 16, 2023, 2:52 PM
83 points
0 comments · 31 min read · EA link
(www.cold-takes.com)

My cover story in Jacobin on AI capitalism and the x-risk debates

Garrison · Feb 12, 2024, 11:34 PM
154 points
10 comments · 6 min read · EA link
(jacobin.com)

‘Surveillance Capitalism’ & AI Governance: Slippery Business Models, Securitisation, and Self-Regulation

Charlie Harrison · Feb 29, 2024, 3:47 PM
19 points
2 comments · 12 min read · EA link

Attention on AI X-Risk Likely Hasn’t Distracted from Current Harms from AI

Erich_Grunewald 🔸 · Dec 21, 2023, 5:24 PM
190 points
13 comments · 1 min read · EA link
(www.erichgrunewald.com)

From Coding to Legislation: An Analysis of Bias in the Use of AI for Recruitment and Existing Regulatory Frameworks

Priscilla Campos · Sep 16, 2024, 6:21 PM
4 points
1 comment · 20 min read · EA link

Solving The Human Alignment Problem (The Launch of EA Social Media Application)

maximizealtruism · Oct 18, 2022, 4:08 AM
5 points
1 comment · 15 min read · EA link

[Question] Would people on this site be interested in hearing about efforts to make an “ethics calculator” for an AGI?

Sean Sweeney · Mar 5, 2024, 9:28 AM
1 point
0 comments · 1 min read · EA link

Should YouTube make recommendations for the climate?

Matrice Jacobine · Sep 5, 2024, 3:22 PM
−1 points
0 comments · 1 min read · EA link
(link.springer.com)

Published: NYU CMEP and NYU CEAP 2024 Annual Reports

Sofia_Fogel · Feb 1, 2025, 3:28 PM
16 points
0 comments · 1 min read · EA link

Desensitizing Deepfakes

Phib · Mar 29, 2023, 1:20 AM
22 points
10 comments · 1 min read · EA link

Agentic Alignment: Navigating between Harm and Illegitimacy

LennardZ · Nov 26, 2024, 9:27 PM
2 points
1 comment · 9 min read · EA link

Introducing the AI Objectives Institute’s Research: Differential Paths toward Safe and Beneficial AI

cmck · May 5, 2023, 8:26 PM
43 points
1 comment · 8 min read · EA link

80k hrs #88 - Response to criticism

mark_ledwich · Dec 11, 2020, 8:53 AM
71 points
21 comments · 4 min read · EA link

2019 AI Alignment Literature Review and Charity Comparison

Larks · Dec 19, 2019, 2:58 AM
147 points
28 comments · 62 min read · EA link

Short-Term AI Alignment as a Priority Cause

len.hoang.lnh · Feb 11, 2020, 4:22 PM
17 points
11 comments · 7 min read · EA link

Google’s ethics is alarming

len.hoang.lnh · Feb 25, 2021, 5:57 AM
6 points
5 comments · 1 min read · EA link

Surveillance and free expression | Sunyshore

Eevee🔹 · Feb 23, 2021, 2:14 AM
10 points
0 comments · 9 min read · EA link
(sunyshore.substack.com)

Aligning AI with Humans by Leveraging Legal Informatics

johnjnay · Sep 18, 2022, 7:43 AM
20 points
11 comments · 3 min read · EA link

[Question] Looking for collaborators after last 80k podcast with Tristan Harris

Jan-Willem · Dec 7, 2020, 10:23 PM
19 points
7 comments · 2 min read · EA link

The current AI strategic landscape: one bear’s perspective

Matrice Jacobine · Feb 15, 2025, 9:49 AM
6 points
0 comments · 2 min read · EA link
(philosophybear.substack.com)

[Question] How strong is the evidence of unaligned AI systems causing harm?

Eevee🔹 · Jul 21, 2020, 4:08 AM
31 points
1 comment · 1 min read · EA link

[Question] Donating against Short Term AI risks

Jan-Willem · Nov 16, 2020, 12:23 PM
6 points
10 comments · 1 min read · EA link