Artificial intelligence

Artificial intelligence (AI) is the exhibition by machines of the intellectual capacities characteristic of human beings, as well as the field of research aimed at creating machines with these capacities.

Terminology

The literature on AI risk features several commonly used expressions that refer to various types or forms of artificial intelligence. These notions are not always used consistently.

As noted, artificial intelligence is the exhibition by machines of the intellectual capacities characteristic of human beings. But some authors use the term imprecisely, to refer to human-level AI or even to strong AI (a term which is itself very imprecise).[1]

Human-level artificial intelligence (HLAI) is AI that is at least as intelligent as the average or typical human. In one sense, human-level AI requires that the AI exhibits human-level ability in each of the capacities that constitute human intelligence. In another, weaker, sense, the requirement is that these capacities, assessed in the aggregate, are at least equivalent to the aggregate of human capacities. An AI that is weaker than humans on some dimensions, but stronger than humans on others, may count as human-level in this weaker sense. (However, it is unclear how these different capacities should be traded off against one another or what would ground these tradeoffs.[2])
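
To illustrate the weaker, aggregate sense, here is a minimal sketch. The capacity names, scores, and weights are hypothetical, chosen only to show how a system can fall short of humans on some dimensions yet count as human-level in aggregate, and how the verdict hinges on an ungrounded choice of weights.

```python
# Toy comparison of an AI capacity profile against a human baseline, purely
# for illustration: the capacities, scores, and weights below are made up.

human_baseline = {"arithmetic": 1.0, "vision": 1.0, "language": 1.0, "planning": 1.0}
ai_profile = {"arithmetic": 5.0, "vision": 0.6, "language": 1.2, "planning": 0.7}
weights = {"arithmetic": 0.25, "vision": 0.25, "language": 0.25, "planning": 0.25}

def aggregate(profile, weights):
    """Weighted sum of capacity scores."""
    return sum(weights[c] * profile[c] for c in profile)

# Stronger sense: human-level ability on every constituent capacity.
strong_sense = all(ai_profile[c] >= human_baseline[c] for c in human_baseline)

# Weaker sense: human-level in aggregate, under some choice of weights.
weak_sense = aggregate(ai_profile, weights) >= aggregate(human_baseline, weights)

print(strong_sense)  # False: below the human baseline on vision and planning
print(weak_sense)    # True under these (arbitrary) weights: 1.875 >= 1.0
```

Which weights to use, and whether a weighted sum is even the right way to aggregate, is exactly the tradeoff question flagged as unresolved above.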

Artificial general intelligence (AGI) is AI that not only exhibits high ability in a wide range of specific domains but can also generalize across those domains and display other skills that are broad rather than narrow in scope.[3] “Artificial general intelligence” is sometimes also used as a synonym for “human-level artificial intelligence”.[4][5]

High-level machine intelligence (HLMI) is AI that can carry out most human professions at least as well as a typical human. Vincent Müller and Nick Bostrom coined the expression to overcome the perceived deficiencies of existing terminology.[6][7]

Finally, “strong artificial intelligence” (strong AI) is a multiply ambiguous expression that can mean “artificial general intelligence”, “human-level artificial intelligence”, or “superintelligence”, among other things.[8]

Discussion

Today, AI systems are better than even the smartest people at some intellectual tasks, such as chess, but much worse at others, such as writing academic papers. If AI systems eventually become as good as or better than humans at many of these remaining tasks, their impact will likely be transformative. Furthermore, in the extreme case that AI systems become more capable than humans at all intellectual tasks, this would arguably be the most significant development in human history.

Possible impacts of progress in AI include accelerated scientific progress, large-scale unemployment, novel forms of warfare, and risks from unintended behavior in AI systems.

Further reading

Christiano, Paul (2014) Three impacts of machine intelligence, Rational Altruist, August 23.

Related entries

AI governance | AI alignment | AI safety | human-level artificial intelligence

1. An example of the latter is Muehlhauser, Luke (2013) When will AI be created?, Machine Intelligence Research Institute, May 16.

2. AI Impacts (2014) Human-level AI, AI Impacts, January 23.

3. See Pennachin, Cassio & Ben Goertzel (2007) Contemporary approaches to artificial general intelligence, in Ben Goertzel & Cassio Pennachin (eds.) Artificial General Intelligence, Berlin: Springer, pp. 1–30. The expression was popularized, but not coined, by Cassio Pennachin and Ben Goertzel. See Goertzel, Ben (2011) Who coined the term “AGI”?, Ben Goertzel’s Blog, August 28.

4. Muehlhauser, Luke (2013) What is intelligence?, Machine Intelligence Research Institute, June 19.

5. Baum, Seth D., Ben Goertzel & Ted G. Goertzel (2011) How long until human-level AI? Results from an expert assessment, Technological Forecasting and Social Change, vol. 78, pp. 185–195.

6. Müller, Vincent C. & Nick Bostrom (2016) Future progress in artificial intelligence: a survey of expert opinion, in Vincent C. Müller (ed.) Fundamental Issues of Artificial Intelligence, Cham: Springer International Publishing, pp. 555–572.

7. In their influential survey of machine learning researchers, Katja Grace and her collaborators define “high-level machine intelligence” as follows: “we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers.” (Grace, Katja et al. (2017) When will AI exceed human performance? Evidence from AI experts, arXiv:1705.08807.) The expression is operationalized further in Muehlhauser, Luke (2015) What do we know about AI timelines?, Open Philanthropy, October (updated July 2016), section 1.

8. Wikipedia (2021) Strong AI, Wikipedia, October 18.
