
Harrison Durland

Karma: 1,884

Research + Reality Graphing to Support AI Policy (and more): Summary of a Frozen Project

Harrison Durland · 2 Jul 2022 20:58 UTC
34 points
2 comments · 8 min read · EA link

[Question] How might a herd of interns help with AI or biosecurity research tasks/questions?

Harrison Durland · 20 Mar 2022 22:49 UTC
30 points
8 comments · 2 min read · EA link

From Laboratories to Language Models: Can AI Support Rigor in the Jungle of Policy Analysis? (Linkpost)

Harrison Durland · 6 Feb 2024 18:51 UTC
22 points
0 comments · 2 min read · EA link
(georgetownsecuritystudiesreview.org)

[Question] A dataset for AI/superintelligence stories and other media?

Harrison Durland · 29 Mar 2022 21:41 UTC
20 points
2 comments · 1 min read · EA link

[Question] Would people like to see “curation comments” on posts with high numbers of comments?

Harrison Durland · 17 Apr 2022 4:40 UTC
18 points
5 comments · 1 min read · EA link

Disentangling Some Important Forecasting Concepts/Terms

Harrison Durland · 25 Jun 2023 17:31 UTC
16 points
2 comments · 10 min read · EA link

The COILS Framework for Decision Analysis: A Shortened Intro+Pitch

Harrison Durland · 7 May 2022 19:01 UTC
16 points
6 comments · 3 min read · EA link

[Question] Why is “Argument Mapping” Not More Common in EA/Rationality (And What Objections Should I Address in a Post on the Topic?)

Harrison Durland · 23 Dec 2022 21:55 UTC
15 points
5 comments · 1 min read · EA link

[Question] Would Structured Discussion Platforms for EA Community Building Ideas be Valuable? (With Prototype Example)

Harrison Durland · 4 May 2022 18:49 UTC
15 points
9 comments · 3 min read · EA link

[Outdated] Introducing the Stock Issues Framework: The INT Framework’s Cousin and an “Advanced” Cost-Benefit Analysis Framework

Harrison Durland · 3 Oct 2020 7:18 UTC
14 points
0 comments · 8 min read · EA link

[Question] “Epistemic maps” for AI Debates? (or for other issues)

Harrison Durland · 30 Aug 2021 4:59 UTC
14 points
8 comments · 5 min read · EA link

Forecasting With LLMs—An Open and Promising Research Direction

Harrison Durland · 12 Mar 2024 4:23 UTC
13 points
0 comments · 4 min read · EA link

[Question] How/When Should One Introduce AI Risk Arguments to People Unfamiliar With the Idea?

Harrison Durland · 9 Aug 2022 2:57 UTC
12 points
4 comments · 1 min read · EA link

The TUILS/COILS Framework for Improving Pro-Con Analysis

Harrison Durland · 8 Apr 2021 1:37 UTC
11 points
1 comment · 14 min read · EA link

Handling Moral Uncertainty with Average vs. Total Utilitarianism: One Method That Apparently *Doesn’t* Work (But Seemed Like it Should)

Harrison Durland · 5 Jan 2023 22:18 UTC
10 points
0 comments · 8 min read · EA link

[Question] Platform for Project Spitballing? (e.g., for AI field building)

Harrison Durland · 3 Apr 2023 15:45 UTC
7 points
2 comments · 1 min read · EA link

How CISA can Support the Security of Large AI Models Against Theft [Grad School Assignment]

Harrison Durland · 3 May 2023 15:36 UTC
7 points
0 comments · 13 min read · EA link

r/place is coming back on April 1st: EA pixel art, anyone?

Harrison Durland · 31 Mar 2022 20:50 UTC
5 points
2 comments · 1 min read · EA link

[Question] EA Topic Suggestions for Research Mapping?

Harrison Durland · 5 Mar 2022 16:46 UTC
5 points
2 comments · 1 min read · EA link

Please, someone make a dataset of supposed cases of “tech panic”

Harrison Durland · 7 Nov 2023 2:49 UTC
4 points
2 comments · 2 min read · EA link