
Redwood Research


Redwood Research is a nonprofit organization focused on applied AI alignment research.

Its current project started in early August 2021.[1]

Funding

As of July 2022, Redwood Research has received over $9.4 million in funding from Open Philanthropy[2] and nearly $1.3 million from the Survival and Flourishing Fund.[3]

External links

Redwood Research. Official website.

Apply for a job.

  1. Shlegeris, Buck (2021) Redwood Research’s current project, AI Alignment Forum, September 21.
  2. Open Philanthropy (2022) Grants database: Redwood Research, Open Philanthropy.
  3. Survival and Flourishing Fund (2021) SFF-2022-H1 S-Process recommendations announcement, Survival and Flourishing Fund.

Critiques of prominent AI safety labs: Redwood Research
Omega, Mar 31, 2023, 8:58 AM
339 points · 91 comments · 20 min read · EA link

A freshman year during the AI midgame: my approach to the next year
Buck, Apr 14, 2023, 12:38 AM
179 points · 30 comments · 7 min read · EA link

Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2]
Buck, May 6, 2022, 12:19 AM
111 points · 7 comments · 6 min read · EA link

Redwood Research is hiring for several roles
Jack R, Nov 29, 2021, 12:18 AM
75 points · 0 comments · 1 min read · EA link

We’re Redwood Research, we do applied alignment research, AMA
Buck, Oct 5, 2021, 5:04 AM
107 points · 49 comments · 2 min read · EA link

2021 AI Alignment Literature Review and Charity Comparison
Larks, Dec 23, 2021, 2:06 PM
176 points · 18 comments · 73 min read · EA link

Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22]
Habryka [Deactivated], Nov 3, 2021, 6:20 PM
140 points · 6 comments · 1 min read · EA link

Takes on “Alignment Faking in Large Language Models”
Joe_Carlsmith, Dec 18, 2024, 6:22 PM
65 points · 1 comment · 1 min read · EA link

Redwood Research is hiring for several roles (Operations and Technical)
JJXWang, Apr 14, 2022, 3:23 PM
45 points · 0 comments · 1 min read · EA link

Alignment Faking in Large Language Models
Ryan Greenblatt, Dec 18, 2024, 5:19 PM
142 points · 9 comments · 1 min read · EA link

Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley
Max Nadeau, Oct 27, 2022, 1:39 AM
95 points · 5 comments · 12 min read · EA link