Babel

Karma: 128

Looking for collaboration on progress alignment, i.e. the emulation of human moral progress in frontier AI systems, to prevent premature value lock-in from AI, a neglected existential risk. See our research agenda and recent publication.

[Question] Animal Advocacy vs Animal Welfare: How should EA frame its animal-focused work?

Babel · 17 Feb 2023 8:56 UTC
10 points
3 comments · 1 min read · EA link

Babel’s Quick takes

Babel · 2 Dec 2021 2:34 UTC
2 points
10 comments · 1 min read · EA link

[Question] Artificial Suffering and Pascal’s Mugging: What to think?

Babel · 4 Oct 2021 15:01 UTC
15 points
4 comments · 2 min read · EA link

[Question] Why doesn’t EA Fund support Paypal?

Babel · 26 Sep 2020 8:00 UTC
7 points
5 comments · 1 min read · EA link

Brian Tomasik – The Importance of Wild-Animal Suffering

Babel · 8 Jul 2009 12:42 UTC
12 points
0 comments · 1 min read · EA link
(longtermrisk.org)