yanni kyriacos

Karma: 698

Developing AGI without a worldwide referendum is ethically unjustifiable. A global moratorium is necessary until consensus is reached.

Apply to be a Safety Engineer at Lockheed Martin!

yanni kyriacos · 31 Mar 2024 21:01 UTC
31 points
5 comments · 1 min read · EA link

Be skeptical of EAs giving advice on things they’ve never actually been successful in themselves

yanni kyriacos · 25 Mar 2024 0:01 UTC
18 points
10 comments · 1 min read · EA link

Ambitious Impact launches a for-profit accelerator instead of building the AI Safety space. Let’s talk about this.

yanni kyriacos · 18 Mar 2024 3:44 UTC
−5 points
13 comments · 1 min read · EA link

Why I worry about EA leadership, explained through two completely made-up LinkedIn profiles

yanni kyriacos · 16 Mar 2024 9:41 UTC
69 points
25 comments · 1 min read · EA link

[Question] Do people who ask for anonymous feedback via admonymous.co actually get good feedback?

yanni kyriacos · 14 Mar 2024 0:04 UTC
21 points
11 comments · 1 min read · EA link

I have some questions for the people at 80,000 Hours

yanni kyriacos · 14 Feb 2024 23:07 UTC
23 points
15 comments · 1 min read · EA link

[Question] Do Anglo EAs care less about the opinions of Non-Anglo EAs? Is this a problem?

yanni kyriacos · 23 Aug 2023 23:58 UTC
−3 points
7 comments · 1 min read · EA link

[Question] Am I taking crazy pills? Why aren’t EAs advocating for a pause on AI capabilities?

yanni kyriacos · 15 Aug 2023 23:29 UTC
24 points
21 comments · 1 min read · EA link

[Question] Go fishing to save fish from a worse death?

yanni kyriacos · 15 Aug 2023 4:11 UTC
11 points
3 comments · 1 min read · EA link

[Question] How useful is it to use a slavery/factory farming comparison as a decision-making heuristic?

yanni kyriacos · 11 Aug 2023 22:38 UTC
11 points
4 comments · 1 min read · EA link

[Question] Why doesn’t EA take enlightenment or awakening seriously?

yanni kyriacos · 9 Aug 2023 23:45 UTC
13 points
6 comments · 1 min read · EA link

Yanni Kyriacos’s Quick takes

yanni kyriacos · 13 Jun 2023 0:23 UTC
2 points
76 comments · 1 min read · EA link

Could Intentional Living (as a cause area) be an EA Blind Spot?

yanni kyriacos · 11 Jun 2023 1:38 UTC
4 points
2 comments · 1 min read · EA link

Idea: The Anonymous EA Pollster

yanni kyriacos · 25 May 2023 4:58 UTC
5 points
0 comments · 1 min read · EA link

An Update On The Campaign For AI Safety Dot Org

yanni kyriacos · 5 May 2023 0:19 UTC
26 points
4 comments · 1 min read · EA link

[Question] Who is testing AI Safety public outreach messaging?

yanni kyriacos · 15 Apr 2023 0:53 UTC
20 points
2 comments · 1 min read · EA link

[Question] Imagine AGI killed us all in three years. What would have been our biggest mistakes?

yanni kyriacos · 7 Apr 2023 0:06 UTC
17 points
6 comments · 1 min read · EA link

If You Have A Short Attention Span But Need To Read A Lot This Might Be What You’re Looking For

yanni kyriacos · 4 Apr 2023 6:26 UTC
9 points
3 comments · 1 min read · EA link

If EAs won’t go vegan, what chance do animals have?

yanni kyriacos · 12 Mar 2023 22:24 UTC
14 points
14 comments · 2 min read · EA link