yanni kyriacos

Karma: 1,457

Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (n.b. we already have AGI).

It is time to start war gaming for AGI

yanni kyriacos · 17 Oct 2024 5:14 UTC
14 points
4 comments · 1 min read · EA link

2/3 Aussie & NZ AI Safety folk often or sometimes feel lonely or disconnected (and 16 other barriers to impact)

yanni kyriacos · 1 Aug 2024 1:14 UTC
19 points
11 comments · 8 min read · EA link

[Question] How have analogous Industries solved Interested > Trained > Employed bottlenecks?

yanni kyriacos · 30 May 2024 23:59 UTC
6 points
0 comments · 1 min read · EA link

If you’re an AI Safety movement builder, consider asking your members these questions in an interview

yanni kyriacos · 27 May 2024 5:46 UTC
10 points
0 comments · 2 min read · EA link

Please help me find research on aspiring AI Safety folk!

yanni kyriacos · 20 May 2024 22:06 UTC
7 points
0 comments · 1 min read · EA link

More than 50% of EAs probably believe Enlightenment is real. This is a big deal, right?

yanni kyriacos · 1 May 2024 3:19 UTC
22 points
17 comments · 1 min read · EA link

Apply to be a Safety Engineer at Lockheed Martin!

yanni kyriacos · 31 Mar 2024 21:01 UTC
31 points
5 comments · 1 min read · EA link

Be skeptical of EAs giving advice on things they’ve never actually been successful in themselves

yanni kyriacos · 25 Mar 2024 0:01 UTC
22 points
10 comments · 1 min read · EA link

Ambitious Impact launches a for-profit accelerator instead of building the AI Safety space. Let’s talk about this.

yanni kyriacos · 18 Mar 2024 3:44 UTC
−7 points
13 comments · 1 min read · EA link

Why I worry about EA leadership, explained through two completely made-up LinkedIn profiles

yanni kyriacos · 16 Mar 2024 9:41 UTC
74 points
25 comments · 1 min read · EA link

[Question] Do people who ask for anonymous feedback via admonymous.co actually get good feedback?

yanni kyriacos · 14 Mar 2024 0:04 UTC
23 points
13 comments · 1 min read · EA link

I have some questions for the people at 80,000 Hours

yanni kyriacos · 14 Feb 2024 23:07 UTC
25 points
17 comments · 1 min read · EA link

[Question] Do Anglo EAs care less about the opinions of Non-Anglo EAs? Is this a problem?

yanni kyriacos · 23 Aug 2023 23:58 UTC
−3 points
7 comments · 1 min read · EA link

[Question] Am I taking crazy pills? Why aren’t EAs advocating for a pause on AI capabilities?

yanni kyriacos · 15 Aug 2023 23:29 UTC
18 points
21 comments · 1 min read · EA link

[Question] Go fishing to save fish from a worse death?

yanni kyriacos · 15 Aug 2023 4:11 UTC
11 points
3 comments · 1 min read · EA link

[Question] How useful is it to use a slavery/factory farming comparison as a decision-making heuristic?

yanni kyriacos · 11 Aug 2023 22:38 UTC
11 points
4 comments · 1 min read · EA link

[Question] Why doesn’t EA take enlightenment or awakening seriously?

yanni kyriacos · 9 Aug 2023 23:45 UTC
13 points
6 comments · 1 min read · EA link

Yanni Kyriacos’s Quick takes

yanni kyriacos · 13 Jun 2023 0:23 UTC
2 points
246 comments · 1 min read · EA link

Could Intentional Living (as a cause area) be an EA Blind Spot?

yanni kyriacos · 11 Jun 2023 1:38 UTC
4 points
2 comments · 1 min read · EA link

Idea: The Anonymous EA Pollster

yanni kyriacos · 25 May 2023 4:58 UTC
5 points
0 comments · 1 min read · EA link