Ok, I should have been clearer from the beginning—what struck me was that the first example was essentially answering the question of how to do great harm with minimal spending—a really wicked “evil EA”, I would say. I found it somewhat ironic.
EM, Effective Malevolence
Did you intend to refer to page 83 rather than 82?
I see it’s indeed page 83 in the document on arXiv; it was 82 in the PDF on the OpenAI website.