Volunteer at EA Finland / professional data scientist / confused about AI safety / interested in communications
Ada-Maaria Hyvärinen
[Creative Writing Contest] [Fiction] The Fey Deal
Don’t be sorry! Feedback on language and grammar is very useful to me, since I usually write in Finnish. (This is probably the first time since middle school that I’ve written a piece of fiction in English.)
Apparently the punctuation slightly depends on whether you are using British or American English and whether the work is fiction or non-fiction (https://en.wikipedia.org/wiki/Quotation_marks_in_English#Order_of_punctuation). Since this is fiction, you are in any case totally right about the commas going inside the quotes, and I will edit accordingly. Thanks for pointing this out!
I tried out a couple of different ones and iterated based on feedback.
One ending I considered would have been just leaving out the last paragraph and linking to GiveWell like this:
“Besides,” his best friend said. “If you actually want to save a life for 5000 dollars, you can do it in a way where you can verify how they are doing it and what they need your money for.”
“What do you mean?” he asked, now more confused than ever.
I also considered embedding the link explicitly in the story like this:
“Besides,” his best friend said. “If you actually want to save a life for 5000 dollars, you can do it in a way where you can verify how they are doing it and what they need your money for.”
“What do you mean?” he asked, now more confused than ever.
“I’ll send you a link,” she said.
And the link she sent him was this: https://www.givewell.org/
However, some of my testers found that this also broke the flow and that moving the link “outside” the story gave a less advertisement-like feeling.
And I also tried an ending that would wrap up the story more nicely (at this point the whole story was around 40% longer and not that well-edited in general):
“You know there are organizations that save people from dying of preventable illness and poverty,” she said. “The best ones can actually save a life for around that much, maybe even less.”
“But how do I know what those organizations are and how much they actually need to save somebody from dying?” he asked. “That sounds even more complicated than coming up with questions about side effects to the fey.”
“You don’t have to do that all by yourself,” she said. “There are people who are working on this stuff. You can see if you agree with their reasoning and conclusions, and then make your own decisions.”
This could be something. For a split second, he wished she hadn’t told him that. If what she said was true, he would have to make a choice, again. But it was better to know than to not know. And suddenly the thought of actually being able to save a person who would otherwise die was so overwhelming he had a hard time wrapping his head around it.
“I guess I don’t really like making decisions,” he said.
“I feel you,” she said. “But if you don’t make a choice, that’s actually a decision, too. It just means you chose to do nothing.”
“Yeah,” he said. “I’ve noticed.”
The fey were still in the woods, and would be in the woods, maybe forever. It didn’t matter. Anyway, he had to choose. But at least he could find out what his options were.
This longer ending was most liked by readers who were already quite familiar with EA, so I decided not to go for it, since I wanted to write for people who have not thought and talked about EA that much yet. But of course, my pool of proof-readers was not that big, and everyone was at least somewhat familiar with EA, even if not involved in the movement. It would be interesting to get feedback from total newbies.
This one is nice as well!
Personally I like the method of embedding the link in the story, but since many in my test audience considered it off-putting and too advertisement-like, I thought it better to trust their feedback, since I obviously already agree with the thought I’m trying to convey with my text. But like I said, I’m not certain what the best solution is; probably there is no perfect one.
Glad to hear you like it! :)
[Creative Writing Contest] The Gifts
Unsurprising things about the EA movement that surprised me
With EA career stories I think it is important to keep in mind that new members might not read them the same way as more engaged EAs who already know which organizations are considered cool and effective within EA. When I started attending local EA meetups I met a person who worked at OpenPhil (maybe as a contractor? I can’t remember the details), but I did not find it particularly impressive because I did not know what Open Philanthropy was and assumed the “phil” stood for “philosophy”.
How I failed to form views on AI safety
Thanks Aayush! Edited the sentence so it is hopefully clearer now :)
Thanks for the nice comment! Yes, I am quite uncomfortable with uncertainty and trying to work on that. Also, I feel like by now I am pretty involved in EA and ultimately feel welcome enough to be able to post a story like this here (or I feel like EA appreciates different views enough, despite me also feeling this pressure to conform at the same time).
Interesting to hear your personal opinion on the persuasiveness of Superintelligence and Human Compatible! And thanks for designing the AGISF course, it was useful.
I think you understood me in the same way as my friend did in the second part of the prologue, so I apparently give this impression. But to clarify, I am not certain that AI safety is impossible (I think it is hard, though), and the implications of that depend a lot on how much power AI systems will be given in the end, and on what part of the damage they might cause would be due to them being unsafe versus, for example, misuse, like you said.
To clarify, my friends (even if they are very smart) did not come up with all the AI safety arguments by themselves, but started to engage with AI safety material because they had already been looking at the world and thinking “hmm, looks like AI is a big thing and could influence a lot of stuff in the future, hope it changes things for the good”. So they quickly got on board after hearing that there are people seriously working on the topic, and it made them want to read more.
Glad it may have sparked some ideas for any discussions you might be having in Israel :) For us in Finland, I feel like I at least personally need to get some more clarity on how to balance EA movement building efforts with possible cause prioritization related differences between movement builders. I think this is non-trivial because forming a consensus seems hard enough.
Curious to read any object-level response if you feel like writing one! If I end up writing any “Intro to AI Safety” thing it will be in Finnish, so I’m not sure if you will understand it (it would be nice to have at least one coherent Finnish text about it that is written not by an astronomer or a paleontologist but by some technical person).
Generally, I find links a lot less frustrating if they are written by the person who sends me the link :) But now I have read the link you gave and don’t know what I am supposed to do next, which is another reason I sometimes find link-sharing a difficult means of communication. Like, do I comment on specific parts of your post, or describe how reading it influenced me, or how does the conversation continue? (If you find my reaction interesting: I was mostly unmoved by the post. I think I had seen most of the numbers and examples before, and there were some sentences and extrapolations that were quite off-putting for me, but I think the “minimalistic” style was nice.)
It would be nice to call and discuss if you are interested.
Thanks for giving me permission, I guess I can use this if I ever need the opinion of “the EA community” ;)
However, I don’t think I’m ready to give up on trying to figure out my stance on AI risk just yet, since I still estimate it is my best shot at forming a more detailed understanding of any x-risk, and understanding x-risks better would be useful for establishing better opinions on other cause prioritization issues.
That’s right, thanks again for answering my question back then!
Maybe I formulated my question wrong, but I understood from your answer that you got interested in AI safety first, and only then in DS/ML (you mentioned you had had a CS background before, but not your academic AI experience). This is why I did not include you in this sample of 3 persons – I wanted to narrow the search to people who had a more AI-specific background before getting into AI safety (not just CS). It is true that you did not mention Superintelligence either, but it is interesting to hear you also had a good opinion of it! If I had known about both your academic AI experience and that you liked Superintelligence, I could have made the number 4 (unless you think Superintelligence did not really influence you, in which case it would be 3 out of 4).
You were the only person who answered my PM but stated they got into AI safety before getting into DS/ML. One person did not answer, and the other 3 who answered stated they got into DS/ML before AI safety. I guess there are more than 6 people with some DS/ML background on the course channel, but I also know not everyone introduced themselves, so the sample is very anecdotal anyway.
I also used the Slack to ask for recommendations of blog posts or similar stories on how people with DS/ML backgrounds got into AI safety. Aside from recommendations on whom to talk to on the Slack, I got pointers to Stuart Russell’s interview on Sam Harris’ podcast and a Yudkowsky post.
I’m still quite uncertain about my beliefs, but I don’t think you got them quite right. Maybe a better summary is that I am generally pessimistic about humans ever being able to create AGI, and especially about humans being able to create safe AGI (it is a special case, so it should probably be harder than creating just any AGI). I also think that relying a lot on strong unsafe systems (AI-powered or not) can be an x-risk. This is why it is easier for me to understand why AI governance is a way to try to reduce x-risk (at least if actors in the world want to rely on unsafe systems; I don’t know how much this happens, but I would not find it very surprising).
I wish I had a better understanding of how x-risk probabilities are estimated (as I said, I will try to look into that), but I don’t directly understand why x-risk from AI would be a lot more probable than, say, biorisk (which I don’t understand in detail at all).
Thanks for the feedback! Deciding how to end the story was definitely the hardest part of writing this. Pulling the reader out of the fantasy was a deliberate choice, but that does not mean it was necessarily the best one – I did some A/B testing on my proofreading audience, but I have to admit my sample size was not that big. Glad you liked it in general anyway :)