Volunteer at EA Finland / professional data scientist / confused about AI safety / interested in communications
Ada-Maaria Hyvärinen
We personally also recommend engaging with the writings of Eliezer, Paul, Nate, and John. We do not endorse all of their research, but they all have tackled the problem, and made a fair share of their reasoning public. If we want to get better together, they seem like a good start.
I realize this is a cross-post and your original audience might know where to find all these recommendations even without further info, but if you want new people to look into their writings, it would be better to at least use the full names of the authors you recommend.
Like Elliot, while I think the FLI team has handled the whole thing just fine, I also find it confusing that people think the far-right connections of Nya Dagbladet would have been difficult to identify. I didn’t know anything about Nya Dagbladet in advance, so I checked it:
The complete English Wikipedia article on Nya Dagbladet:
”Nya Dagbladet is a Swedish online daily newspaper founded in 2012,[1] which has a historical connection to the National Democrats, a far-right political party in Sweden. It publishes articles promoting conspiracy theories about the Holocaust, COVID-19 vaccines, climate change, mobile phone towers, and others. Other common themes include immigration, GMOs, Israel, the EU,[2] and pro-Kremlin propaganda regarding the Russian invasion of Ukraine.[3][4] Markus Andersson is its editor-in-chief.”
The Swedish summary/beginning of the Wikipedia article on Nya Dagbladet (in my translation):
”Nya Dagbladet is a Swedish web-based daily newspaper founded in 2012.[2] The paper is nationalist, science-skeptic, and politically unaffiliated, with a historical connection to Nationaldemokraterna. It describes itself as humanist and ethnopluralist, with an anti-globalist stance.[3] It frequently references pseudoscience and anti-vaccination views.”
I tried to check what the newspaper’s tone regarding Jews is, and I found this letter from the editor rather strange. (If my Swedish does not fail me, it claims that Holocaust Memorial Day is “real antisemitism” because many horrors of the Holocaust supposedly didn’t actually happen.)
Also, Per Shapiro wrote a commentary titled “Den extrema högern” (“The extreme right”) in 2021 about people’s negative reactions to his previous article, saying that people on social media accused him of writing in a far-right paper, while (according to Shapiro) the biggest Swedish newspaper is actually a lot more far-right (because its editor-in-chief supports American war crimes and the Israeli occupation). What I understand from this (again with my limited Swedish and Google Translate) is that Shapiro strongly rejects the far-right label but is well aware that many people perceive writing in NyD as far-right associated. So I wonder what the recent revelations of extremism that shocked him were – maybe something happened that I cannot identify just by looking at the newspaper’s post history.
Thanks for giving me permission, I guess I can use this if I ever need the opinion of “the EA community” ;)
However, I don’t think I’m ready to give up on trying to figure out my stance on AI risk just yet, since I still estimate it is my best shot at forming a more detailed understanding of any x-risk, and understanding x-risks better would be useful for establishing better opinions on other cause prioritization issues.
Hi, just wanted to drop in to say:
You had an experience that you describe as burnout less than a week ago – it’s totally ok not to be fine yet! It’s good that you feel better, but take the time you need to recover properly.
I don’t know how old you are but it is also ok to feel overwhelmed by EA later when you no longer feel like describing yourself as “just a kid”. Doing your best to make the world a better place is hard for a person of any age.
The experience you had does not necessarily mean you are not cut out for community building. You’ve now learned more about your boundaries, and you might be able to recognize red flags earlier in the future.
Good luck and I hope you learn something valuable about yourself from the ADHD assessment!
Just to give you a data point from a non-native speaker who likes literature and languages: this quote wasn’t a joy to read for me, since it would have taken me a very long time to understand what it is about if I had not known the context. So I am not sure what you mean by the best linguistic traditions – I think simple language can be elegant too.
Yeah, in Finnish contexts a (nude) sauna is a normal option for the afterparty of a professional conference or a similar event :) but in these cases there is gender separation, with separate sauna turns (or separate saunas) for men and women, just like at Finnish public swimming pools. At EA Finland events we have so far followed a quite usual Finnish student and hobby group policy of having separate sauna turns for non-men and for non-women, plus a mixed turn where everyone is welcome, with the option but not the obligation to wear a swimsuit.
I’m still quite uncertain about my beliefs, but I don’t think you got them quite right. Maybe a better summary is that I am generally pessimistic about humans ever being able to create AGI, and especially about humans being able to create safe AGI (it is a special case, so it should probably be harder than creating just any AGI). I also think that relying a lot on strong unsafe systems (AI-powered or not) can be an x-risk. This is why it is easier for me to understand why AI governance is a way to try to reduce x-risk (at least if actors in the world want to rely on unsafe systems – I don’t know how much this happens, but I would not find it very surprising).
I wish I had a better understanding of how x-risk probabilities are estimated (as I said, I will try to look into that), but I don’t directly see why x-risk from AI would be a lot more probable than, say, biorisk (which I don’t understand in detail at all).
Exactly, that’s the idea!
Thanks for the nice comment! Yes, I am quite uncomfortable with uncertainty and trying to work on that. Also, I feel like by now I am pretty involved in EA and ultimately feel welcome enough to be able to post a story like this here (or I feel like EA appreciates different views enough, despite my also feeling this pressure to conform at the same time).
Don’t be sorry! Feedback on language and grammar is very useful to me, since I usually write in Finnish. (This is probably the first time since middle school that I’ve written a piece of fiction in English.)
Apparently the punctuation slightly depends on whether you are using British or American English and whether the work is fiction or non-fiction (https://en.wikipedia.org/wiki/Quotation_marks_in_English#Order_of_punctuation). Since this is fiction, you are in any case totally right about the commas going inside the quotes, and I will edit accordingly. Thanks for pointing this out!
Good observation, I didn’t notice that! It certainly makes it harder for non-Swedish speakers to realize, for example, that they should check whether there is a connection to Nationaldemokraterna, if there is no English page pointing in that direction.
As for Swedish speakers: if the letter of intent had been signed because of nepotism, the vaccination skepticism part probably would not have come as a surprise, since it seems to be a recurring theme in Per Shapiro’s NyD contributions (again, if my Swedish does not fail me). That seems to me like evidence that nepotism did not influence the decision.
I feel like everyone I have ever talked about AI safety with would agree on the importance of thinking critically and staying skeptical, and this includes my facilitator and cohort members from the AGISF programme.
I think a 1.5-hour discussion session between 5 people who have read 5 texts does not really allow going deep into any topic, since it is just ~3 minutes per participant per text on average. I think these kinds of programs are great for meeting new people, clearing up misconceptions, and providing structure/accountability for actually reading the material, but by nature they are not that good for having in-depth debates. I think that’s ok – this is just to clarify why I consider it normal that I probably did not mention most of the things I described in this post during the discussion sessions.
But there is an additional reason that is more important to me, which is the difference between performing skepticism and actually voicing true opinions. It is not possible for my facilitator to notice which one I am doing, because they don’t know me, and performing skepticism (in order to conform to the perceived standard of “you have to think about all of this critically and on your own, and you will probably arrive at similar conclusions to others in this field”) looks the same as actually raising the confusions you have. This is why I thought I could convey this failure mode to others by comparing it to inner misalignment :)
When I was a math freshman, my professor told us he always encourages people to ask questions during lectures. Often, it had happened that he’d explained a concept and nobody would ask anything. He’d check what the students understood, and it would turn out they had not grasped the concept. When he asked why nobody had asked anything, the students would say that they did not understand enough to ask a good question. To avoid this dynamic, he told us that “I did not understand anything” counts as a valid question in his lectures. It helped somewhat, but at least I still often stayed silent instead of raising my hand and saying “I did not understand anything”.
I feel like the same dynamic can easily happen when discussing AI safety (or any difficult EA concept, really). If people are encouraged to raise questions and concerns they might only raise the “good” ones, and stay silent if they feel like they just did not understand the concepts well enough (like I did in my avoidance strategy 1).
Like I said, it is based on my gut feeling, but I am fairly sure.
Is it your experience that adding more complexity and concatenating different ML models results in better quality and generality, and if so, in what domains? I would have the opposite intuition, especially in NLP.
Also, do you happen to know why “prosaic” practices are called “prosaic”? I have never understood the connection to the dictionary definition of “prosaic”.
Generally, I find links a lot less frustrating if they were written by the person who sends me the link :) But now I have read the link you gave and don’t know what I am supposed to do next, which is another reason I sometimes find link-sharing a difficult means of communication. Like, do I comment on specific parts of your post, or describe how reading it influenced me, or how does the conversation continue? (If you find my reaction interesting: I was mostly unmoved by the post; I think I had seen most of the numbers and examples before, and there were some sentences and extrapolations that I found quite off-putting, but I think the “minimalistic” style was nice.)
It would be nice to call and discuss if you are interested.
I obviously don’t have access to Mikkola’s full interview transcripts, but when I think back to EA Helsinki in 2021, it is possible that none of us who were interviewed told her we’d do anything like that, and that we only listed serious-sounding things such as donating and career planning as our EA actions :) This, again, shows the limitations of inspecting a whole movement with a limited interview study like this.
Onni works for Rethink Priorities and is on the board of EA Finland, but he no longer actively participates in hands-on community building, such as organizing events. My impression is that he is relieved that other people are doing it now :)
Good that you asked, since one thing I wanted to highlight with this story was that it is possible to succeed at community building even if it is not your favorite thing or the best personal fit for you among all abstract possibilities – if you are the only person able to put in the effort at a specific time, you are the best person to do it. (And later you can hand it over to others when you discover another opportunity that makes more use of your personal strengths.)
I agree. Weirdly this is not addressed in the thesis.
Interesting, I’ve never heard this before!
I’m really glad this post was useful to you :)
Thinking about this quote now, I think I should have written down more explicitly that it is possible to care a lot about having a positive impact without making it the definition of your self-worth; and that it is good to have positive impact as your goal, and normal to be sad about not reaching your goals as you’d like to, but this sadness does not have to come with a feeling of worthlessness. I am still learning how to actually separate these on an emotional level.
I generally agree with your comment, but I want to point out that for a person who does not feel like their achievements are “objectively” exceptionally impressive, Luisa’s article can also come across as intimidating: “if a person who achieved all of this still thinks they are not good enough, then what about me?”
I think Olivia’s post is especially valuable because she dared to post even though she does not have a list of achievements that would immediately convince readers that her insecurity/worry is all in her head. It is very relatable to a lot of folks (for example me), and I think she has been really brave to speak up about this!