Volunteer at EA Finland / professional data scientist / confused about AI safety / interested in communications
Ada-Maaria Hyvärinen
Unsurprising things about the EA movement that surprised me
How I failed to form views on AI safety
Cultural EA considerations for Nordic folks
Things I didn’t feel that guilty about before getting involved in effective altruism
Local EA groups: consider becoming more than a satellite group
Counterproductive EA mental health advice (and what to say instead)
Effective altruism as a lifestyle movement (A Master’s Thesis)
EA Finland: from a philosophy discussion club to a national organization
[Creative Writing Contest] [Fiction] The Fey Deal
Less often discussed EA emotional patterns
We personally also recommend engaging with the writings of Eliezer, Paul, Nate, and John. We do not endorse all of their research, but they all have tackled the problem, and made a fair share of their reasoning public. If we want to get better together, they seem like a good start.
I realize this is a cross-post and your original audience might know where to find all these recommendations even without further info, but if you want new people to look into these writings, it would be better to at least use the full names of the authors you recommend.
Like Elliot, while I think the FLI team has handled the whole thing just fine, I also find it confusing that people think the far-right connections of Nya Dagbladet would have been difficult to identify. I didn’t know anything about Nya Dagbladet in advance, so I checked it:
The complete English Wikipedia article on Nya Dagbladet:
”Nya Dagbladet is a Swedish online daily newspaper founded in 2012,[1] which has a historical connection to the National Democrats, a far-right political party in Sweden. It publishes articles promoting conspiracy theories about the Holocaust, COVID-19 vaccines, climate change, mobile phone towers, and others. Other common themes include immigration, GMOs, Israel, the EU,[2] and pro-Kremlin propaganda regarding the Russian invasion of Ukraine.[3][4] Markus Andersson is its editor-in-chief.”
The beginning of the Swedish Wikipedia article on Nya Dagbladet (translated from Swedish):
”Nya Dagbladet is a Swedish online daily newspaper founded in 2012.[2] The paper is nationalist, science-skeptical, and politically unaffiliated, with a historical connection to the National Democrats. It describes itself as humanist and ethnopluralist with an anti-globalist stance.[3] It often references pseudoscience and vaccine opposition.”
I tried to check what the newspaper’s tone regarding Jews is, and I found this letter from the editor rather strange. (If my Swedish does not fail me, it claims that Holocaust memorial day is “real antisemitism”, as many horrors of the Holocaust supposedly didn’t actually happen.)
Also, Per Shapiro wrote a commentary titled “Den extrema högern” (“The extreme right”) in 2021 about people’s negative reactions to his previous article, saying that people on social media accused him of writing in a far-right paper, while (according to Shapiro) the biggest Swedish newspaper is actually a lot more far-right (because its editor-in-chief supports American war crimes and the Israeli occupation). What I understand from this (again with my limited Swedish and Google Translate) is that Shapiro strongly rejects the far right but is well aware that many people perceive writing in NyD as far-right-associated. So I wonder what the recent revelations of extremism that shocked him were – maybe something happened that I cannot identify just by looking at the newspaper’s post history.
How to set up a career advising system for your local EA group
Thanks for giving me permission, I guess I can use this if I ever need the opinion of “the EA community” ;)
However, I don’t think I’m ready to give up on trying to figure out my stance on AI risk just yet, since I still estimate it is my best shot at forming a more detailed understanding of any x-risk, and understanding x-risks better would be useful for establishing better opinions on other cause prioritization issues.
Hi, just wanted to drop in to say:
You had an experience that you describe as burnout less than a week ago – it’s totally OK not to be fine yet! It’s good that you feel better, but take the time you need to recover properly.
I don’t know how old you are but it is also ok to feel overwhelmed by EA later when you no longer feel like describing yourself as “just a kid”. Doing your best to make the world a better place is hard for a person of any age.
The experience you had does not necessarily mean you would not be cut out for community building. You’ve now learned more of your boundaries and you might be more able to recognize red flags earlier in the future.
Good luck and I hope you learn something valuable about yourself from the ADHD assessment!
Just to give you a data point from a non-native speaker who likes literature and languages: this quote wasn’t a joy to read for me, since it would have taken me a very long time to understand what it is about if I had not known the context. So I am not sure what you mean by the best linguistic traditions – I think simple language can be elegant too.
Yeah, in Finnish contexts a (nude) sauna is a normal option for the afterparty of a professional conference or a similar event :) but in these cases there is a gender separation of sauna turns (or different saunas) for men and women, just like at Finnish public swimming pools. At EA Finland events we have so far followed a quite usual Finnish student and hobby group policy of having separate sauna turns for non-men and for non-women, plus a mixed turn where everyone is welcome, with the option but not the obligation to wear a swimsuit.
I’m still quite uncertain about my beliefs, but I don’t think you got them quite right. Maybe a better summary is that I am generally pessimistic both about humans ever being able to create AGI and especially about humans being able to create safe AGI (it is a special case, so it should probably be harder than creating just any AGI). I also think that relying a lot on strong unsafe systems (AI-powered or not) can be an x-risk. This is why it is easier for me to understand why AI governance is a way to try to reduce x-risk (at least if actors in the world want to rely on unsafe systems – I don’t know how much this happens, but I would not find it very surprising).
I wish I had a better understanding of how x-risk probabilities are estimated (as I said, I will try to look into that), but I don’t directly see why x-risk from AI would be a lot more probable than, say, biorisk (which I don’t understand in detail at all).
Exactly, that’s the idea!
I generally agree with your comment, but I want to point out that for a person who does not feel like their achievements are “objectively” exceptionally impressive, Luisa’s article can also come across as intimidating: “if a person who achieved all of this still thinks they are not good enough, then what about me?”
I think Olivia’s post is especially valuable because she dared to post even though she does not have a list of achievements that would immediately convince readers that her insecurity/worry is all in her head. It is very relatable to a lot of folks (me, for example), and I think she has been really brave to speak up about this!