Undergraduate in Cognitive Science
Currently writing my thesis on genetic engineering attribution with deep learning under the supervision of Dr Oliver Crook at Oxford University
aaron_mai
[German Podcast] A series on the Ethics of Giving and GWWC
A new Heuristic to Update on the Credences of Others
Thanks a lot for the update! I feel excited about this project and grateful that it exists!
As someone who stayed at CEEALAR for ~6 months over the last year, I thought I’d share some reflections that might help people decide whether going to the EA Hotel is a good decision for them. I’m sure experiences vary a lot, so, general disclaimer, this is just my personal data point and not some broad impression of the typical experience.

Some of the best things that happened as a result of my stay:
I made at least three close friends I’m still in regular contact with, despite leaving the hotel half a year ago. That is a lot by my standards! I also expanded my “network” by at least 20 people from all parts of the world and various professional/academic backgrounds whom I’d be pretty happy to reach out to.
I greatly increased my productivity. During my stay, a friend and I came up with an accountability system that increased my productivity by >5 focused hours per week on average over the last year (compared to previous years) and made me generally healthier (e.g. I would estimate that I now exercise >1h more per week on average).
Relatedly, I managed to create four episodes of a German podcast about EA in my first two months there.
Harder to quantify: a lot of inspiration! I came to the hotel after a year of more or less covid-isolation, so I went from that to talking for hours every day with researchers and creators full of new ideas. I hope that spending so much time around people who were smarter and more knowledgeable than me led to a decent amount of intellectual growth. Somewhat relatedly, I wouldn’t be surprised if a lot of the value of my stay came from the many small suggestions other people casually made in conversation (e.g. of some concept, website, or other product).
I think this varied a lot over time as the group of people changed; especially as we became a smaller group over the summer, it became less of a stimulating environment.
Some downsides:
I’m not a fan of Blackpool. I remember it largely as grimy and dull, especially in winter. The proximity to the sea and the park is quite nice, though.
The hotel rooms were also less comfortable than what I’m used to from home, but this was fine for the most part since I mostly used my room to sleep.
On balance, I think I benefitted a lot from staying at the hotel and I’m very glad I made the decision to go! Thanks to Greg, Lumi, Denisa, Dave and everyone else who spent time with me while I was there <3
Please reach out if you have any questions you’d rather ask me in private.
[Question] On GiveWell’s estimates of the cost of saving a life
[Question] How valuable are external reviews?
Hey! I applied at the end of April and haven’t received any notification like this, nor a rejection, so I’m not sure what this means about the status of my application. I emailed twice over the past 4 months but haven’t received a reply :/
Heuristics for making theoretical progress (in philosophy) from Alan Hajek (ANU)
Most of the researchers at GPI are pretty sceptical of AI x-risk.
Not really responding to the comment (sorry), just noting that I’d really like to understand why these researchers at GPI and careful-thinking AI alignment people—like Paul Christiano—have such different risk estimates! Can someone facilitate and record a conversation?
Thanks, this seems useful! :) One suggestion: if there are similar estimates available for other causes, could you add at least one to the post as a comparison? I think this would make your numbers more easily interpretable.
I’d say that pursuing the project of effective altruism is worthwhile only if the opportunity cost of searching, C, is justified by the amount of additional good you do as a result of searching for better ways to do good rather than going by common sense, A. It seems to me that if C >= A, then pursuing the project of EA wouldn’t be worth it. If, however, C < A, then it would be worth it, right?
To be more concrete, let us say that the difference in value between the commonsense distribution of resources to do good and the ideal one is only 0.5%. Let us also assume it would cost you only a minute to find the ideal distribution, and that the value of spending that minute in your commonsense way is smaller than that 0.5% increase. Surely it would still be worth seeking the ideal distribution (i.e. engaging in the project of EA), right?
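To make the comparison explicit (my own notation, just to illustrate): writing $V$ for the total value of the commonsense allocation and $c$ for the value of spending that minute in the commonsense way, the search is worth it iff

$$0.005 \cdot V > c,$$

which holds here by stipulation, since we assumed $c$ is smaller than the 0.5% gain.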
Out of curiosity: how do you adjust for karma inflation?
Red team: is it actually rational to have imprecise credences in the possible long-run/indirect effects of our actions rather than precise ones?
Why: my understanding from Greaves (2016) and Mogensen (2020) is that this has been necessary to argue for the cluelessness worry.
Increasing our impact on climate change. Talk + Q&A with Violet Buxton-Walsh from Founders Pledge
This is so cool!
However, even if we showed that the repugnance of the repugnant conclusion is influenced in these ways, or even rendered unreliable, I doubt the same would be true for the “very repugnant conclusion”:
for any world A with billions of happy people living wonderful lives, there is a world Z+ containing both a vast number of mildly satisfied lizards and billions of suffering people, such that Z+ is better than A.
(Credit to Joe Carlsmith, who mentioned this on some podcast.)
Thanks for the post!
I’m particularly interested in the third objection you present—that the value of “lives barely worth living” may be underrated.
I wonder to what extent the intuition that world Z is bad compared to A is influenced by framing effects. For instance, if I think of “lives net positive but not by much”, or something similar, this seems much more valuable than “lives barely worth living”, although it means the same in population ethics (as I understand it).
I’m also sympathetic to the claim that one’s response to world Z may be affected by one’s perception of the goodness of ordinary (human) life. Perhaps Buddhists, who are convinced that ordinary life is pervaded with suffering, view any life that is net positive as remarkably good.
Do you know if there is any psychological literature on either of these two hypotheses? I’d be interested to research both.
I agree that it seems like a good idea to get somewhat familiar with that literature if we want to translate “longtermism” well.
I think I wouldn’t use “Langzeitethik” (“long-term ethics”), as this suggests, as you say, that longtermism is a field of research. In my mind, “longtermism” typically refers to a set of ethical views or a group of people/institutions. Probably people sometimes use the term to refer to a research field, but my impression is that this is rather rare. Is that correct? :)
Also, I think that a new term like “Befürworter der Langzeitverantwortung” (“advocates of long-term responsibility”), which is significantly longer than the established term, is unlikely to stick in either conversation or writing. “Longtermists” is faster to say and, at least in the beginning, easier to understand among EAs, so I think people will prefer it. This might matter for the translation: it could be confusing if the term used in new German EA literature is quite different from the one actually used by people in the German community.
[Question] Is there a way to save forum posts as PDFs?
This link works for me:
https://openai.com/form/preparedness-challenge
(Just without period at the end)
I find it remarkable how little the people who most express worries about advanced AI say about the concrete mechanisms by which it would destroy the world. Am I right in thinking that? And if so, is this mostly because they are worried about infohazards and therefore don’t share the concrete mechanisms they have in mind?
I personally find it pretty hard to imagine ways that AI would e.g. cause human extinction that feel remotely plausible (although I can well imagine that there are plausible pathways I haven’t thought of!).
Relatedly, I wonder if public communication about x-risks from AI should be more concrete about mechanisms? Otherwise it seems much harder for people to take these worries seriously.