Open Thread: October — December 2022
Welcome!
If you’re new to the EA Forum:
Consider using this thread to introduce yourself!
You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren’t EA-related at all.
(You can also put this info into your Forum bio.)
Everyone:
If you have something to share that doesn’t feel like a full post, add it here! (You can also create a Shortform post.)
You might also share good news, big or small. (See this post for ideas.)
You can also ask questions about anything that confuses you (and you can answer them, or discuss the answers).
For inspiration, you can see the last open thread here.
Hi all!
I’m Jason (Jay) Dykstra, a radiologist and adoptive dad in MI, US. I’ve been around EA for several years in various groups, but I haven’t engaged with the forum until now!
My wife and I have come to LOVE earning to give, donating 80% of our post-tax income to EA-friendly orgs and living off around $50K a year. The “expenditure limit” model (different limits for different folks, obviously!) is easier than expected once you build the mindset, and it’s SO wonderful to see the impact around the world, as many of you know. :)
I mentor lots of young adults in building a similar, sustainable mindset for their own journeys. With their help I founded blessbig.org, a free online hub like GiveWell that uses EA approaches to recommend high-impact orgs in more cause groups (e.g. elderly care, first responders, religious orgs) than typical EA sites. So fun to see folks learn, give, and join the world in celebrating the difference!
Thanks for every contribution YOU make to doing good better, and feel free to reach out anytime!
email.jay.dykstra@gmail.com
Hello Jason,
welcome to the forum! Nice to hear you are comfortable with earning to give as your cause area.
I have some questions regarding your donation website.
Where do you get your ratings for the projects? Sometimes you are unable to find data from charity evaluation sites, and then you typically list the project with:
Impact Evidence – Strong
Relative Need – Stronger
Financial Transparency – Strongest
Financial Efficiency – Strongest
The last two are usually given the best rating, even when you lack the external sources you normally use to justify ratings.
For example:
https://blessbig.org/evangelism-discipleship-missions/
Do you have further information on the difference between strong, stronger and strongest? What qualifies for each rating?
You mention that your team spent hundreds of hours on research, but you never say anything specific about the methods they used or the conclusions. Can you please provide more information on how you decide to list an organization and how you choose your ratings?
I have one more request: please link your sources.
I was curious about your claim that your work (Bless Big?) was mentioned in the Washington Post, but I was unable to find it via Google. Only after adding your name to the search did I find the article:
https://www.washingtonpost.com/magazine/2020/09/23/effective-altruism-charity/
Bless Big was not mentioned.
Thank you and have a great weekend. :)
Welcome to the Forum and thank you so much for your hard work donating so much!
Thank you for your encouragement and your work doing good as well!
This is very inspiring. I think you’re making an incredibly positive impact on the world, not just through charity but also by inspiring those around you. Brilliant!
Hello Jay, good to see you here! (from EA for Christians). I haven’t engaged with the forum until now either, but hope to start.
Hi everyone! I’m a DPhil philosophy student at Oxford University, where I’m writing a dissertation on longtermism (see my bio for some more info). I’ve been involved with EA for several years and just decided to (finally!) create a forum account. Looking forward to commenting on some of the great forum posts instead of just reading them, as I have so far :)
Hello Jakob,
welcome to the forum and greetings from Germany. *wave*
With kind regards
Felix
Hi all. I’ve been digging through EA ideas and content over the past two years and becoming increasingly involved over the past year. Attended EAGxVirtual, which was amazing, and am looking forward to attending my first in-person group event in Manchester, UK. I’ve been working for a homelessness charity for nearly 5 years now (having carried out a variety of roles) and am looking to make a move into a higher-impact role. I’ve got a few forum posts percolating so, hopefully, I’ll be posting some of these soon. Glad to be here in the community!
Hello Ander,
welcome to the Forum. Nice to see you here, and I hope the Manchester group is still flourishing.
Looking forward to your first post!
With kind regards
Felix
Hi Felix,
Thanks! I’m actually attending my first EA Manchester meetup next Monday. We’ll be discussing the 80k episode in which Rob interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project.
Thanks for the welcome!
Ander
Does anyone here know why the Center for Human-Compatible AI hasn’t published any research this year, even though it has been one of the most prolific AGI safety organizations in previous years?
https://humancompatible.ai/research
Hi everyone. My name is Dominion Alajemba, I am from Nigeria in Africa.
About a month ago, I stumbled upon a status update on EA for Christians made by Daniel Ajudeonu on WhatsApp.
My interest has been sparked, and I’m very happy to know that I can contribute my own little quota to solving the world’s pressing issues. Little drops make the ocean, they say.
I am a creative, and I’m currently studying Adult Non-formal Education/English and Literature at the University of Benin, Nigeria. I’m in my last semester.
As a creative, I use graphic design as a medium to express my opinions, thoughts, and convictions, and I bring my storytelling, content writing, photography, and video editing skills into this.
I know that with what I have, and what I’ll have, I can contribute immensely to this cause.
Narrowing it down, I have taken a particular interest in plastic pollution, environmental degradation, and deforestation (which are major causes of climate change), as well as food wastage, which falls under food security. I do hope I can give my very best in these areas.
Navigating the entirety of EA and EA for Christians can be quite challenging as there’s a ton of info to go through and consume, so I’d appreciate any help I can get.
Thank you.
Hello Dominion,
welcome to EA and the EA forum! :)
I have organized clean-ups, I study forestry, and I was into foodsharing (a movement of >400k people in Germany working against food waste), so I really like your cause areas. :D
I would really like to see some of your work. Can you share some here or send me a personal message?
If you need help with navigation, feel free to ask me and I will try to help out. I have not looked into EA for Christians before, but here is what I found, which could be helpful:
Website:
https://www.eaforchristians.org/
Events:
https://forum.effectivealtruism.org/groups/gjCkZ4dPHs7HzM3ar
With kind regards
Felix
Hi everyone! My name is Nolan and I’m a first year college student at the University of Chicago. I found out about EA through other students and just completed the intro fellowship. I’m particularly interested in animal welfare and AI safety. I’m excited to keep learning about EA and implement it into my life through my career choices and donations.
Welcome to the forum!
Hello! My name is Manuel del Río. I am a 42-year-old Spanish trilingual (Galician, English, and Spanish) TEFL teacher and satellite school head of studies, and I have been working for the last twelve years in the Official Language Schools of the Galician Autonomous Region (the northwestern corner of Spain). I have also worked in translation and cultural journalism.
I studied at the University of Santiago de Compostela, where I took degrees in History and English Philology, and a Master’s in Literary Theory, Comparative Literature and Cultural Studies.
I recently learned about Effective Altruism in the aftermath of the FTX bankruptcy (some marginal good can come out of the bad, I guess) and am interested in learning more about it and possibly getting engaged. I find its minimalistic program of helping others in a rationally optimized way quite appealing, and I have informed myself about some of the wonderful work the people in this community are doing. I do have some intellectual qualms about strict utilitarianism as an ethical compass, and a Confucian affinity for helping those near us (and therefore for making distinctions), but I don’t find these incompatible with a broader sweep. One can help the community AND the wider circle of humanity.
I look forward to getting to know more about EA and everyone in this community, and to getting involved.
Have a nice day!
Hi all! I connected with EA in Bonn, where I am currently a visiting researcher in the department of English, American, and Celtic Studies. I live more permanently in New Jersey, where I am a professor in the English department with a strong interest in environmental issues. I am interested in writing more for general audiences on sustainability and climate communication. Looking forward to getting to know more folks here.
Hi everyone! It seems quite plausible to me that EA cannot indefinitely prevent itself from becoming a politically charged topic once it becomes more prominent in public awareness. What are the current ideas about how to handle this?
It would be great to train a cadre of EAs in political matters, working in NGOs at different levels to accomplish EA goals. I’m kicking around the idea of what that would look like, e.g. an EA NGO that directly interacts with the United Nations, and how we would do training for said organization. That’s something that is conceptually doable with current resources, although not every EA wants to wade hip deep into bureaucracy. I do, and it’s my current job, but it is 100% definitely not for everyone.
Hi, new to the forum! Asking here as it seems too small a question for its own post:
I finished reading The Precipice this week, and found the list of current/future existential risks a bit short, especially compared to natural ones. For example, AFAIK space debris might make spaceflight impossible in the near future, most likely* making it impossible to become a multiplanetary species, robbing us of lots of resources, and sooner or later meaning a natural risk will wipe us out on Earth (which Ord classifies as an existential risk).
I know space debris has become a hot topic over the last few years, is there any work in an EA context on it? Similarly, have people tried to compile lists with all currently conceivable/knowable existential risks?
* I don’t know much about spaceflight, but it sounded impossible to build spacecraft that could withstand the level of debris that would build up once we reach a tipping point and a lot of satellites become debris.
Hi Jan, welcome to the Forum! I don’t think this answers your whole question, but if you haven’t seen it before, you might find this page useful: Existential risk in the EA Forum Wiki.
Welcome Jan! There has been work done on space risks by EAs, but I don’t know if the specific risk you mention has been studied. I’d check out the Centre for Space Governance, the Space Futures Initiative (especially their research agenda) and this 80k Hours problem profile.
Hey everyone. Tristan here from Tasmania. I first heard about EA from a Peter Singer lecture in Melbourne in 2014 on his book TLYCS. I have since completed a Master of Public Health. I’ve been working as a bushwalking guide for the last year but am currently looking for EA-related work, applying for jobs on the 80,000 Hours job board at organizations like GiveWell, GiveDirectly, and AMF.
Also looking to engage with people about EA and related topics in Tasmania, or online. Looking at potentially starting a university or other EA group in Tasmania.
Let me know if you have any advice re looking for EA work / starting an EA group.
I created an EA group for Mastodon: @EffectiveAltruism@chirp.social
The way groups work: If you follow the group and tag its handle in a Mastodon post, it will automatically boost the post so that all its followers see it.
If you have a Mastodon account, please follow the group and post some content to join the conversation!
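For anyone who prefers posting programmatically, here is a minimal sketch using the third-party Mastodon.py client (the access token and instance URL are placeholders):

```python
# Minimal sketch of posting to the group via the third-party Mastodon.py
# client. The access token and instance URL are placeholders.
from mastodon import Mastodon

client = Mastodon(
    access_token="YOUR_ACCESS_TOKEN",      # placeholder
    api_base_url="https://your.instance",  # your home instance, not chirp.social
)

# Tagging the group handle prompts the chirp.social group account to boost
# the post to everyone who follows the group.
client.status_post("Hello, fellow EAs! @EffectiveAltruism@chirp.social")
```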
Hi Everyone. Dominic (Dom) Leary.
I read Professor MacAskill’s article on a recent flight to Ireland and was touched by his rallying cry to drive charitable initiatives that can help society address the problems our governments, global institutions, and private sector cannot, given the lack of appropriate inter-governmental legal frameworks and the misaligned interests of capital markets.
I don’t know how I can contribute, but I do know that lone voices carry little influence, so I’m reaching out to connect. I am now based in Rome, Italy, after having spent the past 30+ years of my career travelling from the UK to Southeast Asia, Australasia, and the US, and back to Europe through the pre- and post-Brexit years.
I would welcome meeting other like-minded EA Forum members here in Rome.
Dom Leary
Hello Dom,
welcome to EA and welcome to the forum!
If you want to connect with the local EA scene in Rome, here are some links:
EA Italy website:
https://altruismoefficace.it/
Social media:
https://www.instagram.com/altruismoefficace/
https://www.facebook.com/altruismoefficace
Slack:
http://bit.ly/AltruismoEfficace_Slack
Rome local group:
https://forum.effectivealtruism.org/groups/tqbJiq97F36tqSKEB
“I don’t know how I can contribute”—read more about EA, join a fellowship, and meet up with the local EA group. Find out what your cause area could be and connect with others at an EAGx (after the intro fellowship).
You can join existing projects, create your own, or just do earning to give and be part of the community. It is very nice to see you here. For starters, maybe add a bio to your forum profile, so others can see what you are interested in. :)
Feel free to contact me if you have any questions about EA or want to discuss a certain topic.
With kind regards
Felix Wolf
Hi everybody! I’m Victoria, I’m currently based in Edinburgh and I heard about EA through LessWrong. I’ve been involved with the local EA group for almost a year now, and with rationalism for a few years longer than that. I’m only now getting around to being active on the forum here.
I was a medical student, but I’m taking a year out and seriously considering moving into either direct existential risk research/policy or something like operations/‘interpreting’ research. When I’ve had opportunities to do things like that I’ve really enjoyed it. I’ve also previously freelanced with Nonlinear and CEA for research and writing gigs.
Long-term I could see myself getting into AI, possibly something like helping build infrastructure for AI researchers to better communicate, or direct AI work (with my neuroscience degree).
See youse all around!
Hello Victoria,
welcome to the forum! Have fun participating here, maybe subscribe to the newsletter. It’s a good way to stay in touch. :)
Greetings.
Hi all! New to the forum, but I’ve been aware of and thinking about EA for a while, and would like to begin engaging a bit with the community. I’m a PhD student in physics at the University of Geneva working on quantum communication.
I’m particularly interested in chatting with anyone with ideas or knowledge about humanitarian use cases for quantum computing (might start a thread about this later, especially if anyone else is interested). My education re: quantum computing has tended to focus more on principles than applications, but when applications are mentioned they tend to fall into the broad categories of finance or pharmaceutical research. I’m hoping to find some different perspectives.
Cheers!
Hi! I initially posted this in the ‘hiring’ thread, but since that’s expired, I want to share it here as well.
I’m now working as a freelance writer and editor, mostly for people in the EA community. I can help you get out those Forum posts that have been languishing in your drafts, or articulate ideas that are bouncing around your head. If you’re interested, email me at ambace@gmail.com, or book a quick chat in my Calendly. Also feel free to message me here on the Forum. There are some writing samples here, here, and on my Forum profile.
Hi, everyone! I am an undergraduate student journalist. Personally, I am particularly interested in global poverty. I’m something of an EA sympathizer—I really like certain aspects, but not others. Right now I’m writing an opinion column about the movement and its ideas that will be partly laudatory and partly critical.
I’m looking to speak with people who are more involved in the movement than I am to get their perspective. If you want to do a brief discussion with me about your experience and thoughts about EA, I’d love to talk to you. Feel free to send me a message here or email me at sericson@mndaily.com.
Hi, I’m a management consultant who has focused on long-term strategic planning for about 20 years. In particular, I work with live teams to think 15-30 years into the future about what they want to realize. As they do so, they shift to being selfless, holistic, and kind, i.e., unlike the people they are in everyday office politics.
I share ideas about long-term strategic planning offsites/retreats in a newspaper column and a newsletter.
I’m also a Cornell/AT&T Bell Labs alum living in Kingston, Jamaica.
Hey Francis,
welcome to the EA Forum. :)
Hi everyone.
I’m Evan Harper, a community manager at Metaculus. It occurred to me that I spent way too much time writing a question about Top Gun: Maverick today that will probably just drop off the front page quickly, and I wanted to say hi to the EA Forum at some point anyway, so I’m taking the excuse to shamelessly promote. I can also more proudly promote our re-opened forecast on the Raphael Warnock / Herschel Walker Senate election in Georgia. Both of these are intended to help bring in new forecasters as part of the ongoing Beginner Tournament. And just in general, we’ve opened something like 40 new public questions in the last 20 days, so there’s no shortage of interesting topics in addition to the front page, which many of you will already have seen. For example, check out this amazing one on zero-carbon aluminium smelting that some mysterious stranger going by “@not_an_oracle” just dropped on me almost exactly as-is, perfectly written. My hero.
And hey, once you’ve forecasted on something, then you’ll be a Metaculus forecaster, and I have to listen to questions, concerns, ideas, and suggestions from Metaculus forecasters as part of my job. Say hi!
Hello,
I’m a grant writer and project manager in the UK charity sector, looking to engage more actively with EA. My particular area of interest is drug resistance and the mitigation of that x-risk. I’m pretty up on the theory behind EA ideas, but have mostly been in the “LessWrong Diaspora” rather than engaging with the EA space specifically.
If anyone knows of UK charities tackling drug resistance, I’d greatly appreciate being pointed in their direction.
My name is Nnaemeka Nnadi. I am a lecturer at Plateau State University and also the chief scientific officer of the Phage Biology and Therapeutic Hub, a non-profit focused on isolating phages and building phage biobanks for the global phage community. I am also one of the co-leads of the Africa Phage Forum, the umbrella body for all phage researchers in Africa. Antibiotic resistance is a global issue and will hit Africa hardest; Nigeria, for example, depends on external sources for its drugs. Phages are viruses that feed on bacteria, and phage therapy has proven effective in Europe and America. As usual, Africa is lagging behind. My goal is to help Nigeria, and by extension Africa, start the revolution by developing the capacity to isolate and store phages. This I hope to achieve via the Africa Phage Forum and the Phage Biology and Therapeutic Hub.
I found EA after several attempts to get funded through regular grant channels, though I have not yet been funded via EA. However, I am beginning to have a mind framed towards projects that impact people over and above publications. So ask me questions about how to adapt and tailor your ideas to fit resource-limited settings, or reach out if you are interested in biosecurity as it relates to One Health and drug resistance. I look forward to learning, re-learning, and unlearning on this platform.
Hi,
My name is Andie Rodriguez. I’m a bilingual (English and Spanish) 23-year-old Puerto Rican who recently graduated from college, where I majored in Philosophy, Economics, and Psychology. Currently, I’m looking for full-time employment. Although I’ve used the EA job board, it was recommended that I post here to see if I can connect with anyone who might have job recommendations (or advice) for a recent grad like me.
I look forward to getting more involved in EA, and getting to learn more about everyone in this community!
Hi Andie,
welcome to the EA Forum. Maybe add a bio, so everyone can see what interests you have.
Is there a specific cause area which you find the most interesting?
Have a nice day. :)
Hey all!
New to the forum and currently working as an HS math teacher. I’m really interested in the movement-building side of things and, through that lens, have started volunteering with Giving What We Can on their Charity Election initiative.
In short, Giving What We Can wants to offer any school full sponsorship to host a Charity Election: a school-wide election in which students vote to direct up to $2,000 in fully sponsored funds to their choice among three highly impactful charities.
If you know of any schools, educators, or students who might like to run a charity election, you can direct them to apply for the grant here or alternatively put them in touch with me at dan.roeder@givingwhatwecan.org
Hey Dan, welcome to the forum. :)
Yesterday I held my first Giving Game for the local EA group and some students, and I was thinking about going into schools with such an intervention.
I will write you a PM regarding this.
Have a nice day and hope to see you in the comments on some other posts here too. ;)
Hello everyone, my name is Dawson.
I’m a data scientist in Denver, Colorado. I got into EA a few years ago after hearing about it on Sam Harris’s podcast. Thinking about earning to give and the length my dollar could go for others was extremely influential and continues to be fulfilling for me.
I would love to start participating in the conversation. I just launched a Substack—it’s a bit idiosyncratic and not strictly about EA, but I think my inaugural post would be interesting to a lot of people here. Would love to hear any criticism or discussion so I can get smarter about this stuff.
https://orbistertius.substack.com/p/morality-and-marginal-existence
(What’s the policy on self-promotion here? Is it okay to link my own works in these threads or in posts?)
I think it’s fine to share your own work if it’s relevant! And at a glance, that seems very relevant.
Re self-promotion: you can check the guidance I wrote more specifically for advertising jobs — I’d bet that it is relevant to this question.
Moreover, you might want to consider link-posting or cross-posting relevant content.
I’m writing to look for people interested in participating in this contest:
Ban Charles He Contest
Sort of inspired by this post, I’m disappointed by a lot of my forum writing, because my content is minor, mentally easy, and not that meaningful. While more forum writing probably isn’t the ideal solution, there is more difficult writing I can do:
Basically, I have “answers” to a number of arguments or viewpoints that are either “used against EA” or just occupy a lot of time on EA online media. I think this is bad.
These arguments or viewpoints are:
Newcomb’s paradox
The repugnant conclusion
This variation of the St. Petersburg paradox
Also, I think I have an answer to the “general St. Petersburg paradox”, so let’s throw that in too.
I think my four answers are pretty obvious.
I’m worried this sounds like a blogger and I want to put some skin in the game.
If I’m “very wrong”, I’ll get banned, as determined in the following way:
I’m looking for people willing to critique my answers, especially in a ruthless, crisp way: for example, if my answer is flat-out logically wrong, highly irrelevant, or already dealt with so thoroughly that about 50% of experienced EAs could answer with good knowledge of its content.
Note that “discursive” sorts of replies, with an “additional consideration” sort of flavor, should not count. Replies should undermine the substance of my answer and generally prop up the targeted argument as a valid project or objection.
This is subjective, so I’ll implement this judgment of banning in this way:
I’ll post my “answers”. After I post my answer to each of the four topics, anyone can critique with a “reply”.
If any single “reply” gets an agreement score of 10 or more, I’ll be banned for that number of days.
Note that strong votes are fine, so just 2 people can trigger this condition.
This banning will function additively across replies with distinct arguments, e.g. I could get banned for 60 days if there are 6 replies, each with a score of 10, across my four “answers”.
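For concreteness, here is a minimal sketch of how the ban length would add up under these rules (the agreement scores below are made up):

```python
# Hypothetical agreement scores on critique replies across my four "answers".
reply_scores = [12, 4, 10, 7, 15]

# A reply triggers a ban only if its agreement score is 10 or more;
# qualifying replies add their scores together as ban days.
ban_days = sum(score for score in reply_scores if score >= 10)
print(ban_days)  # 12 + 10 + 15 = 37 days
```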
I’m writing because I’m looking for people who are interested in replying to my answers, making my task harder.
I will wait until I get a few replies to this comment, which I will take as a sign of interest and a real chance of my ban, and then write the above at some point.
How about changing this into some sort of a “Change My View” system, or commit to some reward? Not sure why someone would want to put in time to prove you wrong, or to vote correspondingly, in order to ban you.
Also, you might very well be wrong in some important way, but still add a meaningful and novel viewpoint to an important problem.
A lot of your comments have been very well received and possibly have meaningfully contributed to doing more good, but I see that you haven’t made any posts. Maybe you should try a longer-form piece of writing first? Possibly send it to some people for review, edit accordingly, and then publish it. It’s really hard to do justice to any topic, and the four arguments you allude to involve deep questions that have been on the minds of many people.
Hi everyone, I’m Dominik.
I’m a graduate student pursuing a Master’s in Public Sector Innovation from multiple different European universities. I got pointed to EA through Will MacAskill’s recent book (and the entire rabbit hole of literature leading up to it). It has been astounding and encouraging to see that there is an entire community of people out here seeking to apply reason and scientific insight to doing as much good as possible!
Currently, I am looking for prospective Master’s thesis topics (TBD by the end of November), and I am open to pretty much everything relating to innovation, governance, technology management, organizational change management, etc.
So if there is a relevant EA research agenda that I could draw from, please feel free to throw any hints my way!
(I just skimmed through the various resources here and at 80k Hours, etc., and it’s just too much to take in at first sight :)
Greetings from Estonia
Dominik
Hello guys, I’m Miguel, creator of whitehatStoic, which focuses on sharing my journey of helping the idea of effective altruism grow. I’m new to the forum but have been involved in altruistic projects here and there over the years. Glad to be here; have a nice day!
Hey Miguel,
welcome to the Forum. I am curious: which projects have you been involved in so far? Always happy to learn about interesting stuff EAs do.
And one question regarding your YouTube channel: do you have copyright permission to post other creators’ content?
Thank you, and also a wonderful day. :)
Hi Felix,
I am currently supporting two projects through donations, namely Dr. Jordan B. Peterson’s work and the Huberman Lab. Currently, I’m trying to see if red-teaming EA ideas, similar to the ones on Superlinear or around AI x-risk, is something I can be a part of moving forward.
As for the YouTube channel, that is an experimental project of mine: sharing what I study, which I know will become useful to other people too, as my goal in this lifetime is to share useful and truthful ideas. I did not personally ask for permission to post the videos, but I have seen numerous times that Dr. Jordan Peterson encourages people to share his ideas as long as the context is complete. YouTube also filters the permissions, and none of the videos I have uploaded have had copyright issues.
To be honest, I’m trying to learn as I go on this journey and do not have all the answers, so I would appreciate guidance and comments from everybody—especially more experienced content creators in the EA community.
Thank you, Felix. Looking forward to your reply!
Hello :) I’m Mischi. I’m currently based in Berlin, and I first heard of EA through Will MacAskill’s conversations with Sam Harris. I’ve been quite excited to discover just how much is going on in the movement and how many people are involved, although I realize I probably don’t know the half of it yet!
Hey Mischi,
welcome to EA and the forum. :)
If you want to connect with the local EA scene, send me a PM and I will give you the links.
Greetings.
Hi everyone, I’m new to the forum.
Having read a few forum posts over the past couple of weeks, I wanted to become a bit more active and start commenting.
I’ve recently started the process of reviewing EA career options (over at 80k hours), and plan to pivot to a more EA-aligned role in a year or so. I’ve studied mechanical engineering, and I’m currently pursuing a PhD related to engineering and climate change. Also, I’m in the process of becoming a certified systemic coach (for coaching individuals and teams). I live in Braunschweig, Germany.
Looking forward to getting in touch with people here, to getting ideas about a new job/role, and to interesting discussions!
Hello Malte,
welcome to the forum and to EA! We have local groups in Hannover, Göttingen, and Magdeburg. Hannover has an online reading group, Göttingen has weekly meetings in person & hybrid (we are doing a career workshop on 13/11/22), and Magdeburg seems to have been inactive since 2020.
So, if you want to connect, this would be a possibility. I can give you the contact information for each group if you want.
Also, today is the last day to apply for the career fellowship from EA Germany:
https://ea-germany.notion.site/EA-Career-Planning-Program-03cb0f1f6b9a4ae0b8d723316b558b8f
Feel free to contact me if you want to know more or talk about something different. :)
Greetings from Hildesheim.
Hi Felix,
Thank you for your kind and helpful reply! I’ll send you a private message regarding your offers.
Cheerio Malte
Edit: There is now a local EA group in Braunschweig: https://forum.effectivealtruism.org/groups/TGgHpPNHnM4sS6Gcc. I’m leaving this comment here mostly so that people stumble upon it if they search the forum for Braunschweig (or Wolfsburg). If you read this and are from the Braunschweig area, get in touch! =)
I used a model I fine-tuned to generate takes on Effective Altruism. The prompt is “effective altruism is.” Here are its first three:
I’m somewhat concerned about the use of AI models to [generate propaganda? conduct information warfare?]. Here, the concern is this could be used to salt the earth by poisoning the perceived vibe to make certain demographics dislike EA before they can engage with it deeply.
I find it important to note the model was not designed to be harmful. It was fine-tuned to generate self-deprecating humor. Nevertheless, amplifying that capability seems to also amplify the capability to criticize EA.
I’m interested in what mitigations people have in mind. One way could be at the epistemic level: To teach people to engage kindly with new ideas.
I have some moderately useful comments if you’re interested.
Some basic questions: Are you running this on GPT-NeoX-20B? If so, how are you rolling this? Are you getting technical support of some kind for training? Are you hand selecting and cleaning the data yourself?
An earlier statement of mine was unclear. It should be:
This model was not fine-tuned specifically for Effective Altruism. It was developed to explore the effects of training language models on a Twitter account. I became surprised and concerned when I noticed it was able to generate remarkable takes regarding effective altruism, despite EA not being present in the original dataset. Furthermore, these takes are always criticism.
This particular model is fine-tuned OpenAI davinci. I plan to fine-tune GPT-EA on GPT-NeoX-20B. A predecessor to GPT-EA (GPT-EA-Forum) was trained using a third-party API. I want to train GPT-EA on a cloud platform so I can download a copy of the weights myself. I am not receiving technical support (or funding for GPU costs), though either could be helpful. The dataset was selected and cleaned by myself, with input from community members, though I’m still looking for more community input.
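For readers curious what the davinci step involves, here is a rough sketch using the 2022-era OpenAI Python library (v0.x); the API key and dataset filename are placeholders:

```python
# Rough sketch of a legacy (2022-era) OpenAI fine-tuning workflow.
# The API key and JSONL filename are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"

# Upload a JSONL dataset of {"prompt": ..., "completion": ...} records.
training_file = openai.File.create(
    file=open("twitter_corpus.jsonl", "rb"),  # hypothetical filename
    purpose="fine-tune",
)

# Start a fine-tune of the base davinci model on the uploaded file.
job = openai.FineTune.create(training_file=training_file.id, model="davinci")
print(job.id)  # poll this job ID until the fine-tuned model is ready
```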
Beautiful
I think this comment is confusing and unseemly, as it might be supporting the negative content in the parent comment. This is a thread for newcomers, making this concern greater.
I want to downvote this comment to invisibility or report this to a moderator, but the comment is made by a moderator, and he has two cats, presumably trained to attack his enemies.
Thanks, you may be right.
To be clear, I found these generated examples clearly false and misleading, but I can still understand people who feel like this, so I found it a bit funny and uncanny, though I totally agree with JoyOptimizer’s concern.
They are only trained to be maximally adorable and cuddle my enemies into a fluffy friendship!
(And generally, you or anyone else should feel free to report comments from moderators if you find them damaging the forum’s discussion. It may well happen, and it’s important to get that kind of feedback)
Is there an EA subgroup focused on philosophical pragmatism? EA has a strong utilitarian image. There’s a lot more nuance, though one problem with utilitarianism is that if the ends justify the means, then sufficiently impactful ends can justify any means—even very problematic ones, as the recent SBF revelations show.
Philosophical pragmatism offers a lot of insight for the EA movement. It would treat ends, and the cost-benefit calculations that are beloved around here, as one important though not all-encompassing factor in decision-making.
After graduating with a master’s in economics from Oxford in July 2022, I’ve been a charity entrepreneur. This is my uncut journey on that path; follow along as I document everything behind the scenes of being a charity startup founder!
YouTube playlist https://youtube.com/playlist?list=PLatl23CKLmapNvrypAYgxOHwevQY231Oq
Twitter @simonsallstrom
WHAT IS THE PURPOSE OF OUR CHARITY?
DirectEd Development Foundation—direct donations for education. Our purpose is to identify high-potential students in sub-Saharan Africa, starting in Kenya and Ethiopia, and facilitate the resources needed for them to successfully enter careers as remote software engineers, all evaluated rigorously through close partnerships with academic researchers.
We are innovating the use of emergent technologies in order to reduce costs, increase transparency, and empower our students. Through the use of self-sovereign identifiers, we enable students to take agency over their identity and to build reputation. The combination of this type of privacy-preserving digital identity infrastructure with smart contracts is what enables peer-to-smart-contract-to-peer distribution of scholarship funds conditional on students reaching learning milestones, thus enhancing accountability and trust while removing costly middlemen and the leakages associated with them.
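To illustrate the milestone-conditional logic in plain terms, here is a hypothetical sketch. The real system uses Cardano smart contracts and decentralised identifiers rather than Python; all names and amounts below are made up:

```python
# Hypothetical illustration of milestone-conditional scholarship disbursement.
from dataclasses import dataclass

@dataclass
class Scholarship:
    total_funds: float  # amount locked by donors
    milestones: dict    # milestone name -> share of funds it releases

    def disburse(self, verified: set) -> float:
        """Release the share of funds for each verified learning milestone."""
        return sum(self.total_funds * share
                   for name, share in self.milestones.items()
                   if name in verified)

s = Scholarship(total_funds=1000.0,
                milestones={"module_1": 0.25, "module_2": 0.25, "capstone": 0.5})
print(s.disburse(verified={"module_1", "module_2"}))  # 500.0 released so far
```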
https://directed.dev/ https://linktr.ee/directeddevelopment
Please feel free to reach out to me on LinkedIn, Twitter, email, pigeon mail, snail mail… you name it!:) https://uk.linkedin.com/in/simon-sällström-3659b616b
Hi Simon,
your playlist starts with video no. 3, 2, 1 and ends with 4. Is this the intended order?
The following documents are missing on your Google Drive:
Verification of educational institutions Kenya:
https://directed.notion.site/Verification-of-educational-institutions-Kenya-d6f819c3d56e4656a5089404c1e95aea
Risk Assessment Report—Kenya
https://directed.notion.site/Risk-Assessment-Report-Kenya-aae33491da424b14a45e295b3b91df09
I have moral and safety concerns regarding your project, and especially the Cardano blockchain. In your risk assessment report, you did not acknowledge the risks of proof-of-stake blockchains and NFTs.
I like the progress updates and the GitHub page where you list your goals and whether you reached them.
In your funding proposal F6, you stated this timeline:
“Phase 3: August 2022 - October 2022 (12 months)
Completion of both the donor side dApp and recipient mechanism for receiving funds.
First pilot testing in high schools and universities in Ethiopia. 30 students registered on the platform.
Finalized research proposal submitted for large grant application to properly fund the project.
Initiate meetings with larger organisations such as the World Bank, UNHCR or the Red Cross who might be interested in the technology for related uses.”
https://cardano.ideascale.com/c/idea/369558
Do you have a page where you track your record against the timelines you stated?
For anyone reading the forum, here are two articles on Cardano in the African educational system.
Why a Little-Known Blockchain-Based Identity Project in Ethiopia Should Concern Us All
https://www.cigionline.org/articles/why-a-little-known-blockchain-based-identity-project-in-ethiopia-should-concern-us-all/
Ethiopia’s blockchain deal is a watershed moment
https://qz.com/africa/2011167/forget-bitcoin-why-ethiopias-blockchain-deal-is-pivotal
Let’s hope (?) this does not fail too badly.
Hey Felix!
Thanks for the many good comments :) As you know, there are so many things going on with a charity startup and difficult trade-offs to make. One of them is proper tracking against our roadmap and communicating what we’re doing (as opposed to focusing on doing), especially at an early stage like this.
The playlist order is now corrected!
(Keeping my caveat in mind) I’ve had a look at some of the drive links and updated them. We’ll use PDF + Notion uploads from now on so that we don’t rely on individual team members’ Drive space (Moses likely ran out of space for his own documents, but I’ll have to ask him about it).
When it comes to the risks of proof of stake and NFTs, I’d rather mention those in a more general risk assessment report (which has had low-priority status). I’ll look into adding that to the Q2 2023 OKRs!
The most recent proposal is the one for F8. Let me be very candid on this one, though. Projecting 12+ months forward with virtually no experience on the ground in Ethiopia nor with dapp development was more an exercise in wishful thinking than a proper and well-informed roadmap. The F8 one was more realistic. I’ll be making a page where you can track progress against the roadmap. Right now we have a Progress page with OKRs, but the link to the roadmap isn’t very well made. https://directed.notion.site/Progress-Updates-81f92ecebefb4f289e80e1703ff73d2a
Just to give you my background on this: I met with the project manager for the Ethiopia deal when I was in Addis two months ago. However, I lack the full technical background to explain all the details of DID/SSI/Atala PRISM, but I understand it at a high level, and better than the vast majority (if not all) of tech journalists who cover Cardano.
The CIGI article is unfortunately written by someone who has no clue what self-sovereign identifiers (SSI) are and what the Ethiopia project is really about. Just to bring up the most egregious confusion the author illustrates: they mistakenly believe that SSI is a blockchain identity technology (and that the credentials platform IOHK has been contracted to build is going to have personal data on-chain). It is not. It is not a blockchain identity system (unlike so-called soul-bound tokens). It is decentralised public key infrastructure; blockchain is only a tiny part of it. For those interested in learning more about self-sovereign identifiers (AKA decentralised identifiers), here is the official W3C standards recommendation, approved in July 2022: https://www.w3.org/2022/07/pressrelease-did-rec.html.en
The QZ article similarly shows a very superficial understanding of the technology. The statement “The project will build digital identity solutions on the Cardano blockchain” is not factually correct. I am happy to elaborate.
NB: Atala PRISM is not a completed product. It is in development and is not yet even registered as an official DID method.
“Let’s hope (?) this does not fail too badly.” I hope you don’t mind me being honest, but I think this comment was a bit unnecessary.
We are piloting our bootcamps and the end-to-end on-chain scholarship granting next month and during Q1-Q2 this year!
Please do sign up to our monthly newsletter to get quick updates on that https://directed.notion.site/Newsletter-signup-34e6bced04534d3981ece8312caea717
Really cool idea. I’ll be watching eagerly
I posted a compilation of all EA memes from October here, so you don’t have to wade through Facebook to encounter EA memes.
David, let’s connect—also a management consultant exploring a pivot to AI!
Hi, new to the forum. Looking forward to joining the EA community.
What’s the GiveWell of AI Safety?
To complement Tyler’s comment—the field of AI safety is not similar to that of global health and poverty in this regard. When looking at health interventions, you’re considering solutions to widespread problems, and time scales of a few decades at most. In contrast, AI safety (from the EA perspective) mostly deals with future technologies, and has made little measurable progress in mitigating their dangers. There’s no direct evidence you can use to judge AI safety orgs with high confidence. So you’re going to, at maximum, get evaluations which are much less robust, and have much more disagreement about them.
There isn’t one exactly, but poking around the grants made by Open Philanthropy and EA Funds will give you a good idea of what orgs and projects look promising to the experts who disburse those funds.
What are some negative and positive sentiments about OpenAI within the EA community?
I know OpenPhil made them a $30m grant in 2017, but Holden no longer seems to be on the board, and a bunch of people left to create Anthropic. What’s up with that?
Is there a central or topic-specific bounty board?
(I’m personally looking for AI interpretability tasks). I know there are distributed opinions on what’s important but I’d like a central place for them and for them to be prioritized in some way:
- bounty price sizes
- (expert) authority
- popularity (votes or prediction market)
I know there’s a job board, but I’d like a content/problem-focused board instead.
Is https://www.super-linear.org/ similar to what you’re looking for?
(Hastily written, sry)
I would love to see more of the theories of change that researchers in EA have for their own careers! I’m particularly interested to see them in Global Priorities Research as it’s done at GPI (because I find that both extremely interesting and I’m very uncertain how useful it is apart from field-building).
Two main reasons:
1. It’s not easy at all (in my experience) to figure out which claims are actually decision-relevant in major ways. Seeing these theories of change might make it much easier for junior researchers to develop a “taste” for which research directions are tractable and important.
2. Publishing their theories of change would allow researchers to get more feedback on their projects and perhaps realise earlier why a project is not that important. (This point, of course, applies not only to researchers.) As a result, it seems less likely that EA researchers would get off-track following intellectual curiosities rather than what’s most important (which I suspect researchers, in general, are prone to).
Hello! New to the forum, though I’ve been lurking the blogs/podcasts/articles around the EA community for a while!
Wondering if there are any other folks out here coming from the design/technology industry space, like I am? Would love to hear from anyone about their journey and finding their niche within the community! :)
Anyone work at GiveWell? I was somewhat intrigued by the new senior researcher position, though curious to learn more about the direction of the organization. Specifically, I’d be curious how deeply GiveWell is open to looking into state capacity issues. I saw the relatively recent J-PAL work and am curious whether they’d be open to doing more of that kind of work.
https://www.givewell.org/about/jobs/senior-researcher
There should be a small crack team dedicated to identifying rich people who’d be good candidates for EA benefactors and recruiting them, now that SBF and FTX are down for the count. If you’re interested in doing this, message me and I can give you some names privately.
I’m not sure they would call it “recruiting”, but there already are large parts of existing nonprofits that talk to current & future ultra high net worth individuals, such as Longview Philanthropy, Founders Pledge, Generation Pledge, and Open Philanthropy. But there are only a very limited number of potentially sympathetic ultra high net worth individuals, and you don’t want to put them off effective giving, so it’s important to do it right. As such, I definitely would not suggest starting a new crack team to try to do this work. Instead, it’s better to talk to the existing groups that cater to UHNWIs, first.
Also, the EA movement really needs to diversify its assets. It’s a mistake for so much of our collective assets to be tied up in Meta and crypto, although this is somewhat unavoidable – Dustin owns Dustin’s assets, not the EA movement. When seeking out more big benefactors, we should look beyond tech.
My understanding is Dustin has already diversified out of Meta to some large degree (though I have no insider information).
I’d be happy to help however I can :)
Hi all, I’m Michael Simm. I am a nonprofit entrepreneur focused on disruptive systems (e.g., understanding and using emerging technologies to make the future better). I’d love to see how familiar, if at all, people in EA are with disruptive technologies, and how open they might be to learning about a new one that might impact EA greatly.
When I first became interested in disruptive technologies, I had my focus on climate change. I quickly identified electric cars, solar panels, and energy storage (particularly batteries) to be on the verge of upending reliance on fossil fuels and global transportation systems. Then, I ran across Dr. Tony Seba, who was one of the only people to accurately predict the massive price declines of solar, electric cars, and batteries. He’s now doing fantastic research into the coming disruptions in energy, transportation, and other areas with an organization called RethinkX.
RethinkX has predicted, among other things, that there will be almost no gas cars sold by 2030, and that the animal agriculture industry is headed into widespread bankruptcy (which would be very good for animal welfare interests). They’ve found that disruption generally happens when any system proves 5X better than the incumbent one, thus opening a huge opportunity space.
The nonprofit I founded is designed to leverage the disruptive potential of the most cost-effective anti-poverty intervention in developed countries (guaranteed income) and use it to make a big impact against homelessness, and then poverty, over time. This could sound far-fetched, but I think our pilot project is likely to outperform a lot of EA developing-world interventions in cost per QALY. I’m working on a major post to introduce it later this week, so please reach out if you’d be interested in contributing.
Hello Michael, welcome to EA and the forum. :)
Thank you for the teaser of your post coming next week; I am curious about your theory of change and how you plan to outperform existing high-impact charities.
If you want me to read your post beforehand for short feedback, I can help you.
Greetings
Felix
PS: Nice to hear you are participating in the fellowship; have fun in the discussions. :) I am currently doing a longtermism fellowship organized by EA Munich in Germany.
Thanks for helping me edit the post that I just finished!
I wasn’t kidding about having a plan to not just outperform GiveWell Top Charities, but fully fund all of them—as a side project no less...
Introducing The Logical Foundation, A Plan to End Poverty With Guaranteed Income
Please check it out (and upvote so more people see it) and try to find any holes in the plan; that’s largely why I put it here.
PS: I really did have a great time with my EA program this fall; I actually got our 501(c)(3) determination letter in the middle of it. Do you have any thoughts on the ‘Longtermist Implications’ section? Maybe even share it with your cohort?
Both the vibes and possibly the reality suggest that tech layoffs are rising and demand for SWEs is falling, at least on Hacker News.
https://news.ycombinator.com/item?id=33463908
https://news.ycombinator.com/item?id=33025223
https://news.ycombinator.com/item?id=33083279
https://news.ycombinator.com/item?id=32312303
This seems useful to flag (the shift might be big enough to be an opportunity for the various orgs that are looking for SWE and other talent, even after considering fit/alignment/lemons).
Here’s my latest thinking on making EA more effective:
https://medium.com/@mnemko/maximally-effective-altruism-5910308265ff
Hey everyone!
While thinking about extreme poverty, I came across the thought that some people in extreme poverty might not know how to spend their money as effectively as possible to have the highest chance of survival or the healthiest life.
Maybe EA could create guides directly for those people to help them manage their income as effectively as possible.
This is just an idea that came to my mind and I have no certainty about this concept, I would love to hear other EAs opinions on it. Please tell me if I am completely wrong.
Greetings
Sebastian
This book contains studies of poor people, how they spend their money, and what could help them the most: https://www.goodreads.com/book/show/10245602-poor-economics?ac=1&from_search=true&qid=3qdox2Dpdo&rank=1
FYI
I agree, Sebastian! Your reasoning is the same as the popular proverb: give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.
Hi all. I’ve known about effective altruism for quite a while, and it’s helped me a lot. I mostly found out about it after a breakup, and it improved how I viewed the world, making me more objective and rational. I found this video and was wondering if anyone else had seen it. I think it raises some points, but it also has a lot of flaws which shouldn’t be overlooked. I commented with my criticisms. I know there are definitely more academic and sophisticated responses to this topic, but I think it’s approachable and also gives insight into how people who are very much outside the community think.
In my opinion it’s ludicrous narrativizing. Sam had no incentive to do what he did; he was just winging it, defrauding people, and lying about everything. He’s going to jail now.
There’s no brilliant billionaire manipulation scheme he’s just some asshole who was using a charity / philanthropic movement as cover.
He’s Madoff giving to Jewish charities so that rich Jewish charity donors will trust him with their investments. The woman in the video is saying that refutes Jewish philanthropy somehow, as if the way to pursue the interests of Jewish philanthropy was to steal a bunch of Jews’ money, put it into a Ponzi scheme, and then burn it down and blow everything up, because that’s just what those sorts of people would do.
It’s a ludicrous left-wing narrative to slander rich people. Slander rich people all you want, but do it for better reasons.
So, someone I know has studied social media (e.g., advising the C-level at companies that are building tools for a state-level entity to monitor extremists on social media).
Someone I know has run an online community created from scratch that grew to thousands of people. At times there were disputes and controversies; they came under attack and managed it well (probably ???).
I’m writing to give some tools to the moderator:
Hot spaces
You can create hot spaces with certain looser norms than the rest of the forum. Then you can direct discussions, people to those spaces.
The strategy of using these spaces is useful because:
This gives a natural outlet to heated discussions, and the explicit looser norms can make the discussions healthier
It pulls out heated discussion from other spaces, so they are less disruptive
The space gives knobs and levers for moderators
You can control how visible these spaces are for the public, e.g. possibly in the way “community” tags have different visibility
You can announce that it is open for X days, will be closed in X days
You can adjust and edit the norms and discussion inside of it as you go
Making these spaces and setting these boundaries also expresses agency, control, and vision on the moderator’s part. Expressing this is healthy and builds respect and further agency.
If you want to do this, these hot spaces can be made in many different ways, e.g. posts, threads, sections, tags. Probably some thought about the form it could take is useful. Further planning beyond designing the space is usually less rewarding.
Greater variety of EA Newsletter emojis
This past EA newsletter used only two emojis, a down arrow (⬇️) and an anchor (⚓), while it talked about the AI Worldview Prize (🤖🧠🏆), asteroids (☄️), prize-winning criticisms of effective altruism (🏅❌:ea-bulb:), articles (📃), news (📰), and announcements (📢), among others.
This communicated the following message:
‘You have to scroll down, where you have to pay attention (where the anchor is).’
Rather than:
‘There is a lot of interesting content in this newsletter, someone paid attention to make it visually concise and fun for you. But, don’t rely on (visual) oversimplifications, see for yourself.’
The former is more conducive to limited critical thinking, while the latter can stimulate it.
Further, the arrow-anchor setup can be understood as normalizing abuse as the only viable option (an arrow can symbolize a direction without any request for or agreement to it, and an anchor can symbolize a threat of force and limited consideration, since it is a heavy, sharp object unrelated to the topics). The normalization of abuse could worsen epistemics within EA and limit the community’s skills in cooperation on positive impact.
In general, viewers can pay the most attention to the portrayal of threats, even if that is not apparent or they are not consciously aware of it. When threatened or stressed, viewers may be more likely to click on content, seeking to resolve the negative feeling that compels them to action.
Another reason why viewers may be paying attention to content that can be interpreted as abusive but where that is not prima facie apparent is that they seek assurance in the positive intent of/ability to trust the resource (or advertisement). For example, if one feels that an ad is threatening abuse but the text is positive, they can be more likely to read it, to confirm positive intent/seek trust.
These attention-captivation techniques motivate impulsive/intuitive decision-making (based on chemical/hormonal processes?) and limit reasoning and deep thinking. These techniques can also motivate impulsive sharing of content, because it evolutionarily makes sense to share threats first and because people seek to affirm positive intent when they share the resource with others who will likely not describe the possible abuse.
According to this theory, using setups that can be interpreted as threatening but not at first apparently is the most effective way of growing the EA community.
However, it can be also that the newsletter audience more likely engages with and shares content that is conducive to reasoning and deep thinking.
For instance, the High Impact Professionals newsletter uses descriptive emojis and the organization is popular in EA.
While conducting an RCT on the variety of emojis and readership/click-through rate/thoughtfulness of a response requested by the newsletter can be a bit too much, it is one way to test the hypothesis.
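As a sketch of how such an RCT could be analysed, one could compare click-through rates between a two-emoji arm and a descriptive-emoji arm with a two-proportion test (arm sizes and click counts below are made up):

```python
# Hypothetical analysis of an emoji-variety RCT: compare click-through rates
# between a "two-emoji" arm and a "descriptive-emoji" arm. Counts are made up.
from statsmodels.stats.proportion import proportions_ztest

clicks = [130, 171]        # subscribers who clicked, per arm
recipients = [1000, 1000]  # subscribers who received each variant

z_stat, p_value = proportions_ztest(count=clicks, nobs=recipients)
print(z_stat, p_value)  # a small p-value suggests the arms genuinely differ
```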
--
Let me actually also illustrate what I mean on the example of the image used in this post. The image can cause distress but that is not at first apparent.
The image has feminine symbolism, the flowers and possibly the light. The viewer has not requested or agreed to view this symbolism but viewed it (these are prominent). Highlighted is also the figure’s chest. These two aspects can engage the viewer, who may be compelled to pay further attention.
The leaves on the left side of the image resemble reptiles and birds hiding with the possibility of attack. That can cause cognitive dissonance, because humans consider birds and reptiles more likely (due to evolution and media) to attack than mammal predators. The leaves near the flower in the bottom left corner resemble a bird with its beak directed toward the figure (who does not pay attention to it). The reader can be compelled to look at the leaves to assess for any threat and freeze in anticipation of/to prevent the bird’s action.
Some of the figure’s fingers can be considered as disfigured. From the perspective of the viewer, the second to the left finger on the figure’s hand near the flower is bent and the thumb on the same hand elongated. The other hand is the one that would ‘confirm’ that there is nothing weird. The hand looks relatively normal, except for the swollen second finger from the top (that also can make one think of literal or metaphorical rotting) and the thumb with the small red pointy end.
That thumb can be considered as a ‘hidden weapon’ of the feminine figure. That can make people think of betrayal by those who are traditionally trusted (females). Another form of betrayal/weapon can be the left flower, which is ‘going’ from the side in the general direction of the viewer, like a snake with an open mouth. The viewer may be compelled to look at it, to make sure that it does not go at them. If you zoom in on the inside of the flower (the violet, purple, yellow, and red shapes), further attention captivation can be analyzed.
A viewer of this image can become aware of their body and consider it vulnerable. That is because of the bent back of the figure but prominent/highlighted chest. The figure’s right side of the chest is the ‘assurance’ of limited prominence, while the left side portrays significant prominence. (This could be vice versa but that perception can be limited.) This is gender neutral, although the shape can allude to male body fat, which is portrayed as something which should be covered, due to vulnerability (often used in advertisement).
The figure looks like an authority which is practically impossible to convince by reason and must be obeyed, judging by the facial expression. One may regret engaging with this environment but can be more compelled to ‘stay’ since it seems pointless to ‘argue against.’
The vertical blue stripe on the right side of the image, which coincides with the figure’s sleeve, can be interpreted as AI threat. It is like the flickering of the screen. The figure embodies the ‘appropriate’ reaction to this, which is to do nothing and advance the norms that one cannot argue against.
There are other things that I could and could not analyze.
Of course, one can disagree and simply say that it is a normal image of a lady.
However, I suggest that one stares at the image in peace for a few minutes and observes their emotions and impulses (including motions and intended motions). If the above can be leading, a different DALL-E or prominent advertisement image can be used. One can feel negative emotions/negatively about an environment and physical sensations (such as finger twisting). That is a good reason to understand these techniques rationally but not emotionally, and to avoid long emotional focus on state-of-the-art AI images (but look, e.g., at groups of fashion models, where techniques relate mostly to gender norms, body image judgment, and racial stereotypes).
If one is quite aware of these techniques, considered using various alternatives in the newsletter, and still chooses the arrow-anchor framework, then they have the reasoning for it. However, if one is simply influenced by AI and unknowingly advances an abusive spirit, the possible impact of the newsletter should be related to its intended objectives and alternatives considered.
--
It can also be argued that an arrow and an anchor are nothing like a complex advertisement, but that powerful people may like a form of traditional power while their intents are good. I watched interviews with the top 100 Forbes billionaires, and while many enjoy traditional exhibits of power and their intents are good, perhaps only four would actually enjoy abusive newsletter marketing, of which two would not understand it as anything that should be felt or suboptimal for anyone, and one would not seek to advance the abuse further. Two seem vulnerable to being influenced by this marketing, if they happen to be subscribing, which is very unlikely for one and possible but not very likely for the other.
I have also listened to podcasts with prominent EA funders and while impactful work can be a must, abuse is not (rather, positive relationships and impact is). So, using abusive newsletter emoji marketing is unlikely to please EA funders but can motivate them to repeat this ‘tone from the top.’
--
In conclusion, the EA newsletter emojis can be reviewed.