Open Thread: June — September 2022
Welcome!
If you’re new to the EA Forum:
Consider using this thread to introduce yourself!
You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren’t EA-related at all.
(You can also put this info into your Forum bio.)
Everyone:
If you have something to share that doesn’t feel like a full post, add it here! (You can also create a Shortform post.)
You might also share good news, big or small (See this post for ideas.)
You can also ask questions about anything that confuses you (and you can answer them, or discuss the answers).
For inspiration, you can see the last open thread here.
Other Forum resources
Could we have a higher karma discount for community posts? Almost every post on the frontpage for the past few weeks has been a community post, and I think karma inflation has outpaced the effect of the current discount. It would be nice for the EA forum frontpage to not be an endless pit of meta...
In case you didn’t know, you can customize your frontpage so that you see fewer community posts.
Oh sweet! I’ll definitely do this. I would still support changing the karma discount factor since it’s not just about what I see, but this is a good start...
+1!
Hey Y’all, I’m Mar! I’m an undergraduate student studying Arts and Humanities with a focus in community engagement and a minor in Sustainable Natural Resource Recreation Management.
Learning about EA was entirely a coincidence. I started seeing more and more recommended tweets that interested and/or inspired me. I like the emphasis on reason and rationality, the acceptance of people of diverse world views, and that everyone seems genuinely kind. I followed more accounts, and saw “EA” in many of the bios. I had literally no idea what that meant, and didn’t think much about it until I noticed the majority of people I followed were identifying as Effective Altruists. I started to read more about it, and I like many of the ideas and values.
I feel lucky to have found EA. I’ve been frustrated with my college and degree path. I’m in a small liberal arts program that produces and encourages unrealistic, unproductive, echo-chamber ideas and culture. Some examples are: a non-acceptance of political or economic ideas other than communism or socialism, valuing identity politics over factual information, none of my community engagement classes involving doing community engagement, and hypocritical ideas on the value of human lives. I initially loved this program and believed much of that myself, until living in a kinda-anarchist housing cooperative caused me to challenge and re-evaluate a lot of the social and economic beliefs I held. EA addresses a lot of the concerns I have about the world, and does so in a way that feels a lot less harsh and judgemental than my college community.
I’m drawn to EA because of the positive critical thinking, the acceptance of diverse beliefs, the kindness, and the action. I want to learn more about global sustainability, philosophy, and AI and how it can be beneficial in community engagement.
Some Concerns I Have: There seems to be a really strong emphasis on EAs being smart and good at science things. I’m a liberal arts girl, and I’m worried that there isn’t much value in that here. Also, I could totally be wrong, but it seems to be a more male-dominated community. Which isn’t necessarily bad, just confusing to me.
Random Fun Facts About Me: As if this intro isn’t long enough, here’s more.
I have a cat! Her name is French Toast (I will gladly provide unlimited cat pics)
I’m a vegetarian! And I have a pretty bad dairy allergy so my diet is like 99% vegan
Some things I like to talk about: Decriminalizing drugs, gender and feminism in country music, consent/sexual violence/and sex work, making art
Sociology major from a small liberal arts college here, and resonate with a lot of what you’re saying. I wrote about my journey into EA here, in case that’s helpful:
https://forum.effectivealtruism.org/posts/kE3FRC5gq9QxMrn3w/what-drew-me-to-ea-reflections-on-ea-as-relief-growth-and
Fwiw my view is that you can add a lot, but you’ll have to work a little bit harder to figure out where the best place is for you.
In my case, I have a good fit with community building and operations-type roles, and now product management also.
I wrote my undergrad thesis on social movements, which was partially influenced by EA and I think helped me think more systematically about movement building. I think figuring out ways to explore areas you are particularly interested in through classes could be really interesting (one of the benefits of liberal arts is flexibility on coursework and essays!)
Welcome to the forum! I don’t know where you go to college, but UMich appears to have an active EA group. Often local connections are helpful.
One idea: if you are interested in research, Lizka (who manages the forum and worked with me last year in a research fellowship) wrote up a list of humanities research ideas for longtermists.
More generally, doing good effectively is hard, and can be scary sometimes. None of us have all or most of the right answers, even if we use big words and fancy numbers and sound confident about specific subclaims. We’re all in this difficult, potentially arduous journey of trying to figure out how to do the most good. There’s no guarantee that we’ll figure it all out, but we just have to give it our best shot, and approach these questions with curiosity, and dedication, and careful thought.
Hi! Unfortunately I’m about an hour and a half away from U of M, and my college doesn’t seem to have an EA group. I will definitely check out the list of humanities ideas though, thank you!
Kudos to you for caring and wanting to contribute!
Regarding your vision to contribute from the fields of art and humanities, I can tell you my personal thought on the matter (which only represents me, but maybe you will identify with it): with the progress of human knowledge and technology, the capacity of human society (and of everyone in it) to change the world is growing at an exponential rate.
This means that the most effective thing that can be done today for the sake of the future is to pass on to future generations the desire to improve the world. Every effort we make to improve the world will translate into greater impact if we invest it in preserving the ideology. For example, if each generation invests 95% of its effort in spreading the ideology (of EA and related ideologies) to future generations, and 5% in searching for solutions with the tools we have today (the 5% can be seen as a kind of “dividend” from the investment of preserving the ideology), I believe that the future will be much better than if we put most of our efforts into finding solutions right now.
Fortunately for you, the humanities and arts can contribute greatly to efforts to preserve and propagate ideologies.
But this is not a simple task at all! For it to be effective, the messages themselves need to be relevant for the long term: more conceptual and less specific (for example, specific environmental advice that matters today will probably not be relevant in 100, 200, or 5,000 years; but the understanding that all animals deserve rights will be relevant in the future as well).
In addition, the task of transferring ideas to the distant future is very difficult in itself. But it is possible: it has happened with a number of philosophical ideas, social and political ideas and religions.
I believe this could be a direction for what you are looking for.
Good luck!
Welcome to the forum!
I think your concerns are definitely valid. EA has very much a quantitative, numbers-focused bent to it, and that tends to attract a lot of men, which probably goes a long way towards explaining the demographics. (EA is around 70% male, according to the 2020 survey.) For instance, computer science is ~80% male, and is also a very common degree path among EAs. So you’re definitely right both that men are more common, and that there’s a strong emphasis on scientific ability and intelligence in EA.
That said, as you mentioned, one of EA’s greatest strengths is that it’s open to other ways of viewing things, so having a different view can sometimes be a good thing. It’s tough in some ways, since you might not want to follow the same path as other EAs and it might be harder to find a niche. However, there are diminishing returns to even highly valuable skill sets, so having a rarer skill set is often a good thing, since fewer people can do what you can do.
To use some made-up numbers (because EAs love numbers :P): if EA values computer science 10x more than liberal arts, but there are 20x more computer scientists in EA than liberal arts majors, a liberal arts skill set could be twice as valuable as a computer science one.
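To make that diminishing-returns arithmetic explicit (keeping the same made-up 10x and 20x figures, and assuming as a toy model that marginal value scales with a field’s total value divided by its headcount):
$$\frac{\text{marginal value of liberal arts}}{\text{marginal value of computer science}} \approx \frac{1/1}{10/20} = 2$$
So under those assumptions, one more liberal arts person adds roughly twice as much at the margin as one more computer scientist, which is where the “twice as valuable” figure comes from.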
Hi Mar,
Even a liberal arts program can change the world! I’m working on pain, and I need anecdotal evidence that Dr David Hanscom’s methods work. You can help if you know anyone with chronic pain. The problem is that the solution seems a bit off the wall: you just write your thoughts/emotions/feelings out for 5 minutes on a piece of paper, then rip it up and throw it away.
More details here: https://stuartwiffin.substack.com/p/pain-and-what-to-do-about-it
If you can get someone to try it, please post results here.
Thanks,
Stuart
Here’s me just jumping in to break the ice and lower the bar to entry for other folks thinking about maybe posting something in this thread!
I’m Rob, new around here.
I read Doing Good Better a few years ago, got interested in EA and started channeling >50% of my charity donations through EA. But wasn’t very active in the EA community. This year I decided to take the EA online intro course. And with the course—plus all the reading and podcasts around it—I’ve been gravitationally sucked into the EA universe (in a good way).
My specific intention right now is to work on Improving Institutional Decision-Making as that is pretty close to what I do for living at the moment.
My background is in psychology (undergrad) & then marketing (17 years professional career) & now coaching leadership teams (inc. around their strategy and key decisions).
If anyone is based in Australia and especially Sydney, let me know—I have checked out EA Sydney on facebook and meetup and haven’t seen many upcoming events.
Hey Rob,
welcome to EA and the forum. Nice to hear you are already so deeply engaged with the material. I suggest just writing to the local group; maybe they have meetings and aren’t advertising them well.
Have a nice day. :)
Hi all! I’m a 40 year old mom who just (re)discovered 80,000 Hours, and through it this forum. I am looking at a career change (though I don’t know if I have really had a “career” up to this point), and I have the luxury of taking some time to figure out where I could be most useful and enjoy what I’m working on.
I really like the concept of using my time/skills/money to do the most good, though I find that (I think like many people), I place slightly different value on some parts of EA vs. some of the introductory posts I have read. So far, I am interested in social justice issues, promoting/preserving democracy, what future government might look like, and the general preservation of human life through access to preventative medicine and stable food and water supply.
I have background/some skills in many areas, including electronics repair and manufacturing, inventory management, web design, copy writing, video editing, photography and photo editing, box office/front-of-house, retail customer service, shipping coordination, planning and facilitating children’s programming. I don’t really see how this might get me a job at a very EA-centric company, but who knows.
For now my goal is to learn and connect, so if you have any recommendations for things I should read or just want to say hi, please send me a message.
Welcome Danielle,
nice to see you here! The EA movement can help you with your career choices. I really recommend getting in touch with a group, either local or virtual.
If you want to learn more about EA, a Fellowship is a good way to learn together and share your different views and arguments. If you want to figure out which career would be a good fit for you, book a 1:1 with someone. Don’t be shy about reaching out and writing to people if you have an idea or a question.
The community is here for you. :)
PS: Reading recommendation time. I believe you should only give a book away as a present if you have read it yourself, so I will recommend Sapiens: A Brief History of Humankind by Yuval Noah Harari. The title gives its premise away, but it really helped me extend my view of our species and the timescales we are dealing with (important for longtermism and our own historical perspective). I recommend it partly because I just finished Homo Deus by the same author, and it makes a good addition to the first book.
Also maybe look here:
I scraped all public “Effective Altruists” Goodreads reading lists
Hi, Folks! I’m Wahhab Baldwin, a 78-year-old retired software developer and manager and minister. I have donated at least 10% of my income for decades, strongly favoring effectiveness. I ran into EA through the Podcast interview with William MacAskill on “People I (Mostly) Admire.”
I strongly affirm much of EA, but I disagree with certain elements, and hope I am able to have some enlightening conversations here. I hope tomorrow to write a post on longtermism. As a preview, I will argue that we must discount a future good compared to a present good. It is better to save a life this year than to save a life next year. If we discount at the conservative rate of 2% per year, then a life 1000 years from now should be valued at 1⁄600,000,000 of a life today, meaning (imo) that we should really focus only on the next century. But before you argue, read my more detailed post! I look forward to our conversation. (Now at https://forum.effectivealtruism.org/posts/xvsmRLS998PpHffHE/concerns-with-longtermism).
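For anyone who wants to sanity-check the compounding behind that figure, a rough back-of-envelope version at the stated 2% annual rate (the exact ratio depends on rounding and on how the rate is applied) is:
$$(1.02)^{-1000} = e^{-1000\ln(1.02)} \approx e^{-19.8} \approx 2.5\times10^{-9}$$
i.e. a life 1,000 years from now gets a weight on the order of one part in several hundred million of a life today, which is the ballpark the comment relies on.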
Welcome to the EA forum Mr. Baldwin.
I like to link to stuff others pointed out so it is easier to get to the content. Here is a link to the podcast episode of People I (Mostly) Admire with William MacAskill:
Episode 86, A Million-Year View on Morality (52:31)
”Philosopher Will MacAskill thinks about how to do as much good as possible. But that’s really hard, especially when you’re worried about humans who won’t be born for many generations.”
https://freakonomics.com/podcast/a-million-year-view-on-morality/
“If we discount at the conservative rate of 2% per year [...]”.
This argument strikes me as one from Richard A. Posner presented in his book: Catastrophe: Risk and Response (https://www.amazon.com/Catastrophe-Risk-Response-Richard-Posner/dp/0195306473/).
This idea is known in the community, but I am looking forward to your post and the discussion beneath it. :)
Also, big kudos for donating based on effectiveness over the past decades (and for donating at all). I think this could also deserve a post of its own, on your history of figuring out which donations are effective and how you chose between them.
I look forward to reading your post!
Hi! My name is Liza Munk, and I found out about EA through an ad on the podcast Dear Hank and John. I’m working on making a career pivot away from academia but haven’t yet had luck finding a new job, so I found it really refreshing to stumble upon a more optimistic message about my ability to find a high impact career. My MA and PhD are in ethnomusicology, the cultural anthropology of music, and I did my research as a Fulbright Fellow in Amman, Jordan. It was never boring! But I grew increasingly frustrated with academia and am so looking forward to finding work that is collaborative at its core, respectful of work-life boundaries, economically secure, and remote (if not local) so that I can stay close to my family. I have a lot more reading, thinking, and brainstorming to do before I can narrow down which high impact area(s) exactly I’m most passionate about, but I can tell you how nice it was to hear a reference on the podcast to cultural anthro having a high impact on the Ebola epidemic. I would love to connect and talk more! You’ll find my contact info on my profile.
Not really EA-related, but your research sounds really interesting (I’m a musician) and I’d love to hear more about it :)
Dear Hank and John is my favourite podcast. I’m so glad you found EA through it!
Which podcast episode covers EA? That’s cool!
dear hank & john 002 - It’s a Humor Podcast! (a very brief mention at question 1)
https://nerdfighteria.info/v/210501999/
This redditor asked a related question:
”Did Hank and/or John ever comment on effective altruism? Where?”
https://www.reddit.com/r/nerdfighters/comments/nrikvp/did_hank_andor_john_ever_comment_on_effective/ (it says 1 comment but I am unable to find it; maybe I don’t know Reddit too well)
A Beautifully Foolish Endeavor by Hank Green could be considered EA related:
https://ea.greaterwrong.com/posts/5aM8qQE3Pq9D8HxrR/fiction-about-ai-risk
There was a cooperation with their charity event:
”Join our Project for Awesome workshop to make videos for high-impact charities, in order to win money for them and introduce them to a wider audience! During Project for Awesome, a community-driven charitable event created by Hank and John Green, thousands of people create videos advocating for various charities. Then, people vote for their favorite videos during a 48-hour continuous live stream. Last year, the effective altruism community won about $190,000 split across 7 charities: The Against Malaria Foundation, The Good Food Institute, GiveDirectly, Clean Air Task Force, The Humane League, Givewell Maximum Impact Fund, and Wild Animal Initiative. Three of the EA community’s videos were featured, and John gave a shoutout to EA.”
https://www.gatech.edu/event/2022/02/17/project-awesome-video-making
So they know GiveWell and Effective Altruism.
Very cool. Thank you, Felix!
I/Leo have now incorporated the new elements from Big List of Cause Candidates: January 2021–March 2022 update into Big List of Cause Candidates.
Updating existing content is great :-D
Hello everyone,
My name is Bary, I’m currently recovering from a 2-year-long anxiety and depression episode, which I partly attribute to me joining EA around that time. I am really thankful for this community, and for the amazing people here that helped me recover.
I’m writing a piece for the red teaming contest about community mental health. If since joining or learning about EA, you suffered from a mental health problem, even a minor one, or felt less satisfied with your job or life, I would like to have a 1:1 with you. Feel free to ping me here, on barylevi@gmail.com, or on Discord Bary#6478
Of course, if you feel like talking about your experience could cause you distress, please pass on this opportunity. I am not a mental health professional and I will not be able to help.
Hi Bary! While my chronic condition isn’t a mental illness, it still dictates what most of my engagement with EA can look like. I’m really glad to see comments and projects like yours, and I’d like to encourage others in EA to feel safe to talk about their illnesses and disabilities.
Excited to see the result—think this is a great contribution to the community. I suspect that a lot of us are heading into longtermist work (ex: xrisk reduction) without proper psychological footing and that more support in the movement would go a long way towards reducing burnout...good luck, Bary!
Hello!
Is there a way to search for sequences in this forum? There are some sequences that seem really good, like “Winners of the First Decade Review”, but I’m struggling to find sequences other than happening to find a post that’s part of a sequence, and then clicking on the sequence banner at the top. Any help would be great! :)
(Perhaps this has been talked about before, and I’d be grateful for being pointed in the right direction if so—though when I quickly searched just now, I wasn’t able to find an answer.)
There’s a link to “library” in the menu on the left of the homepage.
It links to https://forum.effectivealtruism.org/library which has a list of sequences at the bottom.
Hi. I’m based in the UK, and run a sizeable reuse/regifting movement called Freegle (www.ilovefreegle.org).
After reading too much Peter Singer, I donated a kidney and am involved in the Give A Kidney charity, which promotes non-directed (aka “altruistic”) kidney donation.
None of this is as effective as cataract operations, of course.
Hi Edward, you seem like a cool guy! Welcome here!
Fwiw, looks like reciprocity.io (a Rationalist and EA dating site) has recently been upgraded. Thank you Buck for your work on this
Hello everyone, I am a new forum member.
I am 29, live in Geelong, Australia and discovered the EA community through reading LessWrong a few years ago. I have paid attention to it since then, as well as read some of the canonical books and set up a regular automated donation. With this considered, I have since realised I should be more proactive in reaching out and connecting with people. Especially as I am currently going through a transitional phase in my career.
Originally I completed a bachelor’s and master’s in exercise science and worked in the strength and conditioning industry from about age 21 to 27. After that, I worked in the mental health industry for 12 months and now work in government. This transition came about during COVID-19, after I took the plunge of learning to code. I loved the challenge, and after gaining some basic skills, I decided to re-enrol at university for an undergraduate degree in artificial intelligence.
From about 24 onwards I had fantasised about changing careers and working in a space that was more pressing and impactful (particularly AI alignment), but felt I had already chosen my path and had to stick with it. (How naive of me!) Learning to code made me feel that maybe I hadn’t missed my chance, and that the sooner I started making moves, the better. Although I still have a long way to go in getting my technical knowledge up to par, I am just taking it one small step at a time. Please let me know if you have any advice for someone at this stage of development on how to start getting involved, contributing, and building skills that will have a scalable impact.
I also keep a blog where I try to arrange and distil my thoughts about topics that interest me, such as self-development, rationality, and productivity. If you’re interested, you can find it at ThereforeThink.blog.
Nice to meet you all!
Hi all! My name’s Tom. I’m an A level student in the UK, currently studying Politics, Philosophy and Environmental Science. My main forays into the world of altruism include volunteering at Oxfam for a period and donating to the odd charity here or there. Admittedly though, Oxfam isn’t a very effectively targeted organisation, and the charities I have previously donated to have been more based on superficial measures such as my respect for people who endorse them or have been motivated by disaster responses.
The issue I’m most concerned about is Climate Change, which I notice doesn’t seem to be as big an issue for many members of the EA community. My A level subject choices were largely influenced by my aspirations to help tackle Climate Change, and ideally I’d like to move into an effective career in this area ASAP. However, some of the stuff I’ve read in the EA community, particularly some of the advice on 80000hours.org, has made me reconsider this plan, though I think it is something I’ll have to contemplate further.
I seem to remember that I initially found Effective Altruism through the reddit page after a Google search on some adjacent topic that I can no longer remember. I initially thought it seemed like a good idea that reflected lots of questions and ideas I’d had myself but hadn’t been able to productively apply thus far. However, it wasn’t until I recently watched a video by YouTuber Ali Abdaal, in which he discusses taking the Just Giving Pledge, that I started to dig deeper and read more about the community. Upon discovering EA I realised that I’d been on the cusp of discovering it several times, such as when I first heard about GiveWell (which I found out about through a sponsored Simon Clark video, as I recall) and another time when I read the synopsis of, and almost bought, the book Moral Uncertainty by William MacAskill, Krister Bykvist and Toby Ord.
More recently I’ve started a Sustainability group at my college to try to educate students on climate change and reduce the College’s emissions. However, as mentioned above, I am beginning to question the effectiveness of climate interventions such as these.
I have many questions and ideas I’d like to discuss relating to EA. The following is a list of a few things I’m thinking about right now, which I might write more detailed posts on:
Does 80000hours.org work with careers advisers in educational institutions?
In Doing Good Better, William MacAskill sets out the view that carbon offsets are more efficient than reducing one’s own footprint. However, what is the limit to this, and what are its drawbacks? If there are few, surely more people and governments would take this carbon sequestration approach.
If we should donate to the best causes, surely it doesn’t make sense to donate to more than one cause? However, isn’t there something to be said for supporting less effective causes so that would-be beneficiaries don’t feel neglected?
If anyone knows of any information/resources on these topics, then please send them my way!
Hello Tom,
welcome to the EA community. I also encountered EA-related topics countless times before joining (GiveWell, 80k, LessWrong, …). Nice to see you here. :)
Now to your questions.
I don’t know if 80k works with career advisers in educational institutions, but I know that they will answer your question quickly and in detail if you write to them directly:
https://80000hours.org/about/contact-us/
Doing Good Better was published in 2015. Carbon capture and sequestration is still future tech and does not resolve the problem of rising greenhouse gases.
Simon Clark did a video on this topic lately, condemning the tech:
Have a look into the IPCC report and look for yourself:
https://report.ipcc.ch/ar6wg3/pdf/IPCC_AR6_WGIII_SummaryForPolicymakers.pdf
Why not just donate to the best cause?
The best cause is specific to you. It is good practice to split your donations across different cause areas, since you have different interests and want to improve the world in different areas.
If everyone were a perfect robot donating only to the single best cause at the time, every donation would go to that cause, leaving all other causes neglected, which would in turn make them the best causes to fund.
You don’t have to donate strictly to one cause area, feel free to decide in which area you want to have an impact and then search where you could make the biggest impact.
Nice to see that you have read quite a bit of EA literature and that you are working together with your fellow students to make the world a better place. As the saying goes in Education for Sustainable Development: “think globally, act locally.”
Welcome Tom! Excellent subjects to dive into and further yourself! Stick at it!
Hello!
I’m Arjun Khemani, a student in high school. I have a blog and podcast that revolves around science, philosophy and the human condition.
Signing up for the Forum had long been overdue after hearing about all its incredible comments! But I’m glad to be here now.
Problems of interest to me include changing the way education is done and making humanity a multi-planetary species.
Looking forward to having interesting discussions on doing the best we can.
You can DM me on Twitter. (I love meeting new people!)
New member here. I teach American government and learned about effective altruism through The Scout Mindset and Julia Galef’s podcast.
Hi everyone! First time writing on the forum. I’m a 2021 college graduate living in St. Paul/Minneapolis, Minnesota, USA. I learned about EA through an introductory ethics course I took while completing my philosophy degree.
I’m still figuring out some early career strategies that can help me identify what would be a good fit for me while also adapting to post-grad life. I’ve taken the GWWC Pledge and volunteer for farmed animals. I’m interested in reducing the suffering of all animals (human and non-human), and I’m also learning more about existential risk and longtermism.
A topic I’m going to be writing about/working on is bodily autonomy. I’ve realized many of the issues that I care about connect back to bodily autonomy in some way or another and I want to explore the connections more through writing and research. I’m excited to continue learning, working, writing, and connecting with the larger EA community!
Hi everyone! I’m new to this forum. I’m a professor at ASU who studies cooperation. I’m also an author, a podcaster and I produce educational video content. I am excited to explore the connections between my research and EA. This morning I made a post about need-based transfers, one of the topics that I work on, and how those might fit within EA frameworks. I’m looking forward to interacting with you all and learning from the community.
Hello!
I posted this in a short form, but I guess it’s also good to post it here.
And for a bit of background, I came upon EA a couple of months ago and found it very meaningful. I attended my first meetup last week and will also start the Giving What We Can pledge in September.
Nice to meet you all!
Hi Fabien,
I used a method by Dr David Hanscom to cure chronic pain and it worked (spectacularly) for me and several other people. I would like to run a trial to see if I can get some solid data on whether something we don’t fully understand could nevertheless help millions of people with severe, long term pain.
No “boosterism”, I’m not going into this to “prove” it works but to find out.
If you’ve got free time, I would love to have you on board. I need to contact pain groups and influential people who can help, such as Scott Alexander, Julie Rehmeyer, Andreas Gobel, Isobel Whitcomb, Julia Wilde, Nick Whitaker etc, and basically form an online team to work together on this project, with someone like Andrew Gilman to advise on methodology.
I would then need to run the experiment and collate the results. The method is completely free, takes 5 minutes, and sounds completely wacky.
But, if you’re here, I assume you’re a Feynman fan- https://www.lesswrong.com/posts/W9rJv26sxs4g2B9bL/transcript-richard-feynman-on-why-questions
If we look at Ignaz Semmelweis https://en.wikipedia.org/wiki/Ignaz_Semmelweis we can see that not all ideas are treated equally.
If you’re at all interested by this, but have questions, please get in touch. If you’re (understandably) sceptical, please find someone you know with chronic pain and ask them if they’ve got 5 minutes to help you decide what path to go down. If it works for them, please tell me- also if it doesn’t.
I don’t want to imply that this must be a barrier to action, but how much time have you spent digging in to questions relevant to cause prioritization? Your priorities might change as you investigate more.
Here are a couple flowcharts—if you haven’t engaged with a particular question before, like really grappled with whether animals have moral status, you might find your priorities change as you think through these considerations.
https://forum.effectivealtruism.org/posts/TCtbuGC3yBisToXxZ/a-guided-cause-prioritisation-flowchart
http://globalprioritiesproject.org/2015/09/flowhart/
Thanks, I have started to dig into the causes mainly through listening to podcasts, and it really shifted my perspective on many causes; it actually led me to that post. These flowcharts are new to me, though, so I’ll dive in. Thanks!
“Hello World!”
My name’s Fazle.
I’m an 11th grade (2nd year senior high) student in Indonesia.
I first came across EA from the book ‘Impact’ by Christen Brandt and Tammy Tibbetts.
I bought this book alongside 2 other books, ‘Immune’ & ‘Kingdom of Speech’
I forget the specific reason why I chose Impact & Kingdom of Speech out of all the available books I could choose, but one thing is for sure: it was ‘to broaden my horizons.’
Immune, because it’s Kurzgesagt’s.
At the time, when ‘Impact’ mentioned Effective Altruism, I had no idea what it was.
Is this some sort of system? A method to do things? etc. I didn’t get it at all.
Then, around 2 days later, Kurzgesagt uploaded their most recent video, at the end of which they mentioned that EA is a global movement. I looked up EA, and knowing that I could be a part of this change-making community, here I am!
I’m interested in reshaping the education system. This outdated system… If only this system were as updated as our technological advancement… Wouldn’t it make it easier to have a better world?
Oh well. I might be inexperienced and unable to help much. But my road ahead is still pretty long… damn, I can still barely see my starting point.
I’m happy that I could be a part of this community, and I wish to learn more through all you guys!
:)
Love the passion Fazle. Welcome brother!
Welcome Fazle,
a good way to learn more about EA is to participate in the Intro Fellowship. Take a look here:
https://www.effectivealtruism.org/virtual-programs/introductory-program
I watched the Kurzgesagt video too and was so hyped about whether it would bring new people to the platform. Nice to see you here. I hope you can get the information you need here and learn something. :)
Hi Fazle!
I think local groups are a great way to engage with EA. I’ve met a few EAs from Indonesia, but I’m not sure about the state of local groups there. Maybe you can contact Bradley and ask if there are local or online meetups in your area :)
Welcome!
[FOUNDING ENGINEER JOB OPPORTUNITY]
Dear all,
My name is Drew Schneider and I am the co-founder and CPO at Chariot. Chariot is a payment network for the $420B+ in charitable donations. We build the financial infrastructure that connects nonprofits to complex charitable assets. Chariot is a mission-driven organization aiming to get more dollars into the hands of nonprofits around the world.
We are currently backed by Y Combinator (backed Airbnb, Stripe, Instacart, Coinbase), Spark Capital (backed Twitter, Snapchat, Plaid, Slack, Oculus), SV Angel (backed Google, Facebook, Stripe, Square, Dropbox), and top tier angel investors like Mike Massaro, Ofek Lavian, and Angela Duckworth.
We are currently hiring a founding engineer to join our team. If you or anyone you know is interested please check out our job description here.
Every line of code you push will lead to millions more dollars going to nonprofits :)
Best,
Drew Schneider
P.S. If you have any questions feel free to email me at drew@givechariot.com
Hi Drew! You might want to post in the Who’s hiring thread for more visibility
Hi 👋 my name is Sarah, I’m 42 and I live in Calgary Canada.
I first heard about ‘doing good better’ from Sam Bankman-Fried; it was a very new and interesting topic to me and finally got me into EA.
Since I was 22, I’ve been donating a small portion of my income through sponsorship. The first family I sponsored was a mom with two very young kids who had lost their dad in an accident. Today I sponsor 4 families, who were introduced to me through a trusted friend and all live in a Middle Eastern country. I believe in the world as a whole, and I know a few dollars really can make a difference in someone else’s life.
My goal is to help as many people as I can, either through sponsorship or a one-time donation, especially families who are dealing with illness. I would also like to volunteer but haven’t figured out where yet!
I’m not donating because it feels good; I believe I am given to give, and through donation I’m doing what I’m supposed to do.
I’m excited to be here, to learn from others, and to see opportunities where I might be able to help. If there is an EA community in Calgary, I would love to meet them.
Hello Sarah,
welcome, and thank you for your donations! Even if donating makes you feel good and you do it because of that good feeling, it would be completely fine to feel that way. If not, also great. :)
I am not part of the EA community in Canada but they seem to have seven local groups which you can join if you want.
https://effectivealtruismcanada.com/groups/
Feel free to explore the forum and take a look into the handbook.
https://forum.effectivealtruism.org/handbook
Hey everybody!
One of my friends is interested in learning more about EA and I am trying to find good resources to recommend to her. The thing is, her English is only so-so; her preferred language is Spanish. I found a couple websites that give brief overviews of some EA ideas, but I am having a hard time finding comprehensive EA texts in Spanish.
Does anyone know of any EA resources in Spanish that could be helpful?
Thanks!
Maybe you can ask some of the local groups https://www.altruismoeficaz.org/grupos
Ayuda Efectiva (https://ayudaefectiva.org/la-ayuda-efectiva) seems to have a lot of Spanish-language content, and their blog seems pretty active https://ayudaefectiva.org/blog , but I’m not sure if it counts as “comprehensive EA texts”
I hadn’t heard about Ayuda Efectiva, but it looks like a great introductory resource and I’ll definitely send it to her. Reaching out to those groups might also be a good idea. I appreciate the help!
Hello All,
I have been giving through GiveWell to a charity for years (on and off) since hearing William MacAskill on Sam Harris’s podcast in 2016.
I would absolutely love to get any feedback from the EA community on a movement/platform called Value Life that I would like to conceptualise, or advice on where I could or should post about it.
A high-level overview is provided in my first Substack newsletter here (which contains a short (<4min) video about what the Value Life community and platform is, or is striving to be).
https://valuelife.substack.com/p/value-life-newsletter-25-aug-2022?utm_source=email
A direct link to YouTube is as follows:
The Value Life presentation was actually completed a few months before Mr MacAskill made his first appearance on Sam’s podcast (in 2016), and at the time I did not put two and two together and join this community, but now I am fully motivated, with content to share and promote.
I am actively looking for a job in the EA movement if possible too, but ideally, I would like any feedback to see if the idea for the Value Life platform has any legs. (I believe the movement with respect to the mentioned “Deep Trust” should; that is the primary purpose of the YouTube mini-series, with a few more episodes in the pipeline to wrap up in the coming weeks.)
If you want more information about me:
My mission statement in a nutshell, is as follows:
“To be actively involved with an ethical and moral organization and community, championing the causes to further humanity and make the world a safer, cleaner and happier place for children forevermore.”
I am a Civil Engineer by training, having earned my Professional Engineer (PEng) designation in Vancouver, BC, Canada (and I have my black belt in Taekwondo). I mention this because I am honour-bound (having sworn oaths) to serve society, and I believe this is a strong foundation for furthering humanity.
I am also a certified Professional Project Manager (PMP). I have worked on multi-billion dollar projects and managed multi-million dollar civil infrastructure and big tech projects simultaneously.
I invite you to review my work experience and recommendations on my Linkedin profile from previous clients, employers and colleagues.
https://www.linkedin.com/in/ross-mcmath-b-eng-hons-p-eng-pmp/
I would welcome any request to connect.
I realise that is a lot for a short introduction, but any response, advice or direction would be very much appreciated—I would love to know what you think!
Thank you very much in advance for your time.
I Value Life—Do you? ;-)
Ross McMath
Hello Ross,
welcome to the EA community. Thank you for your short introduction, and nice to hear that you changed your donation pattern in 2016 through the use of GiveWell.
I practice kickboxing, I figure I could learn something from you already. :D
I am currently struggling to work out what the goal of your post is. Maybe split it into parts next time: one for your biography, one for getting feedback on Value Life, and one where you present yourself for an EA-related job.
Is Value Life this foundation?
https://noitidart.github.io/value-life-foundation-donate/ (VLF)
If you are looking for a job, take a look at 80k.
https://80000hours.org/
80k has a job board, and they can give you career advice if you schedule a 1:1 with them.
https://80000hours.org/job-board/
I recommend you read the handbook:
https://forum.effectivealtruism.org/handbook
https://80000hours.org/key-ideas/
And read an EA related book:
https://www.effectivealtruism.org/resources/books
https://80000hours.org/book-giveaway/ (get one for free here)
If you have any questions feel free to ask.
With kind regards
Felix Wolf
Dear all,
My name is Arturo Macias. I am a 45-year-old economist working at Banco de España, the Spanish Central Bank. I have recently finished my Ph.D. (see my ORCID account for my published papers: https://orcid.org/0000-0002-1623-0957) and consequently have recovered a substantial amount of free time.
While I have a great deal of sympathy for the whole Effective Altruism movement, my main interest is related to institutional design and economic stabilization. In my view, among the main existential risk bottlenecks of this Dangerous Century, a critical one is institutional stagnation. E.O. Wilson famously said: “The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology”. Regarding the Paleolithic emotions, I cannot advance any solution (this is for geneticists), and regarding the god-like technology, after Aug 6th 1945 nobody can.
Regarding the medieval institutions, I think I can make some modest contributions, and that is why I am here.
Kind Regards,
Arturo
Welcome!
Welcome Arturo.
Changing our institutions to fit the existential risks we face is a big part of EA. I hope you find a group where you can participate, learn something new, and change the world to be a bit better than before. :)
Hey everyone! My name is Stew, and I’m a 22 year old recent college graduate. I first heard of effective altruism through the media I consume (podcasts, written media, etc.) and became further interested in the topic after taking a philosophy class at my university. I’ve lurked on this forum before and browsed sites such as 80,000 hours, but the recent momentum EA has gained in the mainstream has prompted me to create an account. Looking forward to interacting with all of y’all! Cheers.
Welcome!
Hey, I’m Jeffrey Arana. I was raised in and still reside in Hudson County, NJ. I live right across from Manhattan, so you might have seen me at EA NYC events.
I first heard about EA through the Peter Singer TED Talk and then went down that rabbit hole. I kind of fell off and would donate 10% of my paychecks here and there until Sam Harris interviewed Toby Ord a year ago and got me thinking about having a career in EA.
Some personal non-EA things about me: I love to play and watch football (I’m sorry, soccer), penny board around NYC and my neighborhood, try cooking some new vegan dishes, and read books.
I’m an undergraduate in economics at Rutgers University-Newark but will be transferring to the New Brunswick campus in Spring 2023 and hopefully help restart the EA chapter there.
Some of the career options I’m mostly interested in are research into global priorities, global health, and poverty. Policy and governance domestically and internationally are also intriguing to me too. Earning to give would be something interesting to try while an undergraduate.
Although I don’t see myself dedicating my career to AI or factory farming, I consider them both top priorities as well (I’m vegan), and I’m looking forward to learning about AI safety!
Oh hey! Another Rutgers guy! I’m still working in Jersey so let me know if I can help out in some small way with your EA Chapter.
Hi everyone!
About me
Experience with EA
Question to the community
Hi Sibo!
You might be interested in talking with https://effectivethesis.org/ and applying for career coaching at https://80000hours.org/
Thank you for the tips! I’ll check it out :)
Hi everyone.
I’m Riccardo, and I’m working on a mobile game with a group of activists.
I decided to look into the EA forums again after listening to an interview with Will MacAskill on Ali Abdaal’s YouTube channel, if you haven’t seen it, it’s one of the best interviews I have seen with Will.
I checked out the forums years ago, so I thought I’d give it another try and look a bit through the most popular posts and see what people here are up to.
Looking forward to seeing how things have developed since the last time.
Here is the video Riccardo mentioned:
The Most Important Book I’ve Ever Read | Ali Abdaal (00:12:10)
Ali describes himself as a former doctor turned entrepreneur and centers his videos on making money.
@Riccardo, welcome back to the forum. I hope you found something interesting to read.
“Here is the video Riccardo mentioned:”
I did not know about the second channel. Thanks for pointing it out. Riccardo said he watched an interview; my mistake.
This YouTuber seems to be popular; I saw some folks mentioning him here. Maybe I will give him a try.
Hello all! I’m Lauren. I’m a startup founder working on an app to make it easy to measure impact. We ultimately want everyone who is putting effort into creating good to be able to measure it, prove it, improve it, share it, and scale it… We want to make Measurement/Evaluation (M/E) woes a thing of the past so that we can collectively put an end to ineffective and harmful effort. #shouldbeeasyright
Would love to connect with anyone here who also works in M/E or needs help with their impact practice/data collection/measurement/reporting. Very excited to have come across this community!
-Lauren Schwaar and the Fathom Performance team
Are any of the more philosophically inclined EAs into the pragmatic school? There’s obviously a (very!) strong consequentialist strain here, though I could see engaging with pragmatists like Dewey, Rorty and others being super helpful to this movement.
Hello everyone.
I have been following the 80000hours podcast and Peter Singers work for years. So naturally I had considered creating an account here as well, but what ultimately pushed me over the edge was that I wanted to point out a typo xD
My philosophy can mostly be described as utilitarian to the extent that this doesn’t interfere with truthfulness, which fits well with EA to a point (I also value certain virtues and principles, but that shouldn’t be too relevant). The main reason why I decided against joining this forum for the longest time is its policy regarding information hazards.
The EA cause I care most about is animal rights.
I am not sure how active I’ll be here if at all. But just in case I wanted to introduce myself anyways. Nice to meet you all.
Welcome to the EA forum Aithir. :)
I want to learn more about truthfulness, but a quick Google search went nowhere. Can you please recommend something to learn more about this principle and why you think it is more important than the concerns about spreading potentially harmful information?
I am afraid I don’t have a good answer to that. I just find it insufferable when people try to suppress statements that the speaker believes to be true, for any reason. My thought process is just that the consequences of a true statement don’t change the fact that it is true, and so you get to say that it is true. This just feels very natural to me. To psychoanalyze myself a bit, I score relatively high (33/50, indicating significant autistic traits) on the Autism-Spectrum Quotient Test and exceptionally low (0th percentile) on the Compassion aspect of OCEAN Agreeableness.
I think the degree to which you can argue yourself into a different ethical position is limited by your psychological predispositions. People more empathetic than me ironically might just not be able to relate to my position.
Hi everyone,
In this recent critique of EA, Erik Hoel claims that EA is sympathetic towards letting AGI develop because of the potential for billions of happy AIs (~35 mins). He claims that this influences EA funding to go more towards alignment rather than trying to prevent/delay AGI (such as through regulation).
Is this true, or is it a misrepresentation of why EA funding goes towards alignment? For example, perhaps it is because EAs think AGI is inevitable or it is too difficult to delay/prevent?
Thanks very much!
Lucas
Interesting, thanks both!
I can’t speak for the donors, but only trying to prevent AGI doesn’t seem like a good plan. We don’t know what’s required for AGI. It might be easy, so robustly preventing it would likely have a lot of collateral damage (to narrow AI and computing in general). Doing some alignment research is nowhere near as costly, and aligned AI could be useful.
While I am also worried by Will MacAskill’s view as cited by Erik Hoel in the podcast, I think that Erik Hoel does not really give evidence for his claim that “this influences EA funding to go more towards alignment rather than trying to prevent/delay AGI (such as through regulation)”.
Hi everyone!
I am a budding product designer for blockchain/web 3.0 projects who just recently discovered the Effective Altruism community.
Having done some work with Decentralized Autonomous Organizations (DAOs), it has become clear to me that they could become a vital part of the future of EA, because they can enhance the transparency and governance of effective altruism communities and projects.
They can also help to scale projects and fundraising campaigns.
As a member here I hope to contribute content which aims to educate community members who are new to cryptocurrency and blockchain and also expand upon my thoughts on how blockchain technology can make EA even more effective and why we must pay attention to DAOs and Web3.
I’m seeing that two posts are now starred. If I recall correctly, LW mods used to leave a comment explaining why when they did that, which I thought was a neat thing to do. I’d also appreciate a way to turn it off.
Hi Nuño, thanks for this comment. I’ll be trying to write comments going forward (I wrote one for one of the posts I curated).
Re: turning it off, I’ll pass that on to the rest of the Forum team—thanks for the feedback!
Cheers
Part of it is that they, like the pinned posts, just stay there, rather than disappearing for a while once I’ve seen them.
Another part is that you now have pinned posts, curated posts, and recommendations, instead of just one section.
I agree that this is not ideal. We’re hoping to clean this up and improve the structure.
Hello, I’m a French citizen living in Bourges, France. I hold a bachelor’s degree in wildlife conservation. I learned about EA through my personal curiosity about artificial intelligence, and I was also wondering how to help biodiversity over the long term. I don’t work because of health problems, but that won’t stop me from doing my part as a volunteer.
Hi all,
I am new to the forum, but not new to the ideas behind EA. I read Will’s book back in 2015 when I met him in the US, and have been inspired to do the most good with my career. I have lurked on the forum for a while, but would like to meet more people in the community, so I’m finally introducing myself. :)
I currently work on digital at the UNDP, but also potentially open to new opportunities.
I am curious to learn more about organisations working on longtermism policy research and advisory, if there are any?
I would love to connect with others working in the space especially if you have ideas you would like to bounce around. Here’s my Linkedin! https://www.linkedin.com/in/greenjcb/
Cheers
Jamie
Hi Jamie
For longtermism, see UNESCO’s Declaration of the Responsibilities of Current Generations towards Future Generations.
Also, the Welsh Assembly in the UK has a working Future Generations Commissioner: www.futuregenerations.wales
Best Regards
Trev Prew
trevorprew.blogspot.com
Hi folks,
newcomer here.
I am looking for advice on what to do when I have (finally, at long last) finished my master’s degree in bioprocess engineering at a major university in southern Germany (KIT, Karlsruhe).
There is a lot of cool research going on here, and staying in my cozy environment and working on a doctorate seems tempting. But staying in academia until I am forty (which isn’t too far away anymore) could hamper my ability to find a job later.
I could most likely find a well-paying job in a major Pharma company, but working as a small cog in a big company without a real bigger-picture cause could stifle my motivation.
There sure is a lot of cool stuff to do in biotech, but is there a company/institution (preferably in the Karlsruhe/Heidelberg/Stuttgart area) that somewhat aligns with EA principles?
Also, I’d love it if you shared your favorite biotech articles that don’t revolve around cultivated meat.
Hello Schwabilissimus,
welcome to EA, and congrats on finishing your master’s degree!
You can connect with the local community in Germany here:
https://forum.effectivealtruism.org/community
There are multiple ways to get career advice in EA, here are some examples:
https://80000hours.org/speak-with-us/
https://www.effektiveraltruismus.de/bibliothek/effektive-berufswahl
You can learn from written text, listen to podcasts about career building, or schedule a 1:1 with someone in the EA sphere.
I hope you have fun in the forum and find new insights for your journey.
Hi all. Vastav here. I live in India and am pursuing my undergraduate studies in Economics and Computer Science. I am primarily interested in issues around urban economics, judicial functioning, and the quality of higher education in developing countries.
I found EA while looking for interesting topics for my undergraduate dissertation. The ITN framework really clicked for me, and helped me find a suitable passion and research agenda for myself.
For now, I hope to pursue a PhD in economics, contributing to research and solutions in areas that pique my interest. At the same time, I am creating a research-to-action pipeline by conducting small-scale pilots and case studies to refine my toolkit of solutions. I look forward to interacting with the community on the ways to create and sustain such institutions, and engage with policymakers and decision makers in developing countries.
Is it possible to subscribe to a sequence?
If not, I’m suggesting this feature :)
Hello! I’m Kyle, based in Pittsburgh, U.S.A. I am an interdisciplinary researcher and communications professional, also with significant experience in United States and United Nations policy. I am only a few months old to the EA community. However, I have been working for most of my career so far in EA-adjacent efforts in climate change. I currently work for myself in that field. But, in conjunction with career counseling with 80,000 Hours and others, I am job hunting for an explicitly EA position, ideally in AI safety/alignment. I would love to connect with more people in this community! You can find my LinkedIn, professional website, and other contact info in my profile.
Hi everyone, there are animal rights groups working to introduce new ballot measures in the US in California and Oregon that could help animals in a significant way. In California, we are working to introduce a ballot initiative that will probably be a ban on new animal factory farms in all of California. We are having several possibilities tested by Faunalytics. We are also preparing to first pass a more radical measure banning animal factory farms in a semi-rural California county. This is a beta test for the California state ballot initiative. In Oregon, a group is already collecting signatures to introduce a ballot measure that would ban killing farmed animals in all of Oregon. We need more help with making sure we collect enough signatures from voters in a limited amount of time to get these initiatives on ballots so voters can vote on them later. If you’d like to help get these ballot initiatives introduced, please text me at +1 (650) 863-1550 or message me on WhatsApp or Signal. Thank you! -Rasa Petrauskaite
As Ukrainian woman who was forced to leave the country with two kids, I wonder how I can help Ukrainians to donate more efficiently. Ukrainians become more and more conscious and altruistic nation, but I am sure that most volunteers and donors are not efficient enough. I appreciate any information links on this cause. Thank you.
I’m far from an expert on the subject, but Giving What We Can seems good at researching that.
Anyone interested in an EA unconference on Gather.town during EA Global? Seems like an easy way to open up the event to those who weren’t selected by the anointed and/or can’t make it logistically. https://forum.effectivealtruism.org/posts/dsCTSCbfHWxmAr2ZT/open-ea-global
I’m super new to EA and I think something like that would be very beneficial for me.
Has EA engaged with the geopolitical risks of AI/AGI? For example, regardless of whether AGI is developed, there are different risks if advanced AI is led and controlled by the CCP versus OpenAI.
Hi everyone! I’m trying to learn more about Land Use Reform (YIMBY) Advocacy as a cause area. Are there any cost-effectiveness estimates of the work on this?
Hi Emre!
You might be interested in the Land Use Reform forum topic, and the posts at the bottom of that page
Thank you Lorenzo!
I saw an estimate, I believe from The Gated City, that suboptimal land use policies in the US cost in the neighborhood of ~2% of GDP. So very high!
Hi, I’m Sharmake Farah; I’ve been on the Forum since February.
I got to EA through a free book called The Precipice, which introduced genuinely novel concepts to me like existential risks, longtermism, and our potentially very long future ahead. While I’ve gotten used to the scales involved, the future is still large enough to matter.
I made my first post today, focusing on AI ethics and alignment fields fighting, and why that’s bad. Here it is: https://forum.effectivealtruism.org/posts/AARnvz99hiEytnA9k/there-are-two-factions-working-to-prevent-ai-dangers-here-s
The biggest impact on me so far has been reframing what matters most in a healthy way. I’ve become much less worried about politics and gained a broader perspective than my own country’s near-term politics.
Have any prominent EA philosophers engaged with critiques of consequentialism? Are there canonical examples of those that people might point to? I don’t really have a desire to write a several-thousand-word essay, but I’d be curious what more philosophically inclined EAs think of works like Beyond Consequentialism by my old prof Paul Hurley. ( https://www.amazon.com/Beyond-Consequentialism-Paul-Hurley/dp/0199698430 )
You might be interested in this thing from within the EA community, which I think might be one of the deepest possible cuts against consequentialism: Logical Decision Theory (or any solution to Newcomb’s problem). But AFAIK, no one’s written about this angle on it, because it’s pretty arguable that it’s just advocating for a different kind of consequentialism.
But I don’t totally buy those arguments: LDT advocates doing things that will have bad outcomes, when being the kind of person (or the kind of decision theory) who would do those things gets it better outcomes on average across all possible worlds. In human-level application, this ends up looking a bit more like advanced virtue ethics than consequentialism, to me. On the other hand, I’ve seen it argued that regular consequentialism ends up looking like virtue ethics too.
@Lizka, a cool forum feature would be a Google Doc/.docx → Forum Post converter. There are free .docx → Markdown converters online, but they’re inconvenient for footnotes and some of the other EAF features.
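In the meantime, for anyone who wants a stopgap: pandoc handles .docx → Markdown conversion and keeps footnotes. A minimal sketch, assuming pandoc is installed and on your PATH, and using placeholder filenames:

```python
# Rough sketch: convert a Google Doc exported as .docx into Markdown while
# keeping footnotes in pandoc's [^1] style. Assumes pandoc is installed and
# on the PATH; "draft.docx" and "draft.md" are placeholder filenames.
import subprocess

subprocess.run(
    ["pandoc", "draft.docx", "-f", "docx", "-t", "markdown", "-o", "draft.md"],
    check=True,  # raise an error if pandoc fails
)
```

You’d still need to redo any EAF-specific formatting by hand, so a native converter would definitely be nicer.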
From what I understand, you can now copy-paste a google doc with footnotes using “Publish to web”.
Ah! I was ignorant of this. Thank you!
I’m glad people are finding the new feature useful (I’ve fought with Forum post footnotes before and was really excited about the development). And thanks @Lorenzo for sharing the link!
Hey all! 😊
My intuition leads me to think that focusing on improving life quality and eradicating poverty in all forms (not just extreme) would be highly impactful. This is because lifting more people out of poverty means that they have more free time to pursue self actualization, and therefore more likely to pursue EA and an impactful career. It’s essentially the idea behind egoistic altruism (cute video below).
Essentially, I feel that egoistic altruism would be a highly effective purpose to orient my career around. However, my search results haven’t turned up a lot of existing research into whether poverty alleviation (as a means to leveraging the number of humans working on important problems) is /actually/ an effective cause to pursue.
If you have any thoughts, articles or even just intuitions about this yourself, I’d love to hear them. Bonus points if you have insights on whether technology is an effective means to reduce poverty. Please feel free to interrogate any presumptions I’m making.
Thanks! 💜
Jack
Welcome! And thanks for posting this.
I agree that it’s valuable to recognize the downstream impact that making the world a better place can have on your own livelihood. However, the feedback cycle is likely to be decades or longer, and might be hard to use as motivation on a daily basis. In my experience it’s valuable to have more immediate motivators, such as the welfare you might be creating directly with a job focused on making the world a better place, the team you might be working with, or a passion for the technology or product you are working on itself.
Technology is surely a mechanism for reducing poverty (think of the industrial revolution, green revolution, vaccines and drugs). However, a large chunk of our technological output also goes towards marginal quality of life improvements for the already-rich.
Hi everyone!
I’m director of learning for a French ed tech startup focused on evidence-based learning.
I believe I found out about effective altruism proper from Vox around 2015. I had also gone down the existential risk rabbit hole around 2013–2014 while a student, but did not make the connection between those issues at the time.
I recall EA arguments influencing my career decision in 2016, as I had just left my previous company and was considering many options—to this day, I am thankful for being introduced to a framework that helped me reason through my choice at the right time.
I had never engaged with the intellectual community up until now. What ultimately brought me back to EA was my news and social media consumption. The zeitgeist of the past five or so years appears to have been largely driven by negative passions—anger, terror and despair—passions I myself have indulged in far too often. Rather than rein in or redirect those energies towards the common good, many of the most influential (often highly educated) people in our societies ended up amplifying them.
With every fleeting sentiment turned into a wild conceptual exaggeration, I started wondering—is there anything better on the menu? Reading articles, listening to podcasts such as 80K Hours, I have realized the EA community can help turn the tide.
As much as I agree with the basic principles of EA, I have found the most hope in its general disposition: purposeful, constructive, and prosocial; based in reality, curious, and open to new evidence. To strive to be of use; to engage with the world as it is to that end; whatever the affliction, this must be what the cure looks like.
General areas of interest include:
Scaling effective instruction—My main area of professional training. While there is much warranted skepticism within the EA community when it comes to spending resources on education, there is also growing evidence that some ways of teaching are more effective than others, and that scaling them works. The potential societal returns of such an approach (especially in lower-income countries) may therefore be underestimated. One often neglected aspect of this debate I have observed in my work is that the space not taken up by effective instruction tends to create a vacuum for methods that are both inherently appealing to funders and counterproductive (anything from a focus on learning styles to pure constructivism, where infants are being expected to “create” knowledge when it would be much faster to just share it with them).
Improving decision making—With a particular focus on key decision makers. I see much promise in improving elite education (turning credentialism into a quest for usefulness), optimizing democratic incentives, and stage-directing “the room where it happens” (from EU summits to White House “war rooms”, it is striking how many influential decisions are made in bad meetings, with poor structure and poor access to relevant data).
Cause exploration—Not only on which areas to pursue but also on how to pursue them. For instance, we are far from being able to tell whether a given course of action is likely to increase or decrease the chance of great power conflict.
Hope I can help in some way!
The prospect of Russia’s grotesque invasion of Ukraine leading to the use of beyond-conventional weapons has produced anxiety in the past. I don’t think this anxiety is justified.
Recently, there have been media reports suggesting the possibility of this escalation again, after sham claims of annexing Ukrainian territory.
Despite these media reports, the risk of this escalation is low. The report here, from the Institute for the Study of War (ISW), points this out with detailed considerations.
https://www.understandingwar.org/backgrounder/special-report-assessing-putin%E2%80%99s-implicit-nuclear-threats-after-annexation
As a brand new newcomer to EA, only a day or two, my fairly ignorant first impression is to wonder whether EA culture is too intellectual.
It could be argued that the most successful effective altruism effort in the history of Western civilization has been Christianity. (FYI, I’m not Christian.) As an example, in the United States, Catholic Charities is the second-leading provider of services to the needy, topped only by the federal government. And that’s not counting the very many Protestant charity projects.
Certainly there is an intellectual component to Christianity, but it’s hard to imagine that Christianity would have come to dominate the Western world if there weren’t a much more accessible way to engage with it as well.
Honestly, my fairly ignorant first impression, subject to revision at any time, is that EA will have appeal for well intended university based intellectuals, and is likely to be mostly ignored by the wider public. Is this a problem? I don’t know. But what the example of Catholic Charities might teach us is that it’s simple messages which most engage the broad public, and it’s the broad public which has most of the money, and thus most of the power to do good.
Ideally EA will, like Christianity, be presented in a great variety of forms so as to connect with a great variety of well intended people. Just as with Christianity, there can be EA scholars, and simpler EA folk who just want to be nice as well.
Hi and welcome to the community!
I have a few rambly thoughts :D
As far as I know, focusing on exposing potential high-achievers to EA has been both a consequence of EA history (EA comes from academia), and explicit strategy (high-potential people would probably have more impact, so it makes sense to focus on them). This means that the community inherits a lot of norms from academia.
But I feel like there’s a lot of room for a broader version of EA that’s palatable to broader society. Giving What We Can is best positioned to do this, but I think the main points to spread outside the core community are roughly:
If you donate, be thoughtful about where you donate, some orgs have much more impact than others.
You should care a little bit about all of humanity.
I think Christian charities seem really trustworthy, since everyone knows Christianity is serious about compassion.
Thank you for the welcome, and your rambly thoughts. :-)
Yeah, that makes sense. Rome wasn’t built in a day, as the saying goes; EA had to start somewhere, and it’s good that it did.
Sadly, some ancient institutions such as the Catholic Church seem to have become expert at demolishing their own moral authority, so it’s good that other means of reaching towards the same charitable goals are being established. I like that EA appears to be (a first impression) neither religious nor secular, but instead just charitable.
All that said, it seems it would be wise for EA activists to keep in mind that the Catholic Church is, in spite of all its troubles, still truly massive, global, and very well established in the third world, where so much of the need exists. If a partnership is not already being explored, there might be an opportunity there.
People with Twitter accounts might want to vote on my poll asking whether my EA forum comments should be more or less critical.
Hi friends!
I’m Nita, and I’m passionate about limiting sentient suffering through disease prevention. I was initially drawn to effective altruism because I consider it a natural extension of minimalism and intentional living.
I have been sick for half my life with various conditions including chronic cystitis, dysautonomia, inflammatory bowel disease, and Ehlers-Danlos syndrome.
My goal is to translate my experiences into educational resources in order to effect positive change and encourage patient advocacy. Towards this end, I host a health podcast aimed at advancing scientific literacy around topics like longevity and metabolism.
Looking forward to connecting with all of you!
Can anyone point me towards resources to help with finding an actually competent psychiatrist for ADHD and depression? It’d be even better if anyone has experience with and can recommend a Texas-based psychiatrist.
Hi everyone! I have a question for moral philosophers here! People in animal advocacy sometimes debate about some type of “contrary to duty imperatives” in which an advocate makes a demand that might “countenance” some constraint violation. For example: “Please do not consume any animal products, but if you are going to eat some at all, at least leave fish, chicken and eggs off your plates” or “Stop using cage-eggs”. I found it surprisingly difficult to find academic discussion around the morality of uttering these sentences, so if you are aware of anything useful, please share!
Hello, although I am not a philosopher, I know and read philosophy, and I hope I can answer your question. (In addition, I am new to the forum—this is my first response—so I apologize if I deviated from the norms or if I misunderstood the conversation).
I believe that the logic behind such sentences is a utilitarian view (the philosophy of Jeremy Bentham): “a little damage” is better than “complete damage”. However, even with utilitarian considerations we must consider the possible consequences of specific actions. For example: is it possible that, following a statement like the one you quoted, a third party will see it and feel legitimized to cause “little damage” instead of “zero damage”? Or is it possible that the listener will thus be better able to quiet their conscience, and this will delay their future transition to “zero harm”? On the other hand, it is possible that without sentences like these, the listener would shut down and not be willing to think about the subject at all, and that thanks to a gradual transition, they will be able to make a big change that would be difficult to make all at once.
These considerations move the discussion from philosophy to psychology and cognition. I believe that scientific articles on cognitive dissonance, intuitive morality, and autosuggestion would be useful here.
Is there a good resource on moral views? I’m reading Moral Uncertainty at the moment, and Will touches on utilitarianism, consequentialism, etc. I thought it would be helpful to find a resource that lists the most mainstream views and maybe helps people navigate them, understand them better, and assign credence to them. I could not find such a website.
If anyone can point me to a good resource that can help me better understand these moral views, that would be awesome. Merci!
You may want to check out the “Introductions, Handbooks, Collections” section of the University of Oxford’s ethics reading list. I personally prefer Shelly Kagan’s Normative Ethics.
Got the book! Thank you, Emre!
Happy to help!
Reflecting a bit on the EA call for criticisms, one obvious challenge to the movement is cities like San Francisco. That’s a city with tons of EA aligned folks and yet spending a lot of philanthropic dollars to alleviate hunger in Africa or address AGI risk in the future while there’s massive human immiseration all around the city just seems extremely dystopian. Here’s a good Atlantic article on the issues in SF for those that are curious: https://www.theatlantic.com/ideas/archive/2022/06/how-san-francisco-became-failed-city/661199/
Say more about why it feels dystopian? Totally agree that the situation in SF is bad, but it feels good to me that EA-aligned people see how much worse it is in other places, and don’t prioritize what’s near to them over where the most suffering is.
Would it be altruistic to walk past a person dying in the streets so you can close an extra deal, make an extra buck and send that to a far away place to help a few more people? That seems rather cold and calculating and not in a good way.
There definitely are people suffering to a high degree in SF btw. The homeless situation is quite dire.
It similarly doesn’t seem altruistic to help people on the streets close to you and ignore poor people in a different country just because they are far away.
No situation short of “help everybody” really sounds appealing. Short of solving all problems in the world, prioritizing problems is necessary. What’s wrong with prioritizing problems based on where the most suffering is?
I’m curious, why do you think EA has not taken off into the mainstream? Rationally it should be the #1 charitable cause in the world, but it’s not. Why do you think this is?
In a comment above I wondered whether EA culture is too intellectual to have broad appeal.
Are we already past the precipice? Would we know if that was the case? I really wonder. Things seem to be spiraling. Many of today’s conditions mirror the malaise and misery of the 1930s, but the capacity for destruction is orders of magnitude higher. I wonder if we need to act with much greater urgency and boldness to be more proactive about XR…
What might your plan be?
Has anyone fully dug into the AI credulity risk? By that I mean the XR scenario where a technologist A) believes they’ve invented AGI and B) acts on that belief in Ozymandias fashion. Note that A does not require actually inventing AGI; it just requires that, like the recent Google employee, they believe it to be so. And note that B does not require actual evil intent; often, in fact, evil is done out of a desire to do good. I believe this is a nontrivial risk, compounded by the availability of powerful technologies: the potential future upside of AGI could blind even well-intentioned people into doing horrific deeds in the service of the brave new world. The logic is as simple as it is elegant and horrifying: what is a finite number of present human lives against an infinity of lifetimes in the beautiful garden of a well-aligned AGI? Would not the monster be the person who doesn’t do what’s necessary to ensure that future? What wouldn’t be justified to create such a utopia?
Anyone interested in real estate investing want to potentially be my mentor?
I was thinking we would eventually talk on the phone maybe twice a month.
I am a young electrical engineer, childfree, in Wisconsin. I’m plant based at home; try to eat vegan when going out to eat, but usually end up eating vegetarian. I used to donate 1% in college (per GiveWell’s student pledge), but since then I’m more focused on end of life donation (Earning to Give). Or maybe I should set up a trust or something.
I am analytical and resourceful. I am refining my cash-on-cash return and cash flow calculations for real estate investing (REI); a sketch of the math I mean is below. I own a duplex, and I should sell it when I move to a bigger city (probably in 2023). For prospective properties, I have analyzed some deals in Milwaukee and Chicago, though I should find a web crawler/scraper on GitHub to automatically extract this information into a spreadsheet.
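In case it’s useful to anyone reading along, here’s a minimal sketch of the cash-flow and cash-on-cash-return calculations. All the numbers are made-up placeholders, not figures from an actual deal:

```python
# Minimal sketch of cash-flow and cash-on-cash-return math for a rental property.
# All inputs below are hypothetical placeholders, not real deal figures.

def annual_cash_flow(gross_rent: float, operating_expenses: float, debt_service: float) -> float:
    """Annual pre-tax cash flow: rent minus operating expenses minus mortgage payments."""
    return gross_rent - operating_expenses - debt_service

def cash_on_cash_return(cash_flow: float, cash_invested: float) -> float:
    """Annual pre-tax cash flow divided by total cash invested
    (down payment, closing costs, initial repairs)."""
    return cash_flow / cash_invested

# Hypothetical duplex: $24,000/yr rent, $9,000/yr expenses, $10,800/yr debt service,
# and $50,000 of cash invested up front.
cf = annual_cash_flow(24_000, 9_000, 10_800)
print(f"Cash flow: ${cf:,.0f}/yr")                                    # $4,200/yr
print(f"Cash-on-cash return: {cash_on_cash_return(cf, 50_000):.1%}")  # 8.4%
```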
Can you help write test prompts for GPT-EA? I want test cases and interesting prompts you’d like to see tried. This helps track and guide the development of GPT-EA versions. The first version, GPT-EA-Forum-v1, has been developed. GPT-EA-Forum-v2 will include more posts and also comments.
Hi! New here. I had a question about donations and the processing fees. Is there a specific rate or fee that Stripe charges for credit/debit donations? I’m trying to gauge which route to go given the fee structure + admin overhead.
I believe I read somewhere that manual transfers + wires have more admin overhead such that they may only be beneficial if the donation exceeds $1000. Is that true?
I believe my only option to donate is a wire transfer with my bank, which only offers Zelle, Bill Pay, and wire transfers. The wire transfer comes with a reasonably high fee, so I’m trying to assess which is more beneficial for the organization.
Hi Alex! Stripe charges 2.9% + $0.30 for most debit or credit card payments (plus extra for currency exchange or international cards). Stripe also supports ACH (eCheck) payments, which all bank accounts support, with a fee of 0.8% or $5.00, whichever is less. If the organization’s Stripe account already has ACH enabled, this should be no trouble for them.
GiveWell asks to receive donations under $1,000 through card payment and over $1,000 through check. However, not all charities are GiveWell—some are happy to receive a check at $500, for instance.
Lastly, don’t forget to check to see if you can use a workplace charity matching program, as many of them cover credit card fees in addition to matching your donation.
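If it helps to see the arithmetic, here’s a rough sketch comparing the two fee structures mentioned above. It assumes the standard Stripe rates; actual rates can vary by account and country, so treat it as illustrative only:

```python
# Rough comparison of how the fees discussed above scale with donation size.
# Rates assumed: 2.9% + $0.30 per card payment; ACH at 0.8% capped at $5.00.
# These may differ for a given charity's account.

def card_fee(amount: float) -> float:
    return amount * 0.029 + 0.30        # 2.9% + $0.30

def ach_fee(amount: float) -> float:
    return min(amount * 0.008, 5.00)    # 0.8%, capped at $5.00

for amount in (50, 500, 1_000, 5_000):
    print(f"${amount:>5,}: card fee ${card_fee(amount):6.2f}, ACH fee ${ach_fee(amount):5.2f}")
```

At these assumed rates, ACH comes out cheaper at every donation size, and dramatically so for larger amounts, which is consistent with the general advice to avoid card fees on big donations.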
Hello
Who I am isn’t very important, but like you, I endeavor to make the world a better place. To this end I write to whoever I think could make a difference, and I write essays on my blog. It all started when I stared into the darkness while standing in a railway truck that transported Jews to Auschwitz. After this harrowing experience, I thought: is this the way it is going to be? War and holocausts forever, as long as there are humans alive? Current events suggest yes, but I believe humanity can learn and avoid such a future. See my essay “A history of the world in a single object”.
Donating to charities is fine and will do a lot of good in the world, but I also feel it is like passing the buck, avoiding our direct responsibility to our fellow humans and the other inhabitants of planet Earth, whether living or yet to live. I think it is vitally important that we understand human nature, so that we can learn to change for a much more positive future. (I did get one of my essays published as the lead letter in The Economist, March 5th–11th 2022, if this is all sounding a bit wacky.)
So I would be grateful if you could use some of your valuable time to read some of my work on my blog and consider my ideas.
Trevor Prew
Sheffield UK
trevorprew.blogspot.com
I would like to address the cyberbullying that downvoted my comment here: https://forum.effectivealtruism.org/posts/jhCGX8Gwq44TmyPJv/ea-s-culture-and-thinking-are-severely-limiting-its-impact?commentId=WNqs7wH8WBEQq6xjb#WNqs7wH8WBEQq6xjb
Saying “it might be worthwhile to consider the source” is very different from an ad hominem attack. Honestly, the norm of never considering the source ignores the very practical cui bono school of public discourse and really just speaks again to EA’s implicit academic naivete and assumptions.
In an ideal perfect compulsive nit picky utopian world which will never exist because it’s just silly, :-) down voting would be accepted and encouraged, but at least briefly explaining the down vote would be a required part of the process.
But until such a system is in place, I definitely intend to down vote this post, and there’s no way I’ll be telling anyone why. You’ll just have to guess. Which won’t be that hard. :-)
The moderation team is issuing Phil Tanny a 1-year ban for repeated violation of Forum norms (even after warning). This user repeatedly violated our norms, and we didn’t see any attempts on their part to follow the Forum’s norms after we warned them the first time by messaging them and responding to their comments. Some examples:
This comment is unnecessarily rude (the “thank you” in particular is sarcastic).
They posted other comments that are unnecessarily rude (e.g. calling the criticism contest “blowharding.”)
Their reaction to being downvoted was hostile (accusing people of malicious intent or school-like behavior), unproductive, and off-topic.
Thank you, I think that’s the right decision. I think bans for this type of behaviour could improve the forum.
I agree with the decision because of his harmful response to being downvoted, but honestly, I feel like several of those points are exaggerated or misunderstood. This is not a critique of the decision, but perhaps a suggestion for how he might have been partially misunderstood.
I agree that he responded immaturely to the downvotes, and I agree that it was counterproductive to call the contest “blowharding”. On the other hand, I feel like a lot of the downvote ganging-up was immature in return, or just insufficiently charitable.
In the comment you say he was sarcastically saying “thank you”, I actually think he meant it sincerely, since this is similar to how I would express myself (emphasising the thanks to make sure it wouldn’t be misunderstood as just a formality).
I also don’t think hubris is a good reason to downvote someone. Regardless of the other things he was doing, he was also efficiently filtering for the attention of people like me—someone eager to explore the ideas of someone who tries exceptionally hard to not conform (and I thought he might especially have interesting perspectives due to being 70).
Ok, so I’m adapting to the environment I find myself in, and am proud to announce I’ve radically changed my attitude to down voting. Instead of whining and trying to swim upstream against the cultural tide, I’m going to embrace local norms, and strive to be the Down Voting Champion of the forum! This will of course involve both receiving and giving as many down votes as I can.
So far I seem to be excelling at receiving down votes, but my performance at giving down votes is sorely lacking. Must improve on that!
The down votes I give will of course come without any kind of explanation, because that is what is expected of me by the community. In fact, no one will ever know whether I even read the posts I am down voting, because I strive for perfection in all things.
Now personally, and I know I’m biased about this, but I think this post is worth at least 10 down votes, maybe more. So c’mon people, don’t be stingy, be altruistic, a cooperative community member, and help me march forward towards my goal. It only takes a second, pound that down vote button a few times for me, will ya please? Remember, I have delivered on providing down voting worthy content here, and deserve to be rewarded. I’m sure you agree.
Next, and this seems important. If you should know who the current reigning Down Voting Champion is, please let me know so I can stalk their posts and up vote them without mercy. If that’s cheating, then you know what to do. DOWN VOTE ME!!!
What are you trying to accomplish here? No one said that it was bad to give explanations for downvotes, only that it’s ok not to do it. No one said that downvoting (or getting downvoted) should be an end goal—evaluating and signalling comment quality is the goal. Your comment reads to me like a sarcastic rant based on (wilful?) misunderstandings.
Yes, it is a sarcastic rant, correct! What I’m trying to accomplish is have a little fun, a change of pace from whining. You’ve apparently not down voted me as requested, so I’m afraid I have no choice but to punish you by up voting your comment above. That’ll teach you! :-)
Ha, I just got here and I already have 122 down votes! I’m on the path to down vote victory! But, ok, ok, if you have more down votes, go ahead and brag here, I’m trying to be fair about this.
True story: I was appointed Lil Abner on Sadie Hawkins Day in high school, given my reputation for being rather dense I guess. High school and Lil Abner seem relevant to the subject of down votes....
Phil, you’ve been making a lot of posts in very short order since you joined. The enthusiasm is great! But have you considered taking the downvotes as a sign that maybe you should increase the threshold in quality for what you decide to post? I.e. take what you would’ve posted, and only post the most substantial and informative 25% of those.
As it is, it feels kind of like an indiscriminate information dump, and I for one am already tuning out most of what you write, which I think neither of us wants.
Hi Erich, thanks for your ongoing engagement, your feedback is appreciated.
Well, instead of “indiscriminate information dump,” I view my posts as a summation of more than twenty years of thinking and writing. I do agree there is a fair amount of repetitive duplication in my posts here so far, which I regret, but that is because I am searching in earnest for people capable of understanding, engaging with, and adding to the insights being shared. Some progress there, which is great.
I don’t expect anybody to read everything I write, but if you feel ANY of it is worthwhile, why not engage that fraction of the posts, and ignore the posts you find less interesting? I’m totally cool with that.
Your point on quality is well received, so let’s look at that more closely. I don’t have data on this, but my guess is that many or most members here are somewhere around a half to a third my age. Where true, I’ve been thinking and writing about such subjects since before they were born. To some significant degree that I can’t quantify, THAT is who has been put in charge of evaluating my posts and feeding reputation data into the forum software.
To debunk myself, I have to agree that as I am only a novice geezer (age 70) I’ve not yet fully grasped that the inexperienced judging the experienced is the way of the world, always has been, and there’s nothing anyone can do about it, certainly not me. I seem to be about half way to the goal of accepting this eternal reality in a cheerful manner. I’m getting there, but as this thread reveals, there’s work to do yet.
I would like to offer a simple solution to my fellow members. If something I’ve written sucks, as it very well might, join the thread and rip what I’ve written to shreds. I’m very receptive to such a process.
As you’ve seen, I’m afraid I don’t have much respect for high school style hit and run anonymous popularity contest mechanisms, nor those who use them.
Wow, people are down-voting you to oblivion all over the place for the ill-founded reason that you don’t look humble enough to them. I haven’t read much, and I haven’t been enlightened by what I’ve read so far, but feel free to book a conversation with me if you wish to tell me about what you think are the most important pieces of wisdom you have to share. It’s likely to be one-off, but the potential downside is negligible compared to the potential upside. : )
FYI, the Forum team gave him a 1-year ban for not following Forum norms, so he may never read your comment: https://forum.effectivealtruism.org/posts/LpCewmJgosEaz7ZkW/open-thread-june-september-2022?commentId=jRYMLyCg2sKfvjg9j