Open Thread: Spring 2022
Update, 12/7/21: As an experiment, we’re trying out a longer-running Open Thread that isn’t refreshed each month. We’ve set this thread to display new comments first by default, rather than high-karma comments.
If you’re new to the EA Forum, consider using this thread to introduce yourself!
You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren’t EA-related at all.
(You can also put this info into your Forum bio.)
If you have something to share that doesn’t feel like a full post, add it here!
(You can also create a Shortform post.)
Open threads are also a place to share good news, big or small. See this post for ideas.
Hello, my name is [name redacted] and I’m a new member here. Or, more accurately, I’ve been here for a little over a month but I haven’t introduced myself yet because writing on a public forum is mildly anxiety-inducing for me. However, I have lately been attempting to become better at things I’m bad at by doing things that make me uncomfortable; this has included applying for an internship at Redwood Research despite not feeling terribly qualified, spending an afternoon using food to entice college students into discussing the merits of not eating meat despite being socially anxious, and now making an introduction post despite my instinctive aversion to doing so.
I am an undergraduate at a non-prestigious state university in the US studying computer science and math. I’m also working part-time as a research assistant to a professor working on a reinforcement learning project. The latter is a fairly recent development, and I find it a somewhat astounding one; about a year ago I was expecting to spend my time at university using scholarship funds to eke out a meagre existence, whereas now I will be making enough money that I could eat out 2-3 times per day (not that this seems like the best course of action). I have briefly considered donating some of my newfound income, but this seems rather unwise considering I am still not exactly financially secure.
Approximately a year ago I read Nick Bostrom’s book Superintelligence, which greatly changed how I viewed the world. I would like to work in AI alignment and am trying to take steps to get there, but I’m not sure if I’m going about it in the right way. If anybody is willing to offer advice that would be appreciated. I am also looking for friends who are similarly interested in positive impact, as I am currently rather lacking in that department.
I am not the best at writing in these sorts of contexts, so I will cap off my introduction by shamelessly plagiarizing myself from when I’ve written in other contexts.
Here’s something: “When I was a small child I was very interested in the notion of being immortal. One of my early unpleasant memories was crying in my bed because one day my cat would die, and then (hopefully at the very least separated by a good number of years), I would die as well. This was quite horrifying to my seven-year-old self, and I liked to shield myself from that by imagining things like being a vampire, or an immortal magical cat. Sometimes I did both simultaneously; memorably, at some point I came up with a vampire cat that, rather than drinking blood, imbibed the color purple for sustenance. This served the double purpose of reducing suffering for other cats, and of ridding the world of my least favorite color. The fact that it would be swimming in my innards did not, apparently, bother my younger self.”
And here’s another thing: “Something else I find amusing is doing things in the vein of: Coming up with the string, “sad bad mad lad fad,” and then trying to picture what that would look like. In this case, I envisioned a phenomenon wherein male youth become violently disillusioned with society and take to the streets to cause mayhem and destruction. It’s only a fad, though. This year the lads are using homemade explosives to destroy public infrastructure in a manifestation of their rage at the unfulfilled promises of the American Dream, Fight Club style, but next year? Who knows? Maybe there’ll be an uptick in anime cosplay?”
I feel sufficiently introduced now, and thus I will take my leave. I would like to reiterate that I am looking for friends, so if anyone reading this finds me to be plausibly friend-worthy I would appreciate it if you sent me a private message.
Welcome to the Forum! This is a great introduction :-)
If you don’t subscribe to the Alignment Newsletter, that seems like a good way to get regular updates on steps you can take. The EA and Open Philanthropy newsletters also feature related opportunities from time to time, though I’m not sure how much they catch that the AN misses.
Thanks for pointing me in the direction of these resources! I just signed up for the Alignment Newsletter.
As a note, the AN link you posted actually doesn’t work, though I was able to figure it out with my fabulous tech skills (the URL has a ] at the end). Here’s a working link, for posterity.
Welcome to the forum!
I recommend signing up here to be informed about the next round of the AGI Safety Fundamentals course. If you don’t get accepted the first time, it’s worth applying down the line as they seem to be increasing the number of people that they accept in every iteration (I didn’t get in the first time).
You might also want to consider booking a call with AI Safety Support or applying to speak to 80,000 hours.
Hello, I’m new to this forum. I met a bunch of EA folk in London at the EA Global drinks a couple of weeks ago, and have been EA-adjacent for a while, so I’m happy to chat and link up on projects of mutual interest. Most of my personal giving goes to humanitarian and development causes, and I also invest in green tech through crowdfunding platforms.
I’m currently Head of Global Health Communications & Stakeholder Engagement at the UK’s National Institute for Health & Care Research (NIHR); previously I spent over 20 years in senior leadership roles at universities, research institutes, international NGOs, charities and funders, mainly in bioscience, health, and international development. Full career history on https://www.linkedin.com/in/patrick-wilson-323b591b/
I am active in various science communication networks and rationalist/ish groups. I enjoy football and samba. I blog at https://pathfindings.substack.com and I’m currently writing a popular (I hope!) science book on advances in bio-gerontology and the future of humanity. If you want to get a flavour of some of my writing, I just cross-posted a recent blog on https://forum.effectivealtruism.org/posts/h2EaaDchr9QYuKz9z/rabbits-robots-and-resurrection.
Just wanted to flag that AI scientist Timnit Gebru has written a tweet thread criticizing the AI safety field and the longtermist paradigm, quoting the Phil Torres Aeon essay. I would appreciate it if someone could put out a kind, thoughtful response to her thread. Since Gebru is a prominent, respected person in the mainstream AI ethics research community, inconsiderate responses to her thread (especially personal attacks) by EA community members run the risk of making the movement look bad.
The thread arose from this related conversation about sentient AIs being compared to people with disabilities (where everyone agreed that such analogies are harmful).
Thanks for noting! Habiba responded: https://twitter.com/FreshMangoLassi/status/1485769468634710020
This response feels like it is making unnecessary concessions in an attempt to appease someone who will probably never be satisfied. For example, Habiba says
But this is not at all obvious! There are strong arguments that the contemporary ‘harms’ of tech are vastly overstated, and even if they were not, it seems unlikely that we should be working on them, given their vastly lower scope/neglectedness/tractability than other issues EAs focus on. I would be very surprised if any credible CBA suggested that short-term tech harms were a better cause area than third-world poverty, factory farms and existential risks.
Similarly, Habiba contrasts
with
But these by no means need to be in conflict. I think any reasonable evaluation of EAs will find many who are quite unemotional, and do do a lot of number crunching—the latter is, after all, a core part of cost-effectiveness estimates, and hence the EA movement. But that doesn’t mean they don’t “truly care”—it’s that number-crunching is the best way of executing on that caring.
Despite what seem to me like large concessions, I doubt this sort of approach is ever going to convince people like Gebru. If you look at her argumentative approach, here and elsewhere, it makes use of a lot of rhetorical/emotional appeals and is generally skeptical of the role of impartial reason. Her epistemic approach seems incompatible with that which the EA movement is trying to promote. For example, it is natural for EAs to want to make comparisons between things—e.g. to say “depression is worse than malaria, but it’s cheaper to fix malaria”—in a way that seems profane to such people. I’m not sure if the suggestion here is the result of simply misunderstanding the nature of analogy, but it is clearly not the case that we can take arguments about historical moral progress, replace ‘disabled people’ with ‘sighted, hearing people’ and act as if this does not change the argument! Such comparisons are a necessary part of EA thought. Similarly, the principle of charity—of trying to understand others’ points of view, and addressing the most plausible version of them, rather than attacking strawmen and making ad hominem accusations—is an important part of EA epistemic practices.
Rather than trying to paper over our differences, I think we should offer a polite but firm defense of our views. Pretending there is no conflict between EA and ‘woke’ ideology seems like it could only be achieved by sacrificing a huge part of what makes EA a distinct and valuable social movement.
Your comment has aged well.
I really like her response :)
I find it a bit frustrating that most critiques of AI safety work or longtermism in general seem to start by constructing a strawman of the movement. I’ve read a ton of stuff by self-proclaimed longtermists and would consider myself one, and I don’t think I’ve ever heard anyone seriously propose choosing to decrease existential risk by .0000001 percent instead of lifting a billion people out of poverty. I’m sure people have, but it’s certainly not a mainstream view in the community.
And as others have rightly pointed out, there’s a strong case to be made for caring about AI safety or engineered pandemics or nuclear war even if all you care about are the people alive today.
The critique also does the “guilt by association” thing where it tries to make the movement bad by associating it with people the author knows are unpopular with their audience.
The new effectivealtruism.org homepage looks fantastic.
It does, but why is CEA capitalizing “effective altruism” now? 😕
Out of curiosity, what’s the logic with those graphs as the center and focus on the homepage?
Wow, I didn’t even know that there is a new design. Looks really good.
Hello! My name is Garrett, and I am from Seattle, Washington. I have been involved in EA for about a year and was introduced to it by my closest friend while at school. He and I have both been directly involved in humanitarian aid projects around the world for most of our lives (it’s how we met, actually), and after returning from a service trip in Lesbos, where he had been shaken by the suicide of a small child there, he began to wonder about the effectiveness of his efforts. This put him on the road to finding EA. When he ran across it, he shared it with me, and I immediately fell in love with everything about EA. I was the director of the university’s service department at the time, responsible for activities involving hundreds of students, and was frustrated with what I perceived to be inefficient and ineffective university policies governing funding and activity options. EA was simply too relatable to pass up. I’ve been heavily involved ever since, although my schooling has prevented me from attending many of the conferences that I hope to attend one day in order to make more of your acquaintance. Until then, I am happily engaged in furthering the EA chapter here at my university in Idaho.
The causes I currently care about the most within EA are animal welfare and biosecurity (although I pretty much love every other cause area within EA as well).
I was raised hunting and was always taught that animals were for my use, and before knowing EA I consumed inordinate amounts of meat (I kickboxed and for several years was a competitive weight lifter). There was a period of 4 months where, in order to reach a certain physique within the required time frame, I consumed 220g of animal-based protein per day on a 1700-calorie diet (I developed some minor health issues, as could be expected). It wasn’t until a fellow EA member, Pete Rowlett (I hope he doesn’t mind me mentioning him here), made a compelling argument for a vegetarian lifestyle at an EA online training seminar for group leaders that I decided to become vegan. His arguments, combined with the EA reading material I was familiarizing myself with, along with certain verses from my religious texts, all convinced me that I needed to change my eating lifestyle. I am currently trying to start our university’s first animal welfare movement in tandem with our EA chapter, and am also preparing presentations on animal welfare to give at the EA retreats we are planning. It hasn’t been easy switching lifestyles cold turkey like that, and I have slipped up on the rare occasion, but I have found it well worth the effort (weight loss has never been so easy) and don’t plan on going back. I will add, however, that my university has a culture of meat consumption, and my new lifestyle has turned away some interested romantic partners. That was an unforeseen and difficult hurdle to overcome. Anyways...
As for biosecurity, I am in my university’s premedical program and have always had a deep interest in international relations as well as biotechnology/biomedicine/etc. After reading “Biological Threats of the 21st Century”, I found myself drawn to this cause area that I hadn’t even known existed! I have studied Chinese since I was 10 years old and lived in Asia for a couple of years doing humanitarian aid, and the tense peace between the island of Taiwan, the Chinese Communist Party, and the US always unnerved me. Biosecurity as a cause area was the perfect blend of IR and biotech for me, and I found an amazing niche for “biosecurity in China” within EA. This combination of all my deepest intellectual interests was yet again too strong a pull for me to ignore, and now I find myself attempting to organize our university’s Biosecurity in Asia society, contributing to online forums regarding Chinese biosecurity, and working on various other biosecurity projects at our university.
I plan on becoming a medical doctor so that I can “earn to give”, and I hope to eventually transition from medicine to working full-time on biosecurity policy in the United States (preferably related to China somehow, and hopefully done in direct association with EA). Fun facts about me are I like reading/writing, movies, debate, poetry, and art (and I am not too bad at playing Battlefront II online, either). I have four siblings, one of whom has been in Mexico doing humanitarian aid for 6 months now and I hope to persuade him to participate actively in EA. My family has not been very open to the ideas proposed by EA and I am having a difficult time convincing them to take it seriously, but I do get frequent questions from them along the lines of “what does EA think about ___?” so that is a good sign, I think. Other things that are good to know about me are I love paisley ties, indie music, and I am a horrible cook.
Hello everyone!
I am a human rights activist from Russia. I work as a ML scientist at a medical tech startup in Germany. When the war with Ukraine started 8 years ago, I decided to record an antiwar video as a reply to Ukrainian students. It was my first time trying to organize a protest, and it was way scarier than just participating. What if one of the students got expelled for this? What if at the rally I’d organize in their support someone got accused of hitting a cop? Suddenly it looked like my little initiative could turn into a years-long nightmare. I decided to do it and was very glad to discover that an Open Russia journalist had the same idea and we could merge our efforts.
No one got in trouble for the recording, but it didn’t change anything, either. So I went looking for more effective ways to help Ukraine and free my own country. As protests in Russia dwindled, I decided that building a friendly AI was my best bet. I got into machine learning, read most books on MIRI’s reading list and was in the middle of a MIRI interview when COVID struck and they stopped hiring programmers. My plan no longer called for staying in Russia, so I moved to Germany last year, to stop supporting Putin’s war and oppression with my taxes.
Hi all! I’m new to the EA forum. My husband’s been involved in EA for years, and I am finally in a place to want to join in as well. Specifically, I’m an efficiency consultant, specializing in operations and productivity improvement. I would love to take my talents to the EA world to make charities and the people involved more impactful.
Hello! I am the Affective Altruist, and I am building a little dating website for EAs. I’m starting off with WordPress to keep it simple. Consider this a fun side project of mine, and friendly competitor to reciprocity.io. ^_^
Is anyone interested? What simple features would you want from such a site? Should I make a top-level question asking this?
My general advice for people building projects that require network effects is to think about how to capture 100% of a small market before you try to tackle the entire market. Peter Thiel has written about this dynamic in Zero to One. Could you get all EAs in your city/region, perhaps?
Yeah, I’ve read in another book, The Cold Start Problem by Andrew Chen, that to form an atomic network you should think even more specifically than you normally would. I was considering EA as kind of a niche, but it makes sense that people generally want to date others constrained by location. Though early adopters might care a bit less if they’re willing to travel or have online relationships?
Hello everyone, my name is Emre. I am the co-founder and director of Kafessiz Türkiye, a farmed animal advocacy organisation in Turkey. Looking forward to learning from you all!
Hi everyone! I’m a longtime EA but I haven’t spent much time on the EA Forum, so taking this opportunity to introduce myself.
Professionally, I’m an economist in California focused on tax and benefit policy. I’m the co-founder and CEO of PolicyEngine, a tech nonprofit whose product lets anyone reform the tax and benefit system and see the quantified impact on society and one’s own household (we’re live in the UK and working on a US model). I’m also the founder and president of the UBI Center, a think tank researching universal basic income policies. Outside of work, I’m a founding lead of Ventura County YIMBY, which advocates housing density, and I lead the Ventura chapter of Citizens’ Climate Lobby, which advocates carbon dividends.
I previously spent most of my career as a data scientist at Google, where I first encountered EA when Google.org gave a grant to GiveDirectly in 2012. I then became active in Google’s internal EA group, left Google in 2018, took the GWWC pledge in 2019 (which I wrote about here), and got a Master’s in Development Economics from MIT in 2020, where I became involved in the MIT EA community. I give primarily to GiveDirectly and GiveWell, though as an avid listener of the 80k Hours podcast (and soon-to-be-avid reader of the EA Forum!) I’m always interested in new cause areas.
I’m also working on a post on tax and benefit policy as an EA cause area, so I’m open to ideas here on that topic.
Welcome, Max! I’ve been following you on Twitter for a long time, and I’m excited to see you on the site I help to run :-)
If you want feedback before you publish your post, I offer that to everyone (though it’s totally optional).
Hello there !
I’m David, 31, French, father of 2 - recently moved to Madagascar.
I would be really interested to get in touch with EA community members in Madagascar. I also believe there’s an opportunity to spread the movement here, given how tangible the poverty and inequality issues are.
Currently, I hold the role of Chief Technology Officer at Baobab+, a social business aiming to enable access to energy, digital and finance products. We distribute our products in rural areas in Africa, and sell all our products on a “pay as you go” basis (similar to leasing) to make them affordable to as many people as possible (typical cost < 0.5 USD/day).
Customers who prove their trustworthiness with good repayment enter a virtuous circle and get access to larger products (e.g. a basic phone or a fridge) or loans.
I would be thrilled to study a bit closer the impact we’re having compared to other initiatives.
Hello!,
I chose a pseudonym (-dunce scout), as I’m starting a blog with the same name. There isn’t a popular blog (or one that I know of) that talks about simple/big ideas like LessWrong or SSC or the EA Forum around here. (I’m based in Kerala; I’ll write mostly in ENG, maybe both ENG/MAL for region-relevant posts? Then again, typing MAL is hard.)
The blog will be a guide/map to these sites. Occasionally, I’ll digest the more large/complex posts -in an original way?; maybe write/think on simple things and show a new way to think through.
I somehow stumbled upon LessWrong and added it to my bookmarks. (This may have been through StumbleUpon when it was free and available on the Chrome Web Store; I think I was in 5th/6th grade when that happened.) Never read it though. When covid/online classes happened, I got time. I started with Rationality A-Z since the posts had catchy headings. Soon realised that most posts were going over my head. Then, after a week or so, I started with the Codex and really enjoyed reading it (except for the “much more than you need to know” series). I did read some of Rationality A-Z, but not to completion. Enjoyed HPMOR, Replacing Guilt by Nate Soares, and a few other posts on LessWrong by users like lukeprog. I find that most posts on LessWrong involve computer science or mathematics at a level that I can’t follow. (I’m in med school rn.) The simpler posts are fun to read. The SSC subreddit introduced me to TheLastPsychiatrist, and I fell in love immediately—the pieces are usually short, if not simple, and fun (sometimes the psychoanalysis is a bit loose/shaky, but it’s still good to read). During this time, I also found Paul Graham’s blog; he does the best kind of essay writing.
So, EA? After reading the Codex for a while, I tried the SSC site. Was greeted by long/complex posts and kinda didn’t read on. Sometime later, I browsed the site again and found the EA community (80,000 Hours, EA.org). Or I may have also run into it on LessWrong at around the same time. (What made me come back to the SSC site is this.)
I have some specific questions about blog-writing that I need some help with (using other copyrighted art as memes, quoting books, penning down ideas). I’ll scour the site some more, and if I don’t see anything about it, maybe I’ll post?
Hi! My name is Dev and I’m 17 years old. I’m a current high school graduate about to start university in the fall of 2022. Looking forward to interacting here. I’m currently interested in a lot of areas—including global priorities research, AI alignment, existential and s-risk, and energy poverty—but I’m currently trying to figure out the best path I could take since I’m at quite an early stage in my career. Of these topics, I’d say I’m most well-informed about energy poverty and I’m currently reading Superintelligence to get a better idea of AI alignment. Not sure what I want to do to have the most impact as of yet, but I welcome anyone who might want to have a conversation.
How’d you hear about the EA forum out of curiosity?
Got introduced to effective altruism by a friend and found the forum on the effectivealtruism.org website. Was a lurker for quite a while before I made this post.
Hey Dev! I noticed you’re attending Berkeley for college—just wanted to let you know that the city is a pretty large EA hub, and the university has an active EA club. Feel free to reach out if you’d like to chat more or join our student group slack :)
I’m proud to announce that some months after my boss, Peter Wildeford, rudely overtook me on the Metaculus leaderboards, I’ve finally achieved higher EA Forum karma than him! 🎉🎉🎉
He’s higher than you again already!
😭😭😭
Time to up your game, Linch! 😉
I’m ahead of both him and MichaelA now. Currently #2!
Sam Harris and Rob Reid just put out this podcast that seems very relevant to this community:
[The After On Podcast] 58: Recipes for Future Plagues | Kevin Esvelt #theAfterOnPodcast
https://podcastaddict.com/episode/136135023
Basically, the US government is trying to find all the pandemic-capable viruses it can, and it will then POST THEIR FULL GENOMES ONLINE.
This is potentially a catastrophically stupid blunder that we intend to make but have not made yet. Rob’s recommended actions: tell USAID directly at https://www.usaid.gov/contact-us; tweet at them; contact your senator if you live in a state with a senator on the Subcommittee on State Department and USAID Management (https://www.govtrack.us/congress/committees/SSFR/14); contact Washington State University if you have a relevant tie; and otherwise spread this, get attention, and apply whatever leverage you have.
Twitter thread from Kevin Esvelt (professor at MIT, speaker at EA global on mitigating catastrophic biorisks):
https://twitter.com/kesvelt/status/1498409798903209996
Here are some very well-done podcast notes if you prefer text to audio:
https://docs.google.com/document/d/1ORM6XjEQCycmzBrCt_D3nyl5O_fNPGwS3kYpAAy364c/edit?usp=sharing
Hi,
I’m new here. My name’s Carlos, and I’m an anthropologist and social scientist looking for new career options after my PhD. I would love to join a company or NGO to have a positive impact on the world. I’m interested in animal rights, fighting poverty, and universal basic income. It’s a pleasure to be here and to learn with you. Thanks for reading!
Hi Carlos! I am delighted to find an anthropologist in the EA sphere! I study sociology at UC Berkeley, stayed with a tribe among the Waorani people in eastern Ecuador, and have been trying to find anyone involved in indigenous rights. I’ll send you a message; I’d love to chat!
This summer, I became incredibly interested in effective altruism. As a high school student and someone from a low-income background, I felt like there were limited options for getting involved in EA.
I would love to start a project supporting the EA movement for high school/secondary students. Here are my ideas!
1. A website similar to 80,000 Hours, covering career planning and how to plan your undergraduate career to align with EA principles.
2. Hosting an EA conference for youth in a virtual format.
3. Having an EA council with mentorship from more established members in the space to work on the projects mentioned above and produce content.
Hi! I know this is two weeks late, but I’m new to the forum so I hope you’ll forgive me. I’m also a high school student interested in EA, and I’ve found some ways to help out in the movement despite the limited options which I’d be happy to talk more about.
I’m really interested in your ideas, and also just in how many high schoolers lurk on this forum but (like me) find the high level of discourse a bit intimidating. I’d like to write a post intended to surface and connect with those high schoolers. Perhaps from there, we can work together on making 2 or 3 happen.
Hi,
I’m not an official or representative from EA or anything like that, but this sounds awesome!
Your post is really welcome. Are you asking for help in any way? If so, just say so and people can help.
By the way, yes, the discourse uses a lot of words, but a lot of the ideas are basically from high school. People are just used to writing with them.
What really sets good EA apart is patience, listening, and perception, and the gradual development of good judgement. There are deep pools of talented people who don’t write a lot. This is less obvious, but these people are valuable. You are too!
maybe Reddit can work in a similar way?
I find the level of discourse hard too, so I practice in Facebook groups :-)
(I’m older than average EAs: EA wasn’t formally an available option when I was at college.)
I really appreciate the sentiment from this. I help run SPARC (https://sparc-camp.org/) and while the camp itself is meant to be a selective program, we want to support more broadly addressed initiatives too (if nothing else they end up benefiting us anyway because it encourages future good and aligned applications).
SPARC can probably help with ops support from alumni who may be interested, and a degree of funding that can at least make something like #2 happen.
Cool! Peter McIntyre is working on things like #1 and might be interested in 2 and 3 as well. That doesn’t mean you shouldn’t try it on your own, but that might be someone to get in touch with!
It seems like there’s been a proliferation of AI safety orgs recently; I’d like to see a forum post describing all of them so people can easily find out more about them and who’s hiring.
Hi, call me Rahela. I work at Anima International and Open Cages PL as an IT manager. In my free time I write a personal blog about animals, effective helping, ethics, and life in the countryside. I also host a podcast about similar topics. You can find me here: https://hodowlaslow.pl/.
I found EA thanks to my colleagues from Anima International. Before that I worked for 13 years in the fashion industry as a designer, thinking every day, “What am I doing here?” It took me a long time to become pragmatic rather than fanatic (I was a radical vegan 4 years ago).
You can contact me about fundraising topics, and about IT if you need some help.
I love meditation and cats. Try to meditate with 3 cats! Feel free to contact me.
Hi everyone!
My name is Holly, and I’m a 20-year-old freshman student in California. I first encountered the EA community at the International Youth Summit on Energy and Climate Change in Shenzhen, China, and found the forum when I was looking for help navigating my future career path. Because I grew up in a highly self-interest-driven, bureaucratic environment, I’ve been exploring and trying to understand the concept of effective altruism; I want to do good, help others, and make this world a better place. EA would be a great opportunity for me.
I’m currently an Economics major, and I want to be an Econ professor in the future. (However, I’m just starting down this path by pursuing a PhD first, and I find myself a little nervous since the road ahead is a bit unknown to me at this point. I have a somewhat weak math background, and I’ve been trying to improve my skills.) I care about people, and I’d love to help them find happiness and the true meaning of their lives, as well as help them pick up the right mindset to understand the world and live better. This is what I wanna do for my whole life.
Greetings!
You didn’t mention whether you’d found an EA group near you, and I’d recommend looking for one if you haven’t. It’s easier to stay motivated and interested when some of your friends share your interests.
Do you see this as something you’d be able to do as an economics professor? What is it that draws you to economics, specifically?
Hi All,
Just introducing myself! I’ve been an advocate of EA for a number of years, but I’m new to the forum. I’ve spent a while reading through various posts, and it’s great to see a forum with such a reasonable, open-minded, and friendly tone.
Like most people here, I’m really interested in how humanity responds to existential threats (e.g. climate change) and global living standards (e.g. economic development in poorer regions). My background is in working at a start-up, so I feel very comfortable starting projects, getting things off the ground, discovering something doesn’t quite work, and then consigning it to the failure list :P
If anyone has a great idea that they want help getting off the ground then I’d love to hear from you. I’m hoping to have more free time to devote to projects soon as I’m leaving my job as a Financial Director to go back to university to retrain as a computer scientist :)
Hi Stephen! Thanks for the post. What are the typical frameworks you use to think about existential threats? We sometimes use probabilities to describe the chance of, say, nuclear Armageddon, though that seems a bit off from a frequentist philosophical perspective. That type of event either happens or it doesn’t: we can’t run 100 high-fidelity Earth simulations, count the various outcomes, and then calculate the probability of various catastrophes. I work with data in my day job, so these types of questions are top of mind.
Hi Dem, I don’t really have a defined framework for thinking about existential threats. I’ve read quite a lot around AI, nuclear weapons (Command and Control is a great book on their history), and climate change, and I tend to focus mainly on the likelihood of something occurring and the tractability of preventing it. At a very high level, I’ve concluded that the AI threat is unlikely to be catastrophic, and that until a general AI is even invented, there is little research or useful work that can be done in this area. I think the nuclear weapons threat is very serious and likely underestimated (given the history of near misses, it seems amazing to me that there hasn’t been a major incident), but it’s deeply tied up in geopolitics and seems highly intractable to me. That leaves climate change, which has ever-stronger scientific evidence supporting the idea that it will be really bad, and enough political support to be tractable, which is why I have chosen to make it my area of focus. I also think economic development for poorer countries (or the failure to achieve it) is a huge issue on a similar scale, but again I believe it’s too bogged down in politics and national interests to be tractable.
Yes that makes sense and aligns with my thinking as well. Do you have a sense of how much the EA community gives to AI vs nuclear vs bioweapon existential risks? Or how to go about figuring that out?
Until recently, the vast majority of EA donations came from Open Philanthropy, so you can look at their grants database to get a pretty good sense.
Do the Doomsday Clock and the Bulletin of the Atomic Scientists come up much in EA? I’m a bit new to this scene. https://thebulletin.org/
Jerry Brown’s warnings about nuclear Armageddon and the slowly building climate tidal wave have definitely turned me on to that organization.
Where do you see the opportunity to make a difference in the decarbonization effort?
Hi Locke, I’m not 100% sure how seriously nuclear Armageddon is taken in the EA community, as I’m also pretty new. I’m just starting a piece of research to try to identify where the best specific decarbonisation opportunities are (focused on a specific country, in my case Canada). Even though I haven’t started, I strongly suspect the answer will be agriculture: it accounts for a very large proportion of emissions, there are many proven, scalable, low-cost solutions, and it seems to me to be very neglected from a funding point of view (I say that based on some brief research I did on the UK) compared to other areas like electric vehicles and renewable energy.
I like the new colored icons on posts with certain tags (e.g. Farmed animal welfare, Existential risk) 😀
Thanks, Evelyn!
Hi, I’m newish to EA and new (as of today) to the forum! I use she/her/hers pronouns and I’m a college freshman. I’ve recently been thinking a lot about how I can use my career to help. AI safety technical research seems like the best option for me from the couple hours of research I’ve done. I’m planning to donate all my disposable income to the EA meta fund. I’m really passionate about doing as much good as I can, and I’m excited to have found a community that shares that! My biggest stumbling block has recently been my mental health, so if anybody has resources/tips they want to share, I’d love to hear them (for reference, I am actively getting treatment, so no worries there)!
If you’re looking for resources on mental health, you might enjoy some of the upvoted posts under the self-care tag, including Mental Health Resources Tailored for EAs and Resources on Mental Health and Finding a Therapist.
Welcome to the Forum!
I think it’s good to donate a bit of money to good causes to help build good virtues, but at your current life/career stage you should probably focus on spending money in ways that make you better at doing good work later.
See this blog post for some considerations.
Similar to what Linch said, another useful perspective comes from this post, which argues that the value of your time might be higher than you think. At the same time, your earnings are probably lower right now than they will be.
With this perspective, you might be better off spending the money on yourself given the personal needs you mentioned. For example, regular cleaning or relaxing travel probably helps mental health for many.
It is wonderful you are working to help others.
I noticed something at EAG London which I want to promote to someone’s conscious attention. Almost no one at the conference was overweight, even though the attendees were mostly from countries with overweight and obesity rates ranging from 50-80% and 20-40% respectively. I estimate that I interacted with 100 people, of whom 2 were overweight. Here are some possible explanations; if the last one is true, it is potentially very concerning:
1. effective altruism is most common among young people, who have lower rates of obesity than the general population
2. effective altruism is correlated with veganism, which leads to generally healthy eating, which leads to lower rates of diseases including obesity
3. effective altruists have really good executive function, which helps resist the temptation of junk food
4. selection effects: something about effective altruism doesn’t appeal to overweight people
It’s clearly bad that EA has low representation of religious adherents and underprivileged minorities. Without getting into the issue of missing out on diverse perspectives, it’s also directly harmful in that it limits our talent and donor pools. Churches receive over $50 billion in donations each year in the US alone, an amount that dwarfs annual outlays to all effective causes. I think this topic has been covered on the forum before from the religion and ethnicity angles, but I haven’t seen it for other types of demographics.
If we’re somehow limiting participation to the 3/10ths of the population who are under 25 BMI, are we needlessly keeping out 7/10ths of the people who might otherwise work to effectively improve the world?
I think there are extensions of (1) and (3) that could also be true, like “people at EA Global were particularly likely to be college-educated” and “people who successfully applied to EA Global are particularly willing to sacrifice today in order to improve the future”
EDIT: and just generally wealth leads to increased fitness I think—obesity is correlated with poverty and food insecurity in Western countries
I’m currently doing research on this! The biggest driver by far is age; income is pretty small comparatively, and the education effect goes away when you account for income and age. At least this is what I get from the raw Health Survey for England data lol.
The natural first step here is to check whether EA has lower rates of overweight/obesity than the demographics from which it primarily recruits.
I can’t speak much to the US, but in the European countries I’ve lived in overweight/obesity varies massively with socioeconomic status. My classmates at university were also mostly thin, as were all the scientists I’ve worked with (in several groups in several countries) over the years. And it’s my reasonably strong impression that many other groups of highly-educated professionals have much lower rates of obesity than the population average.
In general, I’ve tended to be the most overweight person in most of my social and work circles – and I’d describe my fat level over the past 10 years as, at worst, a little chubby.
If it is the case that EA is representative of its source demographics on this dimension, that implies that it doesn’t make all that much sense to focus on getting more overweight/obese people into the movement. Obviously, as with other demographic issues, we should be very concerned if we find evidence of the movement being actively unwelcoming to these people – but their rarity per se is not strong evidence of this.
(EDIT: See also Khorton’s comment for similar points.)
It’s also probably worth noting that obesity levels in rich European countries are dramatically lower than in the US, which might skew perceptions of Americans at European conferences.
I don’t want to overstate this, since my memory of EA Global San Francisco 2019 is that attendees were also generally thin. But it is probably something to remember to calibrate for.
FWIW I see a much higher percentage of overweight EAs in the Bay Area.
I’m skeptical of the comparability of your 2/100 and 50-80% numbers; being overweight as judged by BMI is consistent with looking pretty normal, especially if you have muscle. I would guess that more people would technically have counted as overweight than you’d expect using the typical informal meaning of the word.
It could also be that obese people are less likely to want to do conference socializing, and hence EAG is not representative of the movement.
While BMI as a measure of obesity is far from perfect, it mostly fails in a false negative direction. False positives are quite rare; you have to be really quite buff in order for BMI to tell you you’re obese when you’re not.
That is to say, I believe BMI-based measures will generally suggest lower rates of obesity than by-eye estimation, not higher.
https://examine.com/nutrition/how-valid-is-bmi-as-a-measure-of-health-and-obesity/
Is that so? From the way BMI is defined, one should expect a tendency to misclassify tall normal people as overweight, and short overweight people as normal—i.e. a bias in opposite directions for people on either end of the height continuum. This is because weight scales with the cube of height, but BMI is defined as weight / height².
After reading around a bit, my understanding is that the height exponent was derived empirically – it was chosen to maximise the fit to the data (of weight vs height in lean subjects). (Here’s a retrospective article from the Wikipedia citations.)
The guy who developed the index did this in the 19th century, so it may well be the case that we’d find a different exponent given modern data – but e.g. this study finds an exponent of 1.96 for males and 1.95 for females, suggesting it isn’t all that dumb. (This study finds lower exponents – bad for BMI but still not supporting a weight/height³ relationship.)
I don’t find this too surprising – allometry is complicated and often deviates from what a naive dimensional analysis would suggest. A weight/height³ relationship would only hold if tall people were isometrically scaled-up versions of short people; a different exponent implies that tall and short people have systematically different body shapes, which matches my experience.
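To make the exponent point concrete, here is a quick numeric sketch (all numbers are made up for illustration, not taken from the thread or any study): if weight really scaled isometrically with height cubed, then BMI, defined as weight / height², would grow linearly with height, so a tall person with exactly the same body shape as a lean short person would drift toward the conventional 25 "overweight" cutoff.

```python
# Illustrative sketch with hypothetical numbers: under isometric scaling
# (weight proportional to height^3), BMI = weight / height^2 grows
# linearly with height, so equally lean tall people score higher.

def bmi(weight_kg: float, height_m: float) -> float:
    """Standard BMI formula: weight in kg divided by height in metres squared."""
    return weight_kg / height_m ** 2

# Hypothetical reference person: 1.70 m at a lean BMI of 22.
REF_HEIGHT = 1.70
REF_WEIGHT = 22 * REF_HEIGHT ** 2  # about 63.6 kg

for height in (1.55, 1.70, 1.85, 2.00):
    # Scale weight as the cube of height (same body shape, just taller).
    weight = REF_WEIGHT * (height / REF_HEIGHT) ** 3
    print(f"{height:.2f} m -> BMI {bmi(weight, height):.1f}")
```

Under these assumptions the 2.00 m person comes out at a BMI of about 25.9 despite having identical proportions to the lean 1.70 m reference, which is the tall-people bias the weight/height³ argument predicts. The empirically fitted exponents near 2 mentioned above suggest real bodies don’t actually scale this way.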
In any case, my claim above is based on empirical evidence, comparing obesity as identified with BMI to obesity identified by other, believed-to-be-more-reliable metrics – those studies find that false positives are rare. Examine.com is a good source, and its conclusions roughly match my impressions from earlier reading, albeit with rather higher rates of false negatives than I’d thought.
Thanks for sharing this, I guess it looks like I was wrong!
I still don’t think you’re wrong. Will is correct when he says that it is more likely someone with a BMI of 25 or lower is actually overweight than someone with a BMI of 25 or higher is just well-muscled, but that isn’t the same as estimating by eye.
The point, as I understand it, is that if you live in a country where most people are overweight, your understanding of what “overweight” is will naturally be skewed. If the average person in your home country has a BMI of 25-30, you’ll see that subconsciously as normal, and therefore you could see plenty of mildly overweight people and not think they were overweight at all—only people at even higher BMI’s would be identifiable as overweight to you.
Relatively minor in this particular case, but: Please don’t claim people said things they didn’t actually say. I know you’re paraphrasing, but to me the combination of “when he says” with quote marks strongly implies a verbatim quote. It’s pretty important to clearly distinguish between those two things.
Fair enough. I’ve edited it to remove the quotation marks.
I agree “BMI gives lots of false negatives compared to more reliable measures of overweight” is not the same thing as “BMI is more prone to false negatives than by-eye estimation” – it could be that BMI underestimates overweight, but by-eye estimation underestimates it even more. It would be great to see a study comparing both BMI and by-eye estimation to a third metric (I haven’t searched for this).
But if BMI is more prone to false negatives, and less prone to false positives, than most people think, that still seems to me like prima facie evidence against the claim that the opposite (that by-eye will underestimate relative to BMI) is true.
Hey everyone! Just joined EA a few months ago and was very fortunate to attend EAGx Boston recently! I could not be more excited about discovering this community!!!
I’m doing two fellowships and working on a marketing project team in my university EA USC group.
I feel very strongly about utilitarianism and am interested in physics, and as a result I came to longtermism several years ago on my own. I actually wrote a book called “Ways to Save The World”, essentially about innovative, broad strategies to sustainably and systemically reduce existential risk. I’m really excited to share it with the EA community and have my ideas challenged and improved by fellow highly intelligent, rational do-gooders!
Hi! Long time listener, first time caller. I currently work in operations in higher ed, and I just know I could be doing the same exact job in the EA community while making much more of an impact, with more opportunity to test my skills and grow into related fields. I actually just applied for a position at CEA, which would be a dream! I’m curious whether anyone else in the community came into EA from student affairs or enrollment management; if so, what are you doing now, and how was the transition?
Hello all. I’m Dave, I’m in my late 20s, and I’ve been in an existential crisis since I came across EA and related topics. I don’t know what to do to help, since I don’t have a degree and I don’t live in a rich country, and also because I don’t think there’s much we can do in the long term if we keep inventing these magic-like technologies that grant us power no human being is wise enough to hold. I don’t have anyone to talk to, and even if I did, I wouldn’t want to destroy their sanity, as I’m already on my way to destroying mine. Any advice or perspectives would be appreciated. Thank you
You’re not alone in finding these topics mind-boggling and distressing!
If you’d like to talk to people and there’s not an EA group near you, you could join the EA Anywhere group: https://eahub.org/group/effective-altruism-anywhere-2/
There’s also the EA Peer Support group: https://www.facebook.com/groups/ea.peer.support
This morning, the stock and crypto markets saw large declines. BTC and ETH have fallen 17-20%.
I guess:
This might affect EA spend given the source of EA funds. (But it’s unclear how substantive this is, as spending still amounts to a small fraction of these funds.)
There might be a recession (I don’t know how likely this is)
In a recession, some EA orgs may benefit from donations to cover shortfalls
In a recession, the “talent market” (relevant to EAs where there is a shortage of leaders for new projects and tech talent) might change and talent might become easier to obtain. (Alternatively, you can imagine adverse selection from a “people seeking shelter” sort of thing).
This comment is just meant as “maybe this is relevant news; put it on your radar or something”. I’m not really an expert in any of the above.
FUNDS ARE SAFU
https://fortune.com/2022/06/18/ftx-sam-bankman-fried-coinbase-brian-armstrong-crypto-layoffs/
If this is not the last crypto cycle, maybe the market is an opportunity for some EAs.
Or EAs should help FTX or SBF in some way?
Somewhat new EA here—I’m thinking of wearing EA gear at an upcoming livestreamed collegiate poker tournament. Any thoughts on whether that’s a good idea? Seems good for the EA brand as long as I don’t do/say anything too out of line (?)
Thoughts on how to talk about EA to other competitors/interviewers would be much appreciated too
Also, a disclaimer that I don’t expect to do very well in the tournament hahaha, I’m a pretty recreational player
Update: I made it on akaNemsko’s Twitch stream (287k followers!) with my EAGxBoston shirt!
https://www.twitch.tv/videos/1457837512?t=02h28m34s
Hello, at age sixteen some combination of debating a pastor about universalism, visiting worship centers of various faiths, and Rick and Morty killed my religion. With nothing remaining that seemed worthwhile, I booked a ticket to Singapore and began wandering around odd destinations for the next few years in variable states of despair. I tried to construct a new sense of meaning through pragmatic mythicalism: the idea that untestable ideas can still be believed in based on their utility. I decided it would be useful to believe that the well-being of people is worth fighting for, but still felt miserably alone.
Then I discovered EA, or rather it discovered me as I was ranting half-crazed to someone about the fermi paradox and great filters to which someone replied “oh yeah, those are called existential risks in effective altruism,” to which I replied “what the HELL is effective altruism?”
Then there was no turning back. The concept that a community exists with such a purposeful drive to improve lives gave me a rope to grasp as I clawed my way back to life like it matters. The ideology granted me a beacon to strive towards, but lacking interaction or connection with the community, I did not yet feel the hope or joy of being welcomed into a place that aligns with your values. The bones were there but not the flesh. I found that sense of belonging with hippies in the cactus forests of Chile, at an orphanage in the Andes mountains of Ecuador, and in a tribe among the Waorani people in the jungle, and now I’m establishing it at UC Berkeley as I study sociology in order to understand how to bring the sense of community that was shown to me in South America into modern workspaces and living spaces.
I intend to find ways to welcome in those who find hope in this community and wish to be connected with it. Ideas for how to do this include developing Bountied Rationality as a public list of bounties that anyone can hunt (idea outlined here: https://docs.google.com/document/d/17h_PtFoRE-W7mRtVZOAyRinAFv542O-kR22VBXxGC6c/edit?usp=sharing), templates for group houses, and perhaps an EA hostel.
Thank you to everyone who has helped build this community, from those who founded international organizations to those who read one post and then told a friend about it. (After all, the person who told me about EA knew little about it, I think, but radically altered my life trajectory.) Thank you for showing me that there are people who are strategically kind.
If anyone has connections with EAs involved in indigenous rights (which I believe ought to be a cause area, I hope to write a post on this soon) and community building, feel free to reach out :)
I’m Gabe Newman from Canada. My wife got involved in EA earlier this year, and I’ve been skulking on the sidelines, reading and thinking. I’m almost 50 but also a student again, as I’m getting my MSW (a little midlife crisis). I’m still trying to figure out where and how to apply my skill set. I have lots of experience with micro NGO projects that are sustainable, but I’m not sure how easy they would be to study, so EA is a bit of a new way of thinking for me. I’ve typically enjoyed Keep It Simple, Stupid projects. But lately I’ve had a couple of incredibly complicated ideas.
First, I’ve been inspired by the discussion around megaprojects, and I was wondering if there has been any consideration of buying intellectual property rights. For example, if the AstraZeneca vaccine had been purchased by an EA org, it could have gotten to lower-income countries quickly rather than being hoarded by wealthy countries. A considerable number of lives would have been saved.
I appreciate that a lot of money is going into research for the next pandemic, but what about skipping that step and buying the rights to promising vaccines so they can go straight to generic production? Pharma gets its money back plus a small profit, while low-resource settings can access life-saving medication. I’m sure there’s a reason this wouldn’t work, but I don’t know what it is.
Secondly, Bill Gates is currently the second-largest donor to the WHO (2018-2019 budget: https://www.who.int/about/finances-accountability/reports/results_report_18-19_high_res.pdf?ua=1). He is driving where funding is allocated and protecting intellectual property rights. I believe this is incredibly problematic. EA could have a seat at that table. Yes, it would cost a lot (in 2018-2019 Gates contributed $531 million), but even at $100 million, EA would be the 11th-largest financial donor to the World Health Organization!
I realize these are just ideas with no details but I’d love to know if these are horrible ideas because I can’t get them out of my head. Perhaps, I’m looking for a reality check.
Thanks!
Welcome! It seems like your skills in NGO management are very much needed in EA projects! You could read more about how to apply your expertise to high-impact causes and see if you come across exciting opportunities to work directly in an NGO or consult for different organizations.
Hey everyone, I’m also new to the forum and to EA as of summer 2021. I found EA mostly through Lex Fridman’s old podcast with Will MacAskill, which I watched after being reminded of EA by a friend. Then I read some articles on 80,000 Hours and was pretty convinced.
I’m a sophomore computer science student at the University of Washington. I’m currently doing research with UW Applied Math on machine learning for science and engineering. It seems like my most likely career is in research in AI or brain-computer interfacing, but I’m still deciding and have an appointment with 80,000 hours advising.
Something else I’m interested in is joining (and possibly building) an EA community at UW. To my knowledge, the group has mostly died away since COVID, but there may still be some remaining UW EAs to link up with.
Looking forward to engaging in discussion on the forum!
Did you reach out to groups@centreforeffectivealtruism.org?
Yes, I have a group going now!
That’s great!
Hi, I’m new to the forum and wanted to introduce myself! I’m a product manager in the cybersecurity industry, located in Salt Lake City, UT. I’m currently looking for ways to make more of a positive impact, focused around 1) helping to build up the local EA community and 2) using my career.
I’m relatively early in my career, so I have a lot of uncertainty about what cause area to work on and what my personal fit would be for different roles. I’m trying to find lots of people in the EA community to talk to about product management, data science, or EA startups.
Happy to be here and excited to start contributing!
Hi there!
You may have considered this already, but I’d recommend applying to speak with 80,000 Hours. They’re a great starting point for finding others to talk to, and they accept a lot of applications (“roughly 40% of people who apply”, and I’d guess that many of their rejections are because the applicant has never heard of EA and doesn’t really “get” what 80K is about).
Yep, should have mentioned I already applied for their 1-on-1 advice! Trying to cast as wide a net as possible. :)
Welcome! I guess there’s a good chance you’ve already seen this, but just to make sure: some people think that careers in the info sec space can be very high-impact.
Thanks! Skimming that over, it does seem like a potentially good path. I know info sec is one of 80k’s “potentially good options” but I’ve generally brushed it off, even though it might seem like a good fit on paper. I’ve really only been involved in the development/management of a few insider risk products, so my skillset isn’t focused on expertise in traditional info sec, it’s mostly generalist PM skills for software dev. I’m probably in a slightly better position than most to pursue that route, but not by much. I’ll read it over more thoroughly, thanks for the pointer!
Hi EA community, I’ve been EA-adjacent for a while, both online and IRL. I saw the request for critiques of the EA movement on Marginal Revolution, which inspired me to come over here and finally sign up. I do have to say, though, that with so many problems in the world today, any effort that gets people to go forth and do some good is, well, a good thing! So it’ll take a bit of work to come up with thoughtful critiques.
By the way, is there an EA member directory? I’d be curious to learn more about why people participate in the movement. Perhaps there’s a thread on that topic? Also, does the EA movement do an annual survey, like Scott Alexander has done with his blog? I’d be very curious to see that type of information. Thank you in advance for any guidance you might offer and directions you might point me in. Cheers
Hi Locke!
I don’t think there’s a settled definition of “EA member”. There is a list of users of this forum by location, a list of Giving What We Can pledgers, and some profiles on EA Hub. But many people who are very involved with the movement aren’t on any of these lists, and some people on these lists don’t identify as “EA”.
That’s an interesting question; I would make a thread! You would get new answers, and maybe someone will link to previous threads (I couldn’t find any).
Maybe you might be interested in reading some posts tagged “Community experiences”
There is one: https://forum.effectivealtruism.org/topics/effective-altruism-survey?sortedBy=new, but the latest data is from 2020.
Actually, now that I look at it, it includes some information relevant to your previous question.
Hi everyone. I’m a therapist & academic philosopher based in Boston. I do individual therapy and also teach philosophy at Bentley University. Further info here: https://www.jmaier.net/about-me.html
I look forward to hearing more about ideas/suggestions about how to direct my own giving. I have a strong interest in promoting effective mental health interventions at scale. I’ve written about this a bit in a blog for Psychology Today: https://www.psychologytoday.com/us/blog/philosophy-and-therapy. Looking forward to learning from folks on this forum.
Hi John! You might be interested in the work of the Happier Lives Institute, they have a donation advice page https://www.happierlivesinstitute.org/donation-advice.html
You can also see all forum posts tagged as “mental health” here: https://forum.effectivealtruism.org/tag/mental-health
Thank you, Lorenzo, this is really helpful. I’m familiar with the Happier Lives Institute and the very important work they’re doing. Looking forward to learning more.
Hello everyone! I’m a member of the Polish EA community. Over the last few days we’ve witnessed an outpouring of support for Ukraine, which is amazing. But amid the information overload, both donors and those in need may find it difficult to single out credible forms of help.
We’re aiming to create a database of verified information to make sure people can make the biggest impact when donating.
This FORM allows those of you who have information about existing initiatives to submit them for our evaluation. Please, spread it in your groups / communities! To avoid duplicating what we have already written and researched, look into this document called HELP FOR UKRAINIANS.
This is how the process will work:
Collecting organisations / initiatives via FORM
Analysing reports from the form (by our research director Jakub of EA Poland) and selecting promising ones for stricter analysis
Those considered worthy of recommendation are placed in the document HELP FOR UKRAINIANS
Based on the data in HELP FOR UKRAINIANS, we publish and update our blog posts once a day in 3 languages (Polish, Ukrainian, English). We also encourage you to translate this post for your own blogs if you are from countries where these 3 are not national languages.
The FORM and HELP FOR UKRAINIANS are to be shared only within the EA community and organisations approved by the EA Poland group. Blog posts are intended for a wider audience. If you’d like to know more, feel free to reach out to me!
Hello EA world! My name’s Zach. I’m a writer by trade and for passion, and I’m stoked to be here.
I found this community after asking Google “what are some jobs that do good for the world.”
Currently, I work as a Creative Copywriter for an ad agency (owned by an AI company). It’s a new job, but I already know that this kind of writing contributes to a destructive economic system (not that it’s all bad, but it’s definitely destructive).
After that Google search, I found 80,000 Hours, as many others have, and now I’m looking for my next steps. That might mean another job (hopefully sooner rather than later), volunteer opportunities, and building relationships with people who feel similar convictions.
I’ve always had an intense drive to be authentic, to live true to what I believe in and say I believe in, and so this stage feels sort of inevitable. Life is so much bigger than money and cars and careers and comfort and [insert material desire here], and I feel the need to do something for more than myself.
I came upon this prayer one time (not trying to push religion or anything) and it encapsulates this bone-deep belief I’ve experienced the past few months.
The Blessing of Discomfort—Sr. Ruth Fox, OSB (1985)
“May God bless you with discomfort
At easy answers, half-truths, and superficial relationships,
So that you may live deep within your heart.
May God bless you with anger
At injustice, oppression and exploitation of people,
So that you may work for justice, freedom and peace.
May God bless you with tears
To shed for those who suffer pain, rejection, hunger, and war,
So that you may reach out your hand to comfort them
And turn their pain into joy.
And may God bless you with enough foolishness
To believe that you can make a difference in the world,
So that you can do what others claim cannot be done
To bring justice and kindness to all our children and the poor.”
This is why I’m here. It’s great to be here. I’m looking forward to meeting you!
Hey Zach! I really like that prayer! Thank you for sharing it, and welcome to the community!
Thanks for liking it :) and thank you also for the welcome.
As per this comment, “winter” doesn’t feel like the best term for this time of year, given we have people from both hemispheres on the Forum.
https://forum.effectivealtruism.org/posts/MTfxQbT4gPgZrgqwP/ea-conferences-in-2022-save-the-dates?commentId=QvL6qaa7gETS4PkYe
Here is a forum bug that has been bugging me since forever: My own comments show up as new comments, i.e., the post comments bubble lights up in blue. But this shouldn’t be the case; I already know that I left a new comment.
Hi! I got a recommendation to join the forum because of my reflections about what I should focus on in my career. Is it allowed to write a post on the forum that doesn’t make a specific proposition, but rather asks for advice and provides discussion points for commenters? Or should that be posted as a question?
I’d probably use the question feature, but I’m sure either is fine—looking forward to your post!
Hi everyone, I’m new to the EA community. My husband introduced me here, since I’m facing a career choice dilemma about helping others. I’m currently in tech, but wanted to change to a career in Coaching or Therapy.
Why the switch: I care deeply about reducing individual human suffering and I enjoy working with people 1:1. I don’t see myself in tech for the whole of my productive years. Causes I care about most: mental health in the workplace, career happiness, and connecting to one’s true self.
My dilemma: I’m debating between a career in coaching vs. therapy.
Coaching:
Pros:
Found it more helpful than therapy when I experienced a career burnout.
I’m in a coaching program and enjoy it.
Coached a few people who are facing career issues and those were truly rewarding experiences.
Faster and cheaper to get certification.
Cons:
Takes lots of time to find clients, and I don’t enjoy the marketing aspect. May never have a big enough practice.
Likely will serve privileged groups, since coaching is not covered by insurance.
Therapy:
Pros:
Therapy is covered by insurance = much easier to find clients and serve all types of social groups
More established as an occupation than coaching?
Cons:
Big initial commitment both in terms of $ and time. A master’s degree can take 2-3 years, plus 2 more years of field work to get licensure.
I may not enjoy all the classes and all the types of cases I serve.
Also personally, coaching worked better for me than therapy, but it could also just be that I haven’t found a good therapist yet.
Would love to hear thoughts from anyone. Appreciate you reading thus far.
I would suggest trying coaching first, as it will be much quicker to find out whether you enjoy it / find it impactful, compared to therapy, which could take years before you get a good sense of your personal fit.
80,000 Hours have a section in their career guide on exploration which might be useful here.
”Later in your career, if you’re genuinely unsure between two options, you might want to try the more ‘reversible’ one first. For instance, it’s easier to move from business to nonprofits than vice versa.”
It’s worth reaching out to therapists and coaches here to get a better sense of your uncertainties.
Thanks so much for the pointers here! Super helpful
Upcoming posts about a not-yet-created EA project or institution called the “EA common application”.
I know a writer/”founder” who wrote up documents related to an “EA common application”.
Importantly, their vision seemed to get serious interest and funding—but they later exited or got kicked off the project[1][2].
I have access to these documents written by this person.
In the last few months, EAs have asked for these documents to read and distribute to others. Some requests have come from people I have never met.
There seems to be a lot of interest. In a friendly manner, people tracked other people down during a major conference[3] and then wrote very long messages on SwapCard, which seems like a lot of commitment[4].
I’m unsure if this actually represents general positive sentiment (but if there is a strong team being put together, please PM me and I can just skip all the writing described in this comment chain).
The reasons for this writer/“founder”’s removal are interesting and important in themselves, but writing about them would occupy a lot of space.
This writer/”founder” will remain anonymous in the documents, for example, because it may be necessary to “trash talk” them.
They did this not by sending messages, not by requesting meetings, but by physically flagging me down while walking between buildings!
I am not joking, there are hours of work in here, this is impressive.
(Continued)
For my own idiosyncratic reasons, related to this particular project of the common application, it seems bad for me to organize, or put people together. Similarly, being a single point of contact, or “holding on to these documents or ideas” seems inappropriate.
Yet, with the pressure/sentiment described above, it’s irresponsible to do nothing or just sit on the documents. So I plan to write up some posts and share the documents.
I’ll write this all up quickly. The resulting output might be low quality, or confusing to people not engaged or in the “common app headspace” (like, 99% of forum users).
The truth is that writing about this is pretty hard (and orthogonal to founder skill): there’s just a lot going on, it’s one of the more complex projects, and the content is by its nature opinionated.
All this content will be posted on a new EA forum account, with much more conventional communication norms than the one used here.
I’ll write a little more below for some context, as I prepare a document.
Quick, basic overview of EA Common Application (1/2)
(Note that the following describes one vision of the common application, and is dependent on founding team preferences and ability. Things will be different, even if everything goes perfectly. The below content might also be wrong or misleading.)
Basically, the “common application” is a common point of entry for EAs and talented individuals applying to EA organizations.
Concretely, this would include a website that is used by applicants and EA orgs. It would also become a team or institution that is universally seen as competent, principled and transparent by all EAs. To say it simply, it would be a website that everyone uses and applies to, when working in EA. It’s just the optimal thing to do.
To the organizations and applicants that are users, the common application will be simple and straightforward.
But for the founders/creators, achieving this is harder than it sounds, and in the best version, there are (extraordinarily) complex considerations[1].
But as demanding as it is, it’s equally or more valuable to EA. Even in the early stages, the value of the common application includes:
A streamlined, common place for thousands of talented people looking to contribute or work at EA orgs, as well as a competent institution that provides services, advice and standards to EA organizations.
A central place that provides insights about EA recruiting (like this, but automatically for everyone, at the time), and observes and can intervene in bad outcomes (“really hard to find an EA job”).
The common application can coordinate with EA, responding to gaps as well as surpluses for talent, for example by creating grants or special programs to keep talent from bouncing off, or coordinating with headhunting or hiring agencies to fill gaps.
To see this:
One of the key powers of the common application is sharing applicant interest and progress among organizations, e.g. there might be 10 extremely talented candidates who got rejected in the last stage of a hiring round. This talent can be retained in EA and hired in other organizations.
At the same time, while sharing all of this information, the common application must not disadvantage applicants, even though adverse information can leak in complex, implicit ways.
There is a tension between the two issues above.
Quick, basic overview of EA Common Application (2/2)
The bread and butter of the common application is the day-to-day work to get operations running smoothly and build expertise and trust among EA orgs and applicants.
While much of this seems mundane, just getting the basic operations right and having experienced, trusted staff perform friendly check-ins with talented candidates is important (I think the focus might be on engaging and retaining highly talented “liminal” EAs, as opposed to existing highly-engaged EAs). It is key to have founder(s) who respect and will execute this unglamorous work.
That being said, in the later stages (year 2 and after), the common application can provide enormous and unique value:
Working as a servant to EA organizations, the common application can develop assessment, screening and guidance tools for candidates and organizations that make EA organizations recruit more effectively and provide confidence and insight for EAs in their job search.
The common application can go far beyond streamlining recruiting, bringing strong candidates into EA, and make better matches for existing talent, for example, creating new roles, catching candidates who might bounce off EA, and building up deep pools of talent beyond any single job search.
This activity in the common application will provide a way to further develop and grow the pool of EA “vetting” and communication that is important for EA scaling, supporting existing strong EA culture, norms and institutions.
This vision of the common application is unusual. It’s hard to think of any other movement that has an institution like this. In later stages, some of the ideas, methods and practices could be groundbreaking.
The previous writer/“founder” had interest from professors at Stanford GSB, MIT Sloan and Penn State, as well as other schools, who expressed interest in working for free, studying and developing methods (market design, assessment) for this common application (because the work and data can produce publications).
A common application builds on some of the best traits of Effective Altruism: consideration of others and their contributions outside of one’s own organization, the coordination and communication between EAs and organizations, and a talent pool that should only increase in value over time.
This provides an enduring asset for the movement, a pillar that provides stability, confidence and happiness, and enhances object level work for hundreds or thousands of people in the coming years.
Would it be fair to say that Triplebyte is a similar thing for the software engineering industry?
I don’t fully understand Triplebyte, but the common application seems more extensive in functionality.
I expect EAs who create a common application to believe they can achieve closer and more effective coordination between EA organizations than many portals or job search sites.
For example, (in one vision of the common application) with the consent of organizations and explicit agreement by candidates, organizations can share (carefully controlled, positive) information about candidates who don’t end up accepting a job offer, or share other expertise or knowledge about hiring or talent pools they come across.
I think this post, and future, not yet posted content, by the account “che” will be more explicit and clarify the role and value of a common application.
Hi everyone!
It’s been a while since I started my research on how to donate cost-effectively. That journey led me to GiveWell, The Life You Can Save, Animal Charity Evaluators and, eventually, to the EA Community. I am so grateful for all the valuable resources, tools and concepts I could find thanks to the effective altruism movement. This has allowed me to start refining my mindset to maximise the positive impact, not only of my donations, but of all my actions.
However, I have not found any way to donate tax-efficiently from my country (Spain). The charities I want to donate to are not among the 5 supported by Ayuda Efectiva, nor among those listed in Transnational Giving (I’ve seen that Transnational Giving has a process to donate to unlisted charities, but it is very complicated). Anyway, I have decided to donate directly to those charities.
Still, for upcoming donations, I’d like to know if you are aware of any other options to donate tax-efficiently from Spain that I haven’t considered.
Something like a Spanish RC Forward would be great, wouldn’t it? I think it would be a good idea to create a similar platform/tool for Spanish donors. What are your thoughts about that?
Feel free to contact me! :)
Hi Liam!
I usually look at this table for country-specific tax-deductibility opportunities https://donationswap.eahub.org/charities/
It seems that among the listed charities only Animal Ethics and Oxfam are tax-deductible in Spain :(
You can try the donation swap (not sure how responsive they are), and of course keep in mind that donating effectively does not necessarily imply donating tax-deductibly, but you probably already thought about that.
Hey Lorenzo, thank you for your reply!
What a pity that Animal Ethics and Oxfam are the only tax-deductible listed charities in Spain :/
Lately I have been seriously thinking about starting a Spanish platform like RC Forward. I may write a post soon about that idea, asking for the community’s feedback.
Hi everyone! I generally go by Velociraptor online, but if you find that too silly, please call me Lu. I had a pretty awful experience burning myself out trying to do too much volunteer work during the peak of covid, and when I was seeking more reasonable and high-impact ways to return to helping, I stumbled across effective altruism a few months ago. The ideas have really appealed to me, although I’m still uncertain about some aspects (mostly the global focus, I’m generally a proponent of local efforts as participants tend to have more in-depth knowledge about the problems they are tackling). I’m most interested in the career directing ideas that I’ve read about, but I’m also interested in learning more about all aspects of this movement!
If you’re from an affluent community or country, there’s a trade-off between doing things you strongly know to be good (because you’re local), and helping the people who are the least fortunate (who are nowhere near you). A solution might be finding ways to elicit local knowledge and help with impactful work in other places (the Global South, the future, factory farms etc.).
👋 I’m Seth Ariel Green, I mostly write here: https://setharielgreen.com/blog/, I’m a freelance writer currently based in New Orleans, about to go finish up a thru-hike of the Appalachian Trail that I mostly completed last year. Long-time lurker, might start posting, looking forward to getting into it with y’all
Welcome! Props for that accomplishment. Our editor decided to interpret the comma after your link as part of your link. I fixed it for you, I hope you don’t mind.
TY TY!
Hi, I’m Jonny, a software engineer based in London. I’ve recently come across EA and am looking to re-align my career along a higher-impact path, most likely focusing on AI risk. However, I’ve still not fully bought into longtermism just yet, so I’m hedging by also considering working on climate change or global health. I look forward to using this forum to try and answer some of my questions and clarify my own thinking.
Welcome! Super exciting you’re thinking of using your career for impact. I’m also a software engineer and was in the same position in 2016, and now I make this Forum. Take your time to discuss the ideas, and don’t feel any pressure to come to any particular conclusions.
What are you thinking about regarding next steps to become more involved with AI Safety?
I’ve taken a few concrete steps:
Applied for 80k career advising, which fortunately I got accepted for. My call is at the end of the month
Learned the absolute basics of the problem and some of the attempts in progress to try and solve it, by doing things like listening to the 80k podcasts with Chris Olah/Brian Christian, watching Rob Miles’ videos etc
Clarified in my own mind that AI alignment is the most pressing problem, largely thanks to posts like Neel Nanda’s excellent Simplify EA Pitches to “Holy Shit, X-Risk” and Scott Alexander’s “Long-Termism” vs “Existential Risk” (I’d not spent much time considering philosophy before engaging with EA and haven’t had enough time to work out whether or not I have the beliefs required in order to subscribe to longtermism. Fortunately those two posts showed me I probably don’t need to make a decision about that yet and can focus on alignment knowing that it’s likely the highest impact cause I can work on).
Began cold-emailing AI safety folks to see if I can get them to give me any advice
Signed up to some newsletters, joined the AI alignment Slack group
I plan on taking a few more concrete steps:
Continuing to reach out to people working on AI safety who might be able to offer me practical advice on what skills to prioritise in order to get into the field and what options I might have available.
In a similar vein to the above, try to find a mentor, who can help me both focus my technical skills as well as maximise my impact
Getting in contact with the folks at AI Safety Support
Complete the deep learning for coders fast.ai course
My first goal is to ascertain whether or not I’d be a good fit for this kind of work, but given that my prior is that software engineers are likely to be a good fit for working on AI alignment, and that I’m a good software engineer, I am confident this will turn out to be the case. If that turns out to be true, there are a few career next steps that I think seem promising:
Applying for relevant internships. A lot of these seem aimed at current students, but I’m hoping I can find some that would be suitable for me.
Getting an interim job that primarily uses python and ideally ML so I can upskill in those (at the moment my skills are more generic backend API development focused), even if the job isn’t focused on safety.
Apply for a grant to self-study for 3-6 months, ideally under the guidance of a mentor, with a view to building a portfolio that would enable me to get a job somewhere like DeepMind.
Applying for research associate positions focused on AI alignment.
I appreciate there’s little context to my current situation which might be relevant here, but nonetheless any feedback on these would be greatly appreciated!
Nice, I’d also recommend considering applying for the next round of the AGI Safety Fundamentals course. To be honest, I don’t have much else I can recommend, as it seems like you’ve already got a pretty solid plan.
If you’re interested in more resources to help you decide, may I recommend https://80000hours.org/
It has a pretty good set of decision-making tips for someone like yourself. They also occasionally give out personalized career advice which might be of benefit.
Hello, I’m Timothy from Germany. I just joined the forum after finding out about EA through Peter Singer a couple of days ago. I am just 18 years old, so I still have my whole career ahead of me. I’m currently thinking about what to study and what to do in the next six months before university starts. Any suggestions welcome, especially for what to do in the next six months.
Hi Timothy, it’s great that you found your way here! There’s a vibrant German EA community (including an upcoming conference in Berlin in September/October that you may want to join).
Regarding your university studies, I essentially agree with Ryan’s comment. However, while studying in the UK and US can be great, I appreciate that doing so may be daunting and financially infeasible for many young Germans. If you decide to study in Germany and are more interested in the social sciences than in the natural sciences, I would encourage you (like Ryan) to consider undergraduate programs that combine economics with politics and/or philosophy. I can recommend the BA Philosophy & Economics at the University of Bayreuth, though you should also consider the BSc Economics at the University of Mannheim (which you can combine with a minor in philosophy or political science).
In case you are interested in talking through all this sometime, feel free to reach out to me and we’ll schedule a call. :)
It depends what your strengths and interests are, but let me give some generic thoughts.
Most EA high-schoolers who like math/science should at least consider a CS degree (useful for AI safety research and job security in software development), or a math/econ double degree (useful for Econ PhD, policy, and big picture strategy research). I would recommend that a strong student apply to US universities, because they are far stronger than any outside US/UK/CH. But it’s a few months past the deadline for those (and UK universities too). If you’re confident you can lodge a strong application to US schools, but you didn’t do it this year, then you could take a gap year, and apply in 6 months. For people who dislike maths and are excited about policy or politics, another option is law, which in a US setting could follow an undergrad in some combo of polisci, philosophy, and econ.
I’d be interested to hear what others think too!
Hi, I’ve been interested in EA for years, but I’m not a heavy hitter. I’m expecting to give only tens of thousands of dollars during my life.
That said, I have a problem and I’d like some advice on how to solve it: I don’t know whether to focus on short-term organizations like Animal Charity Evaluators and GiveWell or long-term organizations like the Machine Intelligence Research Institute, Center for Reducing Suffering (CRS), Center on Long-Term Risk (CLR), the Long-Term Future Fund, Clean Air Task Force and so on. It feels like long-term organizations are a huge gamble, and if they don’t achieve their goals, I will feel like I’ve wasted my money. On the other hand, short-term organizations don’t really focus on the big picture, and it’s uncertain whether their actions will reliably reduce the amount of suffering in the universe, unlike CRS or CLR, which claim to do so. What do you think?
Personally,
1. Bite all the bullets, uncertain but higher expected impact > certain but lower impact
2. It’s tricky to know how good longtermist organizations are compared to each other. In the past I would have said to just defer to the LTFF, but now I feel more uncertain.
Thank you for answering. Your reasoning makes sense if long-term charities have a higher expected impact when taking into account the uncertainty involved.
This is one of the hardest “big questions” in EA, and you’ve outlined what makes the question hard.
You might want to wait another week or two — we have an annual post where people explain where they’re giving and why. You can be notified when it goes up if you subscribe to the donation writeup tag. You can also see last year’s version of that post.
Maybe some of the explanations in these posts will help you figure out what point of view makes the most sense to you!
Thank you for answering, I subscribed to that tag and I will take a closer look at those threads.
Hi guys, my name is Nathaniel and I’m new to this forum. I found out about EA a few months ago because I’ve been thinking in these terms my whole life (how to maximize positive output to the world) and it’s great to see there’s a whole community centered around that question. I’m studying an undergrad in sustainable energy engineering at SFU and I’m hoping to have a career somewhere in the intersection between this field and computer science (computational sustainability). I haven’t done a lot of research into this yet but it seems like an area with so much potential. I dream big and have thought a lot about how AI could be used to optimize permaculture setups and help transition our food system into decentralized farming co-ops especially in the wake of climate change.
I’m also interested in animal rights activism and anti-capitalism!
TL;DR: The EA Forum (EA as a whole?) should get ready for attention/influx due to political money on about a 12-month horizon from this comment (so 2023ish?). So maybe designing/implementing structure or norms, e.g. encouraging high-quality discussion and using real names, is good.
There is a news cycle going around that SBF will increase political spending for 2024.
Examples:
https://www.cnbc.com/2022/05/24/crypto-billionaire-says-he-could-spend-a-record-breaking-1-billion-in-2024-election.html
https://news.yahoo.com/crypto-billionaire-sam-bankman-fried-131134527.html
https://www.nbcnews.com/politics/2022-election/crypto-billionaire-says-spend-record-breaking-1-billion-2024-election-rcna30351
https://newrepublic.com/article/166584/sam-bankman-fried-crypto-kings-political-donations
https://uk.sports.yahoo.com/news/ftxs-bankman-fried-already-political-202539699.html
Two examples of newcomers, whose presence seems positive or productive:
https://forum.effectivealtruism.org/users/_pk
https://forum.effectivealtruism.org/users/carol-greenough
But this doesn’t indicate what could happen to forum discussion after an extensive, large deployment of money.
It’s prudent to think about bad scenarios for the forum (e.g. large coordinated outside response, or just ~100 outside people coming in, causing weeks of chatter).
The best scenarios probably involve a forum which encourages and filters for good discussion (because the hundreds of thousands of people interested can’t all be accommodated, and just relying on self-selection from a smaller group of people who wander in probably results in adverse selection).
The best outcomes might include bringing in and hosting discussions with great policy expertise, getting EA candidates good exposure, and building understanding and expertise in political campaigning.
I guess a bad scenario is maybe 20-30% probable? I guess most scenarios are just sort of mediocre outcomes, with “streetlight” sort of limitations in discussion, and selecting for the loud voices with less outside options.
Very good scenarios seem unlikely without EA effort. Maybe good scenarios require active involvement and promotion of discussion.
Hi Everyone, this is my introduction post. I’ve put some info in my bio, so I’ll elaborate on it here. You can find out a little more about me here https://snlawrence.com/.
I was introduced to EA through an interview with William MacAskill on Sam Harris’ meditation app, Waking Up. In the interview, William mentioned 80000 hours, which I then googled after. I began reading through their key idea and career review articles and was quickly convinced of the value of doing impactful work over my career. The articles are well written, well researched and very honest about their shortcomings, something I had rarely encountered before. I didn’t have a strong conviction of any particular career pathway and was at a bit of a loss for what sort of jobs to look for after my PhD was finished. I completed the career planning guide and applied for and did a career counselling session at the start of 2021.
Now that I am much closer to finishing my PhD, I find myself with the opportunity to spend the remainder of this year doing low-cost experiments in impactful career pathways to get information about where I may have a competitive advantage for doing impactful work in the longer-term. Currently, I am considering software development, data science, R&D and becoming a founder as potential ‘experiments’. After my final PhD thesis submission, I intend to spend time fleshing out what these ‘experiments’ would look like and what the expected value of each would be so that I can rank them and select which to do. Hopefully, I can share this progress here with the EA community as I go! I’d be interested to hear from anyone who has done, or is doing, something similar.
Hi Sean, we met online last year through 80,000 Hours; nice to see you on the forum! Let’s keep the conversation going. I’m in a similar boat, looking to maximise exploration value over the next 24 months, and keen to trade ideas.
Hello everyone,
I’m a PhD student using non-invasive brain stimulation to enhance human attention. I’m convinced that using non-invasive brain stimulation to enhance human intelligence has massive potential in improving productivity across the global economy.
Unlike its productivity-enhancing counterparts (invasive brain stimulation and artificial intelligence) it is vastly underfunded, making it an ideal target for effective altruism!
Compared to current AI, human intelligence is already general, so enhancing it can be applied to all aspects of society. Intelligence enhancement itself would be self-improving, meaning it’s plausible that, as people who work on intelligence enhancement become more intelligent themselves, there would be a positive feedback loop of intelligence leading to increasingly accelerated productivity.
Finally, compared to pharmaceutical and invasive methods, it is known to be safe and would require minimal regulatory intervention. This would mean an initial product offering could be created sooner and iterations on the technology could happen faster, ultimately leading to productivity improvements arriving sooner and improving faster.
I plan to write a full post on this soon, but just thought I would introduce myself and open the floor to feedback/criticism.
Ahead of the full post, I’d like to know what you think the most compelling evidence is for non-invasive brain stimulation actually working. This could be a paper, a blog post from some self-experimenter, or something else — whatever made you think this was important to study further.
(I know nothing about this topic at all, and don’t even have a mental picture of what NIBS would physically look like.)
Thanks Aaron, I will make sure to include this information but hopefully this will help in the meantime:
Non-invasive brain stimulation is any method of causing brain activity to change without surgery. This can include using electrodes to apply a small amount of current to the scalp with a headset like this:
https://www.neuroelectrics.com/solutions/starstim
Creating a magnetic field in the brain with a device like this:
https://www.healthline.com/health/tms-therapy#What-is-TMS-therapy?
Or by using ultrasound waves with a device that looks something like the image here:
https://www.semanticscholar.org/paper/Technical-Review-and-Perspectives-of-Transcranial-Yoo/c26b8b3655561cfb24dfb262d4fbf5ad76bc6867
The electrical and magnetic stimulation methods are well established, with decades of research covering tens of thousands of participants and proven safety profiles. The magnetic method is too bulky for a consumer headset, and the electrical method has issues with reliability across subjects (my research plays a small part in helping to address this).
The ultrasound method is newer, but promises much more accurate stimulation. Without going too deep into the technical challenges that remain, I think an electrical-stimulation-based headset that increases intelligence significantly could be available to consumers within 5 years, with an ultrasound-based headset superseding that once the research is more firmly established.
Can you explain why this technology/approach is so underfunded/neglected, when some implementations seem simple/benign, and the benefits seem large?
Great question. I think it’s largely because the implementation wouldn’t be as simple as it may first appear, so relatively deep pockets are required. Also, the number of researchers in this field is pretty low (low thousands?). It’s still much simpler than invasive stimulation (e.g. Neuralink), but not something that can be implemented overnight.
The easiest headset to initially implement would use electrical stimulation, and there are devices on the market that use electrical stimulation, for example, this one for depression:
https://flowneuroscience.com/
The issue is that we all have differently shaped heads, skull thicknesses, brain shapes, etc., and this can lead to up to a 100% difference in the electric field in the brain https://www.sciencedirect.com/science/article/pii/S1935861X19304115. To phrase that differently: because our brains and heads are different, giving two people the same stimulation can mean one has improved intelligence and the other does not. Luckily, there is a way around this, namely taking an MRI scan of the user's head, simulating brain stimulation, then personalising the stimulation to their head and brain. This gets rid of much of the variability between people by accounting for the different shapes of head and brain. The issue, of course, is that we can't all go and have an MRI scan when we buy this headset; it's expensive, time-consuming, and doesn't scale across the population. This is where the field has sat for a few years: either personalise stimulation at great expense, or don't and get poor results. Most research groups cannot afford to put every participant through an MRI, so most research on this topic has poor results.
Instead, a prospective startup needs to find a way to personalise the stimulation without an MRI scan. One way is to use AI to generate an MRI scan based on the shape of the person's head, their demographics, and maybe even their DNA (see https://developer.nvidia.com/blog/kings-college-london-accelerates-synthetic-brain-3d-image-creation-using-ai-models-powered-by-cambridge-1-supercomputer/?ncid=so-twit-448517#cid=ix11_so-twit_en-us). The other way is to create a model where you give it someone's head shape, demographics, and/or aspects of their DNA, and it tells you what kind of stimulation would work for them, given its training in simulation. Early versions of this already exist! For example, this paper uses head circumference to tell you how much to stimulate, reducing inter-person variability by around 25%:
https://www.sciencedirect.com/science/article/pii/S1935861X21001352
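To make the personalisation idea concrete, here is a toy sketch in Python. Every number in it is invented for illustration (the reference circumference, baseline current, and sensitivity are not taken from the cited paper); it just shows the shape of a "head measurement in, stimulation dose out" model:

```python
# Toy sketch (illustrative only; all constants are made up): scale the
# stimulation current by head circumference so that the simulated electric
# field in the brain is roughly equalised across people.

REFERENCE_CIRCUMFERENCE_CM = 56.0   # assumed population average
BASELINE_CURRENT_MA = 2.0           # typical tDCS current
SENSITIVITY = 0.05                  # assumed mA adjustment per cm of deviation

def personalised_current_ma(head_circumference_cm):
    """Return an adjusted stimulation current for one person.

    Larger heads attenuate the field more, so they get slightly more
    current; smaller heads get slightly less.
    """
    deviation = head_circumference_cm - REFERENCE_CIRCUMFERENCE_CM
    return BASELINE_CURRENT_MA + SENSITIVITY * deviation

print(personalised_current_ma(56.0))  # -> 2.0 (average head)
print(personalised_current_ma(60.0))  # -> 2.2 (larger head)
```

A real model would of course be fitted to simulation or MRI data and use more inputs than a single circumference measurement; the point is only that the per-person correction can be a cheap function of external measurements.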
So a company would have to create these models and validate them against people who have gone through MRI scans; create the physical hardware, ideally tracking the position of the stimulation electrodes relative to the head (an engineering challenge in itself); package this all up nicely; run a large-scale study to prove it enhances aspects of intelligence and improves productivity; then ship it. Then, on a rolling basis, perform more studies on enhancing different aspects of intelligence and ship those as hardware/software updates. It's a big undertaking.
Hi Jack. I am really into cognitive enhancement. In 2020 (right before COVID) I did a two-month research period at Bernhard Hommel's cognitive enhancement lab in Leiden. While I was a Cognitive Science student in Milan, I took an exam with Roberta Ferrucci and one with Alberto Priori, two prominent experts on tDCS as a cognitive enhancer. At the last EAxOxford I spoke with Anders Sandberg about cognitive enhancement as an EA cause area. All to say that I am interested in what you are doing, and that it could be valuable to connect more people who are into "serious" cognitive enhancement research (i.e. not risky, unproven biohacking shit).
Hi Luca,
That sounds really interesting; it is good to hear from others in this space! I have connected with you on LinkedIn; hopefully we can find a way to work on this together in the future.
Hi everyone!
I was wondering if anyone had an opinion on whether it is more ethical to eat 100% grass-fed beef/lamb from trusted suppliers in Australia (i.e. CCTV in slaughter houses and minimal transport) or more tofu/beans?
The pros of tofu/beans are clearly that it does not require taking the life of a cow or lamb who wants to live (though note that it takes many meals to cause the death of one cow), and that it dramatically reduces carbon emissions.
The pros of instead eating 100% grass-fed beef/lamb are that it may help me avoid causing wild animal suffering, since crop cultivation causes potentially painful animal deaths. Although, it is worth noting that these animals may counterfactually die painful deaths in the wild anyway, and eating crops could also reduce wild animal populations who may have net negative lives. Eating beef/lamb once or twice a week would make it somewhat easier to stay healthy and potentially be more productive, and would make my parents less concerned about my health.
I am assuming that cows/lamb live a net neutral life, which seems to be a reasonable assumption for trusted suppliers. In terms of monetary cost, I think the cost of buying vitamin supplements is approximately cancelled out by the cost of buying meat. Also, I wouldn’t eat any meat out of the house, so you can assume that the impact of my eating on my friends is irrelevant.
Looking forward to hearing your thoughts!
Lucas
Update—I just came across this article, which suggests that harvesting/pasture deaths are probably higher for beef than plants anyway, so it seems a pretty clear decision that being vegan is best in expectation!
From a consequentialist perspective, I think what matters more is how these options affect your psychology and epistemics (in particular, whether doing this will increase or decrease your speciesist bias, and whether doing this makes you uncomfortable), instead of the amount of suffering they directly produce or reduce. After all, your major impact on the world is from your words and actions, not what you eat.
That being said, I think non-consequentialist views deserve some consideration too, if only due to moral uncertainty. I'm less certain about what their implications are, though, especially when taking into account things like WAS (wild animal suffering).
A few minor notes to your points:
At least where I live, vitamin supplements can be super cheap if you go for the pharmaceutical products instead of those health products wrapped up in fancy packages. I’m taking 5 kinds of supplements simultaneously, and in total they cost me no more than (the RMB equivalent of) several dollars per month.
It might be hard to hide that from your friends if you eat meat when you're alone. People mindlessly say things they aren't supposed to say all the time. Also, when your friends ask you about your eating habits, you'll have to lie, which might be a bad thing even for consequentialists.
Thanks, these are really interesting and useful thoughts!
Might be irrelevant, but have you considered moving to the US for the increased salary?
Thanks for the suggestion, but I’m currently in college, so it’s impossible for me to move :)
This is a really thoughtful and useful question.
Most informed people agree that beef and dairy cows live the best lives of all factory-farmed animals, better than pigs, and much, much better than chickens.
Further, as you point out, beef and dairy cows produce much more food per animal (or suffering weighted days alive).
A calculator here can help make the above thoughts more concrete; maybe you have seen it.
I think you meant prevents painful deaths?
With this change, I don’t know, but this seems plausible. (I think amount of suffering depends on the land use and pesticides, but I don’t know if the scientific understanding is settled, and this subtopic may be distracting.)
I think you have a great question.
Note that extreme suffering in factory farming probably comes from very specific issues, concentrated in a few types of animals (caged hens suffering to death by the millions and other graphic situations).
This means that, if the assumptions in this discussion are true, and our concern is on animal suffering, decisions like beef versus tofu, or even much larger dietary decisions, seem small in comparison.
Thanks Charles for your thoughtful response.
I just wanted to note that I'm referring to 100% pasture-fed lamb/beef. I think it's very unlikely that it's ethically permissible to eat factory-farmed lamb/beef, even if it's less bad than eating chickens, etc. I'd also caution against eating dairy, since calves and mothers show signs of sadness when separated, although each dairy cow produces a lot of dairy (as you noted).
Sorry, I probably could’ve worded this better, but my original wording was what I meant. My understanding is that crop cultivation for grains and beans causes painful wild animal deaths, but grass-fed cows/lamb do not eat crops and therefore, as far as I’m aware, do not cause wild animal deaths.
I certainly agree with your conclusion that not eating factory farmed chicken, pork, and eggs (and probably also fish) is the most important step! But I’d still like to do the very best with my own consumption.
Everything you said is fair and valid and seems right to me. Thank you for your thoughtful choices and reasoning.
Edit: I forgot you said entirely pasture/grass fed beef, so this waives the thoughts below.
A quibble: it seems that beef and dairy cows both use feed, not just grass. Because eating dairy/beef requires more calories of feed (trophic levels), it is possible the amount of land needed for beef is large compared to the land needed for soy. Also, grass crops are a use of land that might have ambiguous effects on animal suffering. I don't know about either of these two points; I guess I am saying it is either good to be uncertain, or else to find a good canonical source.

Just watched the new James Bond movie No Time to Die. The plot centers around a nanobot-based bioweapon developed by MI6 that gets stolen by international terrorists (if I'm understanding the plot correctly; it was confusing). Maybe someone can write a review of it that focuses on the EA themes?
I am the founder of Sanctuary Hostel, a unique cross-border, eco-friendly animal rescue / hostel / community garden project.
After taking a trip all over Mexico, I noticed the animals there were not treated well, so I decided to move there and build an animal rescue. After arriving, I decided a rescue was not enough. The existing rescues fail because they rely solely on donations, and they don't really solve the problem; they are a band-aid.
I felt community and worldwide involvement was needed, so I decided that combining the rescue with a hostel would help with that, as well as a community garden.
Our focus is not rescue, it's education: we want to stop strays from existing, stop people from breeding animals, and stop people from abusing animals.
So in 2019 I moved to Rosarito, where I purchased some land and have been working towards building this concept. We are still very new, so we don't have many people on the team and we don't have a lot of brand awareness. I am trying to learn a bit about fundraising, crowdfunding, and donations.
This is why i chose Mexico.
Roughly 70% of Mexico’s 18 million dogs are abandoned and become strays, making it the worst country for pet abandonment in Latin America. Animals are treated more like property than pets, and they are often mistreated whether living in a home or on the street.
For those interested in the work Michael Kremer (Giving What We Can member and 2019 Nobel Laureate in Economics) and his spouse and fellow GWWC member Rachel Glennerster have done on COVID-19 vaccine supply, our team profiled one of their co-authors this week — Juan Camilo Castillo of UPenn. An excerpt is below / the link is here: https://innovationexchange.mayoclinic.org/market-design-for-covid-19-vaccines-interview-with-upenn-professor-castillo/
###
JCC: Michael Kremer had worked on groundbreaking pneumococcal vaccine research in the past. Early in 2020, he realized there would be a profound need for research into financing COVID-19 vaccines. He thus reached out to several people and put together a team of economists that included some of his former colleagues and some new people (such as myself).
At the start of our work, we saw that some of the hurdles that had to be cleared to develop a vaccine were no longer a problem, since phase I and II clinical trials were already underway for several vaccine candidates. However, we realized that it would not be easy to translate successful trials into large-scale vaccination quickly, since few steps had been taken to set in place the capacity to manufacture vaccines. So we focused our work on financing large-scale manufacturing capacity that would allow for quick, large-scale vaccination as soon as vaccine trials were successful.
Hi all, this is my first post on the forum and I apologize for the shameless plug, but I just recently came into an opportunity to work on a large project focusing on climate change and emerging technologies relating to it ending with a presentation to the leadership of a fund with ~50 billion dollars in assets under management and the ability to put reasonable portions of that to work every year.
My influence is likely to be quite limited, however if anyone has special insight into hydrogen production, green VC firms, carbon storage technologies or any other areas they see as ready to scale in the energy transition, I would love to find out more. Additionally, I would also be interested if any subject matter experts are willing to recommend resources or discuss ideas with me.
I would say there is a 90% chance this project has a minimal impact on the board’s decision, but the leadership team seems to be open to being convinced and it seems worth a shot.
Hi Chris! You’re probably already aware of this, but founders pledge and giving green are doing great research on this and might be worth contacting.
You might also be interested in the forum posts tagged climate change or climate engineering, and maybe contact their authors or some commenters that seem subject matter experts.
Good luck on the project!
Good advice, thanks!
Thoughts/comments on potential new series of posts (“Gates are Open, Come In”)?
Someone I know has benefited a lot from interactions with major EA funders (for reasons that aren’t clear, the funders just seem communicative and benevolent).
This person is thinking of writing up a series of posts about their experiences, in a positive, personally generous way, to provide value and insight to others.
They would share actual documents (they wrote) as well as describing their views of communications and key points that seem important to their interactions.
This series seems potentially popular and interesting. It tries to use this interest to achieve three prosocial purposes:
By illustrating experiences, the series tries to reduce friction and increase access to these channels, which in theory improves the supply of proposals, and happiness and impact overall.
There is currently one “live” proposal or project that may need a founder. This only exists as a private document, and it seems worth showing this publicly as part of this series.
In addition to the above, the series might actively promote certain patterns and virtues that are robustly good (like “intellectual honesty”) and certain limited insights to grant making. While the writer is uncertain, the very best outcome from this would be to create a new set of actions or patterns that most people can explore at low risk, that has low negative externalities and produces value and gives feedback [1].
These positives seem pretty large.
I think the downsides are important. Here are some downsides that have been thought about:
One downside is misleading people. One way this might happen is by giving a false sense of the bar of a proposal being accepted (too easy or too hard). This is hard to completely avoid, because the writer is in no position to state the bar. The writer will try to give some sense of their qualifications, as well as tying these to the mechanics of the project and impact, which seems to be what is important.
Another downside is having readers “over index” on the content. For example, someone might rigidly use a proposal in a post they see, as a template in future proposals. To partially address this, without distorting the content, the writer can focus on traits about the process that seem robustly good (calibration, and examples of “reasoning transparency”).
I’m writing this to get any feedback on the above, especially any objections for any reason, covered or not covered above.
In particular, for this “pattern”, I’m thinking about the process of people finding specific talented individuals, who are not inside EA, and bringing them to EA, and with the bar of interesting funders. This pattern seems fairly safe for most people to explore, develops skills that seem universally useful, and touches on key traits useful for grant making.
This looks like a great idea!
Hi, I'm new here. I am writing from Calgary, Canada. I'm a Ph.D. student in Communication and Media Studies, interested in AI and media.
That seems like really important and interesting work.
Can you write a bit more here about anything that would help you in your “journey” in EA or elsewhere, or have any questions for anyone?
Hi Charles, so sorry I took so long to answer. I'm particularly interested in AI narratives portrayed by the media and their relationship with AI governance. Beyond that, I'm interested in science communication and disinformation. This Forum looks like a great place to share my texts and learn from others.
That’s fantastic, many people would be interested in your work!
(Repost from Shortform because I didn’t get an answer. Hope that’s ok.)
The “Personal Blogposts” section has recently become swamped with [Event] posts.
Most of them are irrelevant to me. Is there a way to hide them in the “All Posts”-view?
Thanks Tobias, we are aware of this issue and have a fix on our backlog. Unfortunately, there isn't an easy way to filter out these posts in the interim.
Are shortforms supposed to show up on the front page? I published a shortform on Sunday and noticed that it did not appear in the recent activity feed, but older material did.
Also, does anyone else think that the shortform section should be more prominent? It’s a nice way to encourage people to publish ideas even if they’re not confident in them, but my most recent one has gotten little to no engagement.
The shortform should in fact appear in recent activity—not sure what happened there.
And I agree that we should grow and develop low-barrier ways of interacting with the Forum.
https://www.nytimes.com/2022/04/10/business/mackenzie-scott-charity.html
This seems like a great article and thought provoking:
There’s a lot of attention on meta EA and EA money. The FTX grants, which might total ~$100M in a year, seem big. These grants are extremely important for the cultural effects and could be enormously impactful.
Scott moved out $8.6 billion last year. If just 10% of that (~$860 million) was directed toward very impactful causes, what would the value of that be?
Did Scott or her staff encounter EA? Did this happen, and if so, what did they hear, how did that come across, and what did they think? If they didn’t encounter EA, why not?
It seems hard to integrate multiple, large sources of funding into EA because of coordination, social or cultural reasons. But the very best implementation of the movement could probably do this, while preserving all EA goals and work.
Hi everyone, I am Oisín from Ireland. I am relatively new to EA (about 4 months), and am currently in university studying Theoretical Physics (3rd year), though, to be quite honest, I'm pretty sure I won't graduate with a first. The general field of EA I am currently most invested in is animal welfare/advocacy. I am also in the middle of the AAC training course and finding it intriguing. Would you know how someone with my sort of degree could be useful in EAA (effective animal advocacy) or other areas of EA? Thanks for all the advice.
You might want to have a look at Animal Advocacy Careers' website. They have a section for career advice as well as an introductory online course.
(If you are interested in other areas too, there is also 80,000 Hours, which you have probably already heard of. They offer 1-on-1 advice too.)
Hi,
I am writing to say that I might be doing “moderately-high temporal resolution scrapes of some subset of EA Forum content”.
This comment/notification is mainly for forum technical admin, and anyone interested in these scrapes or the potential products of such a project.
Precedents for this scraping include this post, this question, the existence of the API and its discussion here, and general open source/discussion principles, or something.
Feel free to discuss!
For more information, I’ve very quickly written rambling, verbose thoughts in a reply to this comment.
Flagging some more technical points about the scraping above (verbose, quickly written):
This scraping might be in the form of API calls that occur every few minutes. The burden of these calls seems small (?) relative to the mundane, everyday use of the API, e.g. see GreaterWrong or Issa Rice’s site.
Just to be super clear, I think the computing costs for the backend activity of these calls are probably <$1 a month
It seems there aren't rules/norms for rate limits, and there is some evidence that the EA Forum/LessWrong may not handle heavy use of API calls robustly:
Calls that seem fairly large are allowed. To me, these calls seem large compared to, say, the response and size limits of the Gmail API and other commercial APIs I've used.
Pagination isn't supported directly in the API, and for many calls there aren't even date filters ("before:"/"after:") for me to approximate pagination. However, I found additional query "views", such as MultiCommentOutput, which allow an offset, so you can paginate.
The API exposes certain information that isn't available in the front-end website. However, I am reluctant to elaborate because (1) this same information is available another way, so it's not quite a leak; (2) I'm a noob, but this was easy to find, which I think is a sign it's sanguine and maybe already used; (3) I don't want to just add a low-value ticket to someone's Kanban board; and (4) I find this information interesting!
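To illustrate what offset-based pagination over the API might look like, here is a rough Python sketch. The view name ("allRecentComments") and the result fields are assumptions modelled on public GraphQL examples, not confirmed parts of the API:

```python
# Sketch of offset-based pagination using an offset-capable query "view".
# The view name and field selection below are assumptions, not verified docs.

def build_comments_query(offset, limit=50):
    """Build one page of a comments query as a GraphQL string."""
    terms = 'view: "allRecentComments", limit: %d, offset: %d' % (limit, offset)
    return ('{ comments(input: {terms: {%s}}) '
            '{ results { _id postedAt baseScore } } }' % terms)

def page_offsets(total, page_size=50):
    """Yield the offsets needed to cover `total` items, page by page."""
    for offset in range(0, total, page_size):
        yield offset

# e.g. three queries cover 120 comments in pages of 50:
queries = [build_comments_query(o) for o in page_offsets(120)]
```

Stepping through offsets like this approximates proper pagination, at the cost of results shifting if new comments arrive between pages.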
Other comments on the purpose (also verbose, quickly written):
This “higher resolution” scraping might help answer interesting questions. I don’t want to write details, mainly because I’m in the fun, initial 10%/ideation stage of a side project. In this stage, usually I see something shiny, like a batch of kittens in the neighborhood that need fostering, and the project ends.
Not really related to high-frequency temporal scraping, but related to scraping in general: this is useful to get around certain limitations of the API. E.g., see the part in Issa Rice's thoughtful walkthrough of GraphQL where he says "Some queries are hard/impossible to do. Examples: (1) getting comments of a user by placing conditions on the parent comment or post (e.g. finding all comments by user 1 where they are replying to user 2); (2) querying and sorting posts by a function of arbitrary fields (e.g. as a function of baseScore and voteCount); (3) finding the highest-karma users looking only at the past N days of activity."
I guess one reason I’m writing all this is to make sure there isn’t some big blocker, before I spend the time grokking my AWS Lambda cookbook, or whatever.
Hey, I have a series of JS snippets that I've put some love into that might be of help; do reach out via PM.
Hi Nuño,
This is generous of you.
So I managed to stitch together a quick script in Python. It consists of GraphQL queries created per the post here, plus Python requests/urllib3.
If you have something interesting written up in js, that would be cool to share! I guess you have much deeper knowledge of the API than I do.
It was a bit of a hassle getting it packaged and running on AWS, with Lambda calls every few minutes. But I got it working!
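For anyone curious, a minimal sketch of that kind of setup might look like the following. The endpoint URL and the Lambda event shape are assumptions, and all error handling, retries, and storage are omitted:

```python
# Minimal sketch: a GraphQL query POSTed as JSON, wrapped in an
# AWS Lambda-style handler. The endpoint URL is an assumption.
import json
import urllib.request

GRAPHQL_ENDPOINT = "https://forum.effectivealtruism.org/graphql"

def build_request(query):
    """Package a GraphQL query string as a JSON POST request."""
    payload = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        GRAPHQL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def handler(event, context):
    """Lambda entry point: run one scrape and return the parsed response."""
    req = build_request(event["query"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A scheduled trigger (e.g. an EventBridge rule firing every few minutes) would then invoke `handler` with the query to run.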
Now, witness the firepower of this fully armed and operational battlestation!
Hey, everyone. I don't post here often and I'm not particularly knowledgeable about strong longtermism, but I've been thinking a bit about it lately and wanted to share a thought I haven't seen addressed yet; I was wondering if it's reasonable. I'm not sure this is the right place, but here goes.
It seems to me that strong longtermism is extremely biased towards human beings.
In most catastrophic risks I can imagine (climate change, AI misalignment, and maybe even nuclear war* or pandemics**), it seems unlikely that earth would become uninhabitable for a long period or that all life on earth would be disrupted.
Some of these events (e.g. climate change) could have significant short- to medium-term effects on all life on earth, but in the long run (after several million years?), I'd argue the impact on non-human animals would likely be negligible, since evolution would eventually find its way. So if this is right, and you consider the very long term and value all lives (human and other animal) equally, wouldn't strong longtermism imply not doing anything?
Although I definitely am somewhat biased towards human beings and think existential risk is a very important cause, I wonder if this critique makes sense.
*Regarding nuclear war, I guess it would depend on the length and strength of the radioactivity, which is not a subject I’m familiar with.
**From what I've learned in the last year and a half, it wouldn't be easy for viruses (I'm not sure about bacteria) to infect lots of different species (COVID-19 doesn't seem to be a problem for other species).
If humanity survives, we have a decent shot of reducing suffering in nature and spreading utopia throughout the stars.
If humanity dies, but not all life, and some other species eventually evolves intelligence and then builds civilization, I think they might also have a shot of doing the same thing, but this is more speculative and uncertain, and seems to me to be a much worse bet than betting on humanity (flawed as we are).
Thanks for the comment. I really hadn’t considered colonizing the stars and bringing animals.
TBC, I think it’s more likely that utopia would not look like having animals in the stars. Digital minds seem more likely, but also I think it’s likely just that the future will be really weird, even weirder than digital minds.
Great points! I agree that the longtermist community needs to better internalize the anti-speciesist beliefs that we claim to hold, and explicitly include non-humans in our considerations.
On your specific argument that longtermist work doesn’t affect non-humans:
X-risks aren’t the sole focus of longtermism. IMO work in the S-risk space takes non-humans (including digital minds) much more seriously, to the extent that human welfare is mentioned much less often than non-human welfare.
I think X-risk work does affect non-humans. Linch's comment mentions one possible way, though I think we need to weigh the upsides and downsides more carefully. Another thing I want to add is that a misaligned AI could be a much more powerful actor than other earth-originating intelligent species, and may have a large influence on non-humans even after human extinction.
I think we need to thoroughly investigate the influence of our longtermist interventions on non-humans. This topic is highly neglected relative to its importance.
I agree with Linch's comment, but I want to mention a further point. Let us suppose that the well-being of all non-human animals between now and the death of the sun is the most important value. This idea can be justified since there are many more animals than humans.
Let us suppose furthermore that the future of human civilization has no impact on the lives of animals in the far future. [I disagree with this point since it might be possible that future humans abolish wild animal suffering or in the bad case they take wild animals with them when they colonize the stars and thus extend wild animal suffering.] Nevertheless, let us assume that we cannot have any impact on animals in the far future.
In my opinion, the most logical thing would be to focus on the things that we can change (x-risks, animal suffering today etc.) and to develop a stoic attitude towards the things we cannot change.
Hi there, I first came across EA last summer after someone suggested it might be a good fit for what I’m working on.
I'm building a platform for community-based disaster risk management and resilience. https://www.thrivespring.com/
The events of the past two years have made it very clear that we live in an increasingly complex and unpredictable world. Our interdependent global supply chain is brilliant when everything is working as expected, but it is vulnerable to black sky hazards that can cause cascading failures across multiple sectors. Crossing our fingers and hoping that a low probability but high impact event like an intense solar flare, EMP or cyber attack on the grid won’t ever occur seems like an awfully big gamble when we consider the stakes.
So how can we increase capacity at the local level to be able to withstand disruptions and shocks?
Long-term resilience, self-sustainability and community-based disaster risk management are part of the answer. We need to tap into the resources within every community and get connected and organised so that we can work together to reduce our risks and increase our capacity for coping with disruptions and disasters. How can we do that?
Through practical grassroots initiatives focused on:
- local food production
- water security
- community energy
- emergency communications
- mutual aid groups
- mitigation and preparedness measures
- response and recovery teams
Our community resilience noticeboard lets people request collaborators and funding for new and ongoing local projects.
https://www.thrivespring.com/community-projects
That’s about it for now! We’ve got more features in the pipeline but the biggest challenge now is getting the platform out there so that people become aware of it and start or get involved with projects in their areas.
If anyone would like to know more please feel free to get in touch.
Charlotte Cecil
Hi everyone!
I’ve known about the ideas behind EA for a while now, but have just recently become aware of how much concrete organizing is going on and how many resources the movement now has.
I’ve got academic training in a lot of skills that are useful to EA organizations, such as cost-effectiveness analysis, decision science, and preference elicitation. My reading in the EA literature has given me a few ideas about how I might some day put those skills to work for this cause. I’m definitely open to research and project collaborations if you think I might be useful to you—or even if you just want someone to brainstorm with!
I had a conversation with my partner yesterday about how we want to do good better, but at the same time nobody can do 100% and taking care of yourself is important. She described to me a concept that is a simple but important change from how I have understood EA, and I’d like to share it. While I normally thought of doing good better as
what she described was
This isn’t a huge distinction, but I think that it makes EA feel much less intimidating and much more welcoming; it sets the bar lower. It is true that there are lots of people who devote a larger percentage of their resources toward doing good better, but for people who only feel comfortable with a smaller amount I think this idea is helpful.
Instead of A & B this idea describes EA more as B is important for us (and it is also common to do quite a lot of A).
For me, knowing my giving is effective makes me more confident to give more. Before learning about EA, I never considered donating 10% of my income, because I never thought it would be so helpful, and I saw charity as something I was sometimes obliged to donate small amounts to.
I look at it this way: EA is about maximising the total amount of good you do over your lifetime. If you can do lots of good right now but it will tear you down—you may not be more impactful overall by doing it.
Would it be beneficial for the EA community to have dedicated financial planners who help community members invest for personal and altruistic goals (i.e. investing to give), kind of like 80K advising? I see that we have some financial planners registered on EA Hub.
Founders Pledge thinks it’s fairly difficult to make an impact through one’s investments, at least in large stock markets – see Impact Investing Executive Summary | Founders Pledge.
I meant investing to give, not impact investing—but that’s helpful!
Last year there were 2062 Frontpage posts and 82 Personal Blogposts. By default, Personal Blogposts are hidden from view — you have to search for them in All Posts or change your settings to view them.
Clearly “selected by moderators as especially interesting or useful” isn’t really true, since Frontpage is both the default and the status of over 96% of last year’s posts. I’m not sure what to think of this. On balance, I guess I like that moderators have the ability to torpedo a post’s visibility without deleting it. But at the least, I’m uncomfortable with the description of Frontpage vs Personal Blogposts quoted above.
This description has been out of date for a long time, and I thought the Forum team had updated it a while ago. Might be a merge issue that reinstituted some old language, or maybe we updated in some places but not others.
While I’m no longer a moderator, I should clarify that we never used “Personal Blog” to “torpedo a post’s visibility”. If we thought a post shouldn’t have been visible, we moved it back to a draft (many instances of spam, exactly one instance I can recall of an infohazard concern). Otherwise, it’s always been up to the voters.
Personal Blog is the Forum’s way of classifying:
a) Posts that the authors want to be less visible (you can ask to have your post labeled this way in the post editor)
b) Posts that aren’t really about EA (although, as with any classification system, there will be weird in-between examples)
Everything else is Frontpage by default.
I’ll let the online team know that the description is wrong at the moment — thanks!
Had the chance to speak to venture capitalist, former poker pro and Effective Altruist Haseeb Qureshi about EA and Web3 - including earning to give and how crypto can facilitate effective giving. You can give it a read here: https://golem.foundation/2021/12/03/interview-HQureshi.html.
That link is broken for me.
There’s an extra dot at the end. Remove it and the link is fine.
Greetings. My name is Anna and I am a digital producer. I am glad that there are so many of us here :)
Hi everyone! I’m Hyunjun and I live in Boston. I first came across effective altruism while reading about utilitarianism in college classes, but I just recently heard about this organization. Excited to be here!
Also, a bit of a shameless plug: I’m in the early stages of building a product that makes it much easier for people to invest their money in a socially responsible way while meeting their financial goals. If you’ve ever felt frustrated when thinking about how your personal investments could better line up with your values and you live in the US, I’d be happy to help.
All you have to do is fill out a short survey and I’ll do some research to find index funds and mutual funds that line up with what you care about, whether it’s moving towards clean energy, supporting gender equality, or anything in between.
I’m not trying to sell anything—the only thing that I’d ask for in return is feedback on how to make this product better!
Who are the EA folks most into the AI governance space? I’d be curious to hear their thoughts on this essay on the superintelligence issue and realistic risks: https://idlewords.com/talks/superintelligence.htm
You may have better luck getting responses to this posting on LessWrong with the ‘AI’ and ‘AI Governance’ (https://www.lesswrong.com/tag/ai-governance) tags, and/or on the AI Alignment Slack.
I skimmed the article. IMO it looks like a piece from circa 2015 dismissive of AI risk concerns. I don’t have time right now to go through each argument, but it looks pretty easily refutable esp. with all that we’ve continued to learn about AI risk and the alignment problem in the past 8 years.
Was there a particular part from that link you found particularly compelling?
Tbh the whole piece is my go to for skepticism about AI. In particular, the analogy with alchemy seems apropos given that concepts like sentience are very ill posed.
What would you say are good places to get up to speed on what we’ve learned about AI risk and the alignment problem in the past 8 years? Thanks much!
I took another look at that section, interesting to learn more about the alchemists.
I think most AI alignment researchers consider ‘sentience’ to be unimportant for questions of AI existential risk—it doesn’t turn out to matter whether or not an AI is conscious or has qualia or anything like that. [1] What matters a lot more is whether AI can model the world and gain advanced capabilities, and AI systems today are making pretty quick progress along both these dimensions.
My favorite overview of the general topic is the AGI Safety Fundamentals course from EA Cambridge. I found taking the actual course to be very worthwhile, but they also make the curriculum freely available online. Weeks 1-3 are mostly about AGI risk and link to a lot of great readings on the topic. The weeks after that are mostly about looking at different approaches to solving AI alignment.
As for what has changed specifically in the last 8 years: I probably can’t do the topic justice, but a couple of things jump out at me:
The “inner alignment” problem has been identified and articulated. Most of the problems from Bostrom’s Superintelligence (2014) fall under the category of what we now call “outer alignment”, as the inner alignment problem wasn’t really known at that time. Outer alignment isn’t solved yet, but substantial work has been done on it. Inner alignment, on the other hand, is something many researchers consider to be more difficult.
Links on inner alignment: Canonical post on inner alignment, Article explainer, Video explainer
AI has advanced more rapidly than many people anticipated. People used to point to many things that ML models and other computer programs couldn’t do yet as evidence that we were a long way from having anything resembling AI. But AI has now passed many of those milestones.
Here I’ll list out some of those previously unsolved problems along with AI advances since 2015 that have solved them: Beating humans at Go (AlphaGo), beating humans at StarCraft (AlphaStar), biological protein folding (AlphaFold), having advanced linguistic/conversational abilities (GPT-3, PaLM), generalizing knowledge to competence in new tasks (XLand), artistic creation (DALL·E 2), multi-modal capabilities like combined language + vision + robotics (SayCan, Socratic Models, Gato).
Because of these rapid advances, many people have updated their estimates of when transformative AI will arrive to many years sooner than they previously thought. This cuts down on the time we have to solve the alignment problem.
--
[1]: It matters a lot whether the AI is sentient for moral questions around how we should treat advanced AI. But those are separate questions from AI x-risk.
Hi, everyone, I’m Muireall. I recently put down some thoughts on weighing the longterm future (https://muireall.space/repugnant/). I suspect something like this has been brought up before, but I haven’t been keeping up with writing on the topic for years. It occurred to me that this forum might be able to help with references or relevant keywords that come to mind. I’d appreciate any thoughts you have.
The idea is that, broadly, if you accept the repugnant conclusion with a “high” threshold (some people consensually alive today don’t meet the “barely worth living” line), I think your expected utility for the longterm future has to take a big hit from negative scenarios. From that perspective, it’s not only likely that future civilizations will (as, apparently, we do) mistake negative welfare for positive welfare, but also that welfare will be put on hold (since apparently near-threshold lives can be productive) while they too invest in favor of the distant intergalactic future (until existential catastrophe comes for them).
In other words, I worry (1) expected-total-utility motivations for longtermism underrate very bad outcomes, and (2) these motivations can put you in the position of continually making Pascalian bets long enough to all but guarantee gambler’s ruin before realizing your astronomical potential value.
I’ll answer my own question a bit:
Scattered critiques of longtermism exist, but are generally informal, tentative, and limited in scope. This recent comment and its replies were the best directory I could find.
A longtermist critique of “The expected value of extinction risk reduction is positive”, in particular, seems to be the best expression of my worry (1). My points about near-threshold lives and procrastination are another plausible story by which extinction risk reduction could be negative in expectation.
There’s writing about Pascalian reasoning (a couple that came up repeatedly were A Paradox for Tiny Probabilities and Enormous Values, In defence of fanaticism).
I vaguely recall a named paradox, maybe involving “procrastination” or “patience”, about how an immortal investor never cashes in—and possibly that this was a standard answer to Pascal’s wager/mugging together with some larger (but still tiny) probability of, say, getting hit by a meteor while you’re making the bet. Maybe I just imagined it.
I added a more mathematical note at the end of my post showing what I mean by (2). I think in general it’s more coherent to treat trajectory problems with dynamic programming methods rather than try to integrate expected value over time.
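The gambler’s-ruin dynamic in worry (2) can be sketched with a toy simulation. The numbers below are my own illustrative choices, not drawn from the post or any longtermist model: each round, a bet multiplies your value by 10 with probability 0.5 and wipes you out otherwise. Every round has positive expected value (a 5x expected multiplier), and the naive expected value after 100 rounds is an astronomical 5^100, yet the probability of surviving all 100 rounds is 0.5^100, effectively zero.

```python
import random

def repeated_pascalian_bets(rounds=100, trials=10_000, seed=0):
    """Simulate repeatedly taking a positive-expected-value gamble:
    each round, value is multiplied by 10 with probability 0.5,
    otherwise it drops to zero (ruin). Returns the fraction of
    trials that end with nonzero value."""
    rng = random.Random(seed)
    survivors = 0
    for _ in range(trials):
        value = 1.0
        for _ in range(rounds):
            if rng.random() < 0.5:
                value *= 10
            else:
                value = 0.0
                break
        if value > 0.0:
            survivors += 1
    return survivors / trials

# Expected value per round is 5x, so naive expected-value reasoning says
# "always bet" -- but survival requires winning every single round
# (probability 0.5**100, about 8e-31), so in practice every trial is ruined.
print(repeated_pascalian_bets())
```

A dynamic programming treatment would track the full distribution of outcomes at each step rather than a single expected value, which is what makes the near-certain ruin visible.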
Is there like some statistics on this forum? Particularly distribution of votes over posts?
Hi Emrik! Is this what you’re looking for?
https://effectivealtruismdata.com/#post-wilkinson-section
https://www.effectivealtruismdata.com/#forum-scatter-section
Exactly! Thanks a lot.
Hello! I am here to get feedback on a blog post I wrote recently (Wild Animal Suffering Should be Effective Altruism’s Flagship Cause (substack.com)). I wrote it for my blog, but I ended up emailing openphil for feedback, and a rep told me to go ahead and share it here.
A summary of the article: wild animal suffering will become much more relevant as it becomes entangled with certain engineering problems, such as ecosystem design and microbiome control, and this gives it desirable properties as a future “flagship” cause. Therefore, we should invest in popularizing it now to seed interest.
I don’t really see myself investing in anything other than my own coffer/health/religion for the next 5 or so years, so I’m sharing this for feedback and to arbitrage on the fact that I hadn’t shared it with its actual target audience yet. The other reason is that I’m proud of many aspects of my writing and want more people to read it.
Hi all! Recently found this community and I’m really impressed with the discourse here!
This is kind of meta and not about EA per se, but from a community-builder’s perspective I was wondering how this forum is moderated (self or otherwise), and how it was built up to such a vibrant space! Are there other forums like this (I know lesswrong runs on a similar-looking community blogging model)? Have there been any moderation challenges?
I read through some of these posts (https://forum.effectivealtruism.org/tag/discussion-norms) but would appreciate any other links people might have.
background: I’ve been thinking of ways to build community in another setting that is more “lay audience.” While I have a strong affinity for the “academic-ish” and “open-source-ish” standards that this EA forum seems to be able to maintain, I also recognize that these practices 1) take effort and 2) are probably not the default mode of being for most people. I’m curious about how to make these standards simple and concise enough to be grasped, but not over-simplified.
Hello everyone,
I have a quick question: if I want to have maximum impact to mitigate climate change, what’s the best use of a small monthly donation? I was planning to pay the extra money to my utility company every month for renewable energy, but I figured there might be a more effective use of that same money. Any suggestions?
This is totally not my area but since no one else answered in six days, I’ll just say that Founders pledge has a report on best climate change interventions with some charity recommendations at the bottom. Also, there is this post, though I don’t know if recommendations are up to date there. And probably there is much more EA stuff that I don’t know about on this topic.
Thank you!
Hello Everyone
I’m Attila Suba a blues singer and the founder of an international climate action project from Amsterdam called the Green Revolution.
We want to promote the science behind how wealth reduces compassion. That is why our project focuses on the decentralization of power.
Because of our impending climate catastrophe, we created a project for funding and coordinating the global mobilization to transform our consumer society, capturing vast amounts of CO2 while lifting people out of poverty.
We are reaching out to the world’s leading climate scientists to sign a white paper, produced by Green Revolution and our partners competing in the X Prize, about the global carbon capture capabilities of industrial hemp, with the help of Rituraj Phukan.
Can you help us reach a new level of consciousness with our revolution?
I would be happy to invite you to a call with our team.
Down below you can find all links and introductions to the Green Revolution and our supporters.
I’m excited to talk with you!
Many Thanks,
Weed Against Greed
Our goal is to open the world’s first climate action coffeeshop in Amsterdam, so that Amsterdam can once again lead the way in the global cannabis revolution. From just one coffeeshop we can put 6-10 million euros a year into funding climate activists and artists around the globe. With hemp solutions we can clean water and provide food and shelter to millions already impacted by climate catastrophes.
Our social enterprise coffeeshop franchise model focuses on community climate action, involving multiple stakeholders in creating a sustainable, resilient city. It features workshops and events, circular energy solutions, community growing, a cashless payment system, DeFi payment processing for carbon capture, and total transparency ensured by open-sourced secure systems.
Now we are asking the Amsterdam city innovation council to support our project by giving us a place, funding, and a license to sell cannabis in Amsterdam for climate action!
Short video:
https://youtu.be/Q-TV3edkntA
Green Revolution:
Interview with Attila Suba.: https://quota.media/cop26-hemp-not-politicians-or-billionaires-will-solve-climate-change/
Business pitch: https://www.greenrevolution.earth/startup
Music to free the soul—Sounds of Revolution
Supporters:
Alicia Fall—Her Many Voices Foundation https://www.hermanyvoices.org/
Arthur van de Graaf https://www.f6s.com/arthurvandegraaf
https://www.greenrevolution.earth/advisory-club
https://drawdownhemp.org/
https://catalyst2030.net/
https://weall.org/
https://globalhempassociation.org/
---
Attila Suba
Founder of GR Foundation
https://www.greenrevolution.earth/
Anyone know how to embed links into text in the “User Profile” section?
So make it look like this:
Instead of this:
Just can’t seem to do it!
I think we maybe support markdown in that textbox, so try using Markdown syntax.
Thanks. I ticked “Activate Markdown Editor” and tried the hyperlink syntax, but it comes out like this:
Maybe I’m doing something wrong?
You had a non-syntactical space between [LinkedIn] and your URL. I removed it.
(Note that you don’t need to turn on the Markdown editor to edit your bio — the bio is in Markdown no matter what.)
Thanks Aaron!
How do EA’s in SF think about local civic action and altruism? That seems a priori like a place with A) a lot of EAs and B) a place with LOTs of local problems. Here’s a good Atlantic article that’s worth reading in full on the problems of SF: theatlantic.com/ideas/archive/2022/06/how-san-francisco-became-failed-city/661199/
And for reference here’s a post I penned recently in response to the call for EA critiques that emphasizes the importance of local as well as global altruistic action: https://forum.effectivealtruism.org/posts/LnuuN7zuBSZvEo845/why-the-ea-aversion-to-local-altruistic-action
Hi, is there a way to get stats on EA membership and activity by location? I can’t seem to glean that from the individual local chapter pages (which might be for the best, since they’d be a pain to scrape one by one). Ideally there’d be a simple table with chapter location and number of members (total is fine; ideally with a subcategory for a common definition of active members). Anyone know where one might find such a thing?
How do you practice charity beginning at home? Do any EA folks give a set percentage of their giving locally? Has anyone seen statistics on typical breakdowns? Is the EA recommended giving percentage 100% to the globally highest impact charities? A EA member passed along this GiveWell post. It seems very intuitive to me that getting your own life, household and community in order is a good thing. It also seems like the more that you get your immediate life and those in it in order, the more you can support people in need further away.
A small percentage of my donations go to local organisations. People are liable to interpret EA ideas as saying that their favourite local charity sucks. I want to emphasise to people that while their favourite local charity is great, there are even better giving opportunities out there. I think it’s good messaging.
What’s small? 1%? 10%? Do you have a sense of how typical your beliefs are in the EA community? I’d be very curious to have this type of question included in a future EA annual survey. It seems the last one was done in 2020, which means it’s perhaps timely for another?
Hello.
Ideas to improve the Effective Altruism movement include:
* include scoring, ranking, and distance measures of the altruistic value of the outcome of all personal behaviors, including all spending behaviors.
* research the causal relations of personal behaviors and the altruistic value of the consequences of personal behaviors.
* treat altruistic value as a relative and subjective metric with positive, null, and negative possible values.
* provide public research and debate on the size and certainty of altruistic values assigned to all common human behaviors (by individual EA practitioners).
Successful implementation of these ideas yields:
* robust maps of consequences of all personal behaviors and their relative altruistic value.
* an end to context-limited assessments of one’s effective altruism over one’s life so far.
-Noah
What’s up EAers. I noticed that this website has some issues on mobile devices—the left bar links don’t work, several places where text overlaps, tapping the search icon causes an inappropriate zoom—is there someone currently working on this where it would help if I filed a ticket or reported an issue?
Don’t worry about finding the perfect place (here is a fine place for now). You can message us about bugs, or post in the Feature Suggestion Thread for feature requests, so that others can vote on the ideas.
I’m guessing you use an iPhone? This is a longstanding issue that we really should have fixed; it used to be that you had to tap twice, though now it appears to have broken entirely. Thanks for the report.
I see the behavior, thanks.
Excellent, sounds like you’re on it. I do in fact use an iPhone. I should have made a more specific note earlier about where I saw overlapping text; I can’t seem to find it again now. I’ll use the “message us” link for any future minor UI bugs.
(X-posting from LW open thread)
I’m not sure if this is the right place to ask this, but does anyone know what point Paul’s trying to make in the following part of this podcast? (Relevant section starts around 1:44:00)
It seems like an important topic but I’m a bit confused by what he’s saying here. Is the perspective he’s discussing (and puts non-negligible probability on) one that states that the worst possible suffering is a bajillion times worse than the best possible pleasure, and wouldn’t that suggest every human’s life is net-negative (even if your credence on this being the case is ~.1%)? Or is this just discussing the energy-efficiency of ‘hedonium’ and ‘dolorium’, which could potentially be dealt with by some sort of limitation on compute?
Also, I’m not really sure if this set of views is more “a broken bone/waterboarding is a million times as morally pressing as making a happy person”, or along the more empirical lines of “most suffering (e.g. waterboarding) is extremely light, humans can experience far far far far far^99 times worse; and pleasure doesn’t scale to the same degree.” Even a tiny chance of the second one being true is awful to contemplate.
Specifically:
I’m not really sure what’s meant by “the reality” here, nor what’s meant by biased. Is the assertion that humans’ intuitive preferences are driven by the range of possible things that could happen in the ancestral environment & that this isn’t likely to match the maximum possible pleasure vs. suffering ratio in the future? If so, how does this lead one to end up concluding it’s worse (rather than better)? I’m not really sure how these arguments connect in a way that could lead one to conclude that the worst possible suffering is a quadrillion times as bad as the best bliss is good.
As I understand it, he gives two possibilities. 1. Our capacity for happiness is symmetric while our “reality” (i.e. humanity’s historical environment) has been asymmetric. 2. Our preferences themselves were asymmetric, because we were “trained” to suffer more from adverse events, making us have greater capacity for suffering. (1) gives more reason for optimism than (2) because we are more able to change the environment than our capability for happiness/suffering.
FWIW, I think we might be able to change our capability for happiness/suffering too, and so thinking along these lines, the question might ultimately hang on energy efficiency arguments anyway.
Cheers for the response; I’m still a bit puzzled as to how this reasoning would lead to the ratio being as extreme as 1:a million/bajillion/quadrillion, which he mentions as something he puts some non-negligible credence on (which confuses me as even a small probability of this being the case would surely dominate & make the future net-negative.)
It could be very extreme in case (2) if for some reason you think that the worse suffering is a million times worse than the best happiness (maybe you are imagining severe torture) but I agree that this seems implausibly extreme. Re how to weigh the different possibilities, it depends whether you: 1) scale it as +1 vs 1M, 2) scale it as +1 vs 1/1M, or 3) give both models equal vote in a moral parliament.
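The “even a small probability dominates” point above is just expected-value arithmetic. Here is a toy sketch with made-up numbers (mine, not from the podcast): give 0.1% credence to a view on which the worst suffering is a million times as bad as the best happiness, and 99.9% to a symmetric view.

```python
def expected_suffering_weight(credence_extreme=0.001, extreme_ratio=1_000_000):
    """Expected weight of the worst suffering, in units where the best
    happiness = 1, mixing a symmetric view (weight 1) with a small
    credence in an extreme asymmetric view (weight = extreme_ratio)."""
    return (1 - credence_extreme) * 1.0 + credence_extreme * extreme_ratio

# A 0.1% credence in the 1:1,000,000 view pushes the expected
# suffering:happiness ratio to roughly 1001:1.
print(expected_suffering_weight())
```

Note this is scaling option (1) above; option (2)’s alternative normalization shrinks the extreme view’s contribution instead of inflating it, which is why the choice of normalization (or a moral parliament, option 3) matters so much.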
Pulse charity:
Effective altruism seems to focus like a laser on the most valuable problems for human suffering, but what if we extend the metaphor further, and to increase the impact make it a pulse laser? (Part of my inspiration was debt jubilees) I think this could have a few effects:
Many issues can be solved with large piles of cash that can’t be solved with smaller ones, such as building a well vs importing water
on the donor side, it could be a Schelling point. Hey, those EA folks only come around every few years, now I can blow off other donors the rest of the time and signal sophistication in my philanthropy
some problems, such as poverty, have a web like network, and would benefit from being solved in tandem rather than piecemeal
Currently, EA resources are not gained gradually year by year; instead, they’re gained in big leaps (think of Openphil and FTX). Therefore it might not make sense to accumulate resources for several years and give them out all at once.
In fact, there is a call for megaprojects in EA, which echoes your points 1 and 3 (though these megaprojects are not expected to be funded by accumulating resources over the years, but by directly deploying existing resources). I’m not sure I understand your second point, though.
Hello! I am slowly seeping into the Forum floorboards, dripping down the comments section, leaving meandering mumblings along an electronic thread. Most of my thoughts are obscure and dubiously specific. Expect errors; I do. And, I value dialogue not for compromise, but to send feelers out in all directions of the design-space. Those lateral extremes bind the constraints of good ideas, found only after pondering a few dozen flops! I’m glad to turn them around, to find any lucky inspirations. Most domains are a straight path up my alley; I follow specific problems into each arena, in turn.
‘Trojan Source’ Bug Threatens the Security of All Code
https://krebsonsecurity.com/2021/11/trojan-source-bug-threatens-the-security-of-all-code/
Does this strike you as unusually threatening compared to other bugs that have been discovered in recent years? Headline aside, the article’s tone seemed mild to me, and it looks like several organizations are taking steps to mitigate the issue.
But my knowledge of computer security is rudimentary at best — do the stakes seem very high to you?
I’m excited.
A lot changes now.
Future is really now.
Why I Am (Not) a LongTermist
I am copy and pasting my newest endeavor to meditate on the meaning of “long-termism.” https://whatiscalledthinking.substack.com/p/why-i-am-not-a-long-termist?s=w
1.
The Long-Term is like the Maimonidean conception of God—you know it when you don’t see it.
2.
The Divine Face, like the distant future, is hidden. But Moses is permitted to see the back of God’s face. Similarly, today’s super-forecasters cannot know the future, but they can see the back of the future.
3.
Of God we know nothing, says Franz Rosenzweig, but our ignorance is of God. So, too, the long-termist knows nothing of the future, but her ignorance is of the future.
4.
In the very long run, there is no time, no before and after, no linearity. In the very long run, there is no humanity. In the very long term, there is the End of Days, which means a day of joy in which the world no longer yearns for anything more in the future or it is a day of self-destruction; perhaps these are one. For in ascetic traditions, Enlightenment is akin to death, a total loss of self, of appetite, of drive and desire. The catatonic priest, hooked up to the neuro-imaging device, appears happy. The relevant parts of his brain light up, signaling to researchers that we will need new jargon to describe his bliss—and yet behaviorally, his non-responsiveness is worrisome. Maybe the joy-symptoms are a sign of something nefarious, as Zero Mostel’s character says in A Funny Thing Happened on the Way to the Forum: the Cretans only smile because they are about to die of a plague. But Mostel is lying. The Cretans will be fine, and, besides, if a plague is coming it does not correlate with smiling. Nonetheless, the success of the long-termist is her failure.
5.
Utilitarians are not concerned with whose pleasure it is that they are maximizing, so long as they are getting a good deal. If you can get a better deal by diminishing the suffering—or increasing the joy—of some hitherto neglected group, like amoebas, that is worth more than just marginally increasing the joy of the happiest person. Many utilitarians accept, at least theoretically, that there is no reason to think human pleasure and pain are categorically different than animal pleasure and pain. And if we could know or imagine the pain of inanimate objects, a la panpsychism, then the utilitarian might come to the repugnant conclusion that we should extend our circle of compassion to bricks and chips. Less fantastically, if AI were to become so awesome as to approximate human consciousness, it might also command our love and responsibility, whatever that might mean.
6.
On the way to pursuing pleasure and minimizing pain, we might stop being human. Deontologists, whatever you think of them, argue that we have a primary obligation to remain human, even if this means that we have a certain amount of suffering. But strict utilitarians see this as just a religious hang-up, an anti-democratic posture preventing the widespread distribution of newer, cheaper and better goods. The question is not, as Leo Strauss put it, “Progress or Return” but “Progress or Transgress”—at what point does progression cease being a matter of making life better and become a matter of making it unrecognizable, categorically other? If pleasure and pain are the only or primary latitudes, it shouldn’t matter. We’re all just code, right?
7.
A core tenet of the effective altruist movement is care for the long-term. Minimally, this means ensuring that the long-term will be there. But Heidegger challenges this linear conception of time as a line. For the long-term to exist, there must first be temporality. For there to be temporality, there must be mortality, awareness of and concern for one’s ownmost finitude.
8.
It is fashionable amongst long-termists to worry about catastrophic risk; the death of the planet from nuclear war, environmental dissolution, asteroid collisions, alien invasion, or AI golems gone rogue. But none of these “ontic” risks can mean anything if there is not a Dasein, an existing being, for whom these are threats. What is the long-termist solution to preserving Dasein?
9.
A long-termist self-critique involves appreciating the ways in which utilitarian thinking and policy-making threaten the “being”—if not the happiness—of the ones engaged in it. A world in which we are maximally happy, but not sufficiently mortal would be one in which earth might exist, but the “world” would not.
10.
There are good reasons to suspect calls for “Meaning” and “care.” Richard Rorty thinks that Heideggerian thought, and any thought devoted to preserving something like the sublime leads to cruelty. But if cruelty is a side-effect of a calculus that acknowledges the incalculable and the awe-inspiring, numbness is a side effect of one that sees most if not all problems of accounting.
11.
Existence is not a column on the spreadsheet. It is, rather, that being that takes issue with itself, and uses spreadsheets to solve solvable problems while fleeing from the insoluble one: itself.
I did it!
Well not really. The specific me doesn’t matter. It’s the idea.
Ideas are Elementary Units of ABSOLUTE TRUTH.
Now We Are Free. (Lisa Gerrard)
Yes, that’s a Gladiator reference. Why? The universe willed it to be so. It called the idea of utopia by the name Rome. And we worked really hard for it. Yes, we did make loads of mistakes. That’s only natural. Entropy increases in an isolated system. We found the answer to every question we needed to survive and beyond. We just didn’t realise.
Now it’s No Time for Caution.
Full thrusters. But different. We will for the first time harness the power of chaos to drive our chariot forward.
And last for all eternity.
In peace.
my blog