Co-CEO of Rethink Priorities
Marcus_A_Davis
Interesting piece. However, the article conflates psychopathy meaning “people with smaller amygdalae” with psychopathy meaning “people with smaller amygdalae who display anti-social behavior”. The former group is not necessarily in the latter group. For example, you may have a smaller-than-average amygdala and genuinely respond less to the fear and distress of others but not become a social predator who manipulates people.
And as you point out, it’s not clear how this study relates to EAs. It could be that EAs have relatively normal amygdala size but are disproportionately interested in rationality and ethics and hence recognize the good they can and should be doing in the world.
Having grown up as one of those people who figured “can’t succeed, don’t try” with regard to large problems, I think this is really a fantastic point that I hadn’t considered expressing this way. I think lots of people who currently think like I did could be swayed if the message got through to them that they can indeed change the world for the better.
This is very useful. As someone still very new to this who wants to contribute more, it can be helpful to see in detail what other EAs are doing. I still struggle with not knowing exactly what I can do now and what realistic goals for behavioral and social change are, particularly in the short term.
More generally, as someone trying to be more productive and efficient Toggl looks promising and I’m going to try it out myself.
Can anyone recommend to me some work on existential threats as a whole? I don’t just mean AI or technology related threats but nuclear war, climate change, etc.
Btw Nick Bostrom’s Superintelligence is already at the top of my reading list, and I know Less Wrong is currently engaged in a reading group on that book.
I’m interested in whether anyone has any data or experience attempting to introduce people to EA through conventional charitable activities like blood drives, volunteering at a food bank, etc. The idea I’ve been kicking around is basically to start or co-opt a blood drive or similar event.
While people are engaged in the activity, or before or after, you introduce them to the idea of EA, possibly even using this conventional charitable event as the prelude to a giving game. On the plus side, the people you are speaking with are self-selected for doing charitable acts, so they might be more receptive to EA than a typical audience. On the downside, this group might be self-selected for people who care a lot about personally getting hands-on with charitable works, which typically aren’t the most effective things you can do.
Working on getting a more useful skill but for now if ever someone needs some audio editing, perhaps for a potential EA podcast, I can do it.
Also, this job board seems relevant, since skills people have that they might not think would be of use are in demand.
I’m pretty excited to help out. Of course, as pointed out by Ryan, if anyone has any pointers about spreading our reach more effectively on social media, I’m open to hearing them.
Ryan and I were discussing doing that for the different subreddits that a given post here might be of interest to. So if it’s a post about medical interventions, posting it in /r/medicine, for example.
Of course, the Internet is a lot bigger than Reddit though so there are probably many venues related to philanthropy, productivity, philosophy, animal rights, medical interventions, etc. that posts here could be relevant to. I’m going to try to do what I can but I would appreciate guidance toward relevant venues and potentially help actually doing the work if it proves to be a huge task.
A bit OT but this reminded me: Does anyone know if The Most Good You Can Do is coming out for Kindle?
I strongly prefer digital books, so buying it for Kindle would be the medium by which I could leave a verified purchase review on Amazon. However, the book doesn’t seem to be available digitally anywhere in the U.S. iBooks is seemingly selling it for Australia only.
I’m pretty sure I’m grasping at proverbial straws here though.
I’m up to help do both of those. Of course, how much I can help with the former will depend on what exactly needs to be done.
If you’d like I can give a go at cleaning up the audio of Ord’s talk.
And by give a go I mean, run it through a few filters to see if it can go from “very bad” to “passable”.
Has anyone else tried pushing EA specifically at religious audiences? There’s this on .impact, but it’s been a while since that was touched and I’d guess it could use some follow-up. Doing this could really prove beneficial at reaching favorable audiences, especially if you or someone you’re close to is heavily involved in a church.
In navel-gazing curiosity: Has there been a poll done on what EAs think about moral realism?
I searched the Facebook group and Googled a bit but didn’t come up with anything.
To answer myself: turns out at least for iBooks the problem was my impatience. It’s now in the library and it’s still a week before it is officially released. Perhaps Kindle will be the same way.
Still, I so rarely anticipate books being released I’m not sure if this is common.
Are there any first-person pieces about successfully changing careers in order to earn to give? There have been several stories discussing the topic over the past few years, but these all seem to be descriptive third-person accounts or normative analysis.
Even if not, if you’ve actually made such a change, could you please publicly share your story? I’d like to hear it and I’d bet many others would too.
Ah, I should have guessed that from the “this is being actively pursued” label or I could have just asked there.
Naturally, if you’d like the help, I suspect there may be at least a few people here who, given their familiarity with a given religion, may have a decent idea of how to pitch the focus on effectiveness to a specific group.
> There is also a contingent of utilitarians within effective altruism who primarily care about reducing and ending suffering. They may be willing to compromise in favor of animal welfare, and not full rights, but I’m not sure. They definitely don’t seem a majority of those concerned with animal suffering within effective altruism.
Of course, only actual data on EAs could demonstrate the proportion of utilitarians willing to compromise, but this seems weird. To me it would seem utilitarianism all but commits you to accept “compromises” on animal welfare, at least in the short term, given historical facts about how groups gained ethical consideration. As far as I know (anyone feel free to provide examples to the contrary), no oppressed group has ever seen respect for their interests go from essentially “no consideration” (where animals are today) to “equal consideration” without many compromising steps in the middle.
In other words, a utilitarian may want the total elimination of meat eating (though this is also somewhat contentious), but in practice they will take any welfare gains they can get. Similarly, utilitarians may want all wealthy people to donate to effective charities until global poverty is completely solved, but will temporarily “compromise” by accepting that only 5% of wealthy people donate 10% of their income to such charities while pushing people to do better.
So, in practice, utilitarianism would mean setting the bar at perfection (and publicly signaling the highest standard that advances you towards perfection) but taking the best improvement actually on offer. I see no reason this shouldn’t apply to the treatment of animals. Of course, other utilitarians may disagree that this is the best long term strategy (hopefully evidence will settle this question) but that is an argument about game theory and not whether some improvement is better than none or if settling for less than perfection is allowable.
As someone currently in the process of learning programming, here are a few thoughts on my attempts at learning two of the bolded languages, Java and Ruby:
I’m currently working through The Odin Project, which has a backend focus on Ruby, and I’d highly recommend it. I’d also recommend Peter’s guide to TOP, which I’ve found very useful; it includes some time estimates, some additional resources, and some things to learn after you complete TOP. Perhaps the biggest plus of TOP for me is that it gives projects of the right difficulty at the right time, so that they are challenging but doable. Another major benefit of TOP is the sheer scope of the resources already collected for you. Also, Ruby is far more intuitive than Java.
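To give a toy illustration of what I mean by “intuitive” (my own example, not taken from either curriculum): a task like summing the squares of the even numbers in a list reads almost like English in Ruby, where in Java, especially the older pre-streams style many tutorials still teach, the same thing takes noticeably more ceremony.

```ruby
# Toy example: sum the squares of the even numbers in a list.
# Ruby lets you chain operations directly off the collection.
numbers = [1, 2, 3, 4, 5, 6]

total = numbers.select(&:even?)    # keep only the even numbers: [2, 4, 6]
               .map { |n| n * n }  # square each one: [4, 16, 36]
               .sum                # add them up

puts total # => 56
```

Your mileage may vary, but for a beginner this kind of readability made it much easier for me to reason about what my code was doing.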
Before starting TOP, I began learning programming by attempting to learn Java on my own without much structure. Going it alone, however, I’d often spend time attempting to track down a good explanation for topics. There was also the issue of not knowing a logical path through the material, and I think I took some major false steps. The resource I found most beneficial during that time was probably the free courses at Cave of Programming, which covered a wide range of topics but had the huge downside of being somewhat dated video tutorials. Other than that, I didn’t find many free resources for learning Java, but there are some pretty cheap courses on Udemy, and a subscription to Lynda could be a good investment as well.
Of course, a huge caveat, I am a sample size of one who had no experience at all with programming before starting with Java. People with different backgrounds may have very different experiences.
Hola everyone. I’m Marcus. I’m an audio engineer but I really got into philosophy during college. Eventually that led me to ethics and effective altruism.
I’m currently learning a more financially beneficial skill so I can earn to give. In the meantime, I intend to do everything I can outside of that to contribute and help spread the word of EA.