Open Thread #40
The last Open Thread was in October 2017, so I thought we were overdue for a new one.
Use this thread to post things that are awesome, but not awesome enough to be full posts. This is also a great place to post if you don’t have enough karma to post on the main forum.
Consider giving your post a brief title to improve readability.
The EA Forum Needs More Sub-Forums
EDIT: please go to the recent announcement post on the new EA Forum to comment
The traditional discussion forum has sub-forums and sub-sub-forums where people in communities can discuss areas that they’re particularly interested in. The EA Forum doesn’t have these, and this makes it hard to filter for what you’re looking for.
On Facebook, on the other hand, there are hundreds of groups based around different cause areas, local groups and organisations, and subpopulations. There it’s also hard to start rigorous discussions around certain topics because many groups are inactive and poorly moderated.
Then there are lots of other small communication platforms launched by organisations that range in their accessibility, quality standards, and moderation. It all kind of works but it’s messy and hard to sort through.
It’s hard to start productive conversations on specialised niche topics with people internationally because
1) Relevant people won’t find you easily within the mass of posts
2) You’ll contribute to that mass and thus distract everyone else.
Perhaps this is one reason why some posts on specific topics only get a few comments even though the quality of the insights and writing seems high.
Examples of posts that we’re missing out on now:
Local group organiser Kate tried X career workshop format X times and found that it underperformed other formats
Private donor Bob dug into the documents of start-up vaccination charity X and wants to share preliminary findings with other donors in the global poverty space
Machine learning student Jenna would like to ask some specific questions on how the deep reinforcement learning algorithm of AlphaGo functions
The leader of animal welfare advocacy org X would like to share some local engagement statistics on vegan flyering, 3D headset demos, before sending them off in a more polished form to ACE.
Interested in any other examples you have. :-)
What to do about it?
I don’t have any clear solutions in mind for this (perhaps this could be made a key focus in the transition to using the forum architecture of LessWrong 2.0). I just want to plant a flag here: given how much the community has grown over the last 3 years, people should start specialising more in the work they do, and our current platforms are woefully inadequate for facilitating discussions around that.
It would be impossible for one forum to handle all this adequately and it seems useful for people to experiment with different interfaces, communication processes and guidelines. Nevertheless, our current state seems far from optimal. I think some people should consider tracking down and paying for additional thoughtful, capable web developers to adjust the forum to our changing needs.
UPDATE: After reading @John Maxwell IV’s comments below, I’ve changed my mind from a naive ‘we should overhaul the entire system’ view to ‘we should tinker with it in ways we expect would facilitate better interactions, and then see if they actually do’ view.
This sounds like it might be a bad idea to me. I just wrote a long comment about the difficulty the EA community has in establishing Schelling points. This forum strikes me as one of the few successful Schelling points in EA. I worry that if subforums are done in a careless way, dividing a single reasonably high-traffic forum into lots of smaller low-traffic ones, one of the few Schelling points we have will be destroyed.
Another problem would arise if creating extra sub-forums resulted in people splitting their conversations even further between those and the Facebook and Google groups. It reminds me of the XKCD comic on the problem of creating a new universal standard.
I think you made a great point in your comment that people need to do ‘intensive networking and find compromises’ before attempting to establish new Schelling points.
Hmm, do you think Schelling points would still be destroyed if it were just clearer where people could meet to discuss certain specific topics, besides a ‘common space’ where people could post on topics that are relevant to many people?
I find the comment you link to really insightful, but I doubt whether it neatly applies here. Personally, the problem I see is that we need more well-defined Schelling points as the community grows, yet currently the EA Forum is a vague place to go ‘to read and write posts on EA’. Other places for gathering to talk about more specific topics are widely dispersed over the internet – they’re both hard to find and disconnected from each other (i.e. it’s hard to zoom in and out of topics, as well as explore parallel topics that one can work on and discuss).
I think you’re right that you don’t want to accidentally kill off a communication platform that actually kind of works. So perhaps a way of dealing with this is to maintain the current EA Forum structure but then also test giving groups of people the ability to start sub-forums where they can coordinate around more specific Schelling points on ethical views, problem areas, interventions, projects, roles, etc. – conversations that would add noise for others if they did it on the main forum instead.
Yeah. I feel like the EA community already has a discussion platform with very granular topic divisions in Facebook, and yet here we are. I’m not exactly sure why the EA Forum seems to me like it’s working better than Facebook, but I figure if it’s not broken, don’t fix it. Also, I think something like the EA Forum is inherently a bit more fragile than Facebook… any Facebook group is going to benefit from Facebook’s ubiquity as a communication tool/online distraction.
You made a list of posts that we’re missing out on now… those kinda seem like the sort of posts I see on EA facebook groups, but maybe you disagree?
Could you give a few reasons why the EA Forum seems to work better than the Facebook groups, in your view?
The example posts I gave are on the extreme end of the kind of granularity I’d personally like to see more of (I deliberately made them extra specific to make a clear case). I agree those kinds of posts tend to show up more in the Facebook groups (though the writing tends to be short there). Then there seems to be stuff in the middle that might not fit well anywhere.
I feel now that the sub-forum approach should be explored much more carefully than I did when I wrote the comment at the top. In my opinion, we (or rather, Marek :-) should definitely still run contained experiments on this because on our current platform it’s too hard to gather around topics narrower than being generally interested in EA work (maybe even test a hybrid model that allows for crossover between the forum and the Facebook groups).
So I’ve changed my mind from a naive ‘we should overhaul the entire system’ view to ‘we should tinker with it in ways we expect would facilitate better interactions, and then see if they actually do’ view.
Thanks for your points!
Lol, like I said, I’m not completely sure. Posts & comments seem to go into greater depth, posts sometimes get referenced long after they are written?
I’m not certain subfora are a terrible idea, I just wanted this risk to be on peoples’ radar. One possible compromise is to let people tag their posts (perhaps restricted to a set of tags chosen by moderators) and allow users to subscribe to RSS feeds associated with particular tags.
As Julia mentions below, over the last few months we have been putting a lot of thought into how to improve the Forum ahead of its re-launch later this year. The ‘sub-forum model’ was what we also arrived at as a desirable potential vision.
Due to hoping to relaunch the Forum in a relatively short timeframe, and the availability of the LW2 codebase for us to work with, our initial goal is to release a direct clone of LW2 rebranded for use as the EA Forum 2.0. The LW2 format already addresses some of the issues and feedback we have had about the current functionality. However, over the medium term (after we release the new version in the next few months) we expect to do further work on implementing various functionality improvements, including investigating the viability of a sub-forum model.
We will be publishing an official announcement regarding the EA Forum relaunch in the next few days, and I would hope we could use the comments section there to serve as the main Schelling point for user feedback and ideas on what we should focus on after the initial release.
I like that the forum is not sorted so one can keep abreast of the major developments and debates in all of EA. I don’t think there is so much content as to be overwhelming.
CEA is thinking along these same lines for the new version of the Forum! The project manager is planning to reply with more detail in the next day or so.
Wow, nice! Would love to learn more.
It seems that what we need in this forum is categories/subforums. What we currently have is one subreddit. Conceptually, there’s little difference between https://www.reddit.com/r/EffectiveAltruism/ and this forum; people just use them differently. What I think we need is a whole new website like https://www.reddit.com/ that would have subreddits like “AI policy” and “Community building”. Your homepage would be customised based on the subreddits you subscribed to. Maybe there could even be subreddits like “Newcomer questions” and “Editing & Review” on the same website that do not contain novel thoughts like posts on this forum. And there would be a subreddit “Old EA forum” that would contain all the posts in the current forum but accept no new ones. Perhaps that is too complicated; maybe we just need a few categories that you could filter by (and the webpage would remember the user’s filter). I haven’t thought much about this; these are just my first thoughts.
Vox is looking for EA journalists. This is an opportunity to publicize EA and help shape its public perception. Their ad hints that they want people who are already in the movement, so take a look if you have any writing or journalism related skills.
I think this is so huge. I was going to post it but saw you got to it first.
Forum searching tip
Searching the forum by typing “site:effective-altruism.com [your search query]” into the URL bar gives you more results—including some very relevant ones—than using the built-in search bar on the top-right of the forum itself (at least for me).
Why I’m skeptical of cost-effectiveness analysis
Reposting as comment because mods told me this wasn’t thorough enough to be a post.
Briefly:
The entire course of the future matters (more)
Present-day interventions will bear on the entire course of the future, out to the far future
The effects of present-day interventions on far-future outcomes are very hard to predict
Any model of an intervention’s effectiveness that doesn’t include far-future effects isn’t taking into account the bulk of the effects of the intervention
Any model that includes far-future effects isn’t believable because these effects are very difficult to predict accurately
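The argument above can be made concrete with a toy Monte Carlo simulation. All numbers here are invented for illustration: a well-measured, clearly positive near-term effect gets combined with a far-future term whose sign is unknown and whose magnitude is plausibly much larger. The model's estimate of the *total* effect then tells you almost nothing about its overall sign.

```python
import random

random.seed(0)

def total_effect():
    # Near-term effect: well-measured, small uncertainty (invented numbers).
    near = random.gauss(10, 1)
    # Far-future effect: unknown sign, far larger plausible magnitude.
    far = random.gauss(0, 1000)
    return near + far

samples = [total_effect() for _ in range(100_000)]
positive = sum(s > 0 for s in samples) / len(samples)

# Despite the clearly positive near-term effect, the far-future term
# dominates: the chance the total effect is positive is barely above 50%.
print(round(positive, 3))
```

Under these (made-up) assumptions, the cost-effectiveness model's near-term precision buys almost no confidence about the all-things-considered outcome, which is the crux of the skepticism.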
I’m glad you reposted this.
I’d argue we don’t necessarily know yet whether this is true. It may well be true, but it may well be false.
This doesn’t account for the fact that there’s still gradients of relative believability here, even if the absolute believability is low. There’s also an interesting meta-question of what to do when under various levels and kinds of uncertainty (and getting a better handle just how bad the uncertainty is).
I think the crux here is that absolute believability is low, such that you can’t really trust the output of your analysis.
Agree the meta-question is interesting :-)
I think it’s almost certainly true (confidence ~90%) that far future effects account for the bulk of impact for at least a substantial minority of interventions (like at least 20%? But very difficult to quantify believably).
Also seems almost certainly true that we don’t know for which interventions far future effects account for the bulk of impact.
Separately, I’m pretty confident that, even taking into account all the possible long-term effects I can think of (population ethics, meat eating, economic development, differential technological development), the effect of AMF is still net positive. But I wonder if you really can model all these things? I previously wrote about five ways to handle flow-through effects in analysis and like this kind of weighted quantitative modeling.
I suspect it’s basically impossible to model all the relevant far-future considerations in a way that feels believable (i.e. high confidence that the sign of all considerations is correct, plus high confidence that you’re not missing anything crucial).
I share this intuition, but “still net positive” is a long way off from “most cost-effective.”
AMF has received so much scrutiny because it’s a contender for the most cost-effective way to give money – I’m skeptical we can make believable claims about cost-effect when we take the far future into account.
I’m more bullish about assessing the sign of interventions while taking the far future into account, though that still feels fraught.
I recently played two different video games with heavy time-travel elements. One of the games heavily implied that choosing differently made small differences for a little while but ultimately didn’t matter in the grand scheme of things. The other game heavily implied that even the smallest of changes could butterfly effect into dramatically different changes. I kind of find both intuitions plausible so I’m just pretty confused about how confused I should be.
I wish there was a way to empirically test this, other than with time travel.
A lot of big events in my life have had pretty trivial-seeming-in-the-moment things in the causal chains leading up to them. (And the big events appear contingent on the trivial-seeming parts of the chain.)
I think this is the case for a lot of stuff in my friends’ lives as well, and appears to happen a lot in history too.
It’s not the far future, but the experience of regularly having trivial-seeming things turn out to be important later on has built my intuition here.
It’s surely true that trivial-seeming events sometimes end up being pivotal. But it sounds like you are making a much stronger claim: That there’s no signal whatsoever and it’s all noise. I think this is pretty unlikely. Humans evolved intelligence because the world has predictable aspects to it. Using science, we’ve managed to document regularities in how the world works. It’s true that as you move “up the stack”, say from physics to macroeconomics, you see the signal decrease and the noise increase. But the claim that there are no regularities whatsoever seems like a really strong claim that needs a lot more justification.
Anyway, insofar as this is relevant to EA, I tend to agree with Dwight Eisenhower: Plans are useless, but planning is indispensable.
I’m making the claim that with regard to the far future, it’s mostly noise and very little signal.
I think there’s some signal re: the far future. E.g. probably true that fewer nuclear weapons on the planet today is better for very distant outcomes.
But I don’t think most things are like this re: the far future.
I think the signal:noise ratio is much better in other domains.
I don’t know very much about evolution, but I suspect that humans evolved the ability to make accurate predictions on short time horizons (i.e. 40 years or less).
“Anything you need to quantify can be measured in some way that is superior to not measuring it at all.”
My post is basically contesting the claim that any measurement is superior to no measurement in all domains.
It might be worth looking at the domains where it might be less worthwhile (formal chaotic systems, or systems with many sign flipping crucial considerations). If you can show that trying to make cost-effectiveness based decisions in such environments is not worth it, that might strengthen your case.
Yeah, I’m continuing to think about this, and would like to get more specific about which domains are most amenable to cost-effectiveness analysis (some related thinking here).
I think it’s very hard to identify which domains have the most crucial considerations, because such considerations are unveiled over long time frames.
A hypothesis that seems plausible: cost-effectiveness is good for deciding about which interventions to focus on within a given domain (e.g. “want to best reduce worldwide poverty in the next 20 years? These interventions should yield the biggest bang for buck...”)
But not so good for deciding about which domain to focus on, if you’re trying to select the domain that most helps the world over the entire course of the future. For that, comparing theories of change probably works better.
Aren’t there interventions that could be considered (with relatively high probability) robustly positive with regard to the long-term future? Somewhat more abstract things such as “increasing empathy” or “improving human rationality” come to mind, though I guess one could argue that even these could plausibly have a negative impact on the future. Another one certainly is “reducing existential risks”—unless you weigh suffering risks so heavily that it’s unclear whether preventing existential risk is good or bad in the first place.
Regarding such causes—given we can identify robust ones—it then may still be valuable to analyze cost-effectiveness, as there would likely be a (high?) correlation between cost-effectiveness and positive impact on the future.
If you were to agree with that, then maybe we could reframe your argument from “cost-effectiveness may be of low value” to “cause areas outside of far future considerations are overrated (and hence their cost-effectiveness is measured in a way that is of little use)” or something like that.
I agree that interventions like this exist, and I think we identify them by making theoretical cases for & against.
As above, I think cost-effectiveness can be useful for determining which intervention to focus on within a specific domain (e.g. “which intervention most increases empathy?” could benefit from a cost-effect analysis).
But for questions about which domain to focus on, I don’t think cost-effectiveness gives much lift (e.g. “is it better to focus on increasing empathy or improving nuclear security?” is the kind of question that seems intractable to cost-effect analysis).
Another way of saying it is “Sometimes pulling numbers out of your arse and using them to make a decision is better than pulling a decision out of your arse.” It’s taken from http://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/ which is relevant here.
Sure, but I don’t think those are the only options.
Possible alternative option: come up with a granular theory of change; use that theory to inform decision-making.
I think this is basically what MIRI does. As far as I know, MIRI didn’t use cost-effectiveness analysis to decide on its research agenda (apart from very zoomed-out astronomical waste considerations).
Instead, it used a chain of theoretical reasoning to arrive at the intervention it’s focusing on.
I’m not sure I understand the distinction you’re making. In what sense is this compatible with your contention that “Any model that includes far-future effects isn’t believable because these effects are very difficult to predict accurately”? Is this “chain of theoretical reasoning” a “model that includes far-future effects”?
We do have a fair amount of documentation regarding successful forecasters, see e.g. the book Superforecasting. The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models (hedgehogs vs foxes, to use Phil Tetlock’s terminology). Ensembles of models are also essential for winning machine learning competitions. (A big part of the reason I am studying machine learning, aside from AI safety, is its relevance to forecasting. Several of the top forecasters on Metaculus seem to be stats/ML folks, which makes sense because stats/ML is the closest thing we have to “the math of forecasting”.)
I’m trying to distinguish between cost-effectiveness analyses (quantitative work that takes a bunch of inputs and arrives at an output, usually in the form of a best-guess cost-per-outcome), and theoretical reasoning (often qualitative, doesn’t arrive at a numerical cost-per-outcome, instead arrives at something like ”...and so this thing is probably best”).
Perhaps all theoretical reasoning is just a kind of imprecise cost-effect analysis, but I think they’re actually using pretty different mental processes.
Sure, but forecasters are working with pretty tight time horizons. I’ve never heard of a forecaster making predictions about what will happen 1000 years from now. (And even if one did, what could we make of such a prediction?)
My argument is that what we care about (the entire course of the future) extends far beyond what we can predict (the next few years, perhaps the next few decades).
I wanted to ask what kind of conclusions this line of reasoning leads you to make. But am I right to think that this is a very short summary of your series of posts exploring consequentialist cluelessness (http://effective-altruism.com/ea/1hh/what_consequences/)? In that case the answer is in the last post of the series, right?
Yeah, my conclusions here definitely overlap with the cluelessness stuff. Here I’m thinking specifically about cost-effectiveness.
My main takeaway so far: cost-effect estimates should be weighted less & theoretical models of change should be weighted more when deciding what interventions have the most impact.
Do you think you’re in significant disagreement with this Givewell blog post?
I basically agree with that post, though GiveWell’s cost-effectiveness work is about comparing different interventions within the domain of improving global health & development in the next 20–50 years.
As far as I know, GiveWell hasn’t used cost-effectiveness analysis to determine that global health & development is a domain worth focusing on (perhaps they did some of this early on, before far-future considerations were salient).
The complication I’m pointing at arises when cost-effectiveness is used to compare across very different domains.
Just a newbie exploring the forum!
Hi, I’m new here!
By the looks of it, there is SO much to learn about effective altruism and I absolutely love that. I’ve really come to accept learning as a never ending process and it’s liberating to look at learning that way.
I’m hoping to earn some Karma points so I can make my own posts here and interact with members of this lovely forum, to continuously learn and maybe contribute sometimes along the way.
I’ve totally bought into the concepts of effective altruism, with the ideals of working as a community to edge closer to a better society resonating with me. I’m so excited about EA that I’ve decided that I want to help host an Effective Altruism Global X in my home country, Bangladesh.
I know for a fact that not many people know about effective altruism in Bangladesh. I’ve seen there is a listed group from Bangladesh in Effective Altruism Hub and I’ve mailed them to get in touch, but I have not detected much activity from them prior to sending them an email.
I just feel that concepts such as “Earning To Give” and “Cause Neutrality” are ideas that more people should know about. So many people do not fully understand the potential for impact each individual holds, they underestimate their potential to do good and do not invest their time in finding out what they can do with their career to have more impact. So many incredibly intelligent people, due to lack of information that could have been easily available, prematurely decide that earning money is the best they can do with their lives.
I absolutely believe that coming across 80,000 Hours was one of the luckiest moments in my life. The way they use scientific evidence to make a person understand the sheer capacity in one’s hands, whether through donating effectively, advocacy, or direct work as explained in parts 2 and 3 of the career guide, inspires people to go out there and dedicate their lives to learning and doing good better.
Bangladesh, a lower-middle-income country, is an area that desperately needs more effective altruists. As the career guide also explains, promising problem areas should be large in scale, neglected, and solvable. Dhaka, the capital of Bangladesh, is the most overcrowded city in the world according to a 2017 article in the Telegraph, and 5th in the world in population density according to Wikipedia. Dhaka is a regular in the Economist Intelligence Unit’s annual rankings of the “Least Liveable Cities” in the world; it came 2nd in 2014. Whether the misfortune of the people of Bangladesh is due to a weak government swayed by corruption or a lack of unity among the people, one thing is clear: spreading the ideals of effective altruism has potential for massive impact in Bangladesh. Even if an effective altruist group exists in Dhaka, it has not been active, and there is a huge opportunity to turn some heads and galvanize the doers of society with the effective altruism movement, so that we can stride towards the development that is long overdue.
I’m very excited to meet more wonderful people, and learn many new things. It keeps me awake at night to think of what a success an effective altruist community fostered in Bangladesh could prove to be. I have not yet applied to be an organizer, because I thought I should maybe get at least a little bit involved with the community.
Lots of things to look forward to, and that is always how life should be.
Love,
Farhan
This is really sweet to hear. :) I wish you the best and hope you find a lot in the effective altruism community.
Thank you so much Peter! :)
I was terrified of pursuing an EA career
For 3 years after joining EA I was still set on going to medical school. I knew I could do more, but I was just terrified of switching. Even when an opportunity was presented to me, I was very torn between pursuing it and staying in my comfort zone. Now I’m having the best summer of my life in a biosecurity internship. I’m more motivated, I’m more productive, I’m going on more adventures, and I have a lot more and better connections than before.
EA was amazing in that having this network made going into an effective field easier than any other option I had, and for the first time in my life I’m doing something I’m passionate about.
So if you’re reading this and you’re on the fence about a big career change, just know that it might be harder than your current plan, but it might also be easier!
Requesting Help for a Compilation of Top EA Facebook Posts
In December 2015, Claire Zabel posted links to all posts in the EA Facebook group with 50 or more likes or comments. I think it’s time for a similar post. From what I understand, the most liked and most commented on posts can be found using the “My groups dashboard” feature on Facebook. Unfortunately, I do not have a Facebook account. I am posting in this thread to request that someone with a Facebook account post the most liked and most commented on posts as a reply to this comment. I can then go through each of them and extract the key information about each (see below) so people can see if there are any they want to read without clicking every single one. I would then post this information as its own forum post. Alternatively, you can do the extracting yourself and post it as a forum post yourself.
Format
Author: Initials are used to prevent future employers from easily associating the post with the author (unless the person is a prominent EA who is likely to remain in EA, in which case the full name is used).
Year: This can give people context as various ideas have become more or less accepted over time.
Text: If the full text is too long, an excerpt is chosen that encapsulates the post.
URL: This allows people to read the post for themselves.
Link Title: This helps people decide whether to click on the link.
Link Author: This is included when the identity of the author is relevant (generally only when the author is an EA).
Link URL: This allows people to go directly to the link without having to go to the post first.
You can see examples of this formatting below.
Posts with the Most Likes as of December 2015 (based on Claire Zabel’s comment)
1)
Author: Peter Hurford
Year: 2014
Text: “EA Onion Article Headlines”
URL: https://www.facebook.com/437177563005273/posts/722086321181061
2)
Author: Robert Wiblin
Year: 2015
Text: “William takes 5 criticisms of Zuckerberg to pieces. The nonsense thrown at him has appalled me:”
URL: https://www.facebook.com/437177563005273/posts/971385786251112
Link Title: 5 criticisms of billionaire mega-philanthropy, debunked
Link Author: William MacAskill
Link URL: https://qz.com/564805/5-criticisms-of-billionaire-mega-philanthropy-debunked/
3)
Author: Robert Wiblin
Year: 2015
Text: “We are really excited to announce that 80,000 Hours got into the world’s leading startup incubator Y Combinator which helped build AirBnB, Dropbox and Reddit, among many others. As a result we are temporarily living in Mountain View!”
URL: https://www.facebook.com/437177563005273/posts/911770118879346
Link Title: Want To Make An Impact With Your Work? Try Some Advice From 80,000 Hours
Link URL: https://techcrunch.com/2015/08/04/80000-hours/
4)
Author: William MacAskill
Year: 2015
Text: “Peter Singer is running a MOOC on Effective Altruism. Cool, eh? Sign up if you want to learn more:”
URL: https://www.facebook.com/437177563005273/posts/850791891643836
Link URL: https://www.coursera.org/learn/altruism
5)
Author: William MacAskill
Year: 2015
Text: “Today is the launch day for Doing Good Better!”
URL: https://www.facebook.com/437177563005273/posts/908514712538220
Link URL: https://www.amazon.com/Doing-Good-Better-Effective-Difference/dp/1592409105/
Posts with the Most Comments as of December 2015 (based on Claire Zabel’s comment)
1) Unable to access
2)
Author: Jacy Reese
Year: 2015
Text: “Kelsey Piper, from Stanford Effective Altruists, wrote this thoughtful post explaining how she feels about the issue [of whether meat is served in EA spaces], which summarizes many of our views (although, of course, we don’t all agree in the details), and I’d encourage everyone to read it to better understand the issue.”
URL: https://www.facebook.com/437177563005273/posts/914246908631667
Link Title: preference accommodation problems
Link Author: Kelsey Piper
Link URL: https://theunitofcaring.tumblr.com/post/126310876481/preference-accommodation-problems
3)
Author: C. H.
Year: 2013
Text: “To those who agree with Alexander Kruel: Does CFAR have the same problems as MIRI and LessWrong? I’ve read good things about CFAR elsewhere in this group, but there was a time when I was reading so many good things about MIRI and LessWrong that I consider myself lucky to have dodged the Basilisk.”
URL: https://www.facebook.com/437177563005273/posts/570478889675139
Link Title: AI Risk Critiques: Index
Link Author: Alexander Kruel
Link URL: http://kruel.co/2012/07/17/ai-risk-critiques-index/
4)
Author: Robert Wiblin
Year: 2015
Text: “What academic disciplines do you think we here are undervaluing relative to their usefulness, and should be making an active effort to learn from? My top guess at the moment is history.”
URL: https://www.facebook.com/437177563005273/posts/856678024388556
5)
Author: T. B.
Year: 2014
Text: “In Robert Wiblin’s summit talk, he estimated that about 80-90% of EAs are concerned about animal suffering[.] [Is this correct?]”
URL: https://www.facebook.com/437177563005273/posts/725327480856945
As a learning exercise, I’ve been working on a web scraper to compile this info from the FB group.
Doing this in spare time, so it will likely be another week or two before I have a post put together, but posting here in the meantime as an FYI and to potentially gain 5 karma so I can post once it’s ready :)
UPDATE: After scraping the initial post data, there are 200+ posts with 50 or more likes. (Obviously the group has gotten quite a bit more active over the past couple years!)
Not sure if there’s a maximum length for a forum post, but regardless, this strikes me as probably too many “top posts” to feature. Would it be better to limit it to the top 50 posts? Top 100? Welcome any input on this.
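A minimal sketch of the filtering step being discussed, assuming the scraped data ends up as a list of dicts with a likes count (the field names here are hypothetical, not the actual scraper's schema):

```python
# Hypothetical sketch: given scraped post data (list of dicts with a
# "likes" count), keep posts with >= 50 likes and cap at the top 50.
def top_posts(posts, min_likes=50, limit=50):
    popular = [p for p in posts if p["likes"] >= min_likes]
    popular.sort(key=lambda p: p["likes"], reverse=True)
    return popular[:limit]

sample = [{"likes": n} for n in (120, 30, 75, 50, 49)]
print([p["likes"] for p in top_posts(sample)])  # [120, 75, 50]
```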
Top 50 sounds good to me. Thanks for doing this.
UPDATE: Here’s the post.
It seems you need the Grytics tool to do this. I can’t work out how to do it in Facebook itself. I would also be interested to see this.
Should EAs work on reducing food waste?
According to USDA statistics, a significant percentage of food purchased by consumers goes uneaten (15% of chicken, 35% of turkey, 20% of beef, 29% of pork, and 23% of the edible portion of eggs). If consumers wasted less food, they would purchase less meat/eggs/dairy, which would lead to fewer animals suffering on factory farms.
One factor that could be driving food waste is confusing date labeling. For example, an egg container may have a ‘Sell By’ date meant to help retailers manage their inventory, but a consumer who sees the label and date some time after purchasing might throw the eggs away thinking they are no longer safe to eat. One possible solution is a federal labeling law that limits producers to listing the freshness date and the expiration date (and requires them to use specific, easy-to-understand phrases when listing either). However, there are several reasons that working towards such a law may be a bad use of resources. First, legal change may be unnecessary, as it appears the food industry may voluntarily adopt such a system. Second, it’s unclear how much labeling reform reduces food waste (I was unable to find any studies in my brief search). Third, it may be that the primary benefits of reducing animal product consumption are the long-term effects, in which case reductions in consumption driven by factors other than concern for animals may be much less impactful. Of course, there may also be other ways to reduce food waste (to which the first two concerns would not apply).
Interesting. It’s strange that I’ve never heard anyone talking about decreasing animal suffering by decreasing food waste before. I wonder if anyone has investigated such possibilities; I couldn’t find anything by googling. I happened to talk with an ACE researcher today and he didn’t know about any such research either. I think it’s possible that there are some effective interventions in this area, because there are many ways to reduce waste. For example:
Vacuum-packaging meat products. It can extend the shelf life of some products by up to 9 days compared to conventional packaging.
Getting rid of ‘buy one get one free’ promotions at groceries
Helping with redistribution of surplus food
It can be complicated though. For example, it’s possible some people don’t buy eggs because they look at the “Sell by” date and think that the eggs will expire soon.
I wonder what the next steps could be to increase the probability that someone looks into this. It could be added to http://effectivethesis.com/, but that would have a low probability of changing anything. The EA Animal Welfare Fund may want to fund such research if there were someone to do it, but a more concrete topic would be needed.
I think there was some data showing that the majority of waste happens before a product gets to a supermarket, and that switching to plant-based/clean meat would be more efficient than cutting waste between shop and bin.
On page 37 of this report it says, for poultry, 11% of feed energy gets converted into human food. https://www.wri.org/sites/default/files/wri13_report_4c_wrr_online.pdf
If 15% of that 11% gets wasted, that seems less of a priority than the 89% that is lost in conversion, although it might be a more tractable and neglected area to work on.
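The arithmetic here can be checked directly; a quick sketch using the quoted figures (11% conversion from the WRI report, 15% consumer waste from the USDA statistics above):

```python
# Back-of-the-envelope check of the figures quoted above.
feed_to_food = 0.11     # share of poultry feed energy converted to human food
consumer_waste = 0.15   # share of purchased chicken wasted by consumers

# Feed energy lost at the consumer stage vs. lost in conversion.
wasted_share = feed_to_food * consumer_waste  # ~0.0165, i.e. ~1.7%
conversion_loss = 1 - feed_to_food            # 0.89, i.e. 89%

print(f"{wasted_share:.4f}")     # 0.0165
print(f"{conversion_loss:.2f}")  # 0.89
```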
My comment was concerned with the impact of food waste on the number of animals suffering on factory farms. The report you cite seems to be discussing feed that is ‘wasted’ in the conversion process. But since this feed is likely to be mostly plants, improving the conversion ratio would probably not have a large effect on the number of animals on factory farms. (If anything, improving the conversion ratio might increase the number of factory farmed animals by reducing how much it costs to raise animals.)
Seeking 5 karma so I can post about the recent WASR grant competition! :)
New here. Hoping to get some karma points so that I can ask specific questions for the local community development project I have planned.
I just finished reading “The Nobel Laureates’ Guide To The Smartest Targets For The World” and cannot find the specific methods that can be employed to achieve the proposed targets. For example: with regard to coral reef loss, if the research is accurate and there is a $24 economic return for every $1 spent, through what organizations or processes can this be achieved? The specific dollar figure must imply that the process is known. Is there a separate resource of footnotes that describes how to achieve those returns? The short book was very interesting as a navigation tool towards the initiatives that may have the greatest economic return and resultant prosperity for humankind.
Thanks for any insights if you get the chance. -Tom
I don’t trust the author (Lomborg), based on the exaggerations I found in his book Cool It.
I reviewed that book here.
Welcome!
If there was a $24 total return to every dollar spent, and the actor could capture even a small fraction of this return, I’d expect that a for-profit enterprise would already be doing this.
But I’m not familiar with the domain, maybe there’s no way for a for-profit to capture the return, or maybe the 24:1 ratio is incorrect.
Thanks, Milan. I think the economics are such that the return does not necessarily go to the person/org that donated the money. The $24 return per $1 invested is seen in sustainable fisheries and the taxes they generate; in generating tourism for that region and all the jobs and auxiliary benefits, taxes, decreased welfare spending, etc. So it’s a great return but does not accrue to the donor, per se. But it’s a great investment for governments and for charities that are looking to maximize well-being.
Other examples from the book have “family planning/sex education” at a $120 return per $1 invested, and campaigns against malaria at $36:$1. And these ideas are vetted and calculated by teams of economists trying to decide where the trillions of dollars that will be spent on aid over the next 15 years should go.
Does that make sense?
If anyone found this useful I could use a couple karma points to start threads in the regular forum. Thanks. :) -Tom
Hm, could you link to the place where you’re getting these figures? I’m curious :-)
(Or give page numbers if it’s a book.)
It’s only 145 pages and very interesting imo. Well worth the short read. I love the concept of interventions that pay for themselves. An insecticide-treated bed net for $5, including delivery, on average pays for itself by preventing malaria and fostering a culture with less societal burden down the road: lower hospital costs for the sick, more taxes generated by healthy workers, healthy kids from those parents, etc. An economically virtuous circle.
Book: https://www.amazon.ca/Nobel-Laureates-Smartest-Targets-2016-2030/dp/1940003113
Seeking 5 upvotes in order to make a post.
Looking for Karma points.
Hi all, I would like to post a critical perspective on maximizing happiness. It includes an alternative approach, mental health issues and burnout. I would love to see a discussion about it, but not only on FB :) Anyone interested and willing to give me some karma to enable my post?
Cheers :)
Frequency of Open Threads
What do people think would be the optimal frequency for open threads? Monthly? Quarterly? Semi-annually?
Every 2-3 months seems good (weakly held).
Every 2-3 months seems good.
Impact Investing from an EA Perspective
This is just a teaser, since I don’t have enough karma for a full post yet!
Picture a scale that has charity on one side (good social utility, −100% financial return) and investing on the other (zero social utility, 7% financial return). Impact investing is a space that can give similar risk-adjusted market returns as traditional investments while also providing social utility.
In my research, I’ve found several factors that make me excited about this area:
Impact investing is about 5% the size of charitable donations ($22B vs $410B in 2016), and is growing much faster (17% vs 4% annually)
Impact investing makes up only 0.16% of the total capital markets—huge room for growth
Philanthropic enterprises with sustainable business models can use existing capital markets to get funded on a large scale
Due to the market’s current inability to accurately value the ‘social utility’ provided, there are many greatly under-valued investment opportunities, providing similar social utility as comparable charities
I’ve got more detail, logic, and sources in the full post, but in the meantime, I’ll tell you about one example opportunity that I’ve zoomed in on.
WorldTree is a company that lets you buy an acre of fast-growing Empress Splendor trees. Its goal is to generate income from the harvest of the trees, and offset the carbon footprint of investors:
$2500 CAD minimum investment, enough to plant 1 acre of trees
One acre is enough to offset your lifetime carbon footprint
The timber is sold after 10 years; a conservative return to the investor is $20k
From an EA perspective, I compared the stated carbon cost of World Tree ($1.72/tonne) to Cool Earth ($1.34/tonne) and traditional carbon offset programs ($10/tonne). This investment could return 23% annually, while the Cool Earth ‘investment’ would be a loss of 100%. On its surface, this example looks quite promising when counting both the social utility generated and the future utility my $20k could do in 10 years’ time.
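The ~23% figure can be sanity-checked from the numbers given ($2,500 in, $20k out, 10 years):

```python
# Annualized return implied by $2,500 growing to $20,000 over 10 years.
annualized = (20000 / 2500) ** (1 / 10) - 1
print(f"{annualized:.1%}")  # 23.1%
```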
Looking forward to posting a more detailed write-up on the space once I’m able, and to hearing your feedback on these ideas!
That’s insanely high… social arguments would be irrelevant if you could safely get that kind of return. Every investor would want in.
The key word is “safely”. This kind of investment would be considered high risk—this company only started this program three years ago, and the first trees haven’t yet produced profit. Additionally, the 10 year duration is unattractive for many investors, and there isn’t really a market for this type of wood in North America yet. They need to offer a big reward in order to entice investors to fund their venture at this early stage.
I suspect other early stage ventures would have a similar high-risk, high potential return profile, which is why they are typically limited to accredited investors.
I’m a huge fan of this concept. Have you done a lot of research on this? Do you like WorldTree specifically, or are there other Impact Investing orgs you’re aware of?
This field is really interesting, and there is a lot of research out there on it. The Global Impact Investing Network (GIIN) is a good starting place, but I’ve spent about a week pulling together stats from several sources to build my view on this space, and the Canadian options in particular.
I do like World Tree in particular because it produces high-impact social utility, has a high expected financial return, and I can actually buy in without being accredited. Unfortunately for people with less than $1M, the options for impact investing are very slim at the moment.
Typical options include Green Bonds with a 4-5% return over 5 years, or investments in smaller community funds with a fairly small impact.
Check out a few Canadian options at OpenImpact
Hello everyone! I’m new to the EA Forum and it’d be great if I could get some karma so I can start contributing more. :)
This next fall I am running a university EA group. Is there anyone who has run an EA group that has any advice for me other than the basic information on EA Hub? What types of events were the most fun? What types of events were the most effective in gaining members or discussing issues?
Hey Jared, you may get more of a response in the group organisers group. https://www.facebook.com/groups/956362287803174/
Side note: I’d encourage commenters to put a title at the top of their comments (maybe this can be done in the OP).
I edited the OP to mention it.
Thanks, done!
Has anyone reframed priorities choices (such as x-risk vs. poverty) as losses to check if they’re really biased?
I’m new here. Since I suspect someone has probably already made a similar question somewhere (but I couldn’t find it, sorry), I’m mostly trying to satisfy my curiosity; however, there’s a small probability that it touches an important unsolved dilemma about global priorities and the x-risk vs. safe causes.
I’ve read a little bit about the possibility that preferences for poverty reduction/global health/animal welfare causes over x-risk reduction may be due to some kind of ambiguity-aversion bias. Between donating US$3,000 for (A) saving a life (high certainty, presently) or (B) potentially saving 10^20 future lives (I know, this may be a conservative guess) by making something like a marginal 10^-5 contribution to a 10^-5 reduction in some extinction risk, people would prefer the first, safe option, despite the large pay-off of the second one. However, such a bias is sensitive to framing effects: people usually prefer sure gains and uncertain losses. So I was trying to find out, without success, whether anyone had reframed this decision as a matter of losses, to see if one prefers, e.g., (A’) reducing deaths from malaria from 478,001 to 478,000 or (B’) reducing the odds of extinction (minus 10^20 lives) by 10^-10.
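A quick sketch of the expected-value arithmetic in this example, using only the hypothetical numbers from the comment (not real estimates):

```python
# Hypothetical numbers from the comment above.
lives_at_stake = 1e20   # potential future lives
contribution = 1e-5     # marginal share of the risk-reduction effort
risk_reduction = 1e-5   # reduction in extinction probability

ev_a = 1  # (A) one life saved with near certainty
ev_b = contribution * risk_reduction * lives_at_stake  # (B)

# Note B' is the same expected value expressed as a single factor:
# a 1e-10 reduction in extinction odds times 1e20 lives.
print(f"{ev_b:.3g}")  # 1e+10 expected lives
```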
Perhaps there’s a better way to reframe this choice, but I’m not interested in discussing one particular example (however, I’m concerned with the possibility that there’s no bias-free way of framing it). My point is that if one chooses A in the gains framing but B’ in the losses framing, then we have a strong case for the existence of a bias.
(I’m aware of other objections against x-risk causes, such as Pascal’s mugging and discount rates arguments – but I think they’ve received due attention, and should be discussed separately. Also, I’m mostly thinking about donation choices, not about policy or career decisions, which is a completely different decision; however, IF this experiment confirmed the existence of a bias, it could influence the latter, too.)
Animal v. Human Prioritization
Hi all,
A person involved with EA said I should get involved with the forum, so here I am.
Here is/are my question(s).
1) Is morality (and should it be) based on a combination of biology and strict logical induction?
If yes to 1), then here’s my deal.
I have a preference for valuing human life over animal life. However, if some animal species are more likely to live longer than the human species will, then would I be doing more good by prioritizing helping those animals out first and foremost?
This article— https://www.theatlantic.com/science/archive/2018/08/earths-scorching-hot-history/566762/ — mentions crocodiles and sand sharks living under 1000 ppm CO2eq conditions. I’m not sure that humans can. Would I be doing more good trying to make sure that crocodiles and such animals can survive, and consider the human species a sunk cost at this point?
Let me know your thoughts as you’re able to, please. Thank you,
Donald Zepeda