I always donate close to 100% to what I believe is most effective at any given time. I do “diversify” across time, though. Last year, I almost donated 100% to an Effective Giving organization. In the end, I decided against it, because (a) their average donor was giving mostly to global health and development, while I thought that AI safety would be more effective by a factor much larger than their multiplier (with made-up numbers: if the organization turns $1 into $3 for global health, but I believe $1 to AI safety does 10x as much good, the multiplier does not close the gap), and (b) the multiplier effect would probably shift this balance even further against my preferences.
There is of course an argument that it is only a question of time until newly acquired donors board the train to “crazy town” and give to more speculative causes with higher EV. But I was working under the assumption that the multiplier effect mostly reaches a demographic that is likely to stick to its existing worldviews.
Yeah, the double counting question can be a problem. It is inherent to counterfactual impact. Imagine a production chain X → Y → Product. Counterfactually, X can claim 100% of the product; so can Y. Together that is 200%, which does not make sense.
However, there are alternative impact metrics. For example, Shapley values have some nice properties; in particular, they are guaranteed to sum to 100% of the total impact. Intuitively, they calculate the mean counterfactual impact of each player over all possible configurations of players. This can be useful for assessing the importance of predictors in statistical models. But it is also the reason why I don’t find them particularly useful for decision making. After all, you are not interested in your impact in hypothetical worlds, but just in your impact in the current constellation of the world, i.e. your counterfactual impact.
So in summary, I’d say use counterfactuals for decision making and Shapley values for determining bragging rights ;)
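To make the difference concrete, here is a minimal sketch in Python (the two-player characteristic function is a toy I made up for the X → Y chain above, not anything from the original post):

```python
from itertools import permutations

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            values[p] += v(frozenset(coalition)) - before
    return {p: total / len(orders) for p, total in values.items()}

# Toy characteristic function: neither X nor Y produces anything alone;
# together they produce 100% of the product's value.
v = lambda c: 1.0 if c == frozenset({"X", "Y"}) else 0.0

print(shapley_values(["X", "Y"], v))  # {'X': 0.5, 'Y': 0.5} -- sums to 100%
# Naive counterfactual impact is v(all) - v(all minus player) = 1.0 for
# both X and Y, i.e. 200% in total: the double counting problem above.
```

Note that the brute-force enumeration grows factorially with the number of players, so exact Shapley values are only practical for small attribution problems.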
Hi @Leandro Franz, thank you very much for this post. I’d be curious to have a look at your document or a summarized version of it. Could you double-check the link to the document? It does not work for me.
I am also lacto-vegetarian and wanted to buy https://veganpowah.com/product/vegan-powah-180/. They have some good info about their ingredients on that website. However, they are out of stock, so I bought most ingredients in powder form instead. I left out a few things I take separately or don’t need: omega-3 (I have a product with higher EPA; I also don’t know how Vegan Powah got oil into powder form, and I have concerns about chemical stability if I mix it in myself), iron (it inhibits zinc absorption, so I take it separately), selenium (I just eat ~2 Brazil nuts/day), and B vitamins (my B12 is high; I should probably check the others some time). I mixed everything together in increasing order of amount: I put the ingredient with the lowest mass and the one with the second-lowest mass into an empty yoghurt bucket, rolled it around, added the ingredient with the third-lowest mass, rolled the bucket around again, and so on. I hope everything is mixed reasonably well. At least when I mix my exercise-recovery shake like that, the brown cocoa powder ends up smoothly distributed. I was thinking about putting the mixture into capsules, but that seems like a lot of effort, so I just put the powder into my breakfast cereal. Maybe I should check whether that is OK.
I am wondering if 80k should publicly speak of the PINT framework instead of the INT framework (with P for personal fit). I get the impression that the INT framework contributes to a reputation hierarchy of cause areas to work on, and many (young) EAs tend to over-emphasize reputation over personal fit, which basically sets them up for failure a couple of years down the line. Putting the “P” in there might help avoid this.
Is there any evidence that translation efforts are effective at reaching people who do not have English as their first language? My impression is that native German speakers under 35 with a university degree understand written English perfectly well, although some prefer German. Listening and especially speaking can be a bit more challenging. As a rule of thumb, the younger the person, the better their English (due to YouTube, Netflix, etc.).
I suggest exploiting Facebook’s Dating App instead, roughly like so (still needs some testing; dm’d you, Affective Altruist): https://docs.google.com/document/d/1VTRO12Nsl3H9P7Zpx3mcyeQ1HWNapxkUlaf45xS5OcU/edit?usp=sharing
Good to hear that there are EAs working on that within governments.
Thanks for sharing your insights Mako! After reading your response and the IEEE Spectrum article you mentioned, I am much more optimistic that the metaverse can/will move in the right direction. Is there anything that could be done (by governments, companies, NGOs, the general public, or whatever player) to make this even more likely?
I also liked your example of Twitter, where addictiveness was not designed into the system but happened accidentally. Accidents usually prompt investigations to improve regulations, for instance in the aircraft industry. Do you think there are any concrete lessons from the Twitter case about how to prevent similar accidents in the future of the internet or metaverse? If so, could or should some of these be baked into better designs? And are current incentives aligned with this, or would it require some governmental regulation (since you are worried about liberalisation)?
I still believe that Meta is a major player in the market. And while I do agree that they have no direct interest in destroying democracy or creating an unliveable world, I think they act in line with Milton Friedman’s doctrine and just try to maximise their profits. I am not sure there is anything wrong with that in principle, as long as the rules of the game ensure that maximising profits aligns well with overall utility. In the past, I don’t think the rules of the social media game aligned well with overall utility. And I am not sure that the need for, and support of, open standards by players like Meta is alone sufficient to align profit maximisation with overall utility in the metaverse. If this assessment is correct, it would make sense to brainstorm ideas for such an alignment as the metaverse develops.
Btw, thanks also for sharing your LW article on Webs of Trust (on my reading list) and your thoughts on RoamResearch (pm’d you with a question on Roam vs. Obsidian).
Metaverse democratisation as a potential EA cause area
To me that sounds like a project that could be listed on https://www.eawork.club/ . I once listed a task there to translate the German Wikipedia article on Bovine Meat and Milk Factors into English, because I did not have the rights to do it myself. A day later, somebody had done it. And in the meantime, somebody apparently translated it into Chinese as well.
Regarding media: to keep track of media coverage and potentially react accordingly, https://www.google.com/alerts can be helpful.
I agree with your statement that “The message of the post is that specific impact investments can pass a high effectiveness bar”.
But when you say >>I think the message of this post isn’t that compatible with general claims like “investing is doing good, but donating is doing more good”.<<, I think I was misled by the decision matrix. To me it suggested exactly this comparison between investing and donating, while being unable to resolve any difference between the columns “Pass & Invest to Give” and “Pass & Give now” (and a hypothetical column “Pass & Invest to keep the money for yourself”, with presumably all-zero rows): all three would show zero total portfolio return. Differences between these three options would only become visible if the customer’s wallet were included and the “Pass & Invest to Give” column created impact through giving, like the “Pass & Give now” column does.
Anyway, I now understand that this comparison between investing and donating was never the message of the post, so all good.
Thanks for the response. My issue was just that the money flow from the customer to the investor was counted as positive for the investor but not as negative for the customer. I see the argument that customers are reasonably well-off non-EAs whereas the investor is an EA, but I am not sure it can justify the asymmetry in the accounting.
Perhaps it would make sense to model an EA investor as only 10% altruistic and 90% selfish (somewhat in line with the 10% Giving What We Can pledge)? The conclusion would be that investing is doing good, but donating is doing more good.
I would have thought that this is orders of magnitude easier, because (with the exception of my last sentence) it uses existing technology (although, AFAIK, the artificial ecosystems we have tried to create on Earth failed after some time, so maybe a bit more fine-tuning is needed), whereas we still seem to be far away from understanding humans or uploading them to computers. But in the end, perhaps we would not want to colonise space with a rocket-like structure, but with the lightest stuff we can possibly build, due to relativistic mass increase. Who knows. The lightweight argument would certainly work in favour of the upload-to-computer solution.
From an impartial perspective, I think it is also necessary to account for the wallet of the customer, not only that of the investor. After all, the only reason the investor gets their money back is that customers are paying for the product.
In other words, one could add a row “Financial loss customer” to the decision matrix. For the “Pass & Give now” column it would be 0% (there is no customer who pays the investor back). For all other columns it would be 100%, I think. That is, once the customer’s wallet is taken into account, the best world would be one where either the investor did not invest but donated to the BACO, or the customer did not buy the app but donated to the BACO instead.
So all things considered, solo and lead impact investing are good, but only 7% and 30% as good as donating to the BACO, respectively. Or am I getting this wrong?
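To spell out the arithmetic I have in mind, here is a toy version of the matrix in Python (the 7% and 30% are the figures from above; the column names and all other entries are my own illustrative guesses, not the post’s actual numbers):

```python
options = ["Solo invest", "Lead invest", "Pass & Invest to Give", "Pass & Give now"]
rows = {
    "Impact of investment":    [0.07, 0.30, 0.00, 0.00],
    "Impact of donation":      [0.00, 0.00, 1.00, 1.00],    # donated to the BACO
    "Financial loss customer": [-1.00, -1.00, -1.00, 0.00], # the proposed row
}

for i, option in enumerate(options):
    total = sum(row[i] for row in rows.values())
    print(f"{option:22} total impact: {total:+.2f}")
# Once the customer's wallet is counted, "Pass & Give now" comes out on top.
```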
I really enjoyed this podcast. But regarding space colonization, I do not think that uploading humans to computers is the only alternative to transporting human colonies in spaceships. For instance, we could send facilities for producing nutrition, oxygen, and artificial human wombs there, plus two tiny test tubes of undamaged egg and sperm cells. Of course, once synthetic biology gives us the ability to create cells ourselves, we could also upload the human (epi)genome to a storage medium and synthesize the DNA and zygotes on the new planet.
Did anyone else use Google Pay? Didn’t seem to incur fees for me.
Love to hear that there is important work being done in that area! Are there approaches to measure SWB as a function of “objective” well-being (OWB)? And what are their shortcomings? For instance, to me it feels like SWB could be a weighted sum of one’s own OWB (x), the recent change in one’s OWB, and how one’s OWB compares to the OWB of others (x_i), something like:

SWB = w0·x + w1·Δx + w2·mean_i(x − x_i)
The weights are probably person-specific parameters. Persons with low w1 and w2 might be resilient, persons with high w2 might like status symbols, etc.
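A minimal sketch of that model in Python, just to make the terms explicit (the functional form, the function name, and the example weights are my own guesses at formalising the idea, not an established measure):

```python
from statistics import mean

def swb(x_now, x_before, others, w0, w1, w2):
    """Subjective well-being as a weighted sum of OWB terms."""
    level = x_now                                     # absolute OWB
    change = x_now - x_before                         # recent change in OWB
    comparison = mean(x_now - x_i for x_i in others)  # standing vs. others
    return w0 * level + w1 * change + w2 * comparison

# A resilient person (low w1, w2) vs. a status-driven one (high w2):
print(swb(6, 5, [7, 8, 6], w0=1.0, w1=0.2, w2=0.1))  # 6.1
print(swb(6, 5, [7, 8, 6], w0=1.0, w1=0.5, w2=1.0))  # 5.5
```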
I’d like to add another bullet point:
- personal fit
I think that protests play an important role in the political landscape, so I joined a few, but walking through streets in large crowds and chanting made me feel uncomfortable. Maybe I’d get used to it if I tried more often.