The only organizations I know of that are trying to get low-income countries to become high-income countries are the World Bank, the IMF, and Growth Teams.
I think I’m convinced that getting low-income countries to develop into high-income countries is more important than the abundance agenda. OpenPhil has so much money that I’m pretty sure they should do both. As far as I know, they aren’t doing either. A country is not going to develop through malaria net donations.
Yes, this is an interesting problem with new smart/planned cities! Probably not a problem with New York, San Francisco, and San Jose, though.
Thank you Vaidehi! I worked really hard on this and I’m glad it shows :)
I appreciate your comment! Thanks for posting.
I agree on the natural areas point; I would hope that we can increase density without decreasing the density of parks and playgrounds (though I would definitely be okay with decreasing the size of big ones that don’t serve that many people).
I am wary of arguments that we need to do other difficult things, like improving transit, before we can build housing. The practical result is that nothing gets done at all, which is worse than building more housing first and then having people campaign or lobby to get the transit fixed to accommodate it.
What do you think of what I wrote in the post about the USA being like a low-income country within a high-income country when it comes to health and poverty? I think there’s value in making that not happen in high-income countries, and it seems more tractable to me than developing a low-income country, because the money is already there to do it.
Thanks for your comment! I’m sure a lot of people reading this are thinking the same thing. Effective altruism in general is biased against systemic change because it is difficult to measure and its outcomes are diffuse. From my post:
Pushing an abundance agenda means working towards a world where everyone expects business and government decisions to prioritize the supply of essential goods and services.
This isn’t a list of policies; this is a cultural shift. Sure, I’ve listed a bunch of directly positive effects in my examples, but if this goal were actually achieved, it would mean a restoration of democracy[1] and of trust in government. This would mitigate the future harm done when business interests propose policy that stalls innovation and leads to shortages. And lack of trust in government and other public institutions is one of the big reasons it has been so difficult to fight this pandemic. You would never be able to calculate the value of building the kind of movement that reflects a change in public opinion in terms of QALYs. But I think it’s really important that this work be done, especially when people who are doing large-scale movement building for non-altruistic reasons are continually eroding democracy and trust in government. Without willingness to accept these kinds of diffuse outcomes, the scale of change that effective altruism can accomplish is limited.
1. Legislators are disproportionately responsive to economic elites and business lobbies, to the point that the influence of the average citizen is near zero. Paper: https://www.cambridge.org/core/journals/perspectives-on-politics/article/testing-theories-of-american-politics-elites-interest-groups-and-average-citizens/62327F513959D0A304D4893B382B992B Washington Post article responding to criticisms: https://www.washingtonpost.com/news/monkey-cage/wp/2016/05/23/critics-challenge-our-portrait-of-americas-political-inequality-heres-5-ways-they-are-wrong/ ↩︎
Open Philanthropy should fund the abundance agenda movement
I disagree with 2) because I think the movement will be able to get more done with people from diverse backgrounds who are really good at different things. Even if AI is the most important thing, we need people who understand communications, policy, and grassroots organizing, and also people who are good at completely unrelated fields and can understand the impact of AI on those fields (manufacturing, agriculture, shipping logistics, etc.), even though there aren’t opportunities to do that work directly in AI right now.
Is the survey for people who work at EA orgs? People who work at organizations that identify as EA-aligned? Or any person doing some kind of direct work (for example, at an organization that may not identify with effective altruism even if the individual who works there does)?
Yes! Also, I suspect that people who think AI is by far the most important problem might be more concentrated in the San Francisco Bay Area, compared to other cities with a lot of effective altruists, like London. Personally, I think we probably already have enough people working on AI, but I was worried about getting downvoted if I put that in my original post, so I scoped it down to something I thought everybody could get on board with (that people shouldn’t feel bad about not working on AI).
Yes, that’s exactly it! Even if a lot of people think that AI is the most important problem to work on, I would expect only a small minority to have a comparative advantage there. I worry that students are setting themselves up for burnout and failure by feeling obligated to work on what’s been billed by some as the most pressing/impactful cause area, and I worry that it’s getting in the way of people exploring different roles and figuring out and building on their actual comparative advantage.
It’s OK not to go into AI (for students)
It’s my understanding that in places like Malawi, the paints are oil-based and can be manufactured without advanced equipment, whereas in developed countries the paints are latex-based and require more complicated equipment to produce the proper emulsion. I’m curious whether the manufacturers you are working with are simply replacing the pigments and continuing with oil-based manufacturing?
Love this post!
In software engineering culture, “forced vacation” is done not so much for the good of the person taking it as for the good of the team: it’s practice to make sure the team is set up to reliably cover for the absent person in case anything happens to them (they might leave or fall ill). It’s probably easier for software engineers to substitute for each other, though, than for you to figure out all the different people who would need to cover different aspects of your leadership role.
I love this and have been using it a bunch! The Cuckoo coworking timer has been down today, though :(
Yes, I agree! I think “too much information, and people have a difficult time telling what to trust” is a more accurate and nuanced descriptor than “misinformation”, and your point that
more general improvements in communication strategies/governance/economic growth could be more important.
would go a really long way.
I wonder whether, if people more generally felt like they had a say and a stake in the way the country is run, to the point where a regular person could advocate for improvements for themselves and their communities, there would be more understanding of and trust in government and public health institutions. I suspect that when people feel like they’re screwed, the situation is more ripe for misinformation to affect them. Here’s a paper that talks about how sharers of misinformation are more likely to express existentially based needs (e.g. fear of death or other threats): https://arxiv.org/pdf/2203.10560.pdf
State of the land: Misinformation and its effects on global catastrophic risks
I assumed they were talking about situations where the young toddler was still being breastfed.
I don’t think it’s possible to do an analysis that makes sense at all, given that outcomes are so high-variance and depend so much on the skill, strategy, and luck of the people working on it. That doesn’t mean no one should work on it. Open Philanthropy and the FTX Future Fund are uniquely positioned to become effective at this kind of work and drive the kind of results no one else can.
And I think they know this and have been trying; OpenPhil has done work in land use reform and criminal justice reform, for example. I’m not complaining about what people choose to do or not do, but I think my original statement about EA being biased against difficult-to-measure things is correct and makes sense for an evidence-based ideology.