DavidNash
I would think it’s more peer appreciation than public appreciation that matters.
Demographic Uncertainty and the Future of Extreme Poverty
Do you think CoGi needs broad appeal if they’re mainly looking for multi-millionaire donors?
r/badmathematics have already looked at it.
“Being very generous, I think their attempt is to invoke this result of Chaitin to basically say “if the universe was a simulation, then there would be a formal system that described how the universe worked. By Chaitin, there’s some ‘complexity bound’ for which statements beyond this bound are undecidable. But, these statements have physical meaning so we could theoretically construct the statement’s analog in our universe, and then the simulation would have to be able to decide these undecidable statements.” What they don’t explain is:
why we should think that we’re guaranteed to be able to construct such physical analogs of these statements,
why they think that whatever universe is simulating ours must have the same axioms as ours (e.g. Gödel only applies to proving statements within the formal system under consideration),
why they can rule out the possibility that the hypothetical simulating computer could just throw out some random value when it encounters an undecidable statement (i.e. how do we know that physics is actually consistent without examining all events everywhere in the universe?),
...or a bunch of other necessary assumptions that they’re making and not really talking much about.
They also get into some more bad mathematics (maybe bad philosophy?) by appealing to Penrose-Lucas to claim that “human cognition surpasses formal computation,” but I don’t think this is anywhere near a universally accepted stance.”
Thanks for adding this Nithya, I agree with both points. This post was more about raising a question I hadn’t seen discussed much in EA spaces, so there is likely research that supports or weakens the argument that I didn’t come across.
My general impression is that non-profits tend to teach skills that are less valuable longer term, given that globally 90% of jobs are in the private sector, even if the skills are valuable to some extent.
It’s also about who is learning the skills; if the people who would have been the top 5% of entrepreneurs/leaders/scientists are not working in those spaces, that seems like a loss for those countries.
I haven’t looked into it, but I think GiveWell-recommended charities make up a very small percentage of most countries’ NGO workforces, so it seems unlikely to make much of a difference.
Doesn’t that still depend on how much risk you think there is, and how tractable you think interventions are?
I think it’s still accurate to say that those concerned with near term AI risk think it is likely more cost effective than AMF.
The Charity Trap: Brain Misallocation
I think your experience matches what most people interested in EA actually do: the vast majority aren’t deeply involved in the ‘community’ year-round. The EA frameworks and networks tend to be most valuable at specific decision points (career/cause changes, donation decisions) or if you work in niche areas like meta-EA or cause incubation.
After a few years, most people find their value shifts to specific cause communities (as you noted) or other interest-based networks. I think it might actually be a bad sign if there were more expectation of people being very involved as soon as they hear about EA, and forever after.
I’d also push back on Hanson’s characterisation, which was more accurate at the time it was written but less so now. The average age continues rising (mean 32.4 years old, median 31) and more than 75% aren’t students.
There are people now with 15+ years of EA engagement in senior positions across business, tech and government, and there are numerous, increasingly professional and sizable, organisations inspired by EA ideas.
The methods within EA differ markedly from those of typical youth movements: there’s minimal focus on protests or awareness-raising, except where it’s seemingly more strategic within specific cause areas.
I think China, in the last few years, has approved a few crops (from here, lots of interesting sections).
Maybe that’s why I’m more optimistic: despite the public being against GMOs (in 2018, 46.7% of respondents had negative views of GMOs and 14% viewed GMOs as a form of bioterrorism aimed at China), China’s leadership is still pushing ahead with them as it benefits the country.
Over time the countries that don’t use GMOs will either have to import, give larger subsidies to their farmers, or have people complain about why their food costs so much vs neighbouring countries.
I’m not so sure it has gone ‘badly’ wrong vs other tech innovations but I’m not as well read on tech adoption and the ups and downs of going from innovation to mass usage.
There has been less uptake than may have been hoped for (and I think animal feed makes up a large percentage, in the US at least), but it could still be considered impressive growth since the ’90s.
It’s hard for me to know what the expected take-off for this technology should have been and how it compares to similar things (slower than AI and smartphones but faster than TVs and electricity, but these aren’t great reference classes).
I’m not as convinced by public opinion surveys as I imagine you’d probably get a similar proportion, if not higher, that think factory farming should be banned, which doesn’t stop them being used if people are prioritising price/taste/etc.
With reducing poverty, I think there is a whole host of other factors where GMOs wouldn’t have made much of a difference, even if they were 100% of food.
How is failure being defined here?
When I looked into it, GMO usage appeared to be growing globally.
“In 2023, GM technology was used in 76 countries and regions globally, and 206.3 million hectares of GM crops were planted in 27 countries and regions, representing 3.05% growth over the previous year. The planting area of GM crops has expanded 121-fold since 1996, and now accounts for approximately 13.38% of the total world farmland area (1,542 million hectares), with a total planting area exceeding 3.4 billion hectares.”
I’m not so sure; there are quite a lot of groups that gather together, but not as many that trade off the community side in favour of epistemics (I imagine EA could be much bigger if it focused more on climate or other less neglected areas).
I also wouldn’t use the example of 20 vs 2, but with 10,000 people with average epistemics vs 1,000 with better epistemics I’d predict the better reasoning group would have more impact.
I agree that there is impact to be found here, but the framing in the main post seemed to not consider the effective giving ecosystem as it currently is.
I’m still saying that this area is neglected. I’m trying to give more context, rather than telling people to not work on it. In my own advising I’ve recommended a lot of people to consider these wider areas.
Homicide Reduction—A Potential EA Cause Area?
I agree that it could be useful but I don’t think it’s as neglected as you think.
Anecdotally I know quite a few people in your second category, people in less ‘EA’ branded areas/orgs (although a lot will have more impact). There are several orgs looking into advising donors that haven’t heard/aren’t interested in EA (Giving Green, Longview, Generation Pledge, Ellis Impact, Founders Pledge, etc).
I think some may not be seen in EA spaces as much because of PR concerns but I think the main reason is that they are focused on their target audiences or mainly just interact with others in the effective giving ecosystem.
Also it’s not quite the forum, but I did link to a blog listing Azim Premji on this global health landscape post (not that you would ever be expected to know that).
I think the evidence that EA has “abandoned” open borders is relatively weak; it looks more like it was never a high priority, and still isn’t.
There has been interest in labour mobility, and in 2024 and 2025 Open Phil funded related areas (1, 2, 3). But the tag has changed, and it now falls under global health, innovation or abundance.
I’m not sure forum posts are relevant when it’s just 1–2 posts a year, and they suggest ongoing limited engagement.
0.15% by 2100 seems pretty scary (it would probably suggest spending more resources on it than we currently do).
And how much of a reward is it for your boss to ask if you want to write something (with a sense of obligation, and worry about what happens if you don’t say yes)? Nice story though.
I think it makes more sense to consider this part of their marketing budget than their ‘trying to do good’ budget.