I’m surprised to see no ideas that incorporate AI. Y Combinator, the for-profit equivalent of AIM, is now ~75% AI startups. If AIM has looked into relevant ideas, I’d be curious to know what deterred them.

I’m quite pro-AI.
At the same time, I think one challenge here is that it’s hard to imagine many worldviews and corresponding projects where the following would be simultaneously true:
1. Global health charities are more efficient than AI-safety charities.
2. The AI project in question wouldn’t become obsolete after a few years, due to much better LLMs or TAI.
3. The project itself is highly exciting, perhaps in part because of advances in LLMs that are expected in a few years.
So basically, I think this might require a fairly narrow worldview that not too many people fall into. The people highly AI-pilled are focused on projects more specific to AI progress, and the least AI-pilled aren’t very bought into the use of AI.
(Correspondingly, my personal position is that I’m more on the AI-pilled side, and can’t think of many AI-related global welfare projects that would excite me now over other AI projects.)
YC aims at making VCs money; the Charity Entrepreneurship programme focuses on helping poor people and animals. I don’t think the best ideas for helping poor people and animals are as likely to involve generative content creation as the best ideas for developed world B2B services and consumer products. The EA ecosystem isn’t exactly as optimistic about the impact of developing LLM agents as VCs either...
YC aims at making VCs money; the Charity Entrepreneurship programme focuses on helping poor people and animals
I think both are trying to create value at scale. YC cares about what percentage of that value it’s able to capture; AIM doesn’t. I suspect one ought, by default, to assume a large overlap between the two.
I don’t think the best ideas for helping poor people and animals are as likely to involve generative content creation as the best ideas for developed world B2B services and consumer products
As every charity listed is focused on human wellbeing, let’s focus on that. I think access to generative AI is better placed to help poorer people than richer people—it produces lower-quality outputs than those otherwise available to the rich, but dramatically better ones than those accessible to the poor. For example, the poorest can’t afford medical advice, while the rich can get doctors’ appointments the same week.
The EA ecosystem isn’t exactly as optimistic about the impact of developing LLM agents as VCs either...
I think the type of agent matters. It’s unclear how a ChatGPT wrapper aimed at giving good advice to subsistence farmers, for example, would pose an existential threat to humanity.
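For concreteness, the kind of “wrapper” being discussed could be as thin as a fixed system prompt plus a single API call. A hypothetical sketch follows; the prompt wording, model name, and helper functions are illustrative, not an actual product:

```python
# Hypothetical sketch of a thin "ChatGPT wrapper" for farming advice.
# The prompt wording and model choice are illustrative assumptions,
# not a real product from this thread.
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"


def build_system_prompt(region: str, crop: str) -> str:
    """Constrain the model to cautious, locally framed advice."""
    return (
        f"You are an agricultural advisor for smallholder farmers in {region} "
        f"growing {crop}. Give short, practical answers. If you are unsure, "
        "say so and suggest consulting a local extension officer."
    )


def ask(api_key: str, region: str, crop: str, question: str) -> str:
    """Send one question to the hosted model and return its reply text."""
    payload = {
        "model": "gpt-4o-mini",  # illustrative model choice
        "messages": [
            {"role": "system", "content": build_system_prompt(region, crop)},
            {"role": "user", "content": question},
        ],
    }
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]
```

Everything substantive lives in the system prompt; the hard parts (locally accurate advice, reliability, and reaching users who are rarely online) are exactly what the rest of this thread debates.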
The more I think about it, the more I suspect the gap is actually more to do with the type of person running / applying to each organisation, than the relative merit of the ideas.
I think both are trying to create value at scale. YC cares about what percentage of that value it’s able to capture; AIM doesn’t. I suspect one ought, by default, to assume a large overlap between the two.
Not really. YC doesn’t just care about the percentage of value it captures; it also cares about the total amount of value available to capture. This pushes its target market towards deep-pocketed corporations and consumers with disposable income to spend on AI app platforms or subscription AI tools for writing better software, while completely ignoring the Global South and people who don’t use the internet much.
AIM cares about the opposite: people who don’t have access to the basics in life, and its cost-effectiveness is measured in non-financial returns.
As every charity listed is focused on human wellbeing, let’s focus on that. I think access to generative AI is better placed to help poorer people than richer people—it produces lower-quality outputs than those otherwise available to the rich, but dramatically better ones than those accessible to the poor. For example, the poorest can’t afford medical advice, while the rich can get doctors’ appointments the same week.
But if the advice is bad it might actually be net negative (and AI trained on an internet dominated by the developed world is likely to be suboptimal at generating responses to people with limited literacy on medical conditions specific to their region and poverty level in a language that features relatively little in OpenAI’s corpus). And training generative AI to be good at specialised tasks to life-or-death levels of reliability is definitely not cheap (and nor is getting that chatbot in front of people who tend not to be prolific users of the internet)
I think the type of agent matters. It’s unclear how a ChatGPT wrapper aimed at giving good advice to subsistence farmers, for example, would pose an existential threat to humanity.
Unlike many EAs, I agree that the threat to humanity posed by ChatGPT is negligible, but there’s a difference between that and trusting OpenAI enough to think building products piggybacking on their infrastructure is potentially one of the most effective uses of donor funds. Even if I did trust them, which I don’t for reasons EAs are generally aware of, I’m also not at all optimistic that their chatbot would be remotely useful at advising subsistence farmers on market and soil conditions in their locality.
And I’m especially not remotely confident it’d be better than an information website, which might not be VC-fundable, but would be a whole lot cheaper to create and keep bullshit-free.
The more I think about it, the more I suspect the gap is actually more to do with the type of person running / applying to each organisation

I agree this is also a significant factor.
Quite a few development- and EA-adjacent organisations think AI will be quite important, if not the most important factor, for future development. It is already being used by many companies, charities and governments around the world.
IDInsight—Ask-a-Metric: Your AI data analyst on WhatsApp
The Agency Fund—AI for Global Development Accelerator: Introducing our cohort
How AI is driving India’s next agricultural revolution
How Neil King and David Baker are using AI to create more effective vaccines
Kenyan farmers deploying AI to increase productivity
How the farmers without smartphones are using AI
Although I think there probably are some great AI ideas that could help the world’s poorest people, it’s not easy to think of them or to implement them. The economies of the poorest people usually involve surprisingly little technology and are based around hand-powered agriculture, basic goods and basic services. Internet access, even where coverage is strong, is often surprisingly expensive.
Y Combinator startups aim to make money from the richest people on earth, people whose economies and lives are tied to technology and the internet.
So the economic realities and ecosystems are completely different: it’s hard for AI tools to penetrate very poor systems, while there are endless ways to capture value in high-income countries.
I think there may be some super valuable AI ideas that could change the game for the world’s poorest, but it’s not obvious or clear yet.
And where there are good ideas, convincing governments and NGOs to take on new ideas is super difficult. We at OneDay Health have made a healthcare mapping tool based on AI-generated population and road data, which has transformed our ability to target our healthcare and has the potential to improve healthcare on the margins in multiple ways. NGOs and governments have shown some interest, but besides a small USAID pilot in Pakistan it has been quite hard to get uptake.
https://forum.effectivealtruism.org/posts/GpAngpFmn3HFrBLnt/health-aim-a-mapping-tool-helping-health-providers-reach
I’m a sample size of one who lives in a lower-income context and has been thinking hard about this, but I’m struggling to come up with many potentially useful AI ideas right now.