YC aims at making VCs money; the Charity Entrepreneurship programme focuses on helping poor people and animals. I don’t think the best ideas for helping poor people and animals are as likely to involve generative content creation as the best ideas for developed world B2B services and consumer products. The EA ecosystem isn’t exactly as optimistic about the impact of developing LLM agents as VCs either...
YC aims at making VCs money; the Charity Entrepreneurship programme focuses on helping poor people and animals
I think both are trying to create value at scale. YC cares about what percentage of that value they’re able to capture. AIM doesn’t. I suspect one ought, by default, to assume a large overlap between the two.
I don’t think the best ideas for helping poor people and animals are as likely to involve generative content creation as the best ideas for developed world B2B services and consumer products
As every charity listed is focused on human wellbeing, let’s focus on that. I think access to generative AI is better placed to help poorer people than it is to help richer people—it produces lower-quality outputs than those otherwise available to rich people, but dramatically better ones than those accessible to poor people. For example, the poorest can’t afford medical advice, while the rich get doctor’s appointments the same week.
The EA ecosystem isn’t exactly as optimistic about the impact of developing LLM agents as VCs either...
I think the type of agent matters. It’s unclear how a ChatGPT wrapper aimed at giving good advice to subsistence farmers, for example, would pose an existential threat to humanity.
The more I think about it, the more I suspect the gap is actually more to do with the type of person running / applying to each organisation than with the relative merit of the ideas.
I think both are trying to create value at scale. YC cares about what percentage of that value they’re able to capture. AIM doesn’t. I suspect one ought, by default, to assume a large overlap between the two.
Not really. YC doesn’t just care about the percentage of value capture; it also cares about the total amount of value available to capture. This tends towards its target market being deep-pocketed corporations and consumers with disposable income to spend on AI app platforms or subscription AI tools for writing better software, and towards completely ignoring the Global South and people who don’t use the internet much.
AIM cares about the opposite: people who don’t have access to the basics in life, and its cost-effectiveness is measured in non-financial returns.
I think access to generative AI is better placed to help poorer people than it is to help richer people—it produces lower-quality outputs than those otherwise available to rich people, but dramatically better ones than those accessible to poor people. For example, the poorest can’t afford medical advice, while the rich get doctor’s appointments the same week.
But if the advice is bad it might actually be net negative (and AI trained on an internet dominated by the developed world is likely to be suboptimal at generating responses for people with limited literacy, about medical conditions specific to their region and poverty level, in a language that features relatively little in OpenAI’s corpus). And training generative AI to be good at specialised tasks to life-or-death levels of reliability is definitely not cheap (and nor is getting that chatbot in front of people who tend not to be prolific users of the internet).
I think the type of agent matters. It’s unclear how a ChatGPT wrapper aimed at giving good advice to subsistence farmers, for example, would pose an existential threat to humanity.
Unlike many EAs, I agree that the threat to humanity posed by ChatGPT is negligible, but there’s a difference between that and trusting OpenAI enough to think building products piggybacking on their infrastructure is potentially one of the most effective uses of donor funds. Even if I did trust them, which I don’t for reasons EAs are generally aware of, I’m also not at all optimistic that their chatbot would be remotely useful at advising subsistence farmers on market and soil conditions in their locality.
And I’m especially not remotely confident it’d be better than an information website, which might not be VC-fundable, but would be a whole lot cheaper to create and keep bullshit-free.
The more I think about it, the more I suspect the gap is actually more to do with the type of person running / applying to each organisation than with the relative merit of the ideas.
I agree this is also a significant factor.
Quite a few development and EA-adjacent organisations think AI will be quite important for future development, if not the most important factor. It is already being used by many companies, charities and governments around the world:
IDInsight—Ask-a-Metric: Your AI data analyst on WhatsApp
The Agency Fund—AI for Global Development Accelerator: Introducing our cohort
How AI is driving India’s next agricultural revolution
How Neil King and David Baker are using AI to create more effective vaccines
Kenyan farmers deploying AI to increase productivity
How the farmers without smartphones are using AI