Cool news: Jesse Eisenberg donated a kidney to a stranger, and said it was after hearing a podcast on ‘effective altruism’ where they talked about kidney donations. He mentioned it in this podcast. I assume this might lead back to Dylan Matthews.
Mark Zuckerberg’s roommate ✅
Guy who played Mark Zuckerberg in the movie ✅
Actual Mark Zuckerberg when?
Has anyone talked about the role the Green Revolution probably played in making factory farming economically viable?
Among other enabling factors (e.g. antibiotics), factory farming, especially of pigs and poultry, depends on cheap grain feed. The system only works at scale if feed costs are low enough to make confinement viable relative to pasture. The Green Revolution roughly tripled global grain production between 1960 and 2000, and maize in particular became cheap enough to feed to animals at industrial volumes.
Every time I see a celebration of Norman Borlaug and the Green Revolution, I can’t help but recoil a little, thinking about the unintended consequences for the tens of billions of non-human animals impacted.
If this is true, I feel conflicted about how to think about the Green Revolution. Clearly good intentions: it saved hundreds of millions of humans, and condemned hundreds of billions of animals to lives of pain. I feel like the EA community has the right virtues to hold this complexity.
Which comes back to the question: how should we talk about the Green Revolution?
How organisations with low AI usage can and should be using it more
There is a lot of discussion about how everyone should be using AI more, and efforts to increase use and literacy. In the animal advocacy spaces where I work, I’ve seen the following efforts to increase usage so far:
Orgs provide model subscriptions to their teams.
People share the ways they’ve been using AI in Slack channels or recurring meetings.
There are educational webinars or fellowships.
The above has made a real dent in AI usage, but much less than we should be aiming for given the gains left on the table. My sense is that these actions have produced only incremental improvements because:
Significantly upgrading usage requires a lot of dedicated time to experiment and learn in ways that can feel hard during a busy work week.
A great way to learn can be trying a task just outside of one’s ability with someone on hand to help, which is quite hard to set up in the age of remote work.
For folks who don’t have a coding/IT background, it’s hard to know what activities could be automated, or what supportive infrastructure is needed to pull it off.
I think the following would meaningfully improve how much individuals and organisations use AI:
Extended time for peer-to-peer co-working on trying to solve problems with AI (e.g. every second Friday afternoon).
A full week of staff training on AI use, so that lessons can be followed by practice (HT to Eleanor McAree for this one).
Organisations with 20+ staff should hire an AI specialist who goes from team to team and person to person to help them use AI to increase their productivity on an ongoing basis (I think if someone builds a technical solution, it usually requires maintenance by someone with that level of proficiency).
Smaller organisations could have fractional AI specialists on retainer to do the same thing.
What do people think? What have I missed?
Have you considered that the reason these policies are not increasing AI usage is that AI usage is not particularly useful for many applications? Particularly when it comes to something like animal advocacy, I’m struggling to think of many things you’d actually need a full model subscription for (rather than just asking the occasional question to a free model).
I think the original policies are fine: they let people evaluate and decide for themselves how useful AI models are, and adjust strategies accordingly. Trying to pressure people to use AI beyond this level is going to make your team less effective.
Yeah, I have, and my impression from those I’ve spoken with is that this has not been the case. You don’t think most people whose job primarily involves sitting at a computer could have much of their job automated by a software engineer on call? For example:
I know grantmakers who have significantly automated parts of their work.
I know people who have used AI to classify 1,000 people in their CRM instead of doing it manually (a rough sketch of what this can look like is below, after these examples).
I’ve seen some impressive use of AI to go through 1,000s of academic papers looking for novel solutions to a welfare issue, solutions that might exist but are not widely known.
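To make the CRM example concrete, here is a minimal sketch of that kind of automation, assuming the OpenAI Python SDK. The model name, CSV columns, and category labels are all invented for illustration; I don’t know the details of the setups I mentioned above.

```python
# Hypothetical sketch: batch-classifying CRM contacts with an LLM.
# Assumes the OpenAI Python SDK (`pip install openai`) with an API key
# in the OPENAI_API_KEY environment variable. The categories and CSV
# columns are invented for illustration.
import csv

from openai import OpenAI

client = OpenAI()

CATEGORIES = ["donor", "volunteer", "journalist", "policymaker", "other"]

def classify_contact(name: str, notes: str) -> str:
    """Ask the model to assign a single CRM contact to one category."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the contact into exactly one of: "
                    + ", ".join(CATEGORIES)
                    + ". Reply with the category name only."
                ),
            },
            {"role": "user", "content": f"Name: {name}\nNotes: {notes}"},
        ],
    )
    answer = response.choices[0].message.content.strip().lower()
    # Fall back to "other" if the model replies with anything unexpected.
    return answer if answer in CATEGORIES else "other"

with open("contacts.csv", newline="") as f:
    for row in csv.DictReader(f):  # expects columns: name, notes
        print(row["name"], "->", classify_contact(row["name"], row["notes"]))
```

The same pattern, looping over records with one narrow prompt per record and constraining the output to a fixed set of labels, covers a surprising share of the manual classification work people do at a computer.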