I’m an academic research scientist currently working at the intersection of data science and (agro)ecology. I found out about EA through farmed animal welfare channels, and I quickly grew to like its focus on reasoning transparency and the community’s willingness to think through complex moral issues. I’m especially interested in (but not yet super knowledgeable about) complex systems and meta-science, and I’d be happy to chat about those things.
“I also don’t share the intuitive impulse to not eat meat. I’ve owned pets, I’ve watched the documentaries about factory farming, I’ve worked with animals on farms, I’ve read the essays about animal cognition, and none of that has sparked a particular intellectual or emotional impulse to not eat meat.”
Has any of that changed the kind of meat you eat, in terms of how the animals lived and died? I do get the argument-from-deliciousness (the sheer enjoyment of really good cheese is why I’m not fully vegan despite having been happily vegetarian for >30 years), but I’d find it really hard to eat, say, something containing eggs from caged hens. The visceral horror would outweigh any good sensory feelings, for me. Do you encounter any of that?
Really thought-provoking report, I’m glad you did this work. A couple of questions:
- Having spent a couple of months working on this topic, do you still think AI science capabilities are especially important to explore, compared with AI in other contexts? I ask because I’ve been thinking and reading a lot about this recently, and I keep changing my mind about the answer.
- The ontology section seems very interesting but the language is too unfamiliar/technical for me to follow. Any chance you could give a few-sentence, ELI5-type overview?
What are the key cruxes between people who think AGI is about to kill us all, and those who don’t? I’m at the stage where I can read something like this and think “ok so we’re all going to die”, then follow it up with this and be like “ah great we’re all fine then”. I don’t yet have the expertise to critically evaluate the arguments in any depth. Has anyone written something that explains where people begin to diverge, and why, in a reasonably accessible way?
I’d join, time zones permitting.
I agree, and I actually have the same question about the benefits of AI. It all seems a bit hand-wavy, like ‘stuff will be better and we’ll definitely solve climate change’. More specifics in both directions would be helpful.
I’ve been vegetarian (but with steadily decreasing levels of animal products) for >30 years now. I’ve almost never taken supplements. I did take vitamin D and calcium when I broke a bone in a hiking accident last year, thinking they might be helpful and probably aren’t harmful.
I’m sceptical of a lot of nutrition science, I’m kind of lazy (just looking at the regime in Chi’s comment makes me want to take a nap!), and I also suspect that humans can adapt well to a range of diets that contain a lot of whole foods. My partner and I eat a lot of home-grown food, including some eggs from a couple of semi-feral hens we have hanging around, which I think gives me a more balanced diet than many people are able to achieve.
I do long-distance running, including some pretty gnarly trails, and I feel like if I’m able to do that, I’m probably doing OK nutrient-wise. Maybe I’m doing this all wrong but it seems to be working so far...
Thanks a lot, Vicky. It seems both empowering and humbling—“I’m helping to remove a lot of suffering from the world!” and also “There are so many beings I can’t help!”
Roughly how many applications do you expect to receive for the incubation programme, and how many progress to each round of the selection process? What are the main reasons why people do or don’t progress?
I’ve just been looking at the list of ideas you assessed in 2022, and one of them was to do with CO2 stunning of pigs. Even as someone who is reasonably aware of factory farming practices, that was pretty stomach-churning to read. I get that a focus on impact and a well-defined CEA will lead to selecting other ideas, but what do you do with the emotions that must come up as you think about some of the problems you’re researching?
Industrial animal agriculture is a system that is supported by a wide variety of factors, from beliefs about animals being a “resource” to the way the political system is structured. In theory, we could coordinate our work so that we targeted numerous different driving forces at the same time, in order to maximally destabilize the existing system and replace it with something better. That could look like working for cultural change while developing alt. proteins and helping farmers to transition out of animal ag., to give a very broad example. You’d probably want to start in one location and then scale up, or something like that.
I can see many practical and conceptual obstacles to this kind of approach, but it also seems to make a lot of sense. What do you think? And as someone with a great overview of the movement, how much of this do you already see happening?