Yes, I’m conditioning on no singularity here.
I tentatively agree with that; I just expect those to be the same person.
As for whether it’s economically a good idea: I don’t think it is, at least not right now. People want to do it primarily because it’s cool, not because it’s useful. (There’s also the cynical view that Elon is hyping it up in order to induce demand for his own company, which seems plausible to me given that his arguments about X-risk are so transparently wrong and he’s even admitted as much in the past.)
However, once the first settlements are established and the sunk costs have already been paid, it will be much easier to make them an economic positive. It’s also possible that we get a space race 2.0, as other superpowers like China become concerned about the US establishing a dominant interplanetary presence and try to create their own.
The same would have been said about reusable rockets and human-level AI 15 years ago. I don’t understand how one can look at a billion-dollar company with concrete plans to colonize Mars and the technology to do so, and conclude that the probability of this happening is so low it can be dismissed.
That’s my point; that is explicitly not the plan. Elon wants to establish a colony right away, and he’s in the process of building up the infrastructure to do it. Frankly, I think it makes no sense to compare the two; the US went to the moon as part of the Cold War with the Soviet Union, and the whole point was just showing off. The Apollo program was much too expensive to support a long-term lunar colony, and there were never serious plans for one. The current space race is completely different. Elon is not trying to compete with anyone; he just wants there to be a civilization on Mars, and unlike the Saturn V, Starship is designed to be cheap and reusable enough to make that possible.
I haven’t yet read A City on Mars, but I’ve heard that it’s pretty poor. e.g. Peter Hague’s review: https://planetocracy.org/p/review-of-a-city-on-mars-part-ii.
Value lock-in is happening *now*
Is there a funding breakdown anywhere? That is, where does all the money actually come from? (Feastables sales, advertising deals, donations, etc.) What’s the ROI of the average video on Mr. Beast’s main channel, where he has participants compete for some large cash prize? How real are these prizes; do participants actually walk away with that much in winnings, or is that just for show and they’re actually getting less?
For the larger-scale projects, like the “built 100 houses” and “built 100 wells” videos, is there a follow-up with the affected people afterwards to see whether there was any long-term benefit? How are these goals decided on?
Would you ever consider doing a video on factory farmed animal welfare? I understand that this might come with reputational concerns, since people don’t like being reminded of the cruelty that they’re paying for with their food money, but it could also do outsized good by making people aware of a cause area they didn’t previously realize was an issue. Everyone already knows there are starving children in Africa, but many people don’t realize how bad factory farming is. Even if the video convinces just 0.1% of viewers to eat less meat, that could easily outweigh every other donation Mr. Beast Philanthropy has ever made.
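To give a sense of scale for that last claim, here’s a minimal back-of-envelope sketch; every number in it (view count, conversion rate, per-person impact, duration) is an illustrative assumption, not data about the channel or its donations:

```python
# Rough back-of-envelope only; all figures below are illustrative assumptions.
viewers = 150_000_000            # assumed views for a typical main-channel video
conversion_rate = 0.001          # the 0.1% of viewers assumed to eat less meat
animals_spared_per_year = 10     # assumed farmed animals spared per convert per year
years_of_effect = 5              # assumed duration of the dietary change

animals_spared = viewers * conversion_rate * animals_spared_per_year * years_of_effect
print(f"{animals_spared:,.0f} animals spared under these assumptions")  # 7,500,000
```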
Mr. Beast has done some videos that border on psychological experiments, like the “trapped 100 people” video. Would he be interested in doing more of those that are similar to classic ethical thought experiments? Obviously he can’t tie people to train tracks, but there are plenty of interesting experiments that involve only giving people stuff under certain conditions, like putting people in a prisoner’s dilemma for money, or Kavka’s toxin puzzle using a human judge, or even just something as simple as making participants choose between giving $1000 to one person who’s standing in front of them vs. $100,000 to 100 people in poverty. There are all sorts of interesting video ideas that could also get people interested in moral philosophy.
I’m wondering whether it could be worthwhile to establish a new humane animal product certification. Many words have been written on the EA forum about how the existing labels like “free range eggs” and “pasture-raised eggs” still involve horrific conditions for the chickens, and it’s best to avoid them entirely. But eggs, along with other non-meat animal products like milk, wool, and honey, can in theory be produced completely humanely; it’s just much more expensive. An EA-aligned certification body that actually cares about animal welfare could maintain a list of producers from whom it’s ethical to purchase.
Obviously factory farms wouldn’t be interested since it’s less profitable, but there are at least a few hundred thousand people in the US, probably a few million, who seriously care about animal welfare and would support a niche brand like this. (This is many more than just people in the effective altruism movement; think about the people who produce documentaries like Dominion.)
I’m thinking it could start by appealing to small family farms, like people who have a single chicken coop in the backyard and supply eggs to their neighbors from time to time. Come up with a comprehensive guide to producing animal products ethically, make it available online, and advertise it to small independent producers. Then offer to have someone visit their farm in person and check the conditions, providing suggestions for improvement if any are needed, and if the criteria are met, add them to a list of certified producers. This would obviously be expensive (at least $1000 in travel costs alone), so subsidize it with EA donations at first, and then as the brand catches on it can start charging producers.
Make the list easily searchable to put buyers in contact with sellers. e.g. personally I don’t eat eggs, but if I could search for ethical producers in my area, I’d be happy to drive for an hour and pay 5x the normal price to pick some up.
Alternatively, it could start with non-perishable items like wool, since those can be shipped long distances to people who want them, which makes more sense for a product with an extremely small number of producers. (But is there really anyone who wants wool rather than synthetics badly enough to pay a premium for ethical sourcing? Not sure.)
I’m curious whether this has been looked into before, and if so, why it was decided against. I feel like there’d be an opportunity here to partner with more traditional animal rights groups and “back to the land” groups, while also supporting EAs who would like to consume ethical animal products, and raising awareness in the general population of the insufficiency of the existing standards like “free range eggs”.
Isaac King’s Quick takes
Creating identical copies of people is not claimed to sum to less moral worth than one person. It’s claimed to sum to no more than one person. Torturing one person is still quite bad.
Downvoting, as you seem to have either not read the first section or chosen to ignore it; I explain in that section why it would matter less to torture a copy. I can’t meaningfully respond to criticisms that don’t engage with the argument I presented.
Probably, yeah. But that seems hard to square with a consistent theory of moral value, given that there’s a continuum between “good” and “bad” experiences.
I would add to #2 that the number of shrimp being farmed is at least as relevant as brain size, if not more so. The total number of experiences is surely still quite large in normal human terms, but could be small relative to the massive numbers of shrimp in existence.
I didn’t mean it to be evidence for the statement, just an explanation of what I meant by the phrase.
Do you disagree that most people value that? My impression is that wireheading and hedonium are widely seen as undesirable.
On building Omelas for shrimp; the implications of diversity-oriented theories of moral value on factory farming
How well do you think EA handled the FTX scandal?
Yeah, I don’t do it on any non-LW/EAF post.
Yeah, most of the p(doom) discussions I see focus on the nearer term of 10 years or less. I believe there are quite a few people (e.g. Gary Marcus, maybe?) who operate under a framework like “current LLMs will not get to AGI, but actual AGI will probably be hard to align”, so they may give a high p(doom before 2100) and a low p(doom before 2030).
Oh, I agree. Arguments of the form “bad things are theoretically possible, therefore we should worry” are bad and shouldn’t be used. But “bad things are likely” is fine, and seems more likely to reach an average person than “bad things are 50% likely”.
A location doesn’t need to be “better” for it to contribute to the economy. Some countries are almost strictly worse than others in terms of natural resources and climate for living and growing things, but people still live there.