Can you say something about what N-D lasers are and why they present such a strong threat? A Google search for "N-D laser" just turns up neodymium lasers, and it isn't clear why those would be as threatening as you present. Even in the worst case, where someone builds a probe with a very powerful fusion energy source that can fire a laser powerful enough to kill people, you could probably also build a laser or defense system to strike and kill the probe before it causes existential loss.
Interstellar travel will probably doom the long-term future
My intuition is that most of the galactic existential risks listed are highly unlikely, and it is possible that the likely ones (self-replicating machines and ASI) may be defense-dominant. An advanced civilization capable of creating self-replicating machines to destroy life in other systems could well be capable of building defense systems against a threat like that.
By using push notifications to inform users of replies to their posts and comments, and of other events that are currently only delivered as in-forum notifications to most users, you could substantially increase your weekly active users, converting monthly active users (MAU) into weekly and even daily users and growing MAU as well. Many, many times I have posted on the forum, or left a comment or reply, and only weeks later seen that there was a response. By contrast, I will get an email from Twitter or Bluesky if even one person likes my post, and I immediately go back to see who it was. Notifying people this way draws them to the forum at exactly the moment when their engagement will encourage others to come back, building up a positive flywheel of engagement.
These features are already built into your forum but are off by default! This surprised me greatly, because most online forums—not only feedscrolling websites like X and Facebook, but also forum-style websites like Substack and WordPress—make it easy, or the default, to get push notifications via email. That builds engagement as I've described. Often when I comment on Tyler Cowen's WordPress-based Marginal Revolution blog, I get a tonne of email notifications of replies and discussions on that topic. It's a bit overwhelming, but it's fun!
Users who just use your notification default (notifications within the website, but no push notifications) probably make up the vast majority of active and passive users, if not the most active users. If it is possible to identify users who have not deliberately set a notification policy, I strongly suggest you flip the default for those users so that push notifications are sent. You will take a small hit from people who dislike the change, but you could mitigate that by, e.g., an email in your next digest explaining why you are making it.
I have long thought this was a missing feature on EA Forum; now I know it exists, but is turned off.
@titotal said that it’s not a lot of fun to post here. I agree, and I also think that making it more immediately rewarding to post, by informing people of others’ engagement with their content as soon as it happens, would make it a lot more fun. It will make me personally very happy if you do this!
Fair enough.
My central expectation is that the value of one more human life created is roughly even with the amount of nonhuman suffering that life would cause (based on this analysis: https://forum.effectivealtruism.org/posts/eomJTLnuhHAJ2KcjW/comparison-between-the-hedonic-utility-of-human-life-and#Poultry_living_time_per_capita). I'm also willing to assume cultured meat is not too far away. Then the childhood delay until contribution only makes a fractional difference, and I tip very slightly back into the pro-natalist camp, while still accepting that the meat-eater problem is relevant.
I think no one here is trying to use pronatalism to improve animal welfare. The crux for me is more whether pronatalism is net-negative, neutral, or net-positive, and its marginal impact on animal welfare seems to matter in that case. But the total impact of animal suffering dwarfs whatever positive or negative impact pronatalism might have.
I think Richard is right about the general case. It was a bit unintuitive to me until I ran the numbers in a spreadsheet, which you can see here:
Basically, yes, assume that meat eating increases with the size of the human population. But the scientific effort toward ending the need to eat meat also increases with the size of the human population, assuming marginal extra people are equally likely to go into researching the problem as the average person. Under a simple model the two exactly balance out, as you can see in the spreadsheet.
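To make the balancing concrete, here is a minimal sketch of that simple model (all the numbers are made up purely for illustration):

```python
# Minimal sketch of the simple balancing model, with made-up numbers.
# Suffering accrues in proportion to population, while research toward
# cultured meat also accumulates in proportion to population, so a
# larger population reaches the solution sooner.
RESEARCH_NEEDED = 1_000_000.0   # assumed person-years of research required
SUFFERING_RATE = 1.0            # assumed suffering units per person per year

for population in [1_000, 10_000, 100_000]:
    years_to_solution = RESEARCH_NEEDED / population
    total_suffering = population * SUFFERING_RATE * years_to_solution
    print(population, years_to_solution, total_suffering)
# total_suffering comes out identical for every population size: the
# extra suffering from more people is exactly offset by the earlier fix.
```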
I just think real life breaks the simple model in ways I have described below, in a way that preserves a meat-eater problem.
Right—in that simple model, each extra marginal average person decreases the time taken to invent cultured meat at the same rate as they contribute to the problem, and there's an exact identity between those two rates. But there are complicating factors that I think work against assuring us there's no meat-eater problem:
An extra person starts eating animals from a very young age, but won't start contributing to solving the meat-eater problem until they're intellectually developed enough to make a contribution (21 years to finish an undergraduate degree, 25-30 to get a PhD); see the sketch after this list.
There’s a delay between when they invent a solution and when meat eating can actually be phased out, though perhaps that’s implicitly built into the model by the previous point
I do concede that the problem is mitigated somewhat because if we expect cultured meat to take over within the lifetime of a new person, then their harm (and impact) is scaled down proportionately, but the intrinsic hedonic value of their existence isn’t similarly scaled down.
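To get a feel for the first complication, here is a rough extension of the sketch above (same made-up units; the 25-year delay is an assumption):

```python
# Rough extension of the earlier sketch: one extra person adds suffering
# from year 0 but only adds research effort after an assumed delay.
RESEARCH_NEEDED = 1_000_000.0
DELAY = 25.0                    # assumed years before they can contribute

for population in [1_000, 10_000, 100_000]:
    baseline = RESEARCH_NEEDED  # i.e. population * (RESEARCH_NEEDED / population)
    # With the extra person, research accumulates at rate `population`
    # until DELAY and at `population + 1` afterwards; solving
    # population*t + (t - DELAY) = RESEARCH_NEEDED for the solution date:
    years_to_solution = (RESEARCH_NEEDED + DELAY) / (population + 1)
    with_extra = (population + 1) * years_to_solution
    print(population, with_extra - baseline)
# The difference is exactly DELAY: under these assumptions each extra
# person adds net suffering equal to the length of their childhood delay.
```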
But it doesn’t sound as simple as just “there’s no meat-eater problem”.
Ok, I missed the citation to your source initially, because the citation wasn't in your comment when you first posted it. The source does find less insect abundance on land converted from natural space to agricultural use. So what I said about increased agricultural use supports your point rather than mine.
Yes I think so.
Great point! Though I think it's unclear what the impact of more humans on wild terrestrial invertebrate populations is. Developed countries have mostly stopped clearing land for human living spaces. I could imagine that a higher human population could induce demand for agriculture and increase trash output, which could increase terrestrial invertebrate populations.
Pro-natalist success would cause so much animal suffering it is not even a net-positive cause area
Reviving this old thread to discuss the animal welfare objection to pro-natalism, which I think is changing my mind on pro-natalism. I'm a regular listener to Simone and Malcolm Collins's podcast. Since maybe 2021, I've gone on an arc: first fairly neutral, then strongly pro-natalist, then pro-natalist but not rating it as an effective cause area, and now entering a fourth phase where I might reject pro-natalism altogether.
I value animal welfare, and at least on an intellectual level I care equally about animals' welfare and humanity's. For every additional human we bring into existence, at a time in history when humans have never eaten more meat per capita, you will on expectation get years or—depending on their diet—perhaps even hundreds of years of animal suffering induced by the additional consumer demand for meat. This is known as the meat-eater problem, but I haven't seen anyone explicitly connect it to pro-natalism yet. It seems like an obvious connection to make.
There are significant caveats to add:
This is not an argument against the value of having your own kids, whom you then raise with appropriate respect for the welfare of other sentient creatures. While you can't control their choices as adults, if you raise them right, the expected suffering they will cause is substantially reduced, potentially enough to make having them a net-positive choice. However, pro-natalism as a political movement aimed at raising birthrates at large will likely cause animal suffering that outweighs the value of the human happiness it creates.
In the long term, we will hopefully invent forms of delicious meat, like cultured meat, that do not involve sentient animal suffering. The average person might still eat some farmed meat at that time, but hopefully, with delicious cultured options available, public opinion may allow for appropriate welfare standards for farmed animals, such that those animals' lives are at least net positive. When that happens, pro-natalism might make more sense. But we don't know when cultured meat will arrive. It is possible that widespread adoption is several decades away, in a slower-AGI-timeline world, or in one where some cultural or legal turn prevents widespread adoption even where it is technically possible.
I anticipate some people will argue that more humans now will make the long-term future go well, because in expectation this creates more people far into the future. I think this is a reasonable position, but I don't find it convincing because of the problem of moral cluelessness: there is far too much random chaos (in the butterfly-effect sense of the term) for us to have any idea what the effect of more people now will be on even the next few generations.
I might make a top level post soon to discuss this, but in the meantime I’m curious if you have any clear response to the animal welfare objection to pro-natalism.
For US opportunities, consider entering the US diversity visa lottery before November 5, 2024: it's free and easy!
Reducing global AI competition through the Commerce Control List and Immigration reform: a dual-pronged approach
You can just widen the variance of your prior until it is appropriately imprecise, so that the variance of your prior reflects the amount of uncertainty you have.
For instance, perhaps a particular disagreement comes down to the increase in P(doom) deriving from an extra 0.1 C of global warming.
We might have no idea whether 0.1 C of warming causes an increase of 0.1% or 0.01% in P(doom), but be confident it isn't 10% or more.
You could model the distribution of your uncertainty with, say, a beta distribution with b = 100.
You might wonder, why b=100 and not b=200, or 101? It’s an arbitrary choice, right?
To which I have two responses:
You can go one level up and model the beta parameter itself with some distribution over all the reasonable choices, say a uniform distribution between 10 and 1000 (see the sketch after this list).
While it is arbitrary, I claim that avoiding expected-value reasoning because we can't make a fully non-arbitrary choice is itself an arbitrary choice. We are acting in a dynamic world where opportunities can be lost every second, and no action is still an action: the action of foregoing the counterfactual option. So by declining to assign any outcome value, and acting accordingly, you have implicitly, and arbitrarily, assigned an outcome value of 0. When there's some morally important outcome we can only model with somewhat arbitrary statistical priors, doing so nevertheless seems less arbitrary than just assigning an outcome value of 0.
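As a concrete sketch of going one level up (purely illustrative: the choice of a = 1 and everything else here is an assumption, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch of a hierarchical prior over the increase in
# P(doom) from an extra 0.1 C of warming. The beta parameter b is itself
# uncertain, so it is drawn from a uniform distribution over the
# "reasonable choices"; a = 1 is assumed for this sketch.
n = 100_000
b = rng.uniform(10, 1000, size=n)   # one level up: a prior over b
p = rng.beta(1.0, b)                # sampled increases in P(doom)

print(f"mean increase in P(doom): {p.mean():.4%}")
print(f"P(increase >= 10%):       {(p >= 0.10).mean():.4%}")
```

Whatever the exact numbers, the resulting distribution concentrates its mass on small increases while staying appropriately imprecise about exactly how small.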
This leaves me deeply confused: I would have thought a single (if complicated) probability function is better than a set of functions, because a set of functions doesn't (by default) include a weighting amongst its members.
It seems to me that you need to weight the probability functions in your set according to some intuitive measure of their plausibility, according to your own priors.
If you do that, then you can combine them into a joint probability distribution, and then make a decision based on what that distribution says about the outcomes. You could go for EV based on that distribution, or you could make other choices that are more risk averse. But whatever you do, you’re back to using a single probability function. I think that’s probably what you should do. But that sounds to me indistinguishable from the naive response.
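For instance, here is a toy sketch of that combination step (the outcome values, candidate distributions, and weights are all invented for illustration):

```python
import numpy as np

# Toy sketch: combine a set of candidate probability functions into one
# distribution, weighted by how plausible each candidate seems to you.
outcomes = np.array([0.0, 1.0, 10.0])      # assumed outcome values
candidates = np.array([                    # assumed candidate pmfs
    [0.70, 0.20, 0.10],
    [0.50, 0.40, 0.10],
    [0.90, 0.05, 0.05],
])
weights = np.array([0.5, 0.3, 0.2])        # your intuitive plausibilities

combined = weights @ candidates            # back to a single function
print("combined distribution:", combined)
print("expected value:", combined @ outcomes)
# From here you can maximize EV or apply a more risk-averse rule, but
# either way you are acting on one probability function.
```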
The idea of a "precise probability function" is in general flawed. The whole point of a probability function is that you don't have precision. A probability function for a real event is (in my view) just a mathematical formulation modeling my own subjective uncertainty; there is no precision to it. That's the Bayesian perspective on probability, which seems like the right interpretation in this context.
As Yann LeCun recently said, “If you do research and don’t publish, it’s not science.”
With all due respect to Yann LeCun, in my view he is as wrong here as he is dismissive about the risks from AGI.
Publishing is not an intrinsic and definitional part of science. Peer-reviewed publishing definitely isn't—it has only been the default for several decades to half a century or so. It may not be the default in another half century.
If Trump still thinks AI is “maybe the most dangerous thing” I would be wary of giving up on chances to leverage his support on AI safety.
In 2022, individual EAs stood for elected positions within each major party. I understand there are Horizon fellows with both Democrat and Republican affiliations.
If EAs can engage with both parties in those ways, and given that the presumptive Republican nominee may be sympathetic, I wouldn't give up on Republican support for AI safety yet.
Ha, I see. Your advice might be right, but I don't think "consciousness is quantum". I wonder if you could say what you mean by that?
Of course I've heard that before. In the past, when I've heard people say it, it has been advocates of free-will theories of consciousness trying to propose a physical basis for consciousness that preserves indeterminacy of decision-making. Some objections I have to this view:
Most importantly, as I pointed out here: consciousness is roughly orthogonal to intelligence, so your view shouldn't give you reassurance about AGI. We could have a formal definition of intelligence, and causal instantiations of it, without any qualia, any what-it's-like-to-be subjective consciousness, existing in the system. There is also conscious experience with minimal intelligence, like experiences of raw pleasure, pain, or observing the blueness of the sky. As I explain in the linked post, consciousness is also orthogonal to agency and goal-directed behavior.
There's a great deal of research about consciousness. I described one account in my post, and Nick Humphrey does go out on a limb more than most researchers do, but my sense is that most neuroscientists of consciousness endorse some account roughly equivalent to Nick's. While some (not all, or even a majority) would concede that the hard problem remains, based on what we do know about the physical substrates underlying consciousness, it's hard to imagine what role "quantum" effects would play.
It fails to add any sense of meaningful free will, because a brain that makes decisions based on random quantum fluctuations doesn't in any meaningful way have more agency than a brain that makes decisions based on pre-determined physical causal chains. While a [hypothetical] quantum-based brain does avoid being pre-determined by physical causal chains, it is instead just determined by random quantum fluctuations.
Lastly, I have to confess a bit of prejudice against this view. In the past it has often been proposed so naively that it seems like people are just mashing together two phenomena that no one fully understands and proposing they're related because ???? But the only thing the two have in common, as far as I know, is that we don't understand them. That's not much of a reason to believe in a hypothesis that links them.
Assuming your view was correct, if someone built a quantum computer, would you then be more worried about AGI? That doesn’t seem so far off.
When I pit depopulation against causes that capture the popular imagination and that take up the most time in contemporary political discourse, I think depopulation scores pretty high as a cause and I am glad it is getting more attention.
When I pit it against causes that the EA movement spends the most time on, including AI x-risk, farmed animal welfare, perhaps even wild animal welfare, and global poverty, I find it hard to justify giving it my considered attention because of the outsized importance of the other problems.
AI x-risk is important because the long-term future could be at stake in the next few years or decades. The other two causes are important because billions of people and trillions of animals are experiencing needless suffering now. It's hard to see depopulation holding a candle to those cause areas.
I would like to see more mainstream funding and attention given to working on depopulation. On the other hand, unless I'm missing something, I would not like to see any funding or human capital diverted from AI x-risk, animal welfare, and global poverty.