We were previously comparing two hypotheses:
HoH-argument is mistaken
Living at HoH
Now we’re comparing three:
“Wild times”-argument is mistaken
Living at a wild time, but HoH-argument is mistaken
Living at HoH
“Wild time” is almost as unlikely as HoH. Holden is trying to suggest it’s comparably intuitively wild, and it has pretty similar anthropic / “base rate” force.
So if your arguments look solid, “All futures are wild” makes hypothesis 2 look kind of lame/improbable—it has to posit a flaw in an argument, and also that you are living at a wildly improbable time. Meanwhile, hypothesis 1 merely has to posit a flaw in an argument, and hypothesis 3 merely has to posit HoH (which is only somewhat more to swallow than a wild time).
So now if you are looking for errors, you probably want to look for errors in the argument that we are living at a “wild time.” Realistically, I think you probably need to reject the possibility that the stars are real and that it is possible for humanity to spread to them. In particular, it’s not too helpful to e.g. be skeptical of some claim about AI timelines or about our ability to influence society’s trajectory.
This is kind of philosophically muddled because (I think) most participants in this discussion already accept a simulation-like argument that “Most observers like us are mistaken about whether it will be possible for them to colonize the stars.” If you set aside the simulation-style arguments, then I think the “all futures are wild” correction is more intuitively compelling.
(I think if you tell people “Yes, our good skeptical epistemology allows us to be pretty confident that the stars don’t exist” they will have a very different reaction than if you tell them “Our good skeptical epistemology tells us that we aren’t the most influential people ever.”)
I do think my main impression of insect <-> simulated robot parity comes from very fuzzy evaluations of insect motor control vs simulated robot motor control, rather than from any careful analysis (of which I’m a bit more skeptical, though I do think it’s a relevant indicator that we are at least trying to actually figure out the answer here in a way that wasn’t true historically). And I do have only a passing knowledge of insect behavior, from watching youtube videos and reading some book chapters about insect learning. So I don’t think it’s unfair to put it in the same reference class as Rodney Brooks’ evaluations, to the extent that his was intended as a serious evaluation.
The Nick Bostrom quote (from here) is:
In retrospect we know that the AI project couldn’t possibly have succeeded at that stage. The hardware was simply not powerful enough. It seems that at least about 100 Tops is required for human-like performance, and possibly as much as 10^17 ops is needed. The computers in the seventies had a computing power comparable to that of insects. They also achieved approximately insect-level intelligence.
I would have guessed this is just a funny quip, in the sense that: (i) it sure sounds like a throw-away quip, since no evidence is presented for those AI systems being competent at anything (he moves on to other topics in the next sentence), and “approximately insect-level” seems appropriate as a generic and punchy stand-in for “pretty dumb”; (ii) in the document he is basically just thinking about AI performance on complex tasks and trying to make the point that you shouldn’t be surprised by subhuman performance on those tasks, which doesn’t depend much on the literal comparison to insects; (iii) the actual algorithms described in the section (neural nets and genetic algorithms) wouldn’t plausibly achieve insect-level performance in the 70s, since those algorithms in fact do require large training processes (and were in fact used in the 70s to train much tinier neural networks).
(Of course you could also just ask Nick.)
I also think it’s worth noting that the prediction in that section looks reasonably good in hindsight. It was written right at the beginning of resurgent interest in neural networks (right before Yann LeCun’s paper on MNIST with neural networks). The hypothesis “computers were too small in the past so that’s why they were lame” looks like it was a great call, and Nick’s tentative optimism about particular compute-heavy directions looks good. I think overall this is a significantly better take than mainstream opinions in AI. I don’t think this literally affects your point, but it is relevant if the implicit claim is “And people talking about insect comparisons were led astray by these comparisons.”
I suspect you are more broadly underestimating the extent to which people used “insect-level intelligence” as a generic stand-in for “pretty dumb,” though I haven’t looked at the discussion in Mind Children and Moravec may be making a stronger claim. I’d be more inclined to tread carefully if some historical people had actually tried to compare the behavior of their AI system to the behavior of an insect and found it comparable, as in posts like this one (it’s not clear to me how such an evaluation would have suggested insect-level robotics in the 90s or even today; I think the best that can be said is that today it seems compatible with insect-level robotics in simulation). I’ve seen Moravec use the phrase “insect-level intelligence” to refer to the particular behaviors of “following pheromone trails” or “flying towards lights,” so I might also read him as referring to those behaviors in particular. (It’s possible he is underestimating the total extent of insect intelligence, e.g. discounting the complex motor control performed by insects, though I haven’t seen him do that explicitly and it would be a bit off brand.)
ETA: While I don’t think 1990s robotics could plausibly be described as “insect-level,” I actually do think that the linked post on bee vision could plausibly have been written in the 90s and have concluded that computer vision was bee-level; it’s just a very hard comparison to make, and the performance of the bees in the formal task is fairly unimpressive.
Ironically, although cost-benefit analysts generally ignore the diminishing marginal benefit of money when they are aggregating value across people at a single date, their main case for discounting future commodities is founded on this diminishing marginal benefit.
I think the “main” (i.e. econ 101) case for time discounting (for all policy decisions other than determining savings rates) is roughly the one given by Robin here.
I don’t think there is a big incongruity here. Questions about diminishing returns to wealth become relevant when trying to determine what savings rate might be socially optimal. Analogously, questions about diminishing returns to wealth become relevant when we ask about what level of redistribution might be socially optimal, even if most economists would prefer to bracket them for most other policy discussions.
For governments who have the option to tax, WTP has obvious relevance as a way of comparing a policy to a benchmark of taxation+redistribution. I tentatively think that an idealized state (representing any kind of combination of its constituents’ interests) ought to use a WTP analysis for almost all of its policy decisions. I wrote some opinionated thoughts here.
It’s less clear if this is relevant for a realistic state, and the discussion becomes more complex. I think it depends on a question like “what is the role of cost-effectiveness analysis in contexts where it is a relatively minor input into decision-making?” I think realistically there will be different kinds of cost-benefit analyses for different purposes. Sometimes WTP will be appropriate but probably not most of the time. When those other analyses depend on welfare, I expect they can often be productively framed as “WTP x (utility/$)” with some reasonable estimate for utility/$. But even that abstraction will often break down in cases where WTP is hard-to-observe or beneficiaries are irrational or whatever.
I think for a philanthropist WTP isn’t compelling as a metric, and should usually be combined with an explicit estimate of (utility/$). I don’t think I’ve seen philanthropists using WTP in this way and certainly wouldn’t expect to see someone suggesting that handing money to rich people is more effective since it can be done with lower overhead.
A 5% probability of disaster isn’t any more or less confident/extreme/radical than a 95% probability of disaster; in both cases you’re sticking your neck out to make a very confident prediction.
“X happens” and “X doesn’t happen” are not symmetrical once I know that X is a specific event. Most things at the level of specificity of “humans build an AI that outmaneuvers humans to permanently disempower them” just don’t happen.
The reason we are even entertaining this scenario is because of a specific argument that makes it seem very plausible. If that’s all you’ve got—if there’s no other source of evidence than the argument—then you’ve just got to start talking about the probability that the argument is right.
And the argument actually is a brittle and conjunctive thing. (Humans do need to be able to build such an AI by the relevant date, they do need to decide to do so, the AI they build does need to decide to disempower humans notwithstanding a prima facie incentive for humans to avoid that outcome.)
That doesn’t mean this is the argument or that the argument is brittle in this way—there might be a different argument that explains in one stroke why several of these things will happen. In that case, it’s going to be more productive to talk about that.
(For example, in the context of the multi-stage argument undershooting success probabilities, the point is that people will be competently trying to achieve X, and most of the uncertainty is in estimating how hard and how effectively people are trying—which is correlated across steps. So you would do better by trying to go for the throat and reason about the common cause of each success, and you will always lose if you don’t see that structure.)
And of course some of those steps may really just be quite likely and one shouldn’t be deterred from putting high probabilities on highly-probable things. E.g. it does seem like people have a very strong incentive to build powerful AI systems (and moreover the extrapolation suggesting that we will be able to build powerful AI systems is actually about the systems we observe in practice and already goes much of the way to suggesting that we will do so). Though I do think that the median MIRI staff-member’s view is overconfident on many of these points.
Is your impression that if customers were willing to pay for it, then that wouldn’t be sufficient cause to say that it benefited customers? (Does that mean that e.g. a standard ensuring that children’s food doesn’t cause discomfort also can’t be protected, since it benefits customers’ kids rather than customers themselves?)
These cases are also interesting for alignment agreements between AI labs, and it’s interesting to see it playing out in practice. Cullen wrote about this here much better than I will.
Roughly speaking, if individual consumers would prefer to use a riskier AI (because costs are externalized) then it seems like an agreement to make AI safer-but-more-expensive would run afoul of the same principles as this chicken-welfare agreement.
On paper, there are some reasons that the AI alignment case should be easier than the chicken-welfare case: (i) using unsafe AI hurts non-customer humans, and AI customers care more about other humans than they do about chickens, (ii) deploying unaligned AI actually likely hurts other AI customers in particular (since they will be the main ones competing with the unaligned but more sophisticated AI). So it’s likely that every individual AI customer would benefit.
Unfortunately, it seems like the same thing could be true in the chicken case—every individual customer could prefer the world with the welfare agreement—and it wouldn’t change the regulator’s decision.
For example, suppose that Dutch consumers eat 100 million chickens a year, 10/year for each of 10 million customers. Customer surveys discover that customers would only be willing to pay $0.01 for a chicken to have more space and a slightly longer life, but that these reforms increase chicken prices by $1. So the regulator strikes down the reform.
But with welfare standards in place, each customer pays an extra $10/year for chicken while 100 million chickens have improved lives: from each customer’s perspective, a cost of $0.0000001 per improved chicken, a hundred thousand times lower than their WTP. (This is the same dynamic described here.) So every chicken consumer prefers the world where the standards are in place, despite not being willing to pay money to improve the lives of the tiny number of chickens they eat personally. This seems to be a very common reaction to discussions of animal welfare (“what difference does my consumption make? I can’t change the way most chickens are treated...”)
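A quick sketch of the arithmetic in this hypothetical (all numbers are the made-up ones from the example above):

```python
customers = 10_000_000           # hypothetical Dutch chicken consumers
chickens_per_customer = 10       # chickens eaten per customer per year
total_chickens = customers * chickens_per_customer  # 100 million

wtp_per_chicken = 0.01           # a customer's WTP for one chicken's welfare
price_increase = 1.00            # actual extra cost per chicken

# The regulator compares individual WTP to individual cost, and strikes
# down the reform since $0.01 < $1.
assert wtp_per_chicken < price_increase

# Under a universal standard, each customer's extra spending improves the
# lives of *all* chickens, not just the ten they eat.
extra_cost_per_customer = chickens_per_customer * price_increase   # $10/year
cost_per_improved_chicken = extra_cost_per_customer / total_chickens
print(cost_per_improved_chicken)                    # 1e-07 dollars per chicken
print(wtp_per_chicken / cost_per_improved_chicken)  # WTP is ~100,000x larger
```

The gap between the two comparisons is just the ratio of chickens affected to chickens personally consumed.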
Because the number of chicken-eaters is so large, the relevant question in the survey should be “Would you prefer that someone else pay $X in order to improve chicken welfare?”, making a tradeoff between two strangers. That’s the relevant question for them, since the welfare standards mostly affect other people.
Analogously, if you ask AI consumers “Would you prefer to have an aligned AI, or a slightly more sophisticated unaligned AI?” they could easily all say “I want the more sophisticated one,” even if every single human would be better off if there were an agreement to make only aligned AI. If an anti-trust regulator used the same standard as in this case, it seems like they would throw out an alignment agreement because of that, even knowing that it would make every single human worse off.
I still think in practice AI alignment agreements would be fine for a variety of reasons. For example, I think if you ran a customer survey it’s likely people would say they prefer to use aligned AI even if it would disadvantage them personally, because public sentiment towards AI is very different and the regulatory impulse is stronger. (Though I find it hard to believe that anything would end up hinging on such a survey, and even more strongly I think it would never come to this because there would be much less political pressure to enforce anti-trust.)
I guess I wouldn’t recommend the donor lottery to people who wouldn’t be happy entering a regular lottery for their charitable giving.
If I won a donor lottery, I would consider myself to have no obligation whatsoever towards the other lottery participants, and I think many other lottery participants feel the same way. So it’s potentially quite bad if some participants are thinking of me as an “allocator” of their money. To the extent there is ambiguity in the current setup, it seems important to try to eliminate that.
I think that acceleration is autocorrelated—if things are accelerating rapidly at time T they are also more likely to be accelerating rapidly at time T+1. That’s intuitively pretty likely, and it seems to show up pretty strongly in the data. Roodman makes no attempt to model it, in the interest of simplicity and analytical tractability. We are currently in a stagnant period, and so I think you should expect continuing stagnation. I’m not sure exactly how large the effect is (and obviously it depends on the model), but I think it’s at least a 20-40 year delay. (There are two related angles to get a sense for the effect: one is to observe that autocorrelations seem to fade away on the timescale of a few doublings, rather than being driven by some amount of calendar time, and the other is to just look at the fact that we’ve had something like ~40 years of relative stagnation.)
I think it’s plausible that historical acceleration is driven by population growth, and that just won’t really happen going forward. So at a minimum we should be uncertain between Roodman’s model and one that separates out population explicitly, which will tend to stagnate around the time population is limited by fertility rather than productivity.
(I agree with Max Daniel below that I don’t think that Nordhaus’ methodology is inherently more trustworthy. I think it’s dealing with a relatively small amount of pretty short-term data, and is generally using a much more opinionated model of what technological change would look like.)
The relevant section is VII. Summarizing the six empirical tests:
You’d expect productivity growth to accelerate as you approach the singularity, but it is slowing.
The capital share should approach 100% as you approach the singularity. The share is growing, but at the slow rate of ~0.5%/year. At that rate it would take roughly 100 years to approach 100%.
Capital should get very cheap as you approach the singularity. But capital costs (outside of computers) are falling relatively slowly.
The total stock of capital should get large as you approach the singularity. In fact the stock of capital is slowly falling relative to output.
Information should become an increasingly important part of the capital stock as you approach the singularity. This share is increasing, but will also take >100 years to become dominant.
Wage growth should accelerate as you approach the singularity, but it is slowing.
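A back-of-the-envelope check on the timescale in the capital-share test, assuming (hypothetically) a current capital share of about 40% growing by roughly half a percentage point per year:

```python
capital_share = 0.40     # hypothetical current capital share of income
growth_per_year = 0.005  # ~0.5 percentage points per year, as in the test above
target = 0.99            # "approach 100%"

years = (target - capital_share) / growth_per_year
print(years)  # ~118 years, on the order of the ~100 years quoted above
```

At that linear rate the conclusion is insensitive to the exact starting share: anything between 30% and 60% still implies many decades.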
I would group these into two basic classes of evidence:
We aren’t getting much more productive, but that’s what a singularity is supposed to be all about.
Capital and IT extrapolations are potentially compatible with a singularity, but only on a timescale of 100+ years.
I’d agree that these seem like two points of evidence against singularity-soon, and I think that if I were going on outside-view economic arguments I’d probably be <50% singularity by 2100. (Though I’d still have a meaningful probability soon, and even at 100 years the prospect of a singularity would be one of the most important facts about the basic shape of the future.)
There are some more detailed aspects of the model that I don’t buy, e.g. the very high share of information capital and persistent slow growth of physical capital. But I don’t think they really affect the bottom line.
If the market can’t price 30-year cashflows, it can’t price anything, since for any infinitely-lived asset (eg stocks!), most of the present-discounted value of future cash flows is far in the future.
If an asset pays me far in the future, then long-term interest rates are one factor affecting its price. But it seems to me that in most cases that factor still explains a minority of variation in prices (and because it’s a slowly-varying factor it’s quite hard to make money by predicting it).
For example, there is a ton of uncertainty about how much money any given company is going to make next year. We get frequent feedback signals about those predictions, and people who win bets on them immediately get returns that let them show how good they are and invest more, and so that’s the kind of case where I’d be more scared of outpredicting the market.
So I guess that’s saying that I expect the relative prices of stocks to be much more efficient than the absolute level.
See eg this Ralph Koijen thread and linked paper, “the first 10 years of dividends only make up ~20% of the value of the stock market. 80% is due to value of cash flows beyond 10 years”
Haven’t looked at the claim but it looks kind of misleading. Dividend yield for SPY is <2% which I guess is what they are talking about? But buyback yield is a further 3%, and with a 5% yield you’re getting 40% of the value in the first 10 years, which sounds more like it. So that would mean that you’ve gotten half of the value within 13.5 years instead of 31 years.
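The yield arithmetic above can be sanity-checked under a toy model in which a constant total payout yield (dividends plus buybacks) is returned on remaining value each year:

```python
import math

def value_in_first_n_years(total_yield, n):
    """Fraction of present value paid out in the first n years, assuming a
    constant total payout yield (dividends + buybacks) on remaining value."""
    return 1 - (1 - total_yield) ** n

def years_to_half_value(total_yield):
    """Years until half of present value has been paid out."""
    return math.log(0.5) / math.log(1 - total_yield)

print(value_in_first_n_years(0.02, 10))  # ~0.18, matching "~20% in 10 years"
print(value_in_first_n_years(0.05, 10))  # ~0.40
print(years_to_half_value(0.05))         # ~13.5 years
print(years_to_half_value(0.02))         # ~34 years under this toy model
```

The toy model reproduces the 20%/40% figures and the ~13.5-year half-life at a 5% yield; at a 2% yield it gives ~34 years rather than the 31 quoted, presumably because the thread’s figure comes from actual data rather than a constant-yield assumption.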
Technically the stock is still valued based on the future dividends, and a buyback is just decreasing outstanding shares and so increasing earnings per share. But for the purpose of pricing the stock it should make no difference whether earnings are distributed as dividends or buybacks, so the fact that buybacks push cashflows to the future can’t possibly affect the difficulty of pricing stocks.
Put a different way, the value of a buyback to investors doesn’t depend on the actual size of future cashflows, nor on the discount rate. Those are both cancelled out because they are factored into the price at which the company is able to buy back its shares. (E.g. if PepsiCo was making all of its earnings in the next 5 years, and ploughing them into buybacks, after which they made a steady stream of not-much-money, then PepsiCo prices would still be equal to the NPV of dividends, but the current PepsiCo price would just be an estimate of earnings over the next 5 years and would have almost no relationship to long-term interest rates.)
Even if this is right it doesn’t affect your overall point too much though, since 10-20 year time horizons are practically as bad as 30-60 year time horizons.
I think the market just doesn’t put much probability on a crazy AI boom anytime soon. If you expect such a boom then there are plenty of bets you probably want to make. (I am personally short US 30-year debt, though it’s a very small part of my AI-boom portfolio.)
I think it’s very hard for the market to get 30-year debt prices right because the time horizons are so long and they depend on super hard empirical questions with ~0 feedback. Prices are also determined by supply and demand across a truly huge number of traders, and making this trade locks up your money forever and can’t be leveraged too much. So market forecasts are basically just a reflection of broad intellectual consensus about the future of growth (rather than views of the “smart money” or anything), and the mispricing is just a restatement of the fact that AI-boom is a contrarian position.
Some scattered thoughts (sorry for such a long comment!). Organized in order rather than by importance—I think the most important argument for me is the analogy to computers.
It’s possible to write “Humanity survives the next billion years” as a conjunction of a billion events (humanity survives year 1, and year 2, and...). It’s also possible to write “humanity goes extinct next year” as a conjunction of a billion events (Alice dies, and Bob dies, and...). Both of those are quite weak prima facie justifications for assigning high confidence. You could say that the second conjunction is different, because the billionth person is very likely to die once the others have died (since there has apparently been some kind of catastrophe), but the same is true for survival. In both cases there are conceivable events that would cause every term of the conjunction to be true, and we need to address the probability of those common causes directly. Being able to write the claim as a conjunction doesn’t seem to help you get to extreme probabilities without an argument about independence.
I feel you should be very hesitant to assign 99%+ probabilities without a good argument, and I don’t think this is about anchoring to percent. The burden of proof gets stronger and stronger as you move closer to 1, and 100 is getting to be a big number. I think this is less likely to be a tractable disagreement than the other bullets but it seems worth mentioning for completeness. I’m curious if you think there are other natural statements where the kind of heuristic you are describing (or any other similarly abstract heuristic) would justifiably get you to such high confidences. I agree with Max Daniel’s point that it doesn’t work for realistic versions of claims like “This coin will come up heads 30 times in a row.” You say that it’s not exclusive to simplified models but I think I’d be similarly skeptical of any application of this principle. (More generally, I think it’s not surprising to assign very small probabilities to complex statements based on weak evidence, but that it will happen much more rarely for simple statements. It doesn’t seem promising to get into that though.)
I think space colonization is probably possible, though getting up to probabilities like 50% for space colonization feasibility would be a much longer discussion. (I personally think >50% probability is much more reasonable than <10%.) If there is a significant probability that we colonize space, and that spreading out makes the survival of different colonists independent (as it appears it would), then it seems like we end up with some significant probability of survival. That said, I would also assign ~1/2 probability to surviving a billion years even if we were confined to Earth. I could imagine being argued down to 1/4 or even 1/8 but each successive factor of 2 seems much harder. So in some sense the disagreement isn’t really about colonization.
Stepping back, I think the key object-level questions are something like “Is there any way to build a civilization that is very stable?” and “Will people try?” It seems to me you should have a fairly high probability on “yes” to both questions. I don’t think you have to invoke super-aligned AI to justify that conclusion—it’s easy to imagine organizing society in a way which drives existing extinction risks to negligible levels, and once that’s done it’s not clear where you’d get to 90%+ probabilities for new risks emerging that are much harder to reduce. (I’m not sure which step of this you get off the boat for—is it that you can’t imagine a world that say reduced the risk of an engineered pandemic killing everyone to < 1/billion per year? Or that you think it’s very likely other much harder-to-reduce risks would emerge?)
A lot of this is about burden of proof arguments. Is the burden of proof on someone to exhibit a risk that’s very hard to reduce, or someone to argue that there exists no risk that is hard to reduce? Once we’re talking about 10% or 1% probabilities it seems clear to me that the burden of proof is on the confident person. You could try to say “The claim of ‘no bad risks’ is a conjunction over all possible risks, so it’s pretty unlikely” but I could just as well say “The claim about ‘the risk is irreducible’ is a conjunction over all possible reduction strategies, so it’s pretty unlikely” so I don’t think this gets us out of the stalemate (and the stalemate is plenty to justify uncertainty).
I do furthermore think that we can discuss concrete (kind of crazy) civilizations that are likely to have negligible levels of risk, given that e.g. (i) we have existence proofs for highly reliable machines over billion-year timescales, namely life, (ii) we have existence proofs for computers if you can build reliable machinery of any kind, (iii) it’s easy to construct programs that appear to be morally relevant but which would manifestly keep running indefinitely. We can’t get too far with this kind of concrete argument, since any particular future we can imagine is bound to be pretty unlikely. But it’s relevant to me that e.g. stable-civilization scenarios seem about as gut-level plausible to me as non-AI extinction scenarios do in the 21st century.
Consider the analogous question “Is it possible to build computers that successfully carry out trillions of operations without errors that corrupt the final result?” My understanding is that in the early 20th century this question was seriously debated (though that’s not important to my point), and it feels very similar to your question. It’s very easy for a computational error to cascade and change the final result of a computation. It’s possible to take various precautions to reduce the probability of an uncorrected error, but why think that it’s possible to reduce that risk to levels lower than 1 in a trillion, given that all observed computers have had fairly high error rates? Moreover, it seems that error rates are growing as we build bigger and bigger computers, since each element has an independent failure rate, including the machinery designed to correct errors. To really settle this we need to get into engineering details, but until you’ve gotten into those details I think it’s clearly unwise to assign very low probability to building a computer that carries out trillions of steps successfully—the space of possible designs is large and people are going to try to find one that works, so you’d need to have some good argument about why to be confident that they are going to fail.
You could say that computers are an exceptional example I’ve chosen with hindsight. But I’m left wondering if there are any valid applications of this kind of heuristic—what’s the reference class of which “highly reliable computers” are exceptional rather than typical?
If someone said: “A billion years is a long time. Any given thing that can plausibly happen should probably be expected to happen over that time period” then I’d ask about why life survived the last billion years.
You could say that “a billion years” is a really long time for human civilization (given that important changes tend to happen within decades or centuries) but not a long time for intelligent life (given that important changes take millions of years). This is similar to what happens if you appeal to current levels of extinction risk being really high. I don’t buy this because life on earth is currently at a period of unprecedentedly rapid change. You should have some reasonable probability of returning to more historically typical timescales of hundreds of millions of years, which in turn gives you a reasonable overall probability of surviving for hundreds of millions of years. (Actually I think we should have >50% probability of reversion to slower timescales of change, since we can tell that the current period of rapid growth will soon be over. Over our history rapid change and rapid growth have basically coincided, so it’s particularly plausible that a return to slow growth will also mean a return to slow change.)
Applying the rule of thumb for estimating lifetimes to “the human species” rather than “intelligent life” seems like it’s doing a huge amount of work. It might be reasonable to do the extrapolation using some mixture between these reference classes (and others), but in order to get extreme probabilities for extinction you’d need to have an extreme mixture. This is part of the general pattern why you don’t usually end up with 99% probabilities for interesting questions without real arguments—you need to not only have a way of estimating that has very high confidence, you need to be very confident in that way of estimating.
You could appeal to some similar outside view to say “humanity will undergo changes similar in magnitude to those that have occurred over the last billion years;” I think that’s way more plausible (though I still wouldn’t believe 99%) but I don’t think that it matters for claims about the expected moral value of the future.
The doomsday argument can plausibly arrive at very high confidences based on anthropic considerations (if you accept those anthropic principles with very high confidence). I think many long-termists would endorse the conclusion that the vast majority of observers like us do not actually live in a large and colonizable universe—not at 99.999999% but at least at 99%. Personally I would reject the inference that we probably don’t live in a large universe because I reject the implicit symmetry principle. At any rate, these lines of argument go in a rather different direction than the rest of your post and I don’t feel like it’s what you are getting at.
Scaling down all the amounts of time, here’s how that situation sounds to me: US output doubles in 15 years (basically the fastest it ever has), then doubles again in 7 years. The end of the 7 year doubling is the first time that your hypothetical observer would say “OK yeah maybe we are transitioning to a new faster growth mode,” and stuff started getting clearly crazy during the 7 year doubling. That scenario wouldn’t be surprising to me. If that scenario sounds typical to you then it’s not clear there’s anything we really disagree about.
Moreover, it seems to contradict your claim that 0.14% growth was already high by historical standards.
0.14%/year growth sustained over 500 years is a doubling. If you sustained that from 5000BC to 1000AD, it would imply 4000x growth. We have a lot of uncertainty about how much growth actually occurred over that period, but we’re pretty sure it wasn’t 4000x (e.g. going from 1 million people to 4 billion people). Standard kind-of-made-up estimates are more like 50x (e.g. those cited in Roodman’s report), i.e. a growth rate about half that fast.
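The arithmetic here can be checked directly (a quick sketch; the exact factor depends on whether you compound annually or continuously, but it comes out near 4000x either way):

```python
import math

# 0.14%/year compounded annually: how long to double?
r = 0.0014
doubling_time = math.log(2) / math.log(1 + r)  # ~495 years

# Sustained for the 6000 years from 5000BC to 1000AD: ~12 doublings
growth_factor = (1 + r) ** 6000  # ~4400x

# Rate implied by the more standard ~50x estimate over the same span
implied_rate = 50 ** (1 / 6000) - 1  # ~0.065%/year, about half of 0.14%

print(f"doubling time: {doubling_time:.0f} years")
print(f"growth factor: {growth_factor:.0f}x")
print(f"implied rate for 50x: {implied_rate:.4%}/year")
```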
There is lots of variance in growth rates, and growth would temporarily rise above that level, since populations grow much faster than that when they have enough resources. That makes it harder to tell what’s going on, but I think you should still be surprised to see such high growth rates sustained over many centuries.
(assuming you discount 1350, as I do, as an artefact of recovering from various disasters)
This doesn’t seem to work, especially if you look at the UK. Just consider a long enough period of time (like 1000AD to 1500AD) to include both the disasters and the recovery. At that point, disasters should if anything decrease growth rates. Yet this period saw historically atypically fast growth.
Some thoughts on the historical analogy:
If you look at the graph at the 1700 mark, GWP is seemingly on the same trend it had been on since antiquity. The industrial revolution is said to have started in 1760, and GWP growth really started to pick up steam around 1850. But by 1700 most of the Americas, the Philippines, and the East Indies were directly ruled by European powers.
I think European GDP was already pretty crazy by 1700. There’s been a lot of recent arguing about the particular numbers and I am definitely open to just being wrong about this, but so far nothing has changed my basic picture.
After a minute of thinking, my best guess for the most reliable time series was the Maddison project’s. I pulled their dataset from here.
Here’s UK population:
1000AD: 2 million
1500AD: 3.9 million (0.14%/year growth)
1700AD: 8.6 million (0.39%)
1820AD: 21.2 million (0.76%)
A 0.14%/year growth rate was already very fast by historical standards, and by 1700 things seemed really crazy.
Here’s population in Spain:
1000AD: 4 million
1500AD: 6.8 million (0.11%)
1700AD: 8.8 million (0.13%)
1820AD: 12.2 million (0.28%)
The 1500-1700 acceleration is less marked here, but growth still seems to have been fast.
Here’s the world using the data we’ve all been using in the past (which I think is much more uncertain):
10000BC: 4 million
3000BC: 14 million (0.02%)
1000BC: 50 million (0.06%)
1000AD: 265 million (0.08%)
1500AD: 425 million (0.09%)
1700AD: 610 million (0.18%)
1820AD: 1 billion (0.41%)
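The annualized rates above can be reproduced from the rounded population figures (a sketch; the quoted percentages presumably come from the unrounded Maddison data, so expect small differences in the last digit):

```python
def annual_growth(p0, p1, years):
    """Compound annual growth rate between two population levels, in %/year."""
    return ((p1 / p0) ** (1 / years) - 1) * 100

# UK population in millions (Maddison project, rounded as quoted above);
# the Spain and world series can be checked the same way.
uk = [(1000, 2.0), (1500, 3.9), (1700, 8.6), (1820, 21.2)]
for (y0, p0), (y1, p1) in zip(uk, uk[1:]):
    print(f"{y0}-{y1}: {annual_growth(p0, p1, y1 - y0):.2f}%/year")
```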
This puts the 0.14%/year growth in the UK in context, and also suggests that things were generally blowing up by 1700AD.
I think that looking at the country-level data is probably better since it’s more robust, unless your objection is “GWP isn’t what matters because some countries’ GDP will be growing much faster.”
I’m not sure what difference in prioritization this would imply or if we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse and so erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).
It doesn’t currently seem like thinking or working on this issue should be a priority for me (even within EA, other people seem to have a clear comparative advantage over me). I would feel differently if this were an existential issue or had a high enough impact, and I mostly dropped the conversation when it no longer seemed like that was at issue / it seemed to be in the same quantitative reference class as other kinds of political maneuvering. I generally have a stance of just doing my thing rather than trying to play expensive political games, knowing that this will often mean losing political influence.
It does feel like your estimates for the expected harms are higher than mine, which I’m happy enough to discuss, but I’m not sure there’s a big disagreement (and it would have to be quite big to change my bottom line).
I was trying to get at possible quantitative disagreements by asking things like “what’s the probability that making pro-speech comments would itself be a significant political liability at some point in the future?” I think I have a probability of perhaps 2-5% on “meta-level pro-speech comments like this one eventually become a big political liability and participating in such discussions causes Paul to miss out on at least one significant opportunity to do good or have influence.”
I’m always interested in useful thoughts about cost-effective things to do. I could also imagine someone making the case that “think about it more” is cost-effective for me, but I’m more skeptical of that (I expect they’d instead just actually do that thinking and tell me what they think I should do differently as a result, since the case for them thinking will likely be much better than the case for me doing it). I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn’t intend for the grandparent to be pushing against that.
For me it seems like one easy and probably-worthwhile intervention is to (mostly) behave according to a set of liberal norms that I like (and I think remain very popular) and to be willing to pay costs if some people eventually reject that behavior (confident that there will be other communities that have similar liberal norms). Being happy to talk openly about “cancel culture” is part of that easy approach, and if that led to serious negative consequences then it would be a sign that the issue is much more severe than I currently believe and it’s more likely I should do something. In that case I do think it’s clear there is going to be a lot of damage, though again I think we differ a bit in that I’m more scared about the health of our institutions than people like me losing influence.
My process was to check the “About the forum” link on the left hand side, see that there was a section on “What we discourage” that made no mention of hiring, then search for a few job ads posted on the forum and check that no disapproval was expressed in the comments of those posts.
I think that a scaled up version of GPT-3 can be directly applied to problems like “Here’s a situation. Here’s the desired result. What action will achieve that result?” (E.g. you can already use it to get answers like “What copy will get the user to subscribe to our newsletter?” and we can improve performance by fine-tuning on data about actual customer behavior or by combining GPT-3 with very simple search algorithms.)
I think that if GPT-3 was more powerful then many people would apply it to problems like that. I’m concerned that such systems will then be much better at steering the future than humans are, and that none of these systems will be actually trying to help people get what they want.
A bunch of people have written about this scenario and whether/how it could be risky. I wish that I had better writing to refer people to. Here’s a post I wrote last year to try to communicate what I’m concerned about.
Hires would need to be able to move to the US.