Physician—general internal medicine. Interested in progress, existential risk and epistemology. Did some earning to give, but now have my doubts. Heavier on ideas than execution.
astupple
The danger of nuclear war is greater than it has ever been. Why donating to and supporting Back from the Brink is an effective response to this threat
Can a Transparent Idea Directory reduce transaction costs of new ideas?
I suspect many EAs, like me, do a lot of “micro advising” to friends and younger colleagues. (In medicine, this happens almost daily.) I know I’m an amateur, and I do my best to direct people to the available resources, but it seems like creating some basic pointers on how to give casual advice may be helpful.
Alternatively, I see the value in a higher activation energy for potentially reachable advisees: if they truly are considering adjusting their careers, they’ll take the time to look at the official EA material.
Nonetheless, it seems like even this advice to amateurs like myself could be helpful—“Give your best casual advice. If things are promising, give them links to official EA content.”
Refuting longtermism with Fermat’s Last Theorem
He was incentivized to decide whether to quit or to persevere (at the cost of other opportunities). For accuracy, all he needed was “likely enough to be worth it.” And yet, at the moment when it should have been most evident what this likelihood was, he was so far off in his estimate that he almost quit.
Imagine a good EA stopping him in his moment of despair and encouraging him, with all the tools available, to create the most accurate estimate: I bet he’d still consider quitting. He might even be more convinced that it’s hopeless.
How could it be that ideas are progressively harder to find AND we waited so long for the bicycle? How can we know how many undiscovered bicycles, i.e. low-hanging fruit, are out there?
It seems that as progress progresses and the adjacent possible expands, the number of undiscovered bicycles within easy reach expands too.
This is fantastically helpful, thank you so much for taking the time.
Makes me ponder the value of an “EA Curator.” There’s such an overwhelming amount of mind-bending content in the EA universe and its adjacent possible. This list of podcasts clearly only scratches the surface, yet I find myself wondering how I’m going to fit this in with the dozens of other podcast episodes, audiobooks, and print books I have on my plate, let alone other modes of discovery (and worse, how this at some point impinges on the time I have to do actual work on ideas that are so important).
Many EAs have lists of books… perhaps there could be an EA Reddit thread for simply voting up or down inspiringly-EA books, articles, blog posts, podcast episodes, etc.?
Or, just a list of EA lists? Rob Wiblin’s list of podcasts indexed along with anyone else’s podcast list? Bonus for a method to vote individual lists up or down?
While I completely see what you’re saying, at the risk of sounding obtuse, I think the opposite of your opener may be true.
“People who do things are not, in general, idea constrained”
The contrary of this statement may be the fundamental point of EA (or at least a variant of it): People who do things in general (outside of EA) tend to act on bad ideas. In fact, EA is more about the ideas underlying what we do than it is about the doing itself. Millions of affluent people are doing things (going to school, work, upgrading their cars and homes, giving to charity), without examining the underlying ideas. EA’s success is its ability to convert doers to adopt its ideas. It’s creating a pool of doers who use EA ideas instead of conventional wisdom.
Perhaps there are two classes of doers, those already in the EA community who “get it,” and those outside who are just plugging away at life. When I think of filling talent gaps, I think that can be done by (A) EA community members developing skills, and (B) recruiting skilled people to join the community. Group A probably doesn’t need good ideas because they’ve already accepted the ideas of our favorite thinkers etc. The marginal benefit of even better ideas is small. Instead, group A is better off if it simply gets down to the hard work of growing talent. But group B is laboring under bad ideas, and for many, it might not take much at all to get them to swap their bad ideas for EA ideas. My guess is that, to grow talent, it is easier to convert doers from group B than to optimize doers in group A (which is certainly not to say group A shouldn’t do the hard work of optimizing their talent).
There is an odd circularity here: I think I just argued myself out of my original stance. I seem to have concluded that we shouldn’t focus on the ideas of the EA community (which was my original intention) and should instead focus on methods of recruiting.
Maybe I’m arguing that we should develop recruiting ideas?
Also: any suggestions for good formal discussions of the philosophy and sociology of ideas (beyond the slightly nauseating pop-business literature)? “Where Good Ideas Come From” by Steven Johnson is excellent, but not philosophically rigorous.
1. The Singularity is Near (changed everything for me: it made me quit my job and go to med school. I’ve since purchased it for many people, but I no longer do. Instead, I have been sending people copies of Homo Deus by Yuval Noah Harari: broader scope, more sociology, psychology, and ethics.)
2. The Selfish Gene (I think this moored me to reality more closely than Steven Pinker’s work.)
3. The Black Swan (Thinking, Fast and Slow, Freakonomics, Predictably Irrational, etc. are probably better explications of irrationality, and Taleb is a pretty clear victim of his own criticisms, but Taleb’s style really shook me and I think it is the best for changing minds.)
4. Waking Up (A careful reading of Ken Wilber has been most influential for me, but I don’t recommend him because he needs a very skeptical eye; I’ve been lucky. Waking Up does most of the same work, but doesn’t get lost in the rabbit hole.)
5. Doing Good Better (not a shocker, but it really is an accessible slam dunk)
I love it. Creating lists of plausible outcomes is very valuable; we can leave aside the idea of assigning probabilities.
Basically, predictions about the future are fine as long as they include the caveat “unless we figure out something else.” That caveat can’t be ascribed a meaningful probability because we can’t know discoveries before we discover them; we can’t know things before we know them.
Beautiful! We can’t determine “something we haven’t thought of” as simply “1 - all the things we’ve thought of”.
I mistakenly included my response to another comment, I’m pasting it below.
I would guess the scientific breakthrough that led to nuclear weapons would have been almost impossible to predict unless you were Einstein or Einstein-adjacent.
Great point—Leo Szilard foresaw nuclear weapons and collaborated with Einstein to persuade FDR to start the Manhattan Project. Szilard would have done extremely well in a Tetlock scenario. However, this also conforms with my point—Szilard was able to successfully predict because he was privy to the relevant discoveries. The remainder of the task was largely engineering (again, not to belittle those discoveries). I think this also applies to superforecasters—they become like Szilard, learning of the relevant discoveries and then foreseeing the engineering steps.
Regarding sci-fi, Szilard appears to have been influenced by H.G. Wells’s The World Set Free in 1913. But Wells was not just a writer: he was familiar with the state of atomic physics, and therefore with many of the relevant discoveries; he even dedicated the book to an atomic scientist. And Wells’s “atomic bombs” were lumps of a radioactive substance that released energy from a chain reaction, not a huge stretch from what was already known at the time. It’s pretty incredible that Szilard is later credited with foreseeing nuclear chain reactions in 1933, shortly after the discovery of the neutron, and that he was likely influenced by Wells. So Wells was a great thinker, and this nicely illustrates how knowledge grows: by excellent guesses refined by criticism and experiment. But I don’t think we are seeing knowledge of discoveries before they are discovered.
Szilard’s prediction in 1939 is very different from a similar prediction in 1839. Any statement about weapons in 1839 is like Thomas Malthus’s predictions: made in a state of utter ignorance and unknowability about the eventual discoveries relevant to his forecast (nitrogen fixation and genetic modification of crops).
And the same is true of discoveries in the long-term future.
Objections to my post read to me like “but people have forecasted things shortly before they appeared.” True, but those forecasts have much of the relevant discovery already factored in, though largely invisible to non-experts.
Szilard must have seemed like a prophet to someone unfamiliar with the state of nuclear physics. You could understand a Tetlock who finds these seeming prophets among us and declares that some amount of prophecy is indeed possible. But to Wells, Szilard was just making a reasonable step from Wells’s idea, which was itself a reasonable step from earlier discoveries.
As for science fiction writers in general, that’s interesting. Obviously, selection effects will be strong (stories that turn out true will become famous), and good science fiction writers are more familiar with the state of the science than others. And finally, it’s one thing to make a great guess about the future. It’s entirely different to quantify the likelihood of this guess—I doubt even Jules Verne would try to put a number on the likelihood that submarines would eventually be developed.
I have to look at Tetlock again; there’s a difference between predicting what will be determined to be the cause of Arafat’s death (historical fact-collecting) and predicting how new discoveries will affect future politics. Nonetheless, I wouldn’t be surprised if some people are better than others at predicting future events in human affairs. An example would be predicting that Moore’s Law holds next year. In such a case, one could understand the engineering that is necessary to improve computer chips, perhaps understanding that production of a necessary component will halve in price next year based on new supplies being uncovered in some mine. This is knowledge of slight modifications to current understanding (basically, engineering vs. basic science research). It’s certainly important and impressive, but it’s more refining existing knowledge than making new discoveries. Though I do recognize this response reads like me moving the goalposts…
Nice point about human development… I’m not sure how it relates. It seems to me this is biology playing out at a predictable pace. I’d bet that the elements of language development that are not dependent on biology vary greatly in their timelines, and the regularity that this research is discovering is almost purely biological. If we had the technology to do so, we could alter this biological development, and suddenly the old rules about milestones would fail. Put another way: reproducible experiments in psychology tell us about the physiology of the brain, but nothing about minds, because mental phenomena are not predictable.
The periodic table is a perfect example of what I’m talking about: Mendeleev discovered the periodicity, and was then able to predict features of the natural world (that certain chemical properties would conform to his theory). So periodicity was the discovery, and fitting in the elements just conformed to the original discovery.
Here’s another way to put my argument: imagine if every person were given a Honda Civic at age 16. You could imagine that most people would drive Honda Civics. An alien observer might conclude, “humans are pre-programmed to choose Honda Civics.” But in fact, we are free to choose any car we want; it’s just really handy to keep driving the car we were given. Similarly in the real world: there are commonalities and propensities that superforecasters can pick up on, but that doesn’t mean they can’t be overwritten if someone has a mind to do so.
Great points though, I’ve got some thinking to do.
An update that came from the discussion:
Let’s split future events into two groups. 1) Events that are not influenced by people and 2) Events that are influenced by people.
In 1, we can create predictive models, use probability, even calculate uncertainty. All the standard rules apply, Bayesian and otherwise.
In 2, we can still create predictive models, but they’ll be nonsensical. That’s because we cannot know how knowledge creation will affect those events. We don’t even need any fancy reasoning; it’s already implied in the definitions of terms like knowledge creation and discovery. You can’t discover something before you discover it, before it’s created.
So, up until recently, the bodies of the solar system fell into category 1. We can predict their positions many years hence, as long as people don’t get involved. However, once we are capable, there’s no way now to know what we’ll do with the planets and asteroids in the future. Maybe we’ll find use for some mineral found predominantly in some asteroids, or maybe we’ll use a planet to block heat from the sun as it expands, or maybe we’ll detect some other risk/benefit and make changes accordingly. In fact, this last type of change will predominate the farther we get into the future.
This is an extreme example, but it applies across the board. Any time human knowledge creation impacts a system, there’s no way to model that impact before the knowledge is created.
Therefore, longtermism hinges on the assumption that we have some idea of how to impact the long-term future. But even more than in the solar system example, that future will be overwhelmingly dominated by new knowledge, and hence unknowable to us today, unable to be anticipated.
Sure, we can guess, and in the case of known future threats like nuclear war, we should guess and should try to ameliorate risk. But those problems apply to the very near future as well; they are problems facing us today (that’s why we know a fair bit about them). We shouldn’t waste effort trying to calculate the risk, because we can’t do that for items in group 2. Instead, we know from our best explanations that nuclear war is a risk.
In this way the threat of nuclear war is like the turkey: if the turkey even hears a rumor about Thanksgiving traditions, should it sit down and try to update its priors? Or should it take the entirely plausible theory seriously, try to test it (have other turkeys been slaughtered? are there any turkeys over a year old?), and decide whether it’s worth taking some precautions?
Thank you!
A Thanksgiving turkey has an excellent model that predicts the farmer wants it to be safe and happy. But an explanation of Thanksgiving traditions tells us a lot more about the risk of slaughter than the number of days the turkey has been fed and protected does.
With nuclear war, we have explanations for why nuclear exchange is possible, including as an outcome of a conflict.
Just like with the turkey, we should pay attention to the explanation, not just try to make predictions based on past data.
With all of this, probability terminology is baked into the language, and it is hard to speak without incorporating it. The previous post was co-authored; I wanted to remove that phrase, but concessions were made.
Yes, but what I’m getting at is: how do we know there’s a limited number of low-hanging fruit? Or, as we make progress, doesn’t previously high fruit come into reach? AND, more progress opens more markets/fields.
It seems to me low-hanging fruit is a bad analogy because there’s no way to know the number of undiscovered fruit out there. And perhaps it’s infinite. Or, it INCREASES the more we figure out.
My two cents: stagnation isn’t due to the supply of good ideas waiting to be discovered; it’s the stifling of free and open exploration by norms that promote the institutionalization of discovery.