I follow Crocker’s rules.
I believe that was a joke.
See also Tomasik 2017.
No consensus as far as I know, but there’s Trophic Cascades Caused by Fishing (Brian Tomasik, 2015). Summary:
One of the ecological effects of human fishing is to change the distribution of prey animals in the food web. Some evidence suggests that harvesting of big predatory fish may increase populations of smaller forage fish and decrease zooplankton populations. Meanwhile, harvesting forage fish directly (to eat as sardines/anchovies or to feed to farmed fish, pigs, or chickens) should tend to decrease forage-fish populations and increase zooplankton populations. On the other hand, it may also be that harvesting more fish reduces total fish biomass in the ocean, without significantly increasing smaller fish populations. There are many other trends that might be observed, and generalization is difficult.
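As an illustration of the first claim (my own toy model, not from Tomasik’s article), a three-level Lotka-Volterra chain with made-up parameters reproduces the direction of the cascade: harvesting the top predator raises forage-fish numbers and lowers zooplankton numbers:

```python
# Purely illustrative: a toy three-level food chain (zooplankton ->
# forage fish -> predatory fish) with made-up parameters, not taken
# from Tomasik's article, just to show the direction of the cascade.

def simulate(harvest_rate, steps=300_000, dt=0.01):
    z, f, p = 1.0, 0.5, 0.25  # zooplankton, forage fish, predators
    for _ in range(steps):
        dz = z * (1.0 - z - 0.8 * f)             # logistic base, grazed by forage fish
        df = f * (0.5 * z - 0.4 * p - 0.1)       # eats zooplankton, eaten by predators
        dp = p * (0.3 * f - 0.1 - harvest_rate)  # eats forage fish, harvested by humans
        z, f, p = z + dz * dt, f + df * dt, p + dp * dt
    return round(z, 2), round(f, 2), round(p, 2)

print("no harvest:      ", simulate(0.0))  # ~(0.73, 0.33, 0.67)
print("predator harvest:", simulate(0.1))  # ~(0.47, 0.67, 0.33): forage fish up, zooplankton down
```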
Was this practice clearly delineated as an experiment to the participants?
Related question: How does one become someone like Carl Shulman (or Wei Dai, for that matter)?
The story I know is that if you can change the course of such an object by a slight amount early enough, that should be sufficient to cause significant deviations later in its course. Am I mistaken about this, and is the force not strong enough because the achievable deviation is far too small?
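For intuition, here’s a back-of-the-envelope sketch of my own (assuming a one-off impulsive delta-v and straight-line extrapolation; real orbital mechanics can change the picture, often amplifying the effect):

```python
# Back-of-the-envelope: how far does a small early velocity change
# displace an object after a long coast? Assumes an impulsive delta-v
# and straight-line extrapolation, ignoring gravity, so treat this as
# an order-of-magnitude check only.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def deflection_km(delta_v_m_per_s: float, lead_time_years: float) -> float:
    """Lateral displacement (km) from delta_v applied lead_time early."""
    return delta_v_m_per_s * lead_time_years * SECONDS_PER_YEAR / 1000

# 1 cm/s applied a century in advance:
print(deflection_km(0.01, 100))  # ~31,558 km, roughly five Earth radii
```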
This is a great post, strong upvote.
I am confused by your description of who “handles” what. Especially for threats that move at the speed of light (solar flares, superflares, supernova explosions, gamma-ray bursts, quasar ignition &c), it seems like the only option is increasing civilizational robustness, right? Additionally, you say that for rogue celestial bodies, “Managing this threat is futile”, which is true at the current level of technology, but if we had the energy of our sun available, surely we could redirect the path of such an object if it were detected early enough?
The only threat that looks like it truly falls into the category “unmanageable” is false vacuum decay, except maybe by spreading out as much as possible through the reachable universe, and by avoiding destabilising fundamental-physics experiments (though, ah, fictional evidence).
I felt-sense-disagree. (I haven’t yet downvoted the article, but I strongly considered it.) I’ll try to explore why I feel that way.
One reason probably is that I treat posts as making a different claim than other forms of publishing on this forum (and LessWrong): they (implicitly) claim to be finished & polished content. When I open a post, I expect the author to have done work that upholds standards of scholarship and care, which this post doesn’t show. I’d’ve been far less disappointed if this were a comment or a shortform post.
The other part is probably paying attention to status and the standards applied to people with high status: I expect high-status people not to put much effort into whatever they produce, as they can coast on status, which seems like what’s happening here. (Although one could argue that the MIRI faction is losing status/is already at low-ish status, and this consideration doesn’t apply here.)
Additionally, I was disappointed that the text didn’t say anything that I wouldn’t have expected, which probably fed into my felt-sense of wanting to downvote. I’m not sure I reflectively endorse this feeling.
Tomasik 2019, Tomasik 2017a, and Tomasik 2017b argue against this:
Even if insects are unlikely to be sentient, assuming that sentience increases sublinearly with neuron count (i.e. marginally decreasing) and that there is a nonzero probability of insect sentience implies that eating bigger animals is probably better (see the sketch after this list).
The conditions of insects in insect farming are pretty bad.
Eating plants is usually more efficient.
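To illustrate the first point, here’s a toy calculation (all numbers are placeholders I made up; it assumes moral weight scales with the square root of neuron count, one possible formalisation of “marginally decreasing sentience”):

```python
# Toy illustration: if moral weight grows sublinearly with neuron count
# (here: square root, one arbitrary choice), many small animals can
# carry more expected moral weight per kg of food than one large
# animal. All numbers below are made up for illustration; see
# Tomasik's essays for real estimates.

import math

def expected_weight_per_kg(p_sentient, neurons, kg_food_per_animal):
    """Expected moral weight per kg of food from one species."""
    return p_sentient * math.sqrt(neurons) / kg_food_per_animal

# Order-of-magnitude neuron counts; probabilities and edible-mass
# figures are placeholders.
cow     = expected_weight_per_kg(p_sentient=0.95, neurons=3e9, kg_food_per_animal=200)
cricket = expected_weight_per_kg(p_sentient=0.1,  neurons=1e6, kg_food_per_animal=0.0003)

print(f"cow:     {cow:,.0f} weight-units per kg")      # ~260
print(f"cricket: {cricket:,.0f} weight-units per kg")  # ~333,333
# Even at a 10% chance of sentience, the cricket figure dwarfs the
# cow's: sqrt() shrinks the cow's neuron advantage, body mass doesn't.
```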
This sequence is spam and should be deleted.
I think this mainly made thinking about the future low-status and less rigorous.
Examples: humans are the main actors in the future, breaking the laws of physics is to be expected, aliens are humanoid and at ~the same level as humans in terms of technological development, and artificial intelligence and possible automation of labor don’t change the basic economic setup at all.
Interesting, good to know!
It’s not particularly my place to discuss this. I’m confused by this statement, though.
Precision of Sets of Forecasts
Someone on atomically precise manufacturing: a “big if true” thing that is floating around EA but never really gets tackled head-on. I don’t know how good Eric Drexler is on podcasts, but he’d be an obvious candidate.
Or whoever wrote this report.
Carl Shulman (again); his interviews on the Dwarkesh Patel podcast were incredible, and there seems to be potential for more.
Vaclav Smil, who appears to be very knowledgeable, with a comprehensive model of the entire world. His books are filled with facts.
Lukas Finnveden, about his blog posts on the altruistic implications of Evidential Cooperation in Large Worlds.
Some employee of MIRI who is not Yudkowsky. I suggest:
Tsvi Benson-Tilsen (blog), who has appeared on at least one podcast which I liked. Has looked into human intelligence enhancement and a variety of other problems such as communication. Generally has longer AI timelines.
Or Scott Garrabrant, but I don’t know how interesting his interview would be for a nontechnical audience.
Another interview on wild animal welfare, perhaps with someone from Wild Animal Initiative.
Perhaps invite Brian Tomasik on the podcast?
Romeo Stevens (blog), mainly for his approach to his career: he founded a startup to support himself early on, and is now independent. He doesn’t tend to write his ideas down; here’s an interview that details some of his ideas.
Although I’d consider the counter-arguments against multiplicative decomposition to be decent evidence against it.
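For readers unfamiliar with the term: multiplicative decomposition means estimating a probability as a product of conditional factors. One standard counter-argument, sketched below with arbitrary numbers of my own, is that small per-factor biases compound in the product:

```python
# What "multiplicative decomposition" means here: estimate P(event) as
# a product of conditional probabilities. One standard worry is that
# per-factor biases compound: shade each of n factors down a little and
# the product ends up far below a direct estimate. Numbers below are
# arbitrary, for illustration only.

factors = [0.9, 0.8, 0.7, 0.9, 0.85]  # P(step_i | previous steps), made up

def product(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

honest = product(factors)
shaded = product([f * 0.95 for f in factors])  # each factor 5% too low

print(f"direct product:          {honest:.3f}")  # 0.386
print(f"with 5% per-factor bias: {shaded:.3f}")  # 0.298, ~23% lower overall
```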
Good point, will change the title.
There is Little Evidence on Question Decomposition
Disagreed: animal moral patienthood competes with all the other possible interventions effective altruists could be doing, and does so symmetrically (the opportunity cost cuts in both directions!).
I don’t know. Which EA organisation did he found?