Yep, I was going to write this before I saw this comment. I think you may have nailed the main reason why anesthesia doesn’t happen: pro-life people won’t support it, and pro-choice people would be uncomfortable with admitting the possibility of pain and the implications of that.
That’s an interesting one—I’m a fan of hot takes myself :D. I think “Quick takes” does the job on these though, even if the posts are a bit longer. I’m not sure we need another section. Maybe a “Hot takes” tab could be added to signify that the thought behind a take isn’t so deep?
Love the post, don’t love the names given.
I think “capacity growth” is a bit too vague, something like “tractable, common-sense global interventions” seems better.
I also think “moonshots” is a bit derogatory, something like “speculative, high-uncertainty causes” seems better.
Or put another way, would people engage differently if the forum was run on stock software by a single sysadmin and some regular posters granted volunteer mod privileges?
Well, I mean, it isn’t a perfect comparison, but we know roughly what that world looks like, because we have both the LessWrong and OG EA Forum datapoints: both point towards “the Forum gets on the order of 1/5th the usage”, and in the case of LessWrong, towards “the Forum dies completely”.
I do think it goes better if you have at least one well-paid sysadmin, though I definitely wouldn’t remotely be able to do the job on my own.
What would be the pros and cons of adding a semi-hidden-but-permanent Hot Takes section to the Forum? All of my takes are Hot and due to time constraints I would otherwise not post at all. Some would argue that someone like me should not post Hot Takes at all. Anyway, in true lazy fashion here is ChatGPT on the pros and cons:
Pros:
Encourages diverse perspectives and stimulates debate.
Can attract more engagement and interest from users.
Provides a platform for expressing unconventional or controversial ideas.
Fosters a culture of intellectual curiosity and open discourse within the community.
Cons:
May lead to increased polarization and conflict within the community.
Risk of spreading misinformation or poorly researched opinions.
Could divert attention from more rigorous and evidence-based discussions.
Potential for reputational damage if controversial opinions are associated with the forum.
Honestly I paused on this point for a bit as I was writing the post. The main reason I left it is because I didn’t want to open up that debate, as EA so obviously has a strategy of “target elite universities”.
But I TOTALLY feel you on this one.
Venus is an extreme example of an Earth-like planet with a very different climate. There is nothing in physics or chemistry that says Earth’s temperature could not one day exceed 100 C.
[...]
[Regarding ice melting -- ] That will take time, but very little time on a cosmic scale, maybe a couple of thousand years.
I’ll be blunt: remarks like these undermine your credibility. But regardless, I just don’t have any experience or contributions to make on climate change, other than re-emphasizing my general impression that, as a person who cares a lot about existential risk and has talked to various other people who also care a lot about existential risk, there seems to be very strong scientific evidence suggesting that extinction is unlikely.
I feel a little alienated by the emphasis on elite education from both sides of this kind of debate. Not that there’s necessarily much that can be changed there; it’s probably just the nature of the game, mostly. But I find it a little odd that the “be more normal [with career capital]” camp presumes “normal” to include being in the upper middle class of the Anglo world. That’s usually the sort of person making the critique, though I could see a blue-collar worker levelling it too.
Thanks @Bella! I added “crux” to the list and linked the article you shared.
On a slight tangent from the above: I think I might have once come across an analysis of EAs’ scores on the Big Five scale, which IIRC found that EAs’ most extreme Big Five trait was high openness. (Perhaps it was Rethink Charity’s annual survey of EAs as e.g. analyzed by ElizabethE here, where [eyeballing these results] on a scale from 1-14, the EA respondents scored an average of 11 for openness, vs. less extreme scores on the other four dimensions?)
If EAs really do have especially high average openness, and high openness is a central driver of high AI xrisk estimates, that could also help explain EAs’ general tendency toward those high estimates.
I’d be interested in an investigation and comparison of the participants’ Big Five personality scores. As with the XPT, I think it’s likely that the concerned group is higher on the dimensions of openness and neuroticism, and that these persistent personality differences caused their persistent differences in predictions.
To flesh out this theory a bit more:
Similar to the XPT, this project failed to find much difference between the two groups’ predictions for the medium term (i.e. through 2030), at least not nearly enough disagreement to explain the divergence in their AI risk estimates through 2100. So to explain the divergence, we’d want a factor that (a) was stable over the course of the study, and (b) would influence estimates of xrisk by 2100 but not nearer-term predictions.
Compared to the other forecast questions, the question about xrisk by 2100 is especially abstract; generating an estimate requires entering far mode to average out possibilities over a huge set of complex possible worlds. As such, I think predictions on this question are uniquely reliant on one’s high-level priors about whether bizarre and horrible things are generally common or are generally rare—beyond those priors, we really don’t have that much concrete to go on.
I think neuroticism and openness might be strong predictors of these priors:
I think one central component of neuroticism is a global prior on danger.[1] In essence: is the world a safe place where things are fundamentally okay? Or is the world vulnerable?
I think a central component of openness to experience is something like “openness to weird ideas”[2]: how willing are you to flirt with weird/unusual ideas, especially those that are potentially hazardous or destabilizing to engage with? (Arguments that “the end is nigh” from AI probably fit this bill, once you consider how many religious, social, and political movements have deployed similar arguments to attract followers throughout history.)
Personality traits are by definition mostly stable over time—so if these traits really are the main drivers of the divergence in the groups’ xrisk estimates, that could explain why participants’ estimates didn’t budge over 8 weeks.
- ^
For example, this source identifies “a pervasive perception that the world is a dangerous and threatening place” as a core component of neuroticism.
- ^
I think this roughly lines up with scales c (“openness to theoretical or hypothetical ideas”) and e (“openness to unconventional views of reality”) from here.
This description of labor induction abortion says:
The skin on your abdomen is numbed with a painkiller, and then a needle is used to inject a medication (digoxin or potassium chloride) through your abdomen into the fluid around the fetus or the fetus to stop the heartbeat.
That sounds like local anesthesia for the mother, which from what I understand is achieved through an injection which numbs the tissue in a specific area rather than through an IV drip. So I don’t think this protocol would have any anesthetic effect on the fetus, though I’m not a medical expert and could be wrong.
Based on this, I think the sentence “The fetus is administered a lethal injection with no anesthesia” is accurate.
Sounds very difficult when deadly drugs like fentanyl, midazolam, and propofol can easily be injected through an intravenous line. You can’t get an IV line on a baby in utero; I think that’s why injection into the heart is done in that case.
I don’t have time to research this in depth, but am pretty sure this post is missing a lot of nuance about how anesthesia works in abortion. Importantly, because mother and fetus share a circulation, IV sedation that is given to the mother will—to some extent—sedate the fetus as well, depending on the specific regimen used. So it’s not quite right to say “The fetus is administered a lethal injection with no anesthesia.” Correspondingly, I think this post overstates the risk of fetal suffering associated with abortion.
Agreed. I disagree with the general practice of capping the probability distribution over animals’ sentience at 1x that of humans’. (I wouldn’t put much mass above 1x, but it should definitely be more than zero mass.)
I’m not sure. IMHO a major disaster is happening with the climate. Essentially, people have a false belief that there is some kind of set-point, and that after a while the temperature will return to that, but this isn’t the case. Venus is an extreme example of an Earth-like planet with a very different climate. There is nothing in physics or chemistry that says Earth’s temperature could not one day exceed 100 C.
It’s always interesting to ask people how high they think sea-level might rise if all the ice melted. This is an uncontroversial calculation which involves no modelling—just looking at how much ice there is, and how much sea-surface area there is. People tend to think it would be maybe a couple of metres. It would actually be 60 m (200 feet). That will take time, but very little time on a cosmic scale, maybe a couple of thousand years.
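For what it’s worth, here’s a crude version of that no-modelling calculation. The ice volumes and ocean area below are rough literature values I’m supplying for illustration (assumptions, not figures from the comment); it lands in the same ballpark as the ~60 m figure, with published estimates a bit lower mainly because some grounded ice already sits below sea level:

```python
# Back-of-envelope check of the "all the ice melts" sea-level number.
# Rough literature values (assumptions for illustration):
antarctica_km3 = 26.5e6    # Antarctic ice sheet volume, ~26.5 million km^3
greenland_km3 = 2.9e6      # Greenland ice sheet volume, ~2.9 million km^3
ocean_area_km2 = 361e6     # global ocean surface area, ~361 million km^2
ice_to_water = 0.917       # ice is ~91.7% as dense as liquid water

water_equivalent_km3 = (antarctica_km3 + greenland_km3) * ice_to_water
rise_m = water_equivalent_km3 / ocean_area_km2 * 1000  # km -> m

print(f"~{rise_m:.0f} m of sea-level rise")  # ~75 m with these inputs
```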
Right now, if anything, what we’re seeing is worse than the average prediction. The glaciers and ice sheets are melting faster. The temperature is increasing faster. Etc. Feedback loops are starting to be powerful. There’s a real chance that the Gulf Stream will stop or reverse, which would be a disaster for Europe, ironically freezing us as a result of global warming …
Among serious climate scientists, the feeling of doom is palpable. I wouldn’t say they are exaggerating. But we, as a global society, have decided that we’d rather have our oil and gas and steaks than prevent the climate disaster. The US seems likely to elect a president who makes it a point of honour to support climate-damaging technologies, just to piss off the scientists and liberals.
It seems to me that the naive way to handle the two envelopes problem (and I’ve never heard of a way better than the naive way) is to diversify your donations across two possible solutions to the two envelopes problem:
donate half your (neartermist) money on the assumption that you should fix human value and express moral weights as ratios to it
donate half your money on the assumption that you should fix values the opposite way (e.g. fruit flies have fixed value)
Which would suggest donating half to animal welfare and probably half to global poverty. (If you let moral weights be linear with neuron count, I think that would still favor animal welfare, but you could get global poverty outweighing animal welfare if moral weight grows super-linearly with neuron count.)
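To make the disagreement between the two normalizations concrete, here’s a toy calculation (the 50/50 split and the 0.01x/1x weights are made-up numbers for illustration, not claims about any particular species):

```python
# Two-envelopes problem for moral weights, with made-up numbers:
# suppose we're 50/50 between a chicken being worth 0.01x or 1x a human.
p = 0.5
low, high = 0.01, 1.0

# Normalization 1: fix human value at 1 and average the chicken:human ratio.
chicken_in_human_units = p * low + p * high              # 0.505

# Normalization 2: fix chicken value at 1, average the human:chicken ratio,
# then invert to express the chicken's weight in human units again.
human_in_chicken_units = p * (1 / low) + p * (1 / high)  # 50.5
chicken_via_inversion = 1 / human_in_chicken_units       # ~0.0198

print(chicken_in_human_units, chicken_via_inversion)
# The two normalizations disagree by a factor of ~25, which is why
# splitting donations across both assumptions is proposed above.
```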
Plausibly there are other neartermist worldviews you might include that don’t relate to the two envelopes problem, e.g. a “only give to the most robust interventions” worldview might favor GiveDirectly. So I could see an allocation of less than 50% to animal welfare.
Don’t have time to reply in depth, but here are some thoughts:
If a risk estimate is used for EA cause prio, it should be our betting odds / subjective probabilities, that is, averaged over our epistemic uncertainty. If from our point of view a risk is 10% likely to be >0.001%, and 90% likely to be ~0%, this lower-bounds our betting odds at 0.0001%. It doesn’t matter that it’s more likely to be ~0%.
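Spelled out, the lower bound is just the expectation over our epistemic uncertainty, using the numbers above:

$$
\mathbb{E}[p] \;\ge\; 0.9 \times 0\% \;+\; 0.1 \times 0.001\% \;=\; 0.0001\%.
$$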
Statistics of human height are much better understood than nuclear war, because we have billions of humans but no full-scale nuclear wars. The situation is more analogous to finding the probability of a 10-meter-tall adult human, having only ever observed a few thousand monkeys (conventional wars) plus one human infant (WWII), while also knowing that every few individuals, humans mutate into an entirely new species (technological progress).
It would be difficult to create a model suggesting a much higher risk because most of the risk comes from black swan events. Maybe one could upper bound the probability by considering huge numbers of possible mechanisms for extinction and ruling them out, but I don’t see how you could get anywhere near 10^-12.
One company. You are right about too many eggs in one basket. I’m expanding my search to more companies and focusing on operations roles.
I learned recently that my resume is too generic: it isn’t targeted enough to the roles, and it needs quantifiable accomplishments.
I’m updating... Thank you.
Hi Nick,
I do not think GiveWell would even claim that, as they are not optimising for reliably building global capacity. They “search for charities that save or improve [human] lives the most per dollar”, i.e. they seem simply to be optimising for increasing human welfare. GiveWell also assumes the value of saving a life of a given age is always the same regardless of the country, which in my mind goes against maximising global capacity. Saving a life in a high-income country seems much better for improving global capacity than saving one in a low-income country, because productivity is much higher in high-income countries.