Academics will not find a new journal run by non-academics credible, much less prestigious. No one would be able to put this journal on an academic CV. So there’s really no benefit to “publishing” relative to posting publicly and letting people vote and comment.
Metformin isn’t a supplement though. It’s unlikely it would ever get approved as a supplement or OTC, especially given that it has serious side effects.
Really interesting. I appreciate you sharing this and your attitude toward this. Good luck with your career in philosophy—epistemic honesty will take you far.
You might consider cross-posting this on a site like Medium to reach a larger audience.
It’s not either/or. It’s unlikely to be a single disease; it would probably be more accurate to call it a syndrome.
I’m not sure how the beliefs in Table 3 would lead to positive social change. Mostly just seems like an increase in some vague theism, along with acceptance/complacency/indifference/nihilism. The former is epistemically shaky, and the latter doesn’t seem like an engine for social change.
You might as well randomly go through the list of multimillionaires/billionaires and cold-call them. Maybe not the worst idea, but there’s nothing in particular to suggest this guy would be special.
Technology to do something like this is already being developed, but it’s not nanotechnology: https://www.nature.com/articles/nmeth.3151
Nanotechnology is rarely the most practical way to probe very small things. People have been able to infer molecular structures since the 19th century. Modern molecular biology/biochemistry makes use of electron microscopy, fluorescence microscopy, and sequencing-based assays, among other techniques.
What do you mean by nanoscale neural probes? What are the questions that these probes would answer?
Modeling the risk of psychedelics as nonexistent seems like a very selective reading of Carbonaro 2016:
“Eleven percent put self or others at risk of physical harm; factors increasing the likelihood of risk included estimated dose, duration and difficulty of the experience, and absence of physical comfort and social support. Of the respondents, 2.6% behaved in a physically aggressive or violent manner and 2.7% received medical help. Of those whose experience occurred >1 year before, 7.6% sought treatment for enduring psychological symptoms. Three cases appeared associated with onset of enduring psychotic symptoms and three cases with attempted suicide.”
Why?
You reveal that you are highly motivated to argue that exterminating humanity is not in the interest of an AI, regardless of whether that statement is true. So your arguments will present weak evidence at best, given your clear bias.
Is the AI supposed to read this explanation? Seems like it tips your hand?
Neither of those statements is upsetting to me.
It’s often useful to be able to imagine what will be upsetting to other people and why, even if it’s not upsetting to you. Maybe you’ll decide that it’s worth hurting people, but at least make your decisions with an accurate model of the world. (By the way, “because they’re oversensitive” doesn’t count as an explanation.)
So let’s try to think about why someone might be upset if you told them that they’re more likely to be a rapist because of their race. I can think of a few reasons: They feel afraid for their personal safety. They feel it’s unfair to be judged for something they have no control over. They feel self-conscious and humiliated.
Emotional Turing tests might be a good habit in general.
I hope you’re just using this as a demonstration and not seriously suggesting that we start racially profiling people in EA.
This unpleasant tangent is a great example of why applying aggregate statistics to actual people isn’t a good strategy. It should be clear why people find the following statements upsetting:
Statistically, there are X rapists in the EA community.
Statistically, as a man/black person/Mexican/non-college grad/Muslim, there is X probability you’re a rapist.
Let’s please not go down this path.
I would far prefer being raped over a 1% chance of dying immediately. I think the tradeoff would be something like 100,000 to 1.
I don’t think most of these will convince people to share your views, often because they come from different moral perspectives. They seem too negative or directly contradictory for people to change their minds—particularly the ones on social justice. However, it might help people understand your personal choices. What have been your results?
I’m a 4th year PhD student in bioinformatics. I’ve previously considered doing something similar, though I focused more on stem cell technology, which is most relevant to my current research. However, would definitely be interested in discussing further!
I agree with this for the most part, but let’s not exclude people from EA who, like me, are low-IQ and high-libido.
It seems that you are vastly underestimating the intensity of psychological trauma that comes with rape.
Even if this is descriptively true (and I think it varies a lot—some people aren’t bothered long-term), there’s no reason that this is a desirable outcome. Everything is mediated through attitudes.
It looks like there might be confounders in the time series because there is a negative “effect” on life satisfaction prior to becoming disabled or unemployed. (With divorce and widowhood it’s plausible that some people would see it coming years in advance.)
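For what it’s worth, one way to check this in panel data is to regress life satisfaction on dummies for “years until/since the event” and see whether the coefficients already turn negative before the event occurs. Here’s a rough sketch in Python, using entirely simulated data and made-up variable names, just to illustrate the check (not the original paper’s method):

```python
# Sketch: checking for anticipation effects ("pre-trends") around an event
# such as disability onset. The panel data here is simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate a panel where life satisfaction dips *before* the event --
# the pattern that would suggest anticipation or confounding.
rows = []
for person in range(200):
    event_year = rng.integers(3, 8)          # year the event happens
    for year in range(10):
        k = year - event_year                # event time (years relative to event)
        dip = -0.5 if -2 <= k < 0 else 0.0   # anticipation dip before the event
        drop = -1.0 if k >= 0 else 0.0       # drop after the event
        rows.append({
            "person": person,
            "event_time": k,
            "life_sat": 7 + dip + drop + rng.normal(0, 1),
        })

df = pd.DataFrame(rows)
df = df[df["event_time"].between(-4, 4)]     # keep a window around the event

# Regress life satisfaction on event-time dummies. The reference category is
# the earliest event time (k = -4); significantly negative coefficients at
# k = -2 and k = -1 mean the outcome was already falling before the event.
model = smf.ols("life_sat ~ C(event_time)", data=df).fit()
print(model.summary())
```

If the coefficients just before the event are already below the baseline, anticipation or reverse causation is on the table, rather than a pure causal effect of the event itself.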