The current LessWrong link doesn’t work for me. This is the correct one: https://www.lesswrong.com/posts/AG6PAqsN5sjQHmKfm/conversation-on-forecasting-with-vaniver-and-ozzie-gooen
Images can’t be added to comments; is that what you were trying to find a workaround for?
It’s possible to add images to comments by selecting and copying them from anywhere public (note that it doesn’t work if you right click and choose ‘copy image’). In this thread, I do it in this comment.
I do see that I can’t do it manually by selecting text. I wouldn’t expect it to be too difficult to add that possibility, though, given that it’s already possible in another way?
With regard to images, I get flawless behaviour when I copy-paste from Google Docs. Somehow, the images automatically get converted and link to the images hosted by Google (in the editor, they’re only visible as small cameras). Maybe you can get the same behaviour by making your docs public?
Actually, I’ll test copying an image from a google doc into this comment: (edit: seems to be working!)
Copying all relevant information from the LessWrong FAQ to an EA Forum FAQ would be a good start. The problem of how to make its existence public knowledge remains, but that’s partly solved automatically by people mentioning/linking to it, and by it showing up in Google.
There’s a section on writing in the LessWrong FAQ (named Posting & Commenting). If any information is missing from there, you can suggest adding it in the comments.
Of course, even given that such instructions exist somewhere, it’s important to make sure that they’re findable. I’m not sure what the best way to do that is.
I’m by no means schooled in academic philosophy, so I could also be wrong about this.
I tend to think about e.g. consequentialism, hedonistic utilitarianism, preference utilitarianism, lesswrongian ‘we should keep all the complexities of human value around’-ism, deontology, and virtue ethics as ethical theories. (This is backed up somewhat by the fact that these theories’ Wikipedia pages name them ethical theories.) When I think about metaethics, I mainly think about moral realism vs moral anti-realism and their varieties, though the field contains quite a few other things, as cole_haus mentions.
My impression is that HLI endorses (roughly) hedonistic utilitarianism, and you said that you don’t, which would be an ethical disagreement. The borderlines aren’t very sharp, though. If HLI had asserted that hedonistic utilitarianism was objectively correct, then you could certainly have made the metaethical argument that no ethical theory is objectively correct. Alternatively, you might be able to bring metaethics into it if you think that there is an ethical truth that isn’t hedonistic utilitarianism.
(I saw you quoting Nate’s post in another thread. I think you could say that it makes a meta-ethical argument that it’s possible to care about things outside yourself, but that it doesn’t make the ethical argument that you ought to do so. Of course, HLI does care about things outside themselves, since they care about other people’s experiences.)
For whatever it’s worth, my metaethical intuitions suggest that optimizing for happiness is not a particularly sensible goal.
Might just be a nitpick, but isn’t this an ethical intuition, rather than a metaethical one?
(I remember hearing other people use “metaethics” in cases where I thought they were talking about object level ethics, as well, so I’m trying to understand whether there’s a reason behind this or not.)
Has Kahneman actually stated that he thinks life satisfaction is more important than happiness? In the article that Habryka quotes, all he says is that most people care more about their life satisfaction than their happiness. As you say, this doesn’t necessarily imply that he agrees. In fact, he does state that he personally thinks happiness is important.
(I don’t trust the article’s preamble to accurately report his beliefs when the topic is as open to misunderstandings as this one is.)
We can also approach the issue abstractly: disruption can be seen as injecting more noise into a previously more stable global system, increasing the probability that the world settles into a different semi-stable configuration. If there are many more undesirable configurations of the world than desirable ones, increasing randomness is more likely to lead to an undesirable state of the world. I am convinced that, unless we are currently in a particularly bad state of the world, global disruption would have a very negative effect (in expectation) on the value of the long-term future.
If there are many more undesirable configurations of the world than desirable ones, then we should, a priori, expect that our present configuration is an undesirable one. Also, if the only effect of disruption was to re-randomize the world order, then the only thing you’d need for disruption to be positive is for the current state to be worse than the average civilisation from the distribution. Maybe this is what you mean by “particularly bad state”, but intuitively, I interpret that more like the bottom 15%.
There are certainly arguments to make for our world being better than average. But I do think that you actually have to make those arguments, and that without them, this abstract model won’t tell you if disruption is good or bad.
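To make the re-randomization point concrete, here’s a minimal Monte Carlo sketch (all numbers and distributions purely illustrative, not estimates from the post): if disruption simply redraws the world’s configuration from some underlying distribution, then its expected effect is positive exactly when the current state is below the distribution’s mean, even if most configurations are undesirable.

```python
import random

random.seed(0)

# Purely illustrative: value of a world configuration, drawn from a
# skewed distribution where most configurations are bad (low value).
def sample_world_value():
    return random.betavariate(2, 8)  # mean = 2 / (2 + 8) = 0.2

samples = [sample_world_value() for _ in range(100_000)]
mean_value = sum(samples) / len(samples)  # close to 0.2

current_value = 0.15  # hypothetical: our world, somewhat below average

# If disruption just redraws from the distribution, its expected
# effect is (mean_value - current_value): positive iff we're below
# the mean, regardless of how many configurations are undesirable.
expected_gain = mean_value - current_value
```

The point of the sketch is just that “most configurations are bad” doesn’t by itself settle the sign of disruption; you still need a claim about where the current world sits relative to the mean.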
If you go to “Edit account”, there’s a check box that says “Activate markdown editor”. If you un-check that one (I would’ve expected it to be unchecked by default, but maybe it isn’t) you get formatting options just by selecting your text.
Although psychedelics are plausibly good from a short-termist view, I think the argument from the long-termist view is quite weak. As far as I understand it, psychedelics would improve the long term by
1. Making EAs or other well-intentioned people more capable.
2. Making people more well-intentioned. I interpret this as either causing them to join/stay in the EA community, or causing capable people to become altruistically motivated (in a consequentialist fashion) without the EA community.
Regarding (1), I could see a case for privately encouraging well-intentioned people to use psychedelics, if you believe that psychedelics generally make people more capable. However, pushing for new legislation seems like an exceedingly inefficient way to go about this. Rationality interventions are unique in that they are quite targeted—they identify well-intentioned people and give them the techniques that they need. Pushing for new psychedelic legislation, however, could only help by making the entire population more capable, including the much smaller population of well-intentioned people. I don’t know exactly how hard it is to change legislation, but I’d be surprised if it was worth doing solely due to the effect on EAs and other aligned people. New research suffers from a similar problem: good medical research is expensive, so you probably want to have a pretty specific idea about how it benefits EAs before you invest a lot in it.
Regarding (2), I’d be similarly surprised if campaigning for new legislation → more people use psychedelics → more people become altruistically motivated → more people join the EA community was a better way to get people into EA than just directly investing in community building.
For both (1) and (2), these conclusions might change if you cared less about EAs in particular, and thought that the future would be significantly better if the average person was somewhat more altruistic or somewhat more capable. I could be interested in hearing such a case. This doesn’t seem very robust to cluelessness, though, given the uncertainty of how psychedelics affect people, and the uncertainty about how increasing general capabilities affects the long term.
Meta note: that you got downvotes (I can surmise this from the number of votes and the total score) seems to suggest this is advice people don’t want to hear, but maybe they need.
I don’t think this position is unpopular in the EA community. “You have more than one goal, and that’s fine” got lots of upvotes, and my impression is that there’s a general consensus that breaks are important and that burnout is a real risk (even though people might not always act according to that consensus).
I’d guess that it’s getting downvotes because it doesn’t really explain why we should be less productive: it just stakes out the position. In my opinion, it would have been more useful if it, for example, presented evidence showing that unproductive time is useful for living a fulfilled life, or presented an argument for why living a fulfilled life is important even for your altruistic values (which Jakob does more of in the comments).
Meta meta note: In general, it seems kind of uncooperative to assume that people need more of things they downvote.
If I remember correctly, 80,000 Hours has stated that they think 15% of people in the EA Community should be pursuing earning to give.
I think this is the article you’re thinking about, where they’re talking about the paths of marginal graduates. Note that it’s from 2015 (though at least Will said he still thought it seemed right in 2016) and explicitly labeled with “Please note that this is just a straw poll used as a way of addressing the misconception stated; it doesn’t represent a definitive answer to this question”.
Fantastic work! Nitpicks:
The last paragraph duplicates the second-to-last paragraph.
However, the beneficial effects of the cash transfer may be much lower in a UCT
Is this supposed to say “lower in a CCT”?
As a problem with the ‘big list’, you mention
2. For every reader, such a list would include many paths that they can’t take.
But it seems like there’s another problem, closely related to this one: for every reader, the paths on such a list could have different orderings. If someone has a comparative advantage for a role, it doesn’t necessarily mean that they can’t aim for other roles; but it might mean that they should prefer the role that they have a comparative advantage for. This is especially true once we consider that most people don’t know exactly what they could do and what they’d be good at—instead, their personal lists contain a bunch of things they could aim for, ordered according to different probabilities of having different amounts of impact.
In particular, I think it’s a bad idea to take a ‘big list’, winnow away all the jobs that look impossible, and then aim for whatever is at the top of the list. Instead, your personal list might overlap with others’, but have a completely different ordering (yet hopefully contain a few items that other people haven’t even considered, given that 80k can’t evaluate all opportunities, like you say).
This suggests that for solar geoengineering to be feasible, all major global powers would have to agree on the weather, a highly chaotic system.
Hm, I thought one of the main worries was that major global powers wouldn’t have to agree, since any country would be able to launch a geoengineering program on their own, changing the climate for the whole planet.
Do you think that global governance is good enough to disincentivize lone states from launching a program, purely from fear of punishment? Or would it be possible to somehow reverse the effects?
Actually, would you even need to be a state to launch a program like this? I’m not sure how cheap it could become, or if it’d be possible to launch in secret.
Good point, but this one has still received the most upvotes, if we assume that a negligible number of people downvoted it. At the time of writing, it has received 100 votes. According to https://ea.greaterwrong.com/archive, the only previous posts that received more than 100 points have fewer than 50 votes each. Insofar as I can tell, the second and third most voted-on posts are Empirical data on value drift at 75 and Effective altruism is a question at 68.
I am not so sure about the specific numerical estimates you give, as opposed to the ballpark being within a few orders of magnitude for SIA and ADT+total views (plus auxiliary assumptions)
I definitely agree about some numbers. Maybe I should have been more explicit about this in the post, but I have low credence in the exact distribution of f (as well as f_l, f_i, and f_s): it depends far too much on the absolute rate of planet formation and the speed at which civilisations travel.
However, I’m much more willing to believe that the average fraction of space that would be occupied by alien civilisations in our absence is somewhere between 30% and 95%, or so. A lot of the arbitrary assumptions that affect f cancel out when running the simulation, and the remaining parameters affect the result surprisingly little. My main (known) uncertainties are
Whether it’s safe to assume that intergalactic colonisation is possible. From the perspective of total consequentialism, this is largely a pragmatic question about where we can have the most impact (which is affected by a lot of messy empirical questions).
How much the results would change if we allowed for a late increase in life more sudden than the one in Appendix C (either because of a sudden shift in planet formation or because of something like gamma ray bursts). Anthropics should affect our credence in this, as you point out, and the anthropic update would be quite large in favor. However, the prior probability of a very sudden increase seems small. That prior is very hard to quantify, and I think my simulation would be less reliable in the more extreme cases, so this possibility is quite hard to analyse.
Do you agree, or do you have other reasons to doubt the 30%-95% number?
This seems overall too pessimistic to me as a pre-anthropic prior for colonization
I agree that the mean is too pessimistic. The distribution is too optimistic about the impossibility of lower numbers, though, which is what matters after the anthropic update. I mostly just wanted a distribution that illustrated the idea about the late filter without having it ruin the rest of the analysis. f has almost exactly the same distribution after updating, anyway, as long as f_s assigns negligible probability to numbers below 10^-10.
Given that the risk of nuclear war conditional on climate change seems considerably lower than the unconditional risk of nuclear war
Do you really mean that P(nuclear war | climate change) is less than P(nuclear war)? Or is this supposed to say that the risk of nuclear war and climate change is less than the unconditional probability of nuclear war? Or something else?
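The distinction matters, because the two readings make genuinely different claims. Here’s a toy illustration (all probabilities purely made up, just to show the logic): the joint probability P(nuclear war ∧ climate change) is always at most P(nuclear war), whereas the conditional P(nuclear war | climate change) can be either higher or lower than the unconditional risk.

```python
# Toy numbers, purely illustrative -- not estimates from the post:
p_war = 0.10                 # unconditional P(nuclear war)
p_climate = 0.50             # P(severe climate change)
p_war_given_climate = 0.15   # P(nuclear war | climate change)

# The joint probability is necessarily <= the unconditional one:
p_joint = p_war_given_climate * p_climate  # 0.075 < 0.10

# ...but the conditional itself exceeds the unconditional risk here
# (0.15 > 0.10), so "conditional risk is lower" is the substantive,
# non-trivial claim, while "joint risk is lower" is true by definition.
```

So if the original sentence meant the joint probability, it’s trivially true; if it meant the conditional, it’s the surprising claim that climate change makes nuclear war less likely.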