I did the online course Writing in the Sciences, by Kristin Sainani. I liked it a lot and I think it helped me write much better. I actually did it twice!
mikbp
But if his text is so bad, why should anyone feel “icky” about longtermism because of it? Although I’m by no means a stranger to longtermism (I’m here!), I’m really not very involved in EA, and I’m not a philosopher, nor have I ever studied it, so my theoretical knowledge of the topic is limited. Still, when I read Torres’ texts it is clear to me that they don’t really hold up.
When I’m interested in a topic for which I’m not really qualified to know whether what I read/hear about it holds true or is one-sided, I tend to search for criticisms of it to check. What I’ve read from Torres, or linked by him, about longtermism actually makes me think that it seems to be difficult to fairly criticise longtermism.
I think reading Torres’ texts may well turn people away if they don’t really know much else about the topic, but “getting more people to read the original [Torres’ paper]” after having read a good piece shouldn’t be a problem.
And coming back to my starting question, if a person who has good information sources feels “icky” about a topic because of a bad piece of information, maybe it is okay that he/she is not too involved in the topic, no?
I find this post very interesting. However, I don’t think the dual-use issue should worry us much. I cannot estimate how much harder it is in general to divert an asteroid toward Earth than away from it, but I can confidently say that the factor is several orders of magnitude higher than 10x (the precision needed would be staggering). In addition, to divert an asteroid toward Earth, one needs an asteroid, and the closer the better. The fact that the risk of a big-enough asteroid hitting the Earth is so low indicates that there are not too many candidates. This factor has to be taken into account as well.
But even if diverting an asteroid towards the Earth were only 10 times harder than diverting it away from the Earth, dual use need not be a big concern. To actually divert an asteroid towards the Earth, one does not only need to divert it; one also needs to prevent the rest of humanity from diverting it away in time, which is much easier. So, as long as a small number of independent institutions are able and ready to divert asteroids, dual use does not seem a concern to me.
I write only as a user, I don’t have any further knowledge, but I have never seen it. There are hairdressers who collaborate with “wig organisations” but, as far as I know, they only collect the hair of people who want to donate it.
In general, I don’t think it is very common for people to want to cut >20 cm of hair in one go, and it makes the hairdresser’s work somewhat less natural, as they don’t usually cut all the hair at once (i.e. make a ponytail and cut it). Maybe those collaborating hairdressers could ask customers who want to cut their hair in one go whether they would like to donate it?
I just read an interview with Roberto Saviano (author of the book Gomorrah, in which he denounced organised crime in Italy) in which he says that his quest against the mafia has destroyed his life: not only does he need protection 24/7, he also feels very alone. In his new book he describes the problems that the judge Giovanni Falcone ran into because of his fight against the mafia, which led to his death. So Saviano is now in “selling mode” on precisely this topic, but still, it made me think that improving the lives of whistleblowers and the like (like him, or even the judge) may be an effective way to do good. Protection should not be neglected (although it may depend on the case), but in general making their lives more livable and easier to navigate may help them focus better on their reporting work and help fight injustice. I don’t think this has been looked into, so I just wanted to leave this comment here in case anyone wants to do some preliminary research to assess whether it is doable and effective.
Measure growth from peak-to-peak or trough-to-trough rather than trough-to-peak – I knew crypto was in a huge bull market
What does this mean?
[I just quickly listened to the post and I’m not a philosopher, nor do I know Ergodicity Economics deeply]
Maybe Ergodicity Economics (EE) [site, wikipedia] could be relevant here? It is relevant to the St. Petersburg paradox. It has to do with the expected values of stochastic processes not being equal to their time averages (ensemble average ≠ time average).
I am sure there is much more to EE than this, but the one insight I took from it when I first came across EE is that when one of the outcomes of the game is losing everything, expected value does not describe it well. And at least for x-risks this is exactly the case (and I consider catastrophic risks to be in this domain too).
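The ensemble-vs-time-average point can be seen in a toy simulation (my own sketch for illustration, not taken from the EE literature; the payoff numbers are arbitrary assumptions): a repeated multiplicative bet whose expected value per round is above 1, yet whose time-average growth rate is below 1, so almost every individual trajectory ends up effectively ruined.

```python
import random

# Toy multiplicative gamble: each round, wealth is multiplied by 1.5 on
# heads and 0.6 on tails. The per-round expected value is
# 0.5*1.5 + 0.5*0.6 = 1.05 > 1, so the ensemble average grows, but the
# time-average growth rate is sqrt(1.5*0.6) ≈ 0.95 < 1, so a typical
# single trajectory shrinks towards zero.

random.seed(0)

def simulate(rounds=1000):
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

ensemble = [simulate() for _ in range(10_000)]
mean_wealth = sum(ensemble) / len(ensemble)
ruined = sum(w < 1e-6 for w in ensemble) / len(ensemble)

print(f"sample ensemble mean: {mean_wealth:.3g}")
print(f"fraction effectively ruined: {ruined:.1%}")
```

Running this shows the vast majority of trajectories collapse, while the sample mean is dominated by a handful of lucky runs: exactly the situation where the expected value is a misleading summary of what happens to any one agent (or, for x-risks, any one civilisation).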
It seems that EE is not very well known in the EA community—at least the term is hardly mentioned in the forum—so I thought I would mention it here in case anyone wants to dig in more. I’m for sure not the best trained to go deeper into it, nor do I have the time to try.
One post that seems to address the issue of EE within the EA is this one.
I hope this is a good lead!
[Epistemic status: I have just read the summary of the post by the author and by Zoe Williams]
Doesn’t it make more sense to just set the discount rate dynamically, based on the current best estimates?
If/when we reach a point where the long-term existential catastrophe rate approaches zero, that will be the moment to set the discount rate to zero. We are not there now, so the discount rate should be higher, as the author proposes.
Is there a reason for not using a dynamic discount rate?
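What I have in mind could look something like this (a hypothetical sketch; the function name and all numbers are my own assumptions, not from the post): discount each future period by the currently estimated per-period catastrophe rate, and simply revise the rates downward if/when the long-term risk estimate approaches zero.

```python
# Sketch of a dynamic discount rate: the hazard (catastrophe) rate is a
# per-period list that can be updated as estimates change, rather than a
# single constant fixed forever.

def present_value(benefits, hazard_rates, pure_time_preference=0.0):
    """Discount a stream of future benefits using a time-varying
    catastrophe rate plus optional pure time preference."""
    assert len(benefits) == len(hazard_rates)
    pv = 0.0
    survival = 1.0  # probability of having survived to the current period
    for t, (b, h) in enumerate(zip(benefits, hazard_rates)):
        survival *= (1.0 - h)
        pv += b * survival / (1.0 + pure_time_preference) ** (t + 1)
    return pv

# A constant 1% catastrophe rate vs. a rate that declines towards zero:
flat = present_value([1.0] * 50, [0.01] * 50)
declining = present_value([1.0] * 50, [0.01 * 0.9 ** t for t in range(50)])
print(flat, declining)
```

Under the declining-hazard scenario the far future counts for more, which is the point: the discount rate tracks the risk estimate instead of being baked in.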
I really appreciate your effort defending a paper containing parts you strongly disagree with from (what you consider) bad arguments!
“I don’t mean to say the heuristic always holds” I understand that; that’s not where I’m going.
“on average, lead to better outcomes” That’s what I don’t see in this case. Starting a company entails a large opportunity cost (you basically cannot do anything else for a period of time) coupled with a large chance of failing. My intuition is that, as general advice, it may well be net negative, at least as personal advice.
Now I see that it may well not be net negative in the aggregate if the successful instances more than compensate the failures, so it may be a good community heuristic. Was that your idea?
When I read the post, I interpreted this list as heuristics addressed to individuals, not community heuristics.
I’m confused by
Be a founder—do something that involves starting a company
What is the added value of starting a company? I mean, if what you want to do is not done by anyone, or is done badly and you think you could do it better, etc., it makes total sense. But I don’t think that founding a company intrinsically adds value. Additionally, starting a company has a high risk of failure and is so overwhelming that you basically cannot do anything else for quite a long time.
Another thing is choosing a path to impact as a serial entrepreneur: starting one company has a high risk of failure, but also high expected returns if you succeed. A serial entrepreneur leverages the experience gained from all the possible failures to eventually achieve high returns. I’m really not sure that, on average, the experience gained by starting a (single) company—unless you have a great idea that maximises the chances of success—outweighs the opportunity costs it entails.
I do research on topics that I’m very interested in, like you. And I am also very interested in EA, rationality and so on, as you seem to be. I was wondering if you share a problem I have: I really don’t know where my work starts or ends. I mean, much of what I do (read, watch, write) for fun could clearly be considered work, and vice versa: many things I do for work could be considered leisure. That seems to be the case for you too.
Did you consider writing this post as part of your work, for example? Or reading EA posts? Or reading any blog or article about philosophy of mind, AI and so on? I guess you consider the work coordinating the digital minds group as part of your (paid) work, right? Where do you set the boundary? Or better, how do you set it?
I really struggle with that. It is often not very relevant to clearly distinguish what is actual work and what is not, but at other times it is. In addition, it is not unusual for something that started ‘for fun’ to end up being part of your research or main working activity (e.g. the digital minds group for you).
I’m sad that such events are often needed to make some common sense ideas arise, but I am very happy that they nonetheless arose!
Some particular comments:
if someone who might feel ‘on your side’ appears to be doing unusually well, try to increase scrutiny rather than reduce it
Yes! This is general. Almost everyone wants their side to be “good”. Taking shortcuts, low moral standards, etc., help one do particularly well, so one needs to be particularly careful with such people.
we should be skeptical about the idea that EAs have better judgement about anything that’s not a key area of EA or personal expertise, and should use conventional wisdom, expert views or baserates for those (e.g. how to run an org; likelihood of making money; likelihood of fraud). A rule of thumb to keep in mind: “don’t defer to someone solely because they’re an EA.” Look for specific expertise or evaluate the arguments directly.
I’m less into the vision of EA as an identity / subculture / community, especially one that occupies all parts of your life
Yes, in general, actively working to keep EA communities from becoming silos, and for EA enterprises to have workers from outside EA communities, would be of great value. I’ve seen quite a lot of outside criticism of EA along these lines but did not notice any change. This is why I was so happy when I read this post, and when I saw this passage in a comment from Rob about his interview with SBF:
In recent years, in pursuit of better work-life balance, I’ve been spending less time socialising with people involved in the EA community, and when I do, I discuss work with them much less than in the past.
Maybe this idea is more accepted than I thought? I would ask core EAs who agree with this to be more vocal about it. You may already act accordingly in your life, but it would be valuable for “the community” to see that at least some relevant EA figures have a life outside EA. And I say this not only for individuals’ better work-life balance; it is also very important community-wide.
Why is gender separated into “men” and “non-men”? I find it very weird but I guess there is a reason. Is something like “men”, “women”, “other” not optimal for any reason? If so, is there a reason to keep “men” instead of “women”?
Hi. Thinking about it, I probably overstated Nordhaus’ acceptance a bit. Instead of saying “basically no one in fields related to sustainability research” I think “many do not” is probably more accurate. I’m in my bubble, and there may be very different bubbles around. And I guess a bad model is better than no model, as one can improve it instead of starting from scratch.
About what you ask for:
(a) I’m not sure. Steve Keen (@ProfSteveKeen on Twitter) is very vocal about how bad Nordhaus’s model is; maybe he’s got something. But (rightly) pointing out that something is wrong is much easier than building something better, so I’m not sure if he’s got anything. I know he was working on one (several?) paper(s) with Tim Garrett. But Tim is a physicist, so it may be something beyond GDP (Tim has a model showing that CO2 emissions/energy demand correlate very well with historically cumulative GDP, somehow implying that we are actually not decoupling from resource needs).
In general, the problem is that any meaningful CC will have knock-on effects that are ultimately impossible to predict. One can put numbers on those, but then for each temperature there have to be several different scenarios (only temperature; plus X damage from more extreme environmental phenomena; plus Y effect of war; plus Z from migrations; combinations; degrees...). And on top of that, there are unknown unknowns (e.g. last summer there were heatwaves that melted infrastructure in some parts of the US [I think], which AFAIK basically no one had predicted).
(b) and (c)… maybe the people at CSER and the Global Catastrophic Risk Institute? These folks study knock-on effects that CC could produce. Beard from CSER spoke about it on the FLI podcast, and I think Luke Kemp has also worked on related topics. I’m less familiar with the people at GCRI. But in general, they all look at overwhelmingly negative scenarios (CC triggering a nuclear war and so on), so it sounds like what you want.
I hope this helps.
Have I understood your argument right?
Yes, but 2 applies to rampant CC (climate change) and 1 within the CC argument.
If I understand it right, your reply challenges the idea that the net effect of having progeny is to exacerbate CC, at least in the long run, which I thought you deliberately intended not to do in the post. I think this is a completely different argument, one that I don’t want to get into because it has no end: CC is a continuum, its effects are not linear, tipping points are not well understood, its effects will knock on to other effects, it all depends on technology and infrastructure (effective EROIs of alternatives to fossil fuels are lower than we might wish; it is at least very hard to change the infrastructure of the whole society to accommodate a complete change in energy sources; carbon capture and storage at scale may or may not be possible...), unknown unknowns...
I guess my central point was that you cannot argue that CC should not be a significant factor in deciding whether to have children (if you care about total happiness) without arguing about whether having children will effectively exacerbate CC in the long run. And I think you were trying to do that.
If having children does effectively exacerbate CC in the long run, then even if its effects on the happiness of the very next (few?) generation(s) may still be net positive, in the long run it is overwhelmingly negative (in the far-off limit, Earth is like Venus). If having children does not effectively exacerbate CC in the long run, there is no debate. And everything in between (it does now but not later; it does but only up to a certain point; it does but the actual human population will decrease...) is a hugely messy debate.
BTW, the GDP calculations are useless without knowing their assumptions. And if they come from Nordhaus, his calculations seem to be really, really bad, with utterly unrealistic assumptions: things like calculating the differences in GDP between regions with XºC average temperature difference and extrapolating that to CC without accounting, for example, for the fact that many regions on Earth will be uninhabitable if their average temperature increases by XºC. Note that, although he got a Nobel prize for it, basically no one in fields related to sustainability research (except for some economists, I guess) accepts his calculations.
I’ve seen this paper: The effects of communicating uncertainty around statistics, on public trust. I thought its findings might extend to communicating uncertainty around things other than statistics, so it is potentially useful for the community.
I forgot to ask who those “degrowthers” you refer to are. I have never come across them. Could you please give me a couple of names?
GDP contraction (=somebody’s income contraction)
This is obvious. And, again, the point is that the relationship between GDP and social outcomes after some point breaks down or becomes irrelevant.
Many things can lead to degrowth, and some could be necessary. What I point out is that degrowth is always a negative side consequence. You do not plan for it, you suffer it (the less, the better).
It seems strange to argue in favour of not planning for a negative consequence of something that may be necessary.
Julia Nefsky is giving a research seminar in the Institute for Futures Studies titled “Expected utility, the pond analogy and imperfect duties”, which sounds interesting for the community. It will be on September 27 at 10:00-11:45 (CEST) and can be attended for free in person or online (via zoom). You can find the abstract here and register here.
I don’t know Julia or her work and I’m not a philosopher, so I cannot directly assess the expected quality of the seminar, but I’ve seen several seminars from the Institute for Futures Studies that were very good (e.g. from Olle Häggström; and on Sep 20 Anders Sandberg gives one as well).
I hope this is useful information.