I did the online course "Writing in the Sciences" by Kristin Sainani. I liked it a lot and I think it helped me write much better. I actually did it twice!
mikbp
[Question] Mediocre EAs: what career paths do they take, and how do they engage with EA?
The Case for Shorttermism—by Robert Wright
[Question] Matsés: are languages that encode the epistemic certainty of statements of interest to the EA community?
[Question] Does anyone have a reference for "It's estimated that abolition cost Britain 2% of its GDP for 50 years"?
“Is this risk actually existential?” may be less important than we think
But if his text is so bad, why should anyone feel "icky" about longtermism because of it? Although I'm by no means a stranger to longtermism (I'm here!), I'm really not very involved in EA, and I'm not a philosopher nor have I ever studied philosophy, so my theoretical knowledge of the topic is limited. Still, when I read Torres' texts, it is clear to me that they don't really hold up.
When I’m interested in one topic for which I’m not really qualified to know if what I read/hear about it holds true or is one sided, I tend to search for criticisms about it to check. What I’ve read from Torres or linked by him about longtermism, actually make me think that it seems to be difficult to fairly criticise longtermism.
I think reading Torres' texts may well turn people away if they don't know much else about the topic, but "getting more people to read the original [Torres' paper]" after they have read a good piece shouldn't be a problem.
And coming back to my starting question: if a person with good information sources feels "icky" about a topic because of one bad piece of information, maybe it is okay that they are not too involved in the topic, no?
Invitation to participate in AGI global governance Real-Time Delphi questionnaire—The Millennium Project
[Question] Is there anyone looking at the importance of simplifying complex socio-technical systems?
I find this post very interesting. However, I don't think the dual-use risk should worry us much. I cannot estimate how much harder it is, in general, to divert an asteroid towards Earth than away from it, but I can confidently say that it is several orders of magnitude more than 10x (the precision needed would be staggering). In addition, to divert an asteroid towards Earth, one needs an asteroid, and the closer the better. The fact that the risk of a big-enough asteroid hitting Earth is so low indicates that there are not many candidates. This factor has to be taken into account as well.
But even if diverting an asteroid towards Earth were only 10 times harder than diverting it away, dual use need not be a big concern. To actually divert an asteroid towards Earth, one does not only need to divert it; one also needs to prevent the rest of humanity from diverting it away in time, which is much easier. So, as long as a handful of independent institutions are able and ready to divert asteroids, dual use does not seem a concern to me.
- 1 Oct 2022 19:23 UTC; 1 point: comment on "NASA will re-direct an asteroid tonight as a test for planetary defence" (link-post)
[Question] Tractors that need to be connected to function?
[Question] Why was Stanislav Petrov not awarded the Nobel Peace Prize?
I write only as a user; I don't have any further knowledge, but I have never seen it. There are hairdressers that collaborate with wig organisations, but as far as I know, they only collect the hair of people who want to donate it.
In general, I don't think it is very common that people want to cut more than 20 cm of hair in one go, and it makes the hairdresser's work somewhat less natural, as they don't usually cut all the hair at once (i.e. make a ponytail and cut it). Maybe those collaborating hairdressers could ask a customer who wants to cut that much hair in one go whether they would donate it?
ChatGPT is capable of cognitive empathy!
I just read an interview with Roberto Saviano (author of the book Gomorrah, in which he denounced organised crime in Italy), in which he says that his quest against the mafia has destroyed his life: not only does he need protection 24/7, he also feels very alone. In his new book, he explains the problems that the judge Giovanni Falcone ran into because of his fight against the mafia, which led to his death.
Saviano is now in "selling mode" on precisely this topic, but still, it made me think that improving the lives of whistleblowers and the like (such as him, or even the judge) may be an effective way to do good. Protection should not be neglected (although that may depend on the case), but in general, making their lives more livable and easier to navigate may help them focus better on their reporting work and help fight injustice. I don't think this has been investigated, so I just wanted to leave this comment here in case anyone wants to do preliminary research to assess whether it is doable and effective.
Measure growth from peak-to-peak or trough-to-trough rather than trough-to-peak – I knew crypto was in a huge bull market
What does this mean?
[I only quickly listened to the post, and I'm not a philosopher, nor do I know Ergodicity Economics deeply]
Maybe Ergodicity Economics (EE) [site, wikipedia] could be relevant here? It is relevant to the St. Petersburg paradox. It has to do with the expected value of a stochastic process not being equal to its time average (ensemble average ≠ time average).
I am sure there is much more to EE than this, but the one insight I took away when I first learned about EE is that when one of the possible outcomes of a game is losing everything, the expected value does not describe it well. And at least for x-risks, this is exactly the case (I consider catastrophic risks to be in this domain too).
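As a toy illustration of the ensemble-vs-time-average gap, here is a sketch of the standard multiplicative coin-flip example often used in EE writing (heads +50%, tails −40%). This is my own sketch with made-up parameters, not code from any of the linked sources:

```python
import random

random.seed(0)

def play(wealth, flips):
    # Multiplicative gamble: heads multiplies wealth by 1.5, tails by 0.6.
    for _ in range(flips):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

# Ensemble (expected-value) view: the per-flip expected growth factor is
# 0.5 * 1.5 + 0.5 * 0.6 = 1.05 > 1, so the expected value grows with the
# number of flips.
expected_growth = 0.5 * 1.5 + 0.5 * 0.6

# Time-average view: an individual's long-run growth rate per flip is the
# geometric mean sqrt(1.5 * 0.6) ≈ 0.95 < 1, so almost every individual
# trajectory decays towards zero despite the positive expected value.
typical_growth = (1.5 * 0.6) ** 0.5

print(f"expected growth per flip: {expected_growth:.3f}")
print(f"typical (time-average) growth per flip: {typical_growth:.3f}")
print(f"one simulated trajectory after 100 flips: {play(1.0, 100):.3g}")
```

The point of the sketch: the same gamble looks attractive by expected value but ruinous for any single player who keeps playing, which is the intuition behind why expected value handles "lose everything" outcomes poorly.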
It seems that EE is not very well known in the EA community; at least, the term is almost never mentioned in the forum, so I thought I would mention it here in case anyone wants to dig in more. I'm certainly not the best trained to go deeper into it, nor do I have the time to try.
One post that seems to address the issue of EE within EA is this one.
I hope this is a good lead!
[Epistemic status: I have just read the summary of the post by the author and by Zoe Williams]
Doesn’t it make more sense to just set the discount rate dynamically accounting for the current (at the present) estimations?
If/when we reach a point where the long-term existential catastrophe rate approaches zero, then it will be the moment to set the discount rate to zero. Now it is not, so the discount rate should be higher as the author proposes.
Is there a reason for not using in a dynamic discount rate?
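To make the idea concrete, here is a minimal sketch of a survival-based discount factor driven by a per-period catastrophe (hazard) rate that can be re-estimated over time instead of being fixed once. The function name and all the hazard values are my own made-up illustration, not anything from the post:

```python
def survival_discount(hazards):
    """Cumulative discount factors from a sequence of per-period hazard rates.

    hazards[t] is the estimated probability of existential catastrophe in
    period t; the discount factor for period t is the probability of
    surviving through periods 0..t.
    """
    factors = []
    survival = 1.0
    for h in hazards:
        survival *= (1.0 - h)  # chance of making it through this period
        factors.append(survival)
    return factors

# Hypothetical hazard estimates declining over time (e.g. as x-risk work
# succeeds): the implied discount rate falls towards zero along with them.
hazards = [0.02, 0.015, 0.01, 0.005, 0.001]
print(survival_discount(hazards))
```

Under this framing, "setting the discount rate to zero" is just the limiting case where the estimated hazards go to zero, so a dynamic rule would converge to it automatically.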
I really appreciate your effort defending a paper containing parts you strongly disagree with from (what you consider) bad arguments!
Julia Nefsky is giving a research seminar at the Institute for Futures Studies titled "Expected utility, the pond analogy and imperfect duties", which sounds relevant to this community. It will be on September 27 at 10:00-11:45 (CEST) and can be attended for free, in person or online (via Zoom). You can find the abstract here and register here.
I don't know Julia or her work, and I'm not a philosopher, so I cannot directly assess the expected quality of the seminar, but I've seen several seminars from the Institute for Futures Studies that were very good (e.g. from Olle Häggström; on Sep 20, Anders Sandberg is giving one as well).
I hope this is useful information.