Physics student with a wide range of interests.
Co-organizer of EA Tübingen.
Wasn’t it announced at launch that this would be implemented at some point?
I think people who think about existential risk should devote some of their energy to thinking about risks that are not themselves existential but might be existential if combined with other risks. For example, climate change is not an existential risk, but it plausibly plays a role in many combination existential risks, such as by increasing international tensions or by rendering much of the globe difficult to inhabit. Similarly, many global catastrophic risks may in fact be existential if combined with other global catastrophic risks, such as a nuclear war combined with a pandemic.
I think those would be called ‘context risks’. I haven’t seen that term in many places, but I first heard of it in Phil Torres’ book about x-risks.
Very good post, thank you for collecting everything.
I’d be interested in a closer look at the field of energy (especially nuclear fusion and modern nuclear energy technology); I don’t really know whether there are neglected areas or positions there.
Not an expert on the foundations of QM, but a few points on your question:
For some interpretations the mathematics does change somewhat (e.g. Bohmian Mechanics, Collapse Theories)
Some interpretations actually do make testable predictions (like the Many Worlds Interpretation), but they tend to be quite hard to test in practice
Some people have argued that some interpretations follow more naturally from the mathematics. In my opinion it’s pretty clear that Bohmian Mechanics postulates additional structure on top of the mathematics we already have, while many-worlds does not really do that (see the sketch below).
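To make the ‘additional structure’ point concrete, here is a minimal sketch in standard notation (my own illustration, not from the post I’m replying to; ψ is the wave function, Ĥ the Hamiltonian, and Q_k, m_k the position and mass of particle k). Many-worlds keeps only the first equation; Bohmian Mechanics keeps it and adds a guidance equation giving particles definite positions:

$$i\hbar \,\frac{\partial \psi}{\partial t} = \hat{H}\psi \qquad \text{(Schrödinger dynamics, shared by both)}$$

$$\frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\Bigg|_{(Q_1,\dots,Q_N)} \qquad \text{(extra Bohmian postulate)}$$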
How many human lives would it be worth sacrificing to preserve the existence of Shakespeare’s works? If we were required to engage in human sacrifice in order to save his works from eradication, how many humans would be too many?
This strikes me as a good way of making people think about the distinction between instrumental and terminal values.
I don’t see how using Intelligence(1) as a definition undermines the orthogonality thesis.
Intelligence(1): Intelligence as being able to perform most or all of the cognitive tasks that humans can perform. (See page 22)
This only makes reference to abilities and not to the underlying motivation. Looking at high-functioning sociopaths, you might argue we have an example of agents that often perform very well at almost all human abilities but still have attitudes towards other people that are quite different from most people’s, and that lack a lot of ordinary inhibitions.
This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tiling the universe with paperclips.
I don’t agree. I can easily imagine an agent that argues convincingly for moral positions by analysing huge amounts of data about human preferences, that uses statistical techniques to infer the behaviour and attitudes of humans, and that then uses this knowledge to maximize something like positive affection or trust, among many other things.
Good and important points. I feel the same care should perhaps be taken towards people who hold various kinds of anti-capitalist beliefs.
I share your irritation with this article. It struck me as a normal Vox opinion piece, one that should never have been posted on Future Perfect.
I think some of your points of criticism might be explained by the fact that we had to (or wanted to) keep the article below a certain length. But I also believe that when Dylan writes for Future Perfect about such political topics, he should make sure to argue every point carefully and to be especially rigorous in his reasoning.
was trying to figure out how opinionated the Wiki should be
Certainly an important question. 80k already explains why they don’t recommend certain careers, and it’s important for them to continue to do so. In my opinion we should make our reasons for considering a cause effective very clear, so they can be challenged. In practice, of course, how such an entry comes across depends strongly on the wording. I would prefer to word it like “Cause X has traditionally been considered not neglected enough/not tractable/too small by EA organisations. … According to that reasoning you’d have to show Y to establish X as an effective cause. …” rather than “X is not effective, because …”.
Thank you, this was very helpful. I think the possibility of doing volunteer work remotely is something that should be stressed more and communicated more frequently in EA local groups.
I share the impression that dedication is less encouraged in EA these days than five years ago
Not sure I agree with this. Certainly there is less focus on donating huge sums of money, but that may also be explained by the shift towards EA orgs now often recommending direct work. But I think the EA community as a whole now focuses less on attracting large numbers of people and more on keeping existing members engaged and dedicated and on influencing their career choices (if I remember correctly, the strategy write-ups from both CEA and EAF reflect this).
For instance, the recent strategy write-up by CEA mentions dedication as an important factor:
We can think of the amount of good someone can be expected to do as being the product of three factors (in a mathematical sense):
Resources: The extent of the resources (money, useful labor, etc.) they have to offer;
Dedication: The proportion of these resources that are devoted to helping;
Realization: How efficiently the resources devoted to helping are used
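Spelled out as a formula (my own restatement of the quoted model; the symbols are just hypothetical shorthand):

$$\text{Expected good} = R \times D \times V,$$

where $R$ is resources, $D$ is dedication (the fraction of $R$ devoted to helping), and $V$ is realization (how efficiently that fraction is used). Since it is a product, a shortfall in any one factor scales down the whole.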
But I agree that there is a lot of focus on ‘talent’, and dedication seems to take second place behind it. This may be defensible, but I think we could probably stress dedication a bit more, because talking about ‘dedication’ may turn fewer people off than talking about ‘talent’. To me, talent seems like something you either have or don’t, while dedication seems like something that ‘merely’ requires willpower. I would generally be more worried about ‘lacking talent’ than about ‘lacking dedication’, but I don’t really know how many people share that intuition.
Reading the book as an EPUB in iBooks, certain sentences in enumerations often have a bigger font size than the normal text (for instance in the section “A Proposed Adjustment to the Astronomical Waste Argument”). I can’t post a picture here, but I don’t think it was intended to be that way. Hope that helps.
If I could only recommend one book to someone, should I recommend this or Doing Good Better? I’m not really sure. What do you think?
Very helpful post. As someone running a German EA group, I didn’t really find anything that doesn’t apply to us in the same way it did to you.
One interesting thing is your focus on one-on-one conversations: we have never attempted anything like this, mostly because we thought it would be at least a bit weird for both parties involved. Did you have the same fear and were proven wrong, or is this a problem you run into with some people?
Thank you very much for this important work. This should be a key consideration for everyone and an important factor in career planning. I’ll make sure to bring it up in our local EA group at some point.
Great interview.