Yeah sorry I didn’t intend to disagree with you on whether it was a management dispute or an ethics dispute, just that it wasn’t only the issue you explicitly named.
on its own quick takes? controllable by anyone? or do you authorise it to post on your own quick takes?
(full disclosure, I don’t personally use twitter so I doubt I’ll do this, but maybe it’s useful to you to clarify)
L/acc, who think that LEEP have gone too far
This sounds like it’s disagreeing with the parent comment but I’m not sure if it is?
Thanks for the link! I'm sure there's a tonne of existing work in this area, and I haven't really evaluated to what extent this is already covered by it.
I don’t think the EA movement as a whole can sensibly be assigned a scope, really. But I think we should collectively be open to doing whatever reasonably practicable, ethical things seem most important, without restricting ourselves to only certain kinds of behaviour fitting that description.
Do you have a call to action here? Are you expecting that someone reading this on the forum has any ability to make it more (or less) likely to happen?
I broadly think it’s cool to be raising novel (to me) possibilities like this, and I think you’ve done a good job of illustrating that it’s not obviously out of line with existing practice. Thanks for writing it!
Minor formatting / typographical things: I think the image is misplaced from where the text refers to it. Also, weirdly, a lot of the single quotation marks in the text are duplicated?
I think this was an example of a disagreement they had, but not the whole disagreement. (Another alleged example was the thing where Tara didn’t want Sam to run some trading algorithm unattended, which he agreed to and then did anyway.)
I normally think of community health as dealing with interpersonal stuff, and wouldn’t have expected them to be equipped to evaluate whether a business was being run responsibly. It seems closer to some of the stuff they’re doing now, but at the time the team was pretty constrained by available staff time (and finding it difficult to hire), so I wouldn’t expect them to have been doing anything outside of their core competency.
Maybe a lesson is that we should be / should have been clearer about scopes, so there’s more of an opportunity to notice when something doesn’t belong to anyone?
Tools for shaping probability intuitions. You can give a bunch of events, causal relationships or implications between them, and probabilities for each, or their conjunctions, or conditional probabilities for such things. The tool will infer what you don't supply to the extent possible, point out contradictions between your conditional and absolute probabilities, and give you recommendations for how to resolve them.
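A minimal sketch of what the contradiction-checking core could look like for just two events, assuming a Python implementation (the `check_pair` interface and the choice of Fréchet bounds as the consistency test are illustrative assumptions on my part, not a design commitment):

```python
def check_pair(p_a, p_b, p_a_and_b=None, p_b_given_a=None):
    """Infer missing probabilities for two events A, B and flag contradictions."""
    problems = []

    # Infer the conjunction from the conditional, or vice versa.
    if p_a_and_b is None and p_b_given_a is not None:
        p_a_and_b = p_a * p_b_given_a  # P(A and B) = P(A) * P(B|A)
    elif p_b_given_a is None and p_a_and_b is not None and p_a > 0:
        p_b_given_a = p_a_and_b / p_a

    if p_a_and_b is not None:
        # Fréchet bounds: the conjunction can't exceed either marginal,
        # and can't fall below P(A) + P(B) - 1.
        upper = min(p_a, p_b)
        lower = max(0.0, p_a + p_b - 1.0)
        if p_a_and_b > upper:
            problems.append(
                f"P(A and B) = {p_a_and_b:.2f} exceeds min(P(A), P(B)) = {upper:.2f};"
                " consider lowering P(B|A) or raising P(B)"
            )
        if p_a_and_b < lower:
            problems.append(
                f"P(A and B) = {p_a_and_b:.2f} is below P(A) + P(B) - 1 = {lower:.2f};"
                " consider raising P(B|A), or lowering P(A) or P(B)"
            )

    return {"P(A and B)": p_a_and_b, "P(B|A)": p_b_given_a, "problems": problems}


# Example: P(B|A) = 0.9 implies P(A and B) = 0.72, which is impossible
# given P(B) = 0.5, so this gets flagged with a suggested fix.
print(check_pair(p_a=0.8, p_b=0.5, p_b_given_a=0.9))
```

The real tool would generalise this to many events with stated causal links, presumably via constraint propagation or fitting a joint distribution, but the two-event case already shows the infer-then-flag-then-recommend loop.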
I’m going to make a quick take thread of EA-relevant software projects I could work on. Agree / disagree vote if you think I should / should not do some particular project.
agree with “not a prediction market” but think “just an opinion poll” undersells it; people are evaluated and rewarded on their accuracy
I hear Will not as saying that going 35mph is in itself wrong in this analogy (necessarily), but that EA is now more-than-average vulnerable to attack and mistrust, so we need to signal our trustworthiness more clearly than others do.
I guess I think of caring about future people as the core of longtermism, so if you’re already signed up to that, I would already call you a longtermist? I think most people aren’t signed up for that, though.
Maybe? This depends on what you think about the probability that intelligent life re-evolves on earth (it seems likely to me) and how good you feel about the next intelligent species on earth vs humans.
Yeah, it seems possible to be longtermist but not think that human extinction entails loss of all hope, but extinction still seems more important to the longtermist than the neartermist.
IMO, most x-risk from AI probably doesn’t come from literal human extinction but instead AI systems acquiring most of the control over long run resources while some/most/all humans survive, but fair enough.
valid. I guess longtermists and neartermists will also feel quite different about this fate.
Perhaps we did not emphasise enough the simple point “never commit a crime”. As I said in the previous point, there have been extensive warnings against naive “ends justify the means” thinking from many leaders (MacAskill, Ord, Karnofsky, CEA Guiding Principles, 80,000 Hours career advice, etc).
Nevertheless, we could do even more, for example in 80,000 Hours resources or career/student groups, to emphasise this point. There didn’t seem to be many explicit “don’t ever commit a crime” warnings (I assume because this should have been so blindingly obvious to any reasonable or moral person).
There are many immoral laws in the world, particularly but not exclusively if you look outside Europe and the US. For example, EAs living in countries where homosexuality is illegal should, I think, have our support in breaking the law if they want to.
In fact, I think most people with a cursory understanding of the history of activism will be aware of the role that civil disobedience has sometimes had in correcting injustice, so breaking laws can sometimes be even virtuous. In extreme cases, one can even imagine it being morally obligatory.
I think a categorical “never commit crimes” is hard to take seriously without some explicit response to this context. I definitely don’t think we should claim it’s obvious that no-one should ever break the law.
It is intuitively “obvious” that Sam’s crimes aren’t crimes like these. (I pretty much always second-guess the word obvious, but I’m happy to use it here.) But that’s because we can judge for ourselves that they’re harmful and immoral, not because they’re against the law. Perhaps someone could make an argument that sometimes you should follow the law even when your own morality says you should do something else, but I don’t think it’s going to be a simple or obvious argument.
Longtermism suggests a different focus within existential risks, because it feels very differently about “99% of humanity is destroyed, but the remaining 1% are able to rebuild civilisation” and “100% of humanity is destroyed, civilisation ends”, even though from the perspective of people alive today these outcomes are very similar.
I think relative to neartermist intuitions about catastrophic risk, the particular focus on extinction increases the threat from AI and engineered biorisks relative to e.g. climate change and natural pandemics. Basically, total extinction is quite a high bar, and most easily reached by things deliberately attempting to reach it, relative to natural disasters which don’t tend to counter-adapt when some survive.
Longtermism also supports research into civilisational resilience measures, like bunkers, or research into how or whether civilisation could survive and rebuild after a catastrophe.
Longtermism also lowers the probability bar that an extinction risk has to reach before being worth taking seriously. I think this used to be a bigger part of the reason why people worked on x-risk when typical risk estimates were lower; over time, as risk estimates increased, longtermism became less necessary to justify working on them.
Answering this question depends a little on having a sense of what the “non-longtermist status quo” is, but:
- I think there’s more than one popular way of thinking about issues like this;
- in particular, I think it’s definitely not universal to take existential risk seriously;
- I think common sense and the status quo include some (at least partial) longtermism, e.g. popular rhetoric around climate change has often assumed that we were taking action primarily with our descendants in mind, rather than ourselves.
This seems like an impressive set of capabilities, exciting to hear about the new org :)
Did CSER write more about your work for them anywhere? Interested to read more about it.