Still feeling a bit disillusioned after pursuing academic research up to post-doctoral level and then spending some time teaching languages and working at a democracy NGO, I feel that I haven't found a way to do good for the world and sustain myself and my wife at the same time.
Haris Shekeris
Aaah ok, that helps a lot!
Plus I think I had originally misread your (and Katja's) piece, at least the summary; now that I've had a little more sleep I think I understand a bit better what you're on to!
Best Wishes,
Haris
Hello :) - apologies and provisos first: I admit that I haven't read Katja's post, so what I say may already be covered by her. I don't know if this is relevant, but I feel (stress, feel!) that a qualitative difference between states and corporations is that the former are (or at least ought to be) accountable to their citizens (and, in a weaker sense, to all citizens of the world), and their function is the wellbeing and protection of their citizens, whereas corporations are accountable only to their shareholders and their primary function may be to get as rich as possible. So the motivation to activate AI (here I'm a bit ignorant; I don't know whether there is such a thing as an 'activate' or kill switch that would prevent an AI from becoming fully autonomous or surpassing humans) will be different for governments and firms. Governments, as with nuclear weapons, may decide to keep AI as a form of deterrence and leave it unactivated, whereas corporations may not.
I really hope this is pertinent and helpful and not too ignorant!
Best Wishes,
Haris
Yeah, I agree that what's most important is the bigger picture, and on that I totally agree with your conclusions!
(Though, having just listened to Ian Morris's podcast on his history book (here: https://80000hours.org/podcast/episodes/ian-morris-big-picture-history/), I had the thought that politics may also complicate the quantification of nuclear-conflict scenarios: politically speaking, use of a nuclear weapon by, say, Russia would be qualitatively different from use by, say, Pakistan, because (for lack of a better word) there is a 'hierarchy' of states judged by their power on the world stage. That is something I think would be hard for a model to capture. But I may be wrong and you may have covered this, and as I said before, I do agree with your conclusions and estimates.)
Wow, many thanks, quite an eye-opener, though I'm quite new to the literature on this question and to the forum itself!
Just a worry about the modelling, which may have been taken care of anyway through the mention of inferential uncertainty (I haven't yet checked the definition of this).
So, here goes: I wonder whether including wars from more than, say, 20 years ago in the calculations (for example, for conventional war injuries) is pertinent, since the conditions then were significantly different. For example, there was no internet and no possibility of cyber-warfare, and the world was a far less interconnected place.
More generally, I wonder whether, even looking only at the last ten or twenty years, or even the last two, the political conditions render each conflict a sui generis event, and whether this should be a worry for any modeller.
Best Wishes,
Haris
Should effective altruists be afraid of the popular dictum 'the road to Hell is paved with the best of intentions'? In other words, how can we be sure that our interventions will be on the right side of history long after we're dead, and that they won't end up causing great misfortune despite our stated intention to do good?
A bit of a newbie in EA (two to three weeks of reading and discovering stuff), so this may prove to be quite irrelevant, but here goes anyway. I'm wondering if EAs should be worried about stories like the following (if needed, I think I can find the scientific literature behind it):
https://www.sciencetimes.com/articles/40818/20221104/terrifying-model-provides-glimpse-humans-look-year-3000-overusing-technology.htm
My worry is that the standard EA literature, which assumes there will be thousands of generations to come if humans are left alone, may overlook some mundane effects or scenarios such as those stemming from studies like this.
An example, based on the above, could be that humans in the future are unrecognizable compared with today, but for the mundane reason of using well-established technologies that already exist (laptops and smartphones). An unlikely extension of this may be that in, say, 1,000 years Homo sapiens goes extinct because evolution optimised for software use (I mean smartphones and laptops) has interfered with the reproductive system (or simply because people rationally decided they no longer wanted to have sex, whether for pleasure or for child-raising purposes).
Another example could be the long-term effects of substances in the body (unknown unknowns at the moment, though there is the example of fish turning hermaphroditic due to exposure to antidepressants in a lake in the 90s, something that alerted people to the effects of small yet steady concentrations of medicines in human bodies), which again change the average human body. An example of such a scenario would be if we discovered soon, say in 2025, that concentrations above, say, 2 μg of microplastics in the gut begin to seriously degrade sperm or egg quality, rendering reproduction impossible.
Of course, we can always assume that major scientific bodies may produce advice to reverse such adverse effects, but what if that advice is only as effective as anti-smoking campaigns? (Imagine a campaign today to urgently reduce internet time to 30 minutes per day in advanced Western countries because a cutting-edge scientific report had linked it to a rise in deadly brain tumours: how would that scenario play out? My predictions would involve some sort of denialism and conspiracy theories, as well as rioting if a technological fix doesn't come fast, to say the least.) Remember that even with COVID, which was a global pandemic, the desired response (for example, everybody or most people in the world getting vaccinated so that they don't die and, altruistically, don't spread the virus, at least while there was uncertainty about how deadly the virus would be) largely failed because humanity brought out its worst self (politics among countries, such as securing more vaccines for their own citizens, or pharmaceutical companies maximizing their profits, to cite just two examples).
Once again, apologies if this is a bit off-topic or totally misses the point of EA.