I think it’d be bizarre if the war in Ukraine didn’t shift our funding priorities in some way. WW3 now appears likelier, and likelier to happen sooner. Presumably, this should shift more focus towards (a) preventing it and (b) minimizing its harm.
The episode was fun to listen to! It was also my introduction to the podcast, which I’ll be listening to more of.
There were some good takes, but I was not impressed by the following section of Samo’s:
So, as US hegemony recedes globally, I think there will be more wars. Not necessarily in Europe, though I predict there will be wars in the following parts of Europe:
I think the Mediterranean, there will be one, there will be some large wars fought in the Mediterranean as there will be a new balance of power between European countries that aspire to be Mediterranean powers, especially France, and to a much lesser extent, Spain and Italy versus Eastern rising powers, such as Turkey.
He’s predicting that members of NATO will fight each other? What, in the next 50 years? Where would a Metaculus question about that poll? At less than 5%, I’d guess.
… And I think there’ll be wars in all of the former Soviet spaces because the balance of power is going to be more and more unfavorable to the American side as the US slowly and inevitably has to withdraw from the world because its relative economic and political weight is smaller.
Wait, so there’ll be wars because the US is relatively weaker compared to … who exactly? China? Because I certainly doubt it’d be Russia who’ll rise from the ashes riding glorious Khrushchev levels of growth.
So I’m not talking about an absolute decline. It’s just that the very fact that China has risen means that even if Russia continues to grow weaker in the future, possibly due to sanctions, possibly due to political instability, et cetera, et cetera, China can still start projecting power all over the place.
That makes it seem like China will be the driving force behind wars in “all the Soviet spaces”? And Russia will be happy with that? That seems wild.
[Crosspost] How do Effective Altruist recommendations change in times of war? [From Marginal Revolution]
Thank you for writing that Ed!
Hi, I’m afraid I don’t have any terribly helpful advice. My family and other people I know are having the same struggle.
The best I can come up with is that the Metaculus community gives a 20% chance of WW3 breaking out before 2050. That’s definitely way too high, but I assume that most of the probability mass is distributed somewhat evenly over time. The same community also places a 2% chance on a NATO nation invoking Article 5 in the next year, which would presumably not equate to nuclear war in the same circumstances.
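Back-of-the-envelope, and only under the assumption stated above (the 20% is spread roughly evenly over the ~28 years to 2050, with years treated as independent), the implied risk in any single year is fairly small:

```python
# Convert a cumulative 20% chance of WW3 by 2050 into an implied
# constant annual probability, assuming independent years.
p_by_2050 = 0.20
years = 28  # roughly 2022 through 2050

# Solve (1 - annual)^years = 1 - p_by_2050 for the annual probability.
annual = 1 - (1 - p_by_2050) ** (1 / years)
print(f"implied annual probability: {annual:.2%}")  # about 0.8% per year
```

This is just a sketch of the "evenly distributed" assumption; if the probability mass is instead concentrated in particular crisis years, the near-term risk could look quite different.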
However, I think recent events should make the EA community ask themselves and each other “what should we do if these risks increase?” At what probability of WW3 do we start shifting EA resources towards work on prevention / recovery? At what chance do we as a community start moving to safer locations?
My family is trying to formulate a plan that’s something like “if the probability of WW3 in the next year surpasses 33%, then we’re going to temporarily relocate to another country until the tensions subside peacefully.” Obviously that’s not possible for many, but talking about it has settled our nerves a bit.
I hope you find some peace.
Thank you! I edited the post to reflect your updated text.
Shortening & enlightening dark ages as a sub-area of catastrophic risk reduction
Would you be able to provide a plainer-language summary of the paper’s conclusions or arguments? I think I’m interested in the topics discussed in the paper, but it’s unclear to me what the arguments actually are, so I’m inclined to disengage.
Take this sentence, which seems important:
“We argue that while the moral uncertainty approach cannot vindicate an exceptionless public justification principle, it gives us reason to adopt public justification as a pro tanto institutional commitment.”
I do not understand this and so I do not see how this is a valuable addition to the critical topic of moral uncertainty.
That crisis was resolved when President Dwight Eisenhower sent the National Guard to Arkansas to integrate Central High School.
Small note: a division of the US military was called in after Faubus ordered the Arkansas National Guard to block integration. I think the details show how the situation was one of the most precarious federal–state conflicts since the Civil War, and that would influence how I’d respond to the question.
A related thought:
Some humans are much less sensitive to physical pain.
1. Could an observer correctly differentiate between those with normal and abnormally low sensitivity to pain?
2. For humans who’re relatively insensitive to pain, but still exhibit the appropriate response to harm signals (assuming they exist), would analgesics diminish the “appropriateness” of their response to a harm signal?
Edit: This comment now makes less sense, given that Abby has revised the language of her comment.
Abby, I strongly endorse what you say in your last paragraph:
Please provide evidence that “dissonance in the brain” as measured by a “Consonance Dissonance Noise Signature” is associated with suffering? … I’m willing to change my skepticism about this theory if you have this evidence.
However, I’d like to push back on the tone of your reply. If you’re sorry for posting a negative non-constructive comment, why not try to be a bit more constructive? Why not say something like “I am deeply skeptical of this theory and do not at this moment think it’s worth EAs spending time on. [insert reasons]. I would be willing to change my view if there was evidence.”
Apologies for being pedantic, but I think it’s worth the effort to try and keep the conversation on the forum as constructive as possible!
I found that this episode increased my faith in the EA community a little bit. One of my caricatures of other EAs when I first found the community was “it’s good these people exist but they’d make terrible friends because they’re so impartial they’d leave me in a rut to squeeze the epsilon out of an EV that bears a resemblance to a probability.”
It was a bit of an (irrational?) fear that EAs and EA orgs were constituted by hyper-utilitarians that’d sacrifice their friends / employees if the felicific calculus didn’t add up.
But most people I’ve met in (at least my section of) the EA community have been unusually kind and compassionate people. Some I am very glad to call my friends. And I don’t think they would jettison me if I gained a debilitating illness, which makes me more motivated to do good.
Note: Of course there’s instrumental utilitarian reasons to act in a manner more consistent with commonsense decency.
---------------------------------------------
This made me want to hear more narratives like this one that give a helpful but honest report of someone’s experience with mental health. I’ve thus far avoided the extant literature out of fear that reading or listening to accounts of people experiencing severe mental illness would degrade my own well-being.
In particular, I’d like to hear more stories from other people in the EA community (there have been a few on the forum) who weren’t as lucky as Howie.
I’ve tracked my time for a year working remotely doing research and it comes out to between 25 and 35 hours a week.
I’d guess a little more than half of that is deep work, where I’m fully engaged and undistracted. Most of the time this means taking no breaks for a several-hour stretch every day. On reflection, it’s not uncommon for at least half of that deep work to turn out to be misguided or not the best use of my time.
I’m not sure what to picture when I hear a count of weekly hours for remote work. Working 40 hours a week at an office or on a job site can be relaxing compared to the weeks where I track 30 hours or less, since it’s common to spread five hours of actual work across a “normal” eight-hour work day.
Below I describe what different quantities of hours worked look and feel like. Basically, my guess is that 1 hour of remote work for me = 1.5 hours of “office work”, so 20 (40) hours worked = 30 (60) hours spent “at the office”.
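For concreteness, that conversion can be sketched in a couple of lines (the 1.5 multiplier is just my personal rule of thumb, not a measured figure):

```python
# Rough rule of thumb: 1 tracked remote hour ~ 1.5 hours of "office time".
REMOTE_TO_OFFICE = 1.5

def office_equivalent(remote_hours: float) -> float:
    """Return the office-time span a count of tracked remote hours displaces."""
    return remote_hours * REMOTE_TO_OFFICE

for h in (20, 30, 40):
    print(f"{h} remote hours ~ {office_equivalent(h):.0f} office hours")
```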
In a 20–25ish-hour work week (if not caused by low mood), I’m typically balanced and happy, feeling like I have most of my afternoons and evenings free to exercise, see friends, and create things. This would be ideal to maintain, and having these weeks keeps me from burning out. (Aside: the intensity of my work weeks normally cycles between high and low.)
25–30 hours. In these weeks I maintain the habits I find essential to keep going, but only just. On half the days I finish work, immediately go for a run before it gets dark, return home to frantically cook dinner, then squeeze in another hour of work before winding down, which normally doesn’t include a discrete leisure pursuit beyond listening to a podcast while tidying up the office-house. On two, maybe three (if I’m lucky), work days I do something fun other than exercise for at least an hour.
30–35. I have one or two periods during the work week spent doing something deliberately not work-related. My relationships feel a bit strained; any quiet time is spent in transit or doing chores. I imagine this as hard to maintain, and in the deadline weeks (or, god forbid, months) where this persists, I feel myself wearing thin.
35–40. If I’m working this much, something has gone wrong. There is nothing but work. It feels as if I spend the whole of every work day engaged in work or thinking about it. I may not leave the house for a couple of days. A few chunks of the weekend normally slip back into doing something “light” and “easy”. Every non-dinner meal (and meals are often few and hastily prepared) is eaten at my desk, which I’m at minutes after waking. There is sometimes a break for dinner, but if I can, I’ll eat that at my desk too. During these (rare) weeks, things start to fall apart.
These weeks are frequently followed by a hangover week where I crash, and work 20ish hours.
I am imagining movies with heroes where it wasn’t their job (so not the soldier in 1917 / most war movies) or they weren’t in some sense “chosen” (most superhero / fantasy movies).
Seven Samurai: where some samurai reluctantly attempt to protect a village.
Princess Mononoke: I just think this is a good hero story.
Hacksaw Ridge (I didn’t really want to include any war movies, but I think this merits inclusion because it’s about a conscientious objector. Very violent.)
Haven’t seen Hotel Rwanda but it may merit inclusion.
The title of this post did not inform me about the claim “that EAs have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact -- [and] this is a deeply rooted mistake.”
I came very close to not reading what is an interesting claim I’d like to see explored, because it comes near the end and there was no hint of it in the title or the start of the post. Since this post is still relatively new, you may want to consider revising the title and layout to communicate more effectively.
After the apocalypse
I think this is interesting in and of itself, but also related to something I haven’t seen explored much in general: how important is it that EA ideas persist for a long time? How important is it that they are widely held? How would we package an idea to propagate through time? How could we learn from religions?
More directly to the topic: is this a point in favor of EAs forming a hub in New Zealand?
Comparative literature studies of whether ambitious science fiction (this might not be well operationalized) is correlated with ambitious science fact.
I’ve seen some discussion around this topic but I feel like it hasn’t been satisfyingly motivated. For personal reasons I’d like to hear more about this.
Nice post and useful discussion. I did think this post would be a meta-comment about the EA forum, not a (continued) discussion of arguments against strong longtermism.
One thing I would note is that cryptocurrency as a cause area is independent of whether cryptocurrency has a net beneficial or net harmful effect. Cryptocurrency could potentially destabilize global financial systems, so if one takes a less positive view of it, regulating cryptocurrency (whether by governments or by self-regulation within the ecosystem) and making sure at least some cryptocurrencies have a positive impact (thus reducing the overall net harm) could still be a potential cause area.
Good point! I think I’d like to see more spelling out of how exactly it could transform things (for better or worse). With my lame understanding: once I see that cryptocurrency is a solid store of value, then I can see it potentially threatening central banks and the ability for states to generate revenue through taxes. However, I find it hard to believe governments would let cryptocurrencies get to that point—if cryptocurrencies are in fact capable of getting to that point.
Another thing worth pointing out about cryptocurrencies is how they interact with the digitization of the economy. In general, greater digitization may not be a bad thing. But it’s possible that cryptocurrency-led digitization may make corruption easier (I imagine it would behave similarly to cash).
Howdy,
The outlook for cryptocurrencies as a cause area seems rather mixed from my pretty uninformed viewpoint. I’d like to highlight some reasons outside of their speculative potential. I think the best argument can be made for cryptocurrencies adding value through poverty alleviation.
Epistemic disclosure: Any knowledge comes from reading the news not studying the topic.
Pros:
May make it much, much easier and far less costly to send remittances, which make up larger and arguably more helpful inflows than aid in many LMICs. I think this is worth thinking about.
Similarly, it can compete with incompetent central banks and provide a means of transaction in a country experiencing hyperinflation or economic collapse. This seems potentially valuable, but I wonder if it could release pressure on bad governments in critical moments. I’m not too interested in how it competes with what I view as generally competent central banks (found in most HICs).
Experiments with new governance strategies and voting techniques. Those techniques can obviously be tried in other places, but perhaps not at the same scale or as quickly.
Cons:
Massive energy consumption. I’m thinking of Bitcoin, the mining of which reportedly consumes ~0.3% of the world’s energy. If so, that’s a lot. Supposedly miners mostly use renewables, but it wouldn’t surprise me if Bitcoin’s impact on the world were net negative even if only 10% of the energy used to mine it came from coal. I’m also not sure whether that energy would counterfactually be used anyway.
This one is not strongly held: it may be a talent black hole. I don’t know if this view is a vestige of once having more leftist sympathies, but I still worry that complex financial markets can siphon off scientists who could otherwise have a much higher impact.
General reservation: for comms reasons, I think EAs should be conservative and cautious on the margin around controversial topics or cause areas that would distort our image in a way that could damage our long-run growth.
Mixed: Making drug / illegal markets more efficient.
Funds criminal organizations. This view may be a little old-fashioned, but criminal organizations have large negative externalities, particularly Latin American drug cartels, which may benefit from cryptocurrencies.
Increases safety of consumers of illegal drugs.
One generic argument I see raised in this post is “here’s a way that lots of money could be made, so earn to give / save.” Keeping the mostly-efficient-market hypothesis as a prior, I’m skeptical of most propositions of that form.
In summary: cryptocurrencies probably aren’t going anywhere. Supporting their use in failed states and for remittances seems potentially useful if a currency can be found that’s not so volatile. Making them less energy-hungry seems potentially useful if someone is well placed to do so.
I really appreciated this, having watched GPI come out with papers that seemed really neat but incomprehensible to me.