Working on strengthening democracy and EA community building. Please DM me if you’re interested in contributing to the above. Anonymous feedback form: https://www.admonymous.co/kuhanj
Humanizing Expected Value
Will’s list from his recent post has good candidates too:
AI character
AI welfare / digital minds
the economic and political rights of AIs
AI-driven persuasion and epistemic disruption
AI for better reasoning, decision-making and coordination
the risk of (AI-enabled) human coups
democracy preservation
gradual disempowerment
biorisk
space governance
s-risks
macrostrategy
meta
Yea, fair point. Maybe this is just reference class tennis, but my impression is that a majority of people who consider themselves EAs aren’t significantly prioritizing impact in their career and donation decisions, but I agree that for the subset of EAs who do, that “heroic responsibility”/going overboard can be fraught.
Some things that come to mind include how often EAs seem to work long hours/on weekends; how willing EAs are to do higher impact work when salaries are lower, when it’s less intellectually stimulating, more stressful, etc; how many EAs are willing to donate a large portion of their income; how many EAs think about prioritization and population ethics very rigorously; etc. I’m very appreciative of how much more I see these in EA world than outside it, and I realize the above are unreasonable to expect from people.
Strong agree. There are many more tractable, effective opportunities than people realize. Unfortunately, many of these can’t be discussed publicly. I’m hosting an event at EAG NYC on US democracy preservation Saturday at 4pm, and there will be a social near the venue right after at 5. I’d love for conference attendees to join! Details will be on Swapcard.
While I really like the HPMOR quote, I don’t really resonate with heroic responsibility, and don’t resonate with the “Everything is my fault” framing. Responsibility is a helpful social coordination tool, but it doesn’t feel very “real” to me. I try to take the most helpful/impactful actions, even if they don’t seem like “my responsibility” (while being cooperative and not unilateral and with reasonable constraints).
I’m sympathetic to taking on heroic responsibility causing harm in certain cases, but I don’t see strong enough evidence that it causes more harm than good. The examples of moral courage from my talk all seem like examples of heroic responsibility with positive outcomes. The converses of your bullet points also generally seem more compelling to me:
1) It seems more likely to me that people taking too little responsibility for making the world better has caused far more harm (e.g. billionaires not doing more to reduce poverty, factory farming, climate change, and AI risk, or to improve the media/disinformation landscape and political environment). The harm is just much less visible since these are mostly failures of omission, not execution errors. It seems obvious to me that the world could be much better off today, and that the trajectory of the future could look much better than it does right now.
2) Not really the converse, but I don’t know of anyone leaving an impactful role because they can’t see how it will solve everything? I’ve never heard of anyone whose bar for taking on a job is “must be able to solve everything.”
3) I see tons of apathy, greed, laziness, inefficiency, etc that lead to worse outcomes. The world is on fire in various ways, but the vast majority of people don’t act like it.
4) Overvaluing conventional wisdom also causes tons of harm. How many well-resourced people never question general societal ethical norms (e.g. around the ethics of killing animals for food, how much to donate, or how much to prioritize social impact over salary in your career)?
5) I’d argue EAs (and humans in general) are much more prone to prioritizing higher-probability/certainty, lower-EV options over higher-EV, lower-probability options (GiveWell donations over pro-global-health USG lobbying or political donations feels like a likely candidate). It’s very emotionally difficult to do something that has a low chance of succeeding. AI safety does seem like a strong counterexample in the EA community, but I’d guess a lot of the community’s prioritization of AI safety, and the specific work people do, has more to do with intellectual interest and it being high-status in the community than with rigorous impact-optimization.
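A toy expected-value calculation may make the asymmetry concrete. All numbers below are made up purely for illustration (they aren’t estimates for any real charity or intervention):

```python
# Toy illustration (all numbers hypothetical): comparing a near-certain,
# modest-impact option against a long-shot, high-impact option.

def expected_value(p_success: float, impact_if_success: float) -> float:
    """Expected impact = probability of success times impact if successful."""
    return p_success * impact_if_success

# Option A: high certainty, modest impact (e.g. a well-evidenced charity).
ev_certain = expected_value(p_success=0.95, impact_if_success=100)

# Option B: low probability, much larger impact (e.g. policy advocacy).
ev_longshot = expected_value(p_success=0.02, impact_if_success=20_000)

print(ev_certain)   # 95.0
print(ev_longshot)  # 400.0 -- higher EV despite only a 2% success rate
```

The point isn’t that the long shot always wins; it’s that a 2% success rate can still dominate on EV, even though it will feel like failure 98% of the time.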
Two cruxes for whether to err more in the direction of doing things the normal way: 1) How well you expect things to go by default. 2) How easy it is to do good vs. cause harm.
I don’t feel great about 1), and honestly feel pretty good about 2), largely because I think that doing common-sense good things tends to actually be good, and doing galaxy-brained, ends-justify-the-means things that seem bad to normal people (like committing fraud or violence) is usually actually bad.
Thank you for the kind words Jonas!
Your comment reminded me of another passage from one of my favorite Rob talks, Selflessness and a Life of Love:

“Another thing about the abolitionist movement is that, if you look at the history of it, it actually took sixty or seventy or eighty years to actually make an effect. And some of the people who started it didn’t live to see the fruits of it. So there’s something about this giving myself to benefit others. I will never see them, I will never meet them, I will never get anything from them, whether that’s people or parts of the earth. And having this long view. And somehow it cannot be, in that case, about the limited self. It cannot be, because the limited self is not getting anything out of it. [...] But how might we have this sense of urgency without despair? Meeting the enormity of the suffering in the world with a sense of urgency in the heart, engagement in the heart, but without despair. How can we have, as human beings, a love that keeps going no matter what? And we call that ‘equanimity.’ It’s an aspect of equanimity, that it stays steady no matter what. The love, the compassion stays steady. [...] If we’re, in the practice, cultivating this sense of keeping the mind up and bright, and it’s still open, and it’s still sensitive, and the heart is open and receptive, but the consciousness is buoyant, that means it won’t sink when it meets the suffering in the world. The compassion will be buoyant.”
Thanks Will! Our first chat back at Stanford in 2019 about how valuable EA community building and university group organizing are played an important role in me deciding to prioritize it over the following several years, and I’m very grateful I did! Thanks for the fantastic advice. :)
Taking ethics seriously, and enjoying the process
Taking uni organizing really seriously was upstream of MATS, EA Courses/Virtual Programs, and BlueDot (shoutout to Dewi) getting started among other things. IMO this work is extremely valuable and heavily under-prioritized in the community compared to research. Group organizing can be quite helpful for training communications skills, entrepreneurship, agency, grit, improved intuitions about theories of change, management, networking/providing value to other people, general organization/ability to get things done, and many other flexible skills that from personal experience can significantly increase your impact.
I wrote up some arguments for tractability on my forum post about the tractability of electoral politics here. I also agree with this take about neglectedness being an often unhelpful heuristic for figuring out what’s most impactful to work on. People I know who have worked on electoral politics have repeatedly found surprising opportunities for impact.
Not uncommon, and I’m happy to chat about efforts to change this. (This offer is open to other forum readers too, please feel free to DM me).
Celebrating your intention to do good effectively
Not that I know of! I can ask if they’re open to something in this vein.
How long does the happiness continue when you’re not meditating? A range of times would be helpful.
Initially the afterglow would last 30 minutes to a few hours. Over time it’s gotten closer to a default state unless various stressors (usually work-related) build up and I don’t spend enough time processing them. I’ve been trading off higher mindfulness to get more work done and am not sure if I’m making the right trade-offs, but I expect it’ll become clearer over time as I get more data on how my productivity varies with my mindfulness level.
How long does it take you to get into the state each time?
When my mindfulness levels are high it can be almost instantaneous and persist outside of meditation. When it’s not, I can still usually get to a fairly strong jhana within 30 minutes.
How many hours of meditation did you have to do before you could reliably achieve the state?
In my case maybe 5-8 hours of meditation on retreat before the earlier jhanas felt easy to straightforwardly access? I did get lucky experiencing a jhana quite early on during my retreat. I also found cold showers and listening to my favorite music pre-meditation made getting into a jhana much faster.
What percentage of the time when you try to get into the state do you succeed?
ATM I think 90-95%?
I intended to distinguish upregulated breathing/controlled hyperventilation like the linked video from (any kind of) meditation with the intention of getting into jhanas.
Fair and understandable criticisms. Some quick responses:
1) I’ve attempted to share resources and pointers that I hope can get people similar benefits for free, without signing up for a retreat (like Rob Burbea’s retreat videos, Nadia Asparouhova’s write-up with meditation instructions, and other content). Since I found most of these after my Jhourney retreat, I can’t speak from experience about their effectiveness. I’d be excited for more people to experiment and share what does and doesn’t work for them, and for people with more experience to share what’s worked for them (on the meditation front, emotion processing, and more). I also don’t intend to suggest that Jhourney has access to insights that are only discoverable by doing one of their retreats. They do seem to be taking the prospect that jhanas can be accessed quickly much more seriously than many others, and have encouraging results.

2) As I mentioned, my experience appears to have been somewhat of an outlier, and I don’t have a great understanding as to why. Insofar as whatever worked for me can help others, I aim to share it. That said, Twitter discourse about jhanas and Jhourney seems to match my impression: other unaffiliated people have described Jhourney retreats as generating many outlier positive experiences.
3) It doesn’t surprise me at all that there’s low-hanging fruit on the mindfulness front. Buddhist texts are very poorly (even anti-helpfully) translated, and there hasn’t been much serious exploration, optimization pressure, or investment into improving and democratizing mindfulness education and wellbeing. This extends beyond mindfulness: why did it take as long as it did for GLP-1 medications to become widespread? Many self-help interventions are incentivized against actually fixing people’s problems (e.g. therapists stop getting paid if they permanently fix your problems). Other orgs working in related areas also seem to generate very positive experiences, like Art of Accomplishment’s content and courses for processing emotions, making better decisions, and connecting better with others.
4) I don’t know Jhourney’s team well and don’t want to speak on their behalf (but I do think they’re well-intentioned). I’ve found their official and staff Twitter accounts share the most relevant instructions they provide on retreat—e.g. they publicly discuss cultivating positivity likely being more effective for accessing jhanas, forgiveness meditation (which I’m realizing I should add to the main post) and guided recordings, and many other insights.
My impression is that expected donations/fees for week-long meditation retreats are often in the $1000+ range (though granted this is for in-person retreats, and I haven’t explored this in detail). We did have daily personalized instruction, and staff were available on call throughout our retreat. Given how quickly Jhourney’s retreats sell out, from a profit-maximizing perspective it seems like they could be charging more. I also don’t know what they do with their profits. I wouldn’t be surprised if they donated a decent amount, or spent it in ways they think make sense on altruistic grounds. They say in their blog post about their plans that they aspire to change the lives of tens of millions with the following steps:
Build a school to demonstrate that it’s possible to transform wellbeing with meditation
Invest the money and attention from the school into technology to accelerate that process
Deliver superwellbeing more quickly and reliably
Thanks! You can fill out this form to get notified about future retreats. Their in-person retreats may well be worth doing too if you’re able to, and they generate similar results according to their survey. They’re more expensive and require taking more time off work, but given their track record I wouldn’t be surprised if they were worth the money and time. I have a friend who has done both an in-person and an online retreat with them and preferred the in-person one.
That said, I have a hard time imagining my experience being as positive doing the retreat in person, largely because I got a lot of value out of feeling comfortable expressing my emotions however felt natural (and crying in particular). I would not have felt comfortable potentially disrupting others while meditating in the same room.
And strong +1 to trying things. I wish I had read Romeo Stevens’ meditation FAQ (and the rest of his blog) years ago, and this excerpt in particular:

“There needs to be some sort of guiding principle on when to keep going and when to try something different. The answer, from surveys and measurements taken during longer term practice intensives, seems to be about 30 hours of practice. If a practice hasn’t shown some sort of tangible, legible benefit in your thinking process, emotional stability, or skillful behavior in the world it very very likely isn’t the practice for you right now. This doesn’t mean it is a bad practice or that others might not derive great benefit from it. This also doesn’t mean it might not be useful to you in the future. But it isn’t the practice for you right now. Granted, there are exceptions to every rule, and some people get something out of gritting their teeth and sticking with a practice for a long time. But I strongly suspect they could have had an easier time trying other things. 30 hours might sound like a long time, but it’s just a month of practice at one hour per day. This caps how much of a time waste any given technique is. In the beginning it is very likely that you can get away with less: two weeks of practice time should show some results. If you try lots of things for two weeks each and nothing works you may need to resort to the longer standard of 30 hours.”
Jhourney recommends approaching meditation like a scientist outside of sessions (e.g. considering experiments and variables to isolate), but with child-like playfulness while meditating. I’ve found that approach quite helpful. It led to an impromptu experiment to listen to music to amplify positive emotions while meditating, which IIRC preceded my first jhana of the retreat.
Speedrunning on-demand bliss for improved productivity, wellbeing, and thinking
Conditioned on human extinction, do you expect intelligent life to re-evolve with levels of autonomy similar to what humanity has now (which seems quite important for assessing how bad human extinction would be on longtermist grounds)? I don’t think it’s likely.
Maybe the underlying crux (if your intuition differs) is what proportion of human extinction scenarios (not including non-extinction x-risk) involve intelligent/agentic AIs, and/or other conditions which would significantly limit the potential of new intelligent life even if it did re-emerge. My current low-resilience impression is probably 90+%.
And the above considerations and credences make the question of how good the next intelligent species would be vs. humans fairly inconsequential.
The argument sure is a string :P