email: jurkovich.nikola@gmail.com
Nikola
Getting Actual Value from “Info Value”: Example from a Failed Experiment
An organic pitch for undergrads: do it when people ask what your major is
Microdooms averted by working on AI Safety
Not all x-risk is the same: implications of non-human-descendants
A model for engagement growth in universities
EA outreach to high school competitors
Prefrosh outreach is a low hanging fruit
How to Organize a Social
I think that it’s relevant that, for some veg*ns, it would take more energy (emotional energy/willpower) not to be veg*n. For instance, having seen some documentaries, I am repulsed by the idea of eating meat due to the sheer emotional force of participating in the atrocities I saw. Maybe this is an indicator that I should spend more time trying to align my emotions to my ethical beliefs (which would, without the strong emotional force, point towards me eating animal products to save energy), but I’m not sure if that’s worth the effort.
Maybe this implies that we shouldn’t recommend documentaries on animal farming to EAs because it would lead to emotional bias against eating animal products? But I’m pretty sure seeing those documentaries expanded my moral circle in a very good way.
Do sour grapes apply to morality?
Good practices for changing minds
Helping newcomers be more objective with career choice
Eric Schmidt on recursive self-improvement
The Precipice—Summary/Review
I think grant evaluators should take into account their intuitions on what kinds of research are most valuable rather than relying on expected value calculations.
For EV calculations where the future is part of the equation, I think using microdooms as a measure of impact is pretty practical and can resolve some of the problems inherent in dealing with enormous numbers, because many people have cruxes which are downstream of microdooms. Some think there’ll be 10^40 people, some think there’ll be 10^20. Usually, if two people disagree on how valuable the long-term future is, they don’t have a common unit of measurement for what to do today. But if they both use microdooms, they can compare things 1:1 in terms of their effect on the future, without having to flesh out all of their post-AGI cruxes.
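As a toy illustration (the project names and every number below are made up, not estimates of anything real), here’s a minimal Python sketch of how two people with wildly different views of the future’s size still rank the same interventions identically once impact is denominated in microdooms:

```python
# Toy comparison of two interventions denominated in microdooms.
# All names and numbers are made up for illustration.

MICRODOOM = 1e-6  # one microdoom = a 1-in-a-million change in P(existential catastrophe)

# Two people disagree wildly about how many future people are at stake...
future_people_by_person = {"A": 1e40, "B": 1e20}

# ...but can still agree on how much each project shifts P(doom).
microdooms_averted = {"project_x": 3.0, "project_y": 0.5}

# The ranking by microdooms is the same for both, even though the implied
# "expected future people saved" differs by 20 orders of magnitude.
for project, md in sorted(microdooms_averted.items(), key=lambda kv: -kv[1]):
    implied = {who: md * MICRODOOM * n for who, n in future_people_by_person.items()}
    print(f"{project}: {md} microdooms averted "
          f"(A: {implied['A']:.1e} expected people, B: {implied['B']:.1e})")
```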
Nice to see new people in the Balkans! I’d be down to chat sometime about how EA Croatia started off :)
Strong agree with the idea that we should emphasize actions people are taking and avoid hero-worship-like phrases. I was mostly using my own mental shorthand when I said “superhuman” and forgot to translate to other-people-speak.
Regarding the makeup of fellowship groups, I think giving people an option to attend some socials which are generally attended by highly engaged people could be good, so that, if there’s a lack of engagement in their cohorts, they can make up for it by finding a way to interact with engaged people somewhere else.
Haven’t thought much about what was most important about the Cambridge residencies, but some important aspects are definitely:
Encouraging us to think big (aim for us one day becoming as good as the best groups, and then even better)
Providing advice and support with organizing
Holding intro talks and events (Kuhan has a very good intro presentation), and having one-on-ones with promising organizers
Check out this post. My views from then have slightly shifted (the numbers stay roughly the same), towards the following (a rough numeric sketch follows the list):
If Earth-based life is the only intelligent life that will ever emerge, then humans + other Earth life going extinct makes the EV of the future basically 0, aside from non-human Earth-based life optimizing the universe, which would probably be less than 10% of non-extinct-human EV, because:
Humans being dead updates us towards other stuff eventually going extinct
Many things have to go right for a species to evolve pro-social tendencies in the way humans did, meaning it might not happen before the Earth becomes uninhabitable
This implies we should worry much more about X-risks to all of Earth life (misaligned AI, nanotech) per unit of probability than X-risks to just humanity, because all of Earth life dying would mean that the universe is permanently sterilized of value, while some other species picking up the torch would preserve some possibility of universe optimization, especially in worlds where CEV is very consistent across Earth life
If Earth-based life is not the only intelligent life that will ever emerge, then the stakes become much lower because we’ll only get our allotted bubble anyways, meaning that
If humans go extinct, then some alien species will eventually grab our part of space
Then the EV of the universe (that we can affect) is roughly bounded by how big our bubble is (even including trade, because the most sensible portion of a trade deal is proportional to bubble size), which is probably on the scale of tens of thousands to billions of light-years(?) wide, bounding our portion of the universe to probably less than 1% of the non-alien scenario
This implies that we should care roughly equally about human-bounded and Earth-bounded X-risks per unit of probability, as there probably wouldn’t be time for another Earth species to pick up the torch between the time humans go extinct and the time Earth makes contact with aliens (at which point it’s game over)
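For concreteness, here’s a minimal sketch of the two branches above, using only the rough numbers I gave (≤10% of non-extinct-human EV if other Earth life picks up the torch, ≤1% if aliens claim the rest of space); the scenario function and its 0-to-1 normalization are just an illustrative framing, not anything precise:

```python
# Rough restatement of the two scenarios above, using only the ballpark
# numbers from the list. Normalized so the best case (humans survive,
# no aliens) = 1. Purely illustrative.

TORCH_FRACTION = 0.10    # value if only non-human Earth life is left to pick up the torch
BUBBLE_FRACTION = 0.01   # value reachable if aliens eventually claim the rest of space

def future_value(humans_extinct: bool, earth_sterilized: bool, aliens_exist: bool) -> float:
    """Rough EV of the reachable future under each scenario, on a 0-1 scale."""
    ceiling = BUBBLE_FRACTION if aliens_exist else 1.0
    if earth_sterilized:
        return 0.0  # the universe is permanently sterilized of Earth-originating value
    if humans_extinct:
        # With aliens, another Earth species likely has no time to pick up the
        # torch before contact; without aliens, it might preserve a small fraction.
        return 0.0 if aliens_exist else TORCH_FRACTION * ceiling
    return ceiling

for aliens in (False, True):
    print(f"aliens exist = {aliens}: "
          f"humans survive -> {future_value(False, False, aliens):.2f}, "
          f"humans extinct -> {future_value(True, False, aliens):.2f}, "
          f"Earth sterilized -> {future_value(True, True, aliens):.2f}")
```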
Thanks, you’re completely right, that sounds negative. Changed the title to “Helping newcomers be more objective with career choice”, which probably gets across what we’re trying to get across better.
Thank you for writing this. I’ve been repeating this point to many people and now I can point them to this post.
For context, for college-aged people in the US, the two most likely causes of death in a given year are suicide and vehicle accidents, both at around 1 in 6000. Estimates of global nuclear war in a given year are comparable to both of these. Given an AGI timeline of 50% by 2045, it’s quite hard to distribute that 50% over ~20 years and assign much less than 1 in 6000 to the next 365 days. Meaning that even right now, in 2022, existential risks are high up on the list of most probable causes of death for college-aged people (assuming P(death|AGI) is >0.1 in the next few years).
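To make that arithmetic concrete, here’s a back-of-the-envelope sketch (the 2045 date, the 50%, the 1-in-6000 baseline, and P(death|AGI) > 0.1 are the assumptions above; the uniform and linearly backloaded spreads are just two illustrative ways of distributing the 50%):

```python
# Back-of-the-envelope version of the arithmetic above. The 2045 date, 50%
# probability, 1-in-6000 baseline, and P(death | AGI) > 0.1 are the stated
# assumptions; the two ways of spreading the 50% over the years are illustrative.

baseline_annual_risk = 1 / 6000          # ~ suicide or vehicle accidents, US college-aged
p_agi_by_2045 = 0.5
years = 2045 - 2022                      # ~23 years from the time of writing
p_death_given_agi = 0.1

# Option 1: spread the 50% uniformly over the period.
p_agi_next_year_uniform = p_agi_by_2045 / years

# Option 2: backload it, weighting later years more heavily (linear weights).
weights = [i + 1 for i in range(years)]
p_agi_next_year_backloaded = p_agi_by_2045 * weights[0] / sum(weights)

for label, p_agi_next_year in [("uniform", p_agi_next_year_uniform),
                               ("backloaded", p_agi_next_year_backloaded)]:
    annual_death_risk = p_agi_next_year * p_death_given_agi
    print(f"{label:>10}: P(death via AGI in the next year) ≈ {annual_death_risk:.1e}, "
          f"{annual_death_risk / baseline_annual_risk:.1f}x the 1-in-6000 baseline")
```

Even the backloaded spread leaves the next-year risk on the order of the 1-in-6000 baseline, which is the point of the comparison above.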
One project I’ve been thinking about is making (or having someone else make) a medical infographic that takes existential risks seriously, and ranks them accurately as some of the highest probability causes of death (per year) for college-aged people. I’m worried about this seeming too preachy/weird to people who don’t buy the estimates though.