Oh man, I really love this. I used to do door knocking / phone banking in high school, and it was quite fun. What wasn’t fun was struggling to find candidates I really believed in. But based on everything I’ve read about Carrick, and on meeting him, I think this is a really special opportunity to be part of something awesome. Count me in.
Phosphorous
[Question] Seeking Math + Programming Accelerated Learning Advice
Hey thank you for this comment. We actually started by thinking about P(extinction) but came to believe that it wasn’t relevant, because in terms of expected value, reducing P(extinction) from 95% to 94% is equivalent to reducing it from 3% to 2%, or from any other amount to any other amount (keeping the difference the same). All that matters is the change in P(extinction).
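To make that concrete, here’s a quick sanity check in Python (the number of lives at stake and all the probabilities are purely illustrative values I picked, not estimates from anyone):

```python
# Illustrative numbers only: in expected-value terms, only the *change*
# in P(extinction) matters, not the level it starts from.
lives_at_stake = 25e9  # hypothetical count of lives lost to extinction

def expected_lives_saved(p_before, p_after):
    """Expected lives saved by moving P(extinction) from p_before to p_after."""
    return (p_before - p_after) * lives_at_stake

# Both interventions shift P(extinction) by one percentage point,
# so both have the same expected value (~2.5e8 lives).
print(expected_lives_saved(0.95, 0.94))
print(expected_lives_saved(0.03, 0.02))
```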
Also, in terms of marginal expected value, that would be the next step in this process. I’m not saying with this post “Go work on X-Risk because its marginal EV is likely to be X”; I’m rather saying, “You should go work on X-Risk if its marginal EV is above X.” But to be honest, I have no idea how to figure the first question out. I’d really like to, but I don’t know of anyone who has even attempted to estimate how much a particular intervention might reduce x-risk. (Please, forum, tell me where I can find this.)
Thank you for the comment; I agree wholeheartedly with point number 1. It didn’t come up in this particular conversation because the person I was talking to wasn’t considering the welfare of nonhuman animals (or the EV of pandemic prevention), though personally those are considerations I’m making, and I hope others make them as well. Do you think I should just do the math out in this post? (It’d be pretty simple, I think, though assuming a moral weight for nonhuman animals seems tricky.)
Point number 2 is very interesting; I haven’t seen a write-up on this. Could you link any? Seems like maybe this makes it worth somebody’s time to get a good probability on whether we’re in a simulation or not (though I don’t know how they’d do it).
Rocks teach you about, like, living with yourself.
Ah yes, excellent point; I have included it above!
I think that the expected payoff and the reduction in P(extinction) are just equivalent. Like, a 1% chance of saving 25b lives is the same as reducing P(extinction) from 7% to 6%; that’s what a “1% chance of saving” means, because:
P(extinction) = 1 − P(extinction reduction from me) − P(extinction reduction from all other causes)
So, if I had a 100% chance of saving 25b lives, then that’d be a 100% reduction in extinction risk.
Of course, what we care about is the counterfactual. So if there’s already only a 50% chance of extinction, you could say colloquially that I brought P(extinction) from 0.5 to 0, and that I had a “100% chance of saving 25b lives.” But that’s not quite right: I should only get credit for reducing it from 0.5 to 0, so in that scenario it would be better to say I had a 50% chance of saving 25b, and that’s as high as it can get.
I’d be very curious how far EAG could go by just focusing on Soylent / Huel / Mealsquares.
I can imagine you could get pretty serious savings by making a deal with a small number of brands for a large purchase. I would suspect EAs account for a serious proportion of these brands’ revenue, and EAGs are the ideal place for them to market to new EAs and EA-adjacent individuals, so they’d stand to gain a lot from even just providing merchandise at cost.
Also, at the retreats I’ve hosted, Soylent was always the most in-demand item.
I have not loved the catering at past EAG events (EAG London, EAG SF, EAG DC; the exception was EAGx Boston, whose catering I thought was very good). No disrespect to CEA: I’ve handled event catering and it is hell. At all of these, I actually would have preferred assorted Huels to some (most) of the catered meals. (There has been Soylent, but I can’t stand Soylent.)
Lastly, I think this would be pretty funny. It’s a pretty severe change, but I’d be curious to see how EAG participants would respond if it were proposed in a survey, and it’s a fun visual symbol of “efficiency and cost-effectiveness vibes,” so there’s some signaling benefit.
Learning as much Deep Learning math as I could in 24 hours
Apply to HAIST/MAIA’s AI Governance Workshop in DC (Feb 17-20)
Perhaps outing my lack of math knowledge, but better to ask than sit in ignorance. In the simple model, why is it
$$V[W_X] = v(1-(1-f)r)\sum_{i=1}^{\infty}(1-r)^{i-1}$$
instead of
$$V[W_X] = v\sum_{i=1}^{\infty}\big(1-(1-f)r\big)^{i}$$ ?
(Apologies for the bad math notation; I haven’t asked a math question on the forum before.) Essentially my question is: why is $(1-(1-f)r)$ outside the summation, instead of replacing $r$ inside the summation? Doesn’t putting it outside mean you are summing with the wrong risks? Like, if I put it inside, wouldn’t I get
$$V[W_X] = \frac{v\,(1-(1-f)r)}{(1-f)\,r}$$ ?
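For what it’s worth, here’s a quick numerical check of my own algebra (with made-up values for $v$, $f$, and $r$ that I picked arbitrarily; they’re not from the post), confirming that the alternative summation does give that closed form:

```python
# Sanity-check my geometric-series algebra with arbitrary illustrative values.
v, f, r = 1.0, 0.5, 0.1

x = 1 - (1 - f) * r  # the ratio I'm proposing to sum over

# Truncated version of v * sum_{i=1}^inf x^i (terms are negligible by i=10_000).
series = v * sum(x**i for i in range(1, 10_000))

# Closed form v * x / (1 - x), i.e. v(1-(1-f)r) / ((1-f)r).
closed_form = v * (1 - (1 - f) * r) / ((1 - f) * r)

print(series, closed_form)  # the two should agree to many decimal places
```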
I have been trying to implement the learning-by-writing tips for the past four weeks, and have really felt the need for a community for feedback, accountability, and expectation-setting, so I’m not totally alone in the wilderness. I can imagine many others feeling the same way. Thank you for organizing this; it’s a really interesting model I’d like to experiment with in my own community building.