I actually think this is a pretty reasonable division now, removed the automatic upvote on my comment.
rileyharris
More EA success stories:
Pandemics. We have now had the first truly global pandemic in decades, perhaps ever.
Nuclear war. Thanks to recent events, the world is closer than ever to a nuclear catastrophe.
It’s not all good news though. Unfortunately, poverty seems to be trending down, there’s less lead in the paint, and some say AI could solve most problems despite the risks.
Summaries of papers on the nature of consciousness (focusing on artificial consciousness in particular).
A post on how EA research differs from academic research, why people who like one distrust the other, and how in the long term academic research may be more impactful.
A post explaining what I take to be the best reply to Thorstad’s skeptical paper On the Singularity Hypothesis.
Very personal and unconventional research advice that no one told me and that I would have found helpful in my first two years of academic research, plus what I would change about this advice after taking a break and then starting a PhD.
I really like this!
I feel like these actions and attitudes embody many of the virtues of effective altruism. You really genuinely wanted to help somebody, and you took personally costly actions to do so. I feel great about having people like you in the EA Community. My advice is to keep the feeling of how important you were to Tlalok’s life as you do good effectively with other parts of your time and effort, knowing you are perhaps making a profound difference in many lives.
What is the timeline for announcing the result of this competition?
Was the result of this competition ever announced? I can’t seem to locate it.
Are these fellowships open to applicants outside of computer science/engineering etc. doing relevant work?
I really like time shifter but honestly the following has worked better for me:
Fast for ~16 hours prior to 7am in my new time zone.
Take melatonin, usually ~10pm in my new time zone, and again if I wake up and stop feeling sleepy before around 5am in my new time zone. (I have no idea if this second dosing is optimal, but it seems to work.)
I highly recommend getting a good neck pillow, earplugs, and eye mask if you travel often or on long trips (e.g. if you are Australian and go overseas almost anywhere).
Thanks to Chris Watkins for suggesting the fasting routine.
The schedule looks like it’s all dated for August, is that the right link?
I’d also potentially include the latest version of Carlsmith’s chapter on power-seeking AI.
I think Thorstad’s “Against the singularity hypothesis” might complement the week 10 readings.
A quick clarification: both CDT and EDT “maximize expected utility,” so saying “In other words, this would be the kind of decision theory that recommends decisions that maximize expected utility” is perhaps misleading.
I quite like this post. I think, though, that your conclusion, to use CDT when probabilities aren’t affected by your choice and use EDT when they are, is slightly strange. As you note, CDT gives the same recommendations as EDT in cases where your decision affects the probabilities, so it sounds to me like you would actually follow CDT in all situations (and only trivially follow EDT in the special cases where EDT and CDT make the same recommendations).
I think there’s something to pointing out that CDT in fact recommends one-boxing wherever your action can affect what is in the boxes, but I think you should be more explicit about how you prefer CDT.
I think near the end of the post you want to call it Bayesian decision theory. That’s a nice name, but I don’t think you need a new name, especially because causal decision theory already captures the same idea, is well known, and points to the distinctive feature of this view: that you care about causal probabilities rather than probabilities that use your own actions as evidence when they make no causal difference.
When you say “This would be the kind of decision theory that smokes, one-boxes, and doesn’t pay the biker ex-post, but ‘chooses to pay the biker ex-ante.’ In other words, this would be the kind of decision theory that recommends decisions that maximize expected utility,” I find this odd and perhaps a bit misleading, because that’s what both EDT and CDT already do; they just have different conceptions of what expected utility is.
+1
David Thorstad (Reflective Altruism/GPI/Vanderbilt)
Tyler John (Longview)
Rory Stewart (GiveDirectly)
I don’t think my comment is likely to be all that useful, but putting it here anyway.
I personally find it difficult to pay attention to podcasts with more than 2 people. I tried to listen to the first episode for about 30 minutes and this one for about 5 minutes, and I couldn’t comfortably follow them while paying attention to other tasks (walking around, cleaning, cooking etc.).
I think it’s likely that more diversity in the space is good, though, as many of the most popular podcasts I see on e.g. YouTube tend to have more than two people. I suspect this is more about my own idiosyncratic preferences, and it might be good to attract new listeners who have different preferences. I can see another commenter was absolutely enthralled!
I also now really like the look of Dwarkesh’s podcast, and plan to listen to it, and I wouldn’t have known about it otherwise!