Was the result of this competition ever announced? I can’t seem to locate it.
Are these fellowships open to applicants outside of computer science, engineering, etc. who are doing relevant work?
I really like Timeshifter, but honestly the following has worked better for me:
Fast for ~16 hours prior to 7am in my new time zone.
Take melatonin, usually around 10pm in my new time zone, and again if I wake up and no longer feel sleepy before around 5am in my new time zone. (I have no idea if this second dose is optimal, but it seems to work.)
I highly recommend getting a good neck pillow, earplugs, and eye mask if you travel often or on long trips (e.g. if you are Australian and go overseas almost anywhere).
Thanks to Chris Watkins for suggesting the fasting routine.
The schedule looks like it’s all dated for August; is that the right link?
I’d also potentially include the latest version of Carlsmith’s chapter on Power-seeking AI.
I think Thorstad’s “Against the singularity hypothesis” might complement the week 10 readings.
A quick clarification: I mean that “maximize expected utility” is what both CDT and EDT do, so saying “In other words, this would be the kind of decision theory that recommends decisions that maximize expected utility” is perhaps misleading.
I quite like this post. I think, though, that your conclusion, to use CDT when probabilities aren’t affected by your choice and EDT when they are affected, is slightly strange. As you note, CDT gives the same recommendations as EDT in cases where your decision affects the probabilities, so it sounds to me like you would actually follow CDT in all situations (and only trivially follow EDT in the special cases where EDT and CDT make the same recommendations).
I think there’s something to pointing out that CDT in fact recommends one-boxing wherever your action can affect what is in the boxes, but I think you should be more explicit about the fact that you prefer CDT.
I think near the end of the post you want to call it Bayesian decision theory. That’s a nice name, but I don’t think you need a new name, especially because causal decision theory already captures the same idea, is well known, and points to the distinctive feature of this view: that you care about causal probabilities rather than probabilities that use your own actions as evidence when they make no causal difference.
You say: “This would be the kind of decision theory that smokes, one-boxes, and doesn’t pay the biker ex-post, but “chooses to pay the biker ex-ante.” In other words, this would be the kind of decision theory that recommends decisions that maximize expected utility.” I find this an odd thing to say, and perhaps a bit misleading, because that’s what both EDT and CDT already do; they just have different conceptions of what expected utility is.
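To spell out what I mean by “different conceptions of expected utility”, here is a rough textbook-style gloss (my own notation, not something from your post; the do(a) formulation is just one standard way of cashing out causal expected utility):

```latex
% Evidential decision theory: weight outcomes by ordinary conditional
% probability, treating the act itself as evidence about the world.
EU_{\mathrm{EDT}}(a) \;=\; \sum_{o} P(o \mid a)\, U(o)

% Causal decision theory: weight outcomes by the probability that doing a
% brings o about (e.g. an interventionist/counterfactual probability).
EU_{\mathrm{CDT}}(a) \;=\; \sum_{o} P\bigl(o \mid \mathrm{do}(a)\bigr)\, U(o)
```

When the act carries no evidence about the world over and above its causal effects, the two expectations coincide, which is why the theories only come apart in Newcomb-style cases.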
+1
David Thorstad (Reflective Altruism/GPI/Vanderbilt)
Tyler John (Longview)
Rory Stewart (GiveDirectly)
Thanks for posting! I have a few quick comments:
- I recently got into a top program in philosophy despite having a clear association with EA (I didn’t cite “EA sources” in my writing sample, though, only published papers and OUP books). I agree that you should be careful, especially about relying on “EA sources” which are not widely viewed as credible.
- Totally agree that prospects are very bad outside of the top 10, and I lean towards “even outside of the top 5, seriously consider other options”.
- On the other hand, if you really would be okay with failing to find a job in philosophy, it might be reasonable to do a PhD just because you want to. It is nice to spend a few years thinking hard about philosophy, especially if you have a funded place and your program is outside of the US (and therefore shorter).
My understanding is that, at a high level, this effect is counterbalanced by the fact that a high rate of extinction risk means the expected value of the future is lower. In this example, we reduce the risk to 10% this century only; next century it will be 20%, the century after that 20%, and so on. So the ongoing risk is 10x higher than in the 2%-to-1% scenario, and in general, higher risk lowers the expected value of the future.
In this simple model, these two effects perfectly counterbalance each other for proportional reductions of existential risk. In fact, in this simple model the value of reducing risk is determined entirely by the proportion of the risk reduced and the value of future centuries. (This model is very simplified, and Thorstad explores more complex scenarios in the paper).
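To make the counterbalancing concrete, here is a minimal numerical sketch of the simple model as I understand it (my own illustration, not code from the paper): each century is worth v conditional on surviving to it, the per-century risk is constant thereafter, and the intervention changes only this century’s risk.

```python
# Minimal sketch of the simple model (my own illustration, not from Thorstad's paper).
# Assumptions: each century is worth v if humanity survives to it, extinction risk
# is r per century, and the intervention changes only this century's risk.

def future_value(this_century_risk, later_risk, v=1.0):
    # Expected value = sum over t >= 1 of v * (1 - r1) * (1 - r)^(t - 1)
    #                = v * (1 - r1) / r   (geometric series)
    return v * (1 - this_century_risk) / later_risk

def value_of_reduction(baseline_risk, reduced_risk, v=1.0):
    # Gain from reducing this century's risk, holding later centuries fixed.
    return (future_value(reduced_risk, baseline_risk, v)
            - future_value(baseline_risk, baseline_risk, v))

# Halving a 20% risk and halving a 2% risk produce the same gain (~0.5 * v),
# even though the low-risk future is worth roughly 10x more overall.
print(value_of_reduction(0.20, 0.10))                     # ~0.5
print(value_of_reduction(0.02, 0.01))                     # ~0.5
print(future_value(0.20, 0.20), future_value(0.02, 0.02)) # ~4.0 vs ~49.0
```

The gain depends only on the fraction of this century’s risk removed and on the value of a century, which is the counterbalancing described above: the riskier world is cheaper to improve proportionally, but it is also worth proportionally less.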
“There are three main branches of decision theory: descriptive decision theory (how real agents make decisions), prescriptive decision theory (how real agents should make decisions), and normative decision theory (how ideal agents should make outcomes).”
This doesn’t seem right to me. I would say: an interesting way to divide up decision theory is between descriptive decision theory (how people actually make decisions) and normative decision theory (how we should make decisions).
The last line of your description, “how ideal agents should make outcomes”, seems especially troubling: I’m not quite sure what you are trying to say.
I think there are good parts of this post; for example, you’re hitting on some interesting thought experiments. But several aspects are slightly confusing as written. For example, Newcomb’s problem isn’t (I believe) a counterexample to EDT, but that isn’t clear from your post.
This is a fantastic initiative! I’m not personally vegan, but I believe the “default” for catering should be vegan (or at least meat and egg free), with the option for participants to declare special dietary requirements. This would lower consumption of animal products, as most people just go with the default option, and push the burden of responsibility onto the people going out of their way to eat meat.
That was quick, great work!
How should applicants think about grant proposals that are rejected? I find that newer members of the community especially can be heavily discouraged by rejections; is there anything you would want to communicate to them?
If a project is partially funded by e.g. Open Philanthropy, would you take that as a strong signal about the project’s value (e.g. that it’s not worth funding at higher levels)?
My entry is called Project Apep. It’s set in a world where alignment is difficult, but a series of high-profile incidents leads to extremely secure and cautious development of AI. It tugs at the tension between the ways AI could make the future wonderful or terrible.
I’m working on a related distillation project, I’d love to have a chat so we can coordinate our efforts! (riley@wor.land)
What is the timeline for announcing the result of this competition?