Thanks for the post (and the code). I got curious about the subject...
I’m not convinced this is how I’d talk about Decision Theory or EA, and I think it’s missing something about explore vs. exploit and the learning costs (which perhaps could steelman your “math argument” about diversifying), and maybe there’s just too much in a very short space… But it’s amusing, I loved your references, you make a good case for increasing variance (when your losses are capped, i.e., there are no absorbing states), I’ll probably be thinking about it for a while (at least to chase down some references), and I think it gives interesting insights on the problem of “what to do now that EA is rich / hipster?”
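To convince myself of the variance point, I ran a quick sketch (the Normal payoff distribution and the floor value are just my illustrative assumptions, not from your post): with the downside floored, the payoff max(X, floor) is convex in X, so a mean-preserving spread of the distribution raises its expectation.

```python
import random

# Minimal sketch: when losses are capped (no absorbing states), the
# payoff max(X, FLOOR) is a convex function of the raw outcome X, so
# increasing the variance of X raises the expected payoff.

random.seed(0)
FLOOR = -1.0     # illustrative cap on losses
N = 100_000      # Monte Carlo samples

def expected_payoff(sigma):
    """Mean of max(X, FLOOR) for X ~ Normal(0, sigma)."""
    return sum(max(random.gauss(0, sigma), FLOOR) for _ in range(N)) / N

for sigma in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma = {sigma}: E[payoff] ≈ {expected_payoff(sigma):.3f}")
# The printed expectation increases with sigma: the floor truncates
# the left tail while the right tail keeps growing.
```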
Thanks for the comment; it gives me the fuzzies to know that this is useful to someone ☺
On explore/exploit, there is this great post by Applied Divinity Studies (https://applieddivinitystudies.com/career-timing/) where they raise the possibility that EAs in particular spend too much time exploring and not enough exploiting. (But they also point out that interpreting toy models is tricky.)
I’ve also written something (https://universalprior.substack.com/p/soldiers-scouts-and-albatrosses) where I argue that the optimal solution to explore-exploit trade-offs that evolution has come up with (Lévy flights) might also generalize to career decisions and to thinking more generally: extended periods of deep focus on one topic, interrupted by bouts of substantial change/exploration of new terrain.
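In case a toy picture helps: here is a minimal sketch of a Lévy-flight-style walk (the 1D setting and the Pareto tail exponent are my illustrative assumptions). Step lengths are heavy-tailed, so most moves are small local refinements (deep focus/exploit), punctuated by rare long jumps (exploration of new terrain).

```python
import random

# Toy Lévy-flight-style walk in 1D: step lengths are drawn from a
# heavy-tailed (Pareto) distribution, so most moves are small local
# steps, punctuated by rare very long jumps.

random.seed(0)
ALPHA = 1.5  # tail exponent; smaller ALPHA means heavier tails and more big jumps

def levy_step(alpha=ALPHA):
    """Pareto-distributed step length (minimum 1) with a random sign."""
    return random.choice((-1, 1)) * random.paretovariate(alpha)

position = 0.0
trajectory = [position]
for _ in range(1_000):
    position += levy_step()
    trajectory.append(position)

steps = sorted(abs(b - a) for a, b in zip(trajectory, trajectory[1:]))
print(f"median step ≈ {steps[len(steps) // 2]:.2f}, largest step ≈ {steps[-1]:.2f}")
# Typical output: a median step near 1.6 against occasional jumps that
# are orders of magnitude larger, the explore/exploit mix in one line.
```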
Have a nice day!