Working at a ritzy quant firm shouldn’t impact your competitiveness for PhD programs too much (it could even improve it), and if you’re getting $1M+ / 5y E2G-worthy offers halfway through ugrad (and have already published!), you’ll probably still be able to get comparable offers if you decide to e.g. master out. So in that regard, it probably doesn’t matter too much which path you take, since neither precludes reinvention.
If it were me, I’d take the bird in hand and work in the quant role… but if I felt myself able to eventually make more meaningful “direct” contributions, I’d focus not just on E2G’ing but also on achieving financial independence as soon as possible. PhD program stipends are quite a bit lower than industry pay (at my current school, CS students only make around ~$45k / y), so being able to supplement that income with proceeds from investments would free you from monetary concerns and let you focus your attention on more valuable pursuits (e.g. you wouldn’t have to waste time on unpleasant trivialities, like household chores, if you could instead hire a regular cleaning service + meal delivery. Hell, spend another year or two at the firm and get yourself a part-time personal assistant for the duration of the grad program to manage your emails for you haha). Focus first on solving those claims on your time that can be most cheaply solved, to give yourself greater opportunities to direct more valuable hours down the line.
Maybe so! Might just be that the career questions are a bit too targeted (my partner has also had trouble getting advice on how best to leverage her tissue engineering / veterinary background to serve animal welfare, e.g. working directly with researchers using animal models vs. developing in vitro meat in a more wet-bench role). Was just curious to get an outside view, especially from a more “value-aligned” group than might be found in your typical career center or through existing mentors etc. Thank you for your response!
I’d second the Ng Coursera course—very straightforward and easy to follow for those lacking technical backgrounds! Which may be a plus or a minus, depending on your desired rigor.
(removed for privacy + inappropriateness)
Sure! Though unfortunately most of the stuff comes from scattered lectures, workshops, discussions, book chapters, seminars, papers, etc. But for intro multilevel Bayesian regression in R/STAN I’d say John Kruschke’s “Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan” and Richard McElreath’s “Statistical Rethinking: A Bayesian Course with Examples in R and Stan” would be really solid (Richard also has his course lectures up on youtube if you prefer that, though I found his book super readable, so much so that when I took the class with him a few years back I skipped most of his lectures since the room was really hot. But don’t let that dissuade you from watching them, he’s a great guy/speaker and quite fun and funny!).
Purely in terms of building my own intuitions/understanding, though, I’ve found little more helpful than just looking up the relevant algorithms and implementing the damn things from scratch (apropos the talk of reinventing square wheels above lol… though ofc you’d use the far superior underlying code others have written for your actual analysis).
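For a sense of what I mean, here’s about the smallest from-scratch example I can think of: a random-walk Metropolis sampler for a single normal mean (the data and tuning values below are just made up for illustration):

```r
# toy data + log-posterior for a single normal mean with known sd
set.seed(1)
y <- rnorm(50, mean = 2, sd = 1)
log_post <- function(mu) {
  sum(dnorm(y, mean = mu, sd = 1, log = TRUE)) +   # likelihood
    dnorm(mu, mean = 0, sd = 10, log = TRUE)       # diffuse prior on mu
}

# bare-bones random-walk Metropolis
n_iter <- 5000
chain <- numeric(n_iter)
chain[1] <- 0                                      # arbitrary starting value
for (i in 2:n_iter) {
  proposal <- chain[i - 1] + rnorm(1, sd = 0.5)    # symmetric proposal
  log_accept <- log_post(proposal) - log_post(chain[i - 1])
  chain[i] <- if (log(runif(1)) < log_accept) proposal else chain[i - 1]
}
hist(chain[-(1:1000)])                             # posterior for mu, burn-in dropped
```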
Ah, gotcha. But re: code review, even the most beautifully constructed chains can fail, and how you specify your model can easily cause things to go kabloom even if the machine’s doing everything exactly how it’s supposed to. And it only takes a few minutes to drag your log files into something like Tracer and do some basic peace-of-mind checks (and others, e.g. examining bivariate posterior distributions to assess nonidentifiability wrt your demographic params; rough sketch below). More sophisticated diagnostics are scattered across a few programs but don’t take too long to run either (unless you have e.g. hundreds or thousands of chains, like in marginal likelihood estimation w/ stepping stones… a friend’s actually coming out with a program soon—BONSAI—that automates a lot of that grunt work, which might be worth looking out for!). :]
(on phone at gym with shit wifi so can’t provide links/refs atm, sorry!)
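For what it’s worth, the bivariate check is only a few lines of R once you have posterior samples; the object and parameter names below are just placeholders:

```r
# quick nonidentifiability check: look for ridges / strong correlations
# between parameters in the bivariate posteriors
post <- as.matrix(fit)                          # samples from an rstan/rethinking fit (placeholder object)
keep <- c("b_treatment", "b_age", "b_education")
pairs(post[, keep], pch = ".")                  # banana- or ridge-shaped clouds = trouble
round(cor(post[, keep]), 2)                     # correlations near +/-1 are another red flag
```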
Of course (though wheel reinvention can be super helpful educationally), but there are great free public R packages that interface to STAN (I use “rethinking” for my hierarchical Bayesian regression needs but I think RStan would work, too), so going with someone’s unnamed, private code isn’t necessary imo. How much did the survey cost (was it a lot longer than the included google doc, then? e.g. did you have screening questions to make sure people read the paragraph?). And model + MCMC specification can have lots of fiddly bits that can easily lead us astray, I’d say.
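For a sense of scale, a bare-bones rethinking model is only a few lines. This is just a sketch with invented variable names (donated, group, treatment), not a guess at your actual model:

```r
library(rethinking)
m <- ulam(                                    # map2stan() in older versions of the package
  alist(
    donated ~ dbinom(1, p),                   # 0/1 outcome
    logit(p) <- a[group] + b_t * treatment,   # varying intercepts + treatment effect
    a[group] ~ dnorm(a_bar, sigma_a),
    a_bar ~ dnorm(0, 1.5),                    # weakly informative hyperpriors
    sigma_a ~ dexp(1),
    b_t ~ dnorm(0, 1)
  ),
  data = d, chains = 4, cores = 4
)
precis(m, depth = 2)                          # estimates, Rhat, effective sample sizes
```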
Ah, I guess that’s better than no control, and presumably paying attention to a paragraph of text doesn’t make someone substantially more or less generous. Did you fit a bunch of models with different predictors and test for a sufficient improvement of fit with each? Might do to be wary of overfitting in those regards, maybe… though since those predictors aren’t focal, Bayes tends to be pretty robust there, imo, so long as you used sensible priors.
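E.g. something like the below, where the fitted-model names are hypothetical:

```r
# compare information criteria across a ladder of models to see whether each
# extra predictor actually buys you anything (model names are made up)
library(rethinking)
compare(m_base, m_edu, m_edu_x_treat)               # WAIC + Akaike weights
compare(m_base, m_edu, m_edu_x_treat, func = PSIS)  # or PSIS-LOO, which also flags influential points
```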
“I used a multilevel model to estimate the effects among those with and without a bachelor’s degree. So, the bachelor’s estimate borrow’s power from those without a degree, reducing problems with over fitting.”
If I’m understanding correctly, you had a hyperprior on the effect of education level? With just two options? IDK that that would help you much (if you had more levels, e.g. HS, BA/S, MS, PhD, etc., it might, but I’d try to preserve the ordering there, myself).
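If you do end up with more levels, the ordered/monotonic-predictor trick McElreath covers in Statistical Rethinking keeps the ordering without forcing the levels to be equally spaced. Roughly (variable names and the 4-level coding are invented):

```r
library(rethinking)
dat <- list(
  donated = d$donated,
  edu     = d$edu_level,          # integer 1..4: HS, BA/S, MS, PhD (invented coding)
  alpha   = rep(2, 3)             # Dirichlet prior on the increments between levels
)
m_edu <- ulam(
  alist(
    donated ~ dbinom(1, p),
    logit(p) <- a + bE * sum(delta_j[1:edu]),      # cumulative, ordered effect of education
    a ~ dnorm(0, 1.5),
    bE ~ dnorm(0, 1),
    vector[4]: delta_j <<- append_row(0, delta),
    simplex[3]: delta ~ dirichlet(alpha)
  ),
  data = dat, chains = 4, cores = 4
)
```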
“These models used STAN, which handles these multilevel models well. Convergence was assessed with gelman-rubin statistics.”
STAN’s great, but certainly not magic or perfect, and though idk them personally I’m sure its authors would strongly advocate paranoia about its output. So you got convergence with multiple (2?) chains from random (hopefully) starting values? R_hats were all 1? That’s good! Did all the other cheap diagnostics turn up ok (e.g. trace plots, autocorrelation times/ESS, marginal histograms, quick within-chain metrics, etc.)?
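I.e. the usual cheap once-overs, assuming a stanfit object called fit (the parameter name is a placeholder):

```r
library(rstan)
print(fit)                            # Rhat and n_eff for every parameter
traceplot(fit, pars = "b_treat")      # fuzzy-caterpillar trace plots
stan_ac(fit, pars = "b_treat")        # within-chain autocorrelation
check_hmc_diagnostics(fit)            # divergences, max treedepth, E-BFMI
```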
Ah, interesting! What package? I’ve never heard of something like that before. Usually in the cold, mechanical heart of every R package is the deep desire to be used and shared as far as possible. If it’s just someone’s personal interface code, why not use something more publicly available? Can you write out your basic script in pseudocode (or just math/words)? Especially the model and MCMC specification bits?
Yep, and alongside it, of course, the raw data!
Yay for Bayesian regression (binomial, I’m guessing? You re-binned your attitude and donation responses? I think an ordered logit would be more appropriate here and would lose less resolution, or even a Dirichlet, but then you’d lose yer ordering)! Those posteriors look decently tight, though I do have some questions!
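Something like this is roughly what I have in mind by “ordered logit”, with invented variable names (response is the raw 1..K scale, treat a 0/1 treatment dummy):

```r
library(rethinking)
m_ord <- ulam(
  alist(
    response ~ dordlogit(phi, cutpoints),   # keeps the full ordered scale instead of re-binning
    phi <- b_t * treat,
    b_t ~ dnorm(0, 1),
    cutpoints ~ dnorm(0, 1.5)               # K-1 ordered thresholds
  ),
  data = d, chains = 4, cores = 4
)
precis(m_ord, depth = 2)
```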
I’m a little confused about what your control was, exactly. You have both points and distributions in your posterior plots, but you don’t have any control paragraph blurb in your google doc questionnaire. How did you evaluate your control? Did you give them a paragraph entirely unrelated to EA? These plots are the posterior estimates for p_binomial when each dummy variable for treatment is 0? Is “average treatment effect” some posterior predictive difference from the control p (i.e. why it’s exactly 0)?
On a related (and elucidatory) note, could you clarify more explicitly which models you fitted, exactly? Did you do any model comparison or averaging, or evaluate model adequacy? You mention “controlling for other variables in the survey” but I don’t see e.g. any demographic questions in your questionnaire. You said you “examined these relationships overall and among the critical subgroup of those with at least a bachelor’s degree”—did you do this by excluding everyone without a bachelor’s, or by modeling the effects of educational attainment and then doing model comparison to test the legitimacy of those effects (I’d think looking at the posterior for the interaction between your paragraph and education dummies would be the clearest test)? Did you use diffuse, “uninformative” priors (and hyperpriors)? Which ones, exactly?
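By “clearest test” I just mean something like the below, with a hypothetical fitted model m_int and a hypothetical interaction coefficient b_tx_edu:

```r
post <- extract.samples(m_int)                  # rethinking; as.matrix(fit) works for raw rstan
quantile(post$b_tx_edu, c(0.025, 0.5, 0.975))   # does the interval exclude 0, and by how much?
mean(post$b_tx_edu > 0)                         # posterior prob. the treatment does more for BA+ folks
```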
I assume that since this is a hierarchical analysis you used MCMC (HMC?) to do the fitting. Are your posterior distributions smoothed substantially, e.g. with a kernel density estimator? Or did you just get fantastic performance? What diagnostics did you run to ensure MCMC health? How many chains did you run? Did you use stopping rules? In my experience, hierarchical regression models can be pretty finicky to fit as they get more complex.
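A cheap way to see how much the smoothing is doing is just to plot the raw draws under the KDE (x here is a stand-in for one parameter’s posterior draws):

```r
x <- c(rnorm(2000, 0), rnorm(2000, 0.5))   # stand-in for a (possibly lumpy) set of posterior draws
hist(x, breaks = 50, freq = FALSE)         # raw marginal samples
lines(density(x), lwd = 2)                 # kernel density estimate laid on top
```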
Kudos on not just using some wackily inappropriate out-of-the-box frequentist test!
edit: also, what are the boxplot-looking things? 95% HPDIs? CIs? Some other %? Ah wait, are they the sd of your marginal samples?
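For reference, the usual interval flavors, computed from a stand-in vector of posterior draws:

```r
library(rethinking)
x <- rnorm(4000)                     # stand-in for one parameter's posterior draws
HPDI(x, prob = 0.95)                 # narrowest 95% highest-posterior-density interval
PI(x, prob = 0.95)                   # central 95% percentile ("equal-tailed") interval
mean(x) + c(-1, 1) * sd(x)           # mean +/- 1 sd of the marginal samples
```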
Thank you for looking into this!
Did you happen to find out if any egg producers did in ovo sexing, or if they were close to adopting it? If Fifth Crown Farm raises all their own hens from hatchlings, what do they do with the male chicks? I’m a lot less opposed to macerators than the typical person (who is opposed to them), but traditional chick culling alone is enough to dissuade me from egg consumption, independent of direct effects on hens. When I’d last looked into it for the S Bay I couldn’t find any place that did it (and all the producers using it were from outside the US).
On a related note, did you happen to turn up anything on what the hens were fed? I vaguely remember seeing something about fancier places feeding their hens grubs or insect meal, which is a whole separate can of worms.
More broadly, does anyone know of any chick suppliers who perform in ovo sexing? I’ve been tempted to raise some egg-laying hens as pets with more tightly regulated welfare standards, but have been limited by space and time constraints (Bay Area, amirite). Would be nice to reintroduce eggs into my diet after a decade+ without! When I was a lad I ate one, sometimes two dozen eggs every evening to help me bulk, but now that I’m grown I’ve abstained from all but the most incidental egg consumption.