Quick heads up: the email announcement and this post list different application deadlines; the email says the 19th, the post says the 17th.
Looks great!
Interesting post. Since my academic training is heavily in political science (plus stats and CS), I’ve thought about this topic some as well. One disclaimer: I engage with poli sci research pretty heavily through working in electoral politics, and follow the broader discipline through friends who do other work, but I don’t have a poli sci PhD and don’t have a particular identity as a political scientist.
A general thought: this post is a little hard to engage with because you’re making two related claims at roughly the same time, without providing particularly concrete suggested actions related to EA. As I read you, the claims are:
1. EAs could benefit from familiarity with the formal modeling literature in those three areas. It’d be helpful to have some sense of how you envision these being leveraged.
2. Poli sci programs (it seems especially PhDs, but I’m reading that in) could produce stronger quantitative researchers, more equipped to handle new developments in quant methods, by deepening engagement with (a) foundations of probability theory and (b) DAG-based causal inference. I’m not sure there’s an EA-related claim here as written.
One thing I’m especially left wondering is whether you have a specific claim about how relatively important engaging with these topics is, and for which parts of the EA community that’s true. For example, how much of a priority should engaging with the gerrymandering literature be, and for which EAs? Where does this fall in the hierarchy of things EAs could spend time learning about versus, say, microeconomic quant tools? Hopefully that’s a helpful point in trying to flesh out the case you’re making (I realize you posted this as “some thoughts”, not as a deeply researched, group-reviewed long-form piece with strong calls to action).
Moving on to discussing the specific points you make:
On teaching Pearl more: I broadly agree this is a good idea. The most common educational background on my team at the senior level is a poli sci PhD, and I interview a decent number of political science PhDs. It seems many folks know a little bit about Pearl’s work, and those that do benefit from it, but they never had DAGs taught deeply and formally. I think there are signs of this changing in some programs (I don’t have the knowledge to make a discipline-level claim), with a move towards teaching the potential-outcomes and DAG approaches jointly. I certainly benefited from being taught both together, but I got this in a stats department.
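To make the DAG point concrete, here is a minimal sketch (my own illustration with made-up numbers, not anything from the post): on the classic confounding DAG where Z causes both X and Y, and X causes Y, the backdoor criterion says adjusting for Z identifies the causal effect of X, while the naive comparison does not.

```python
# Hypothetical simulation: Z -> X, Z -> Y, X -> Y.
# True causal effect of X on Y is +0.2; Z confounds the naive estimate.
import random

random.seed(0)

def simulate(n=100_000):
    rows = []
    for _ in range(n):
        z = random.random() < 0.5                    # confounder
        x = random.random() < (0.8 if z else 0.2)    # treatment depends on Z
        y = random.random() < (0.1 + 0.2 * x + 0.3 * z)
        rows.append((z, x, y))
    return rows

def mean_y(rows, x_val, z_val=None):
    sel = [y for z, x, y in rows if x == x_val and (z_val is None or z == z_val)]
    return sum(sel) / len(sel)

data = simulate()

# Naive contrast: biased upward, because high-Z units are more often treated.
naive = mean_y(data, True) - mean_y(data, False)

# Backdoor adjustment: average the X-contrast within each stratum of Z,
# weighted by P(Z).
p_z = sum(z for z, _, _ in data) / len(data)
adjusted = sum(
    (mean_y(data, True, zv) - mean_y(data, False, zv)) * w
    for zv, w in [(True, p_z), (False, 1 - p_z)]
)

print(round(naive, 2))     # biased estimate
print(round(adjusted, 2))  # close to the true +0.2
```

The point a formal course drives home is that which adjustment recovers the causal effect is read directly off the graph, rather than guessed from regression habits.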
On teaching more probability theory: I believe there are some programs where this is available, either directly or through partnership with other departments, and I’m much less confident in a general claim like “all quant poli sci educational programs should teach more of this”. The more a prospective student wants and expects to work on methods development, the more this should be emphasized, but my (uncertain) belief right now is that deeper education here is available to those who want it, and the discipline does a pretty good job of prioritizing what to teach the average student.
On gerrymandering research: your suggestion is roughly a “quiver” of more objective methods. My (non-expert) impression is that a number of such tools have already been proposed, even once you get past the somewhat ham-fisted solutions like shortest splitline that completely ignore the complex and competing demands that legal precedent places on redistricting. My impression is that these tools are already sufficient to be more fair and objective than current practice, but that implementing them is a problem of political will and organizing (which is not to say there isn’t promising research being done to improve solutions). So the challenge to me is how EAs should choose to spend their time given this dilemma: it’s not clear to me that getting improvements implemented in the US is particularly tractable at the moment, and thus I’d argue it’s likely not suitable as a recommendation for broader work.
To clearly caveat with my level of knowledge here, my undergraduate thesis was on why fixing gerrymandering is harder than proposing good algorithms, and I learned quite a bit after that from seeing researchers speak at the MaDS seminar series while I was in grad school at NYU. So I have a decent impression, but you may well know more and have a good basis to disagree.
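As a toy illustration of the kind of “objective” tool in that quiver (my own sketch, not anything from the thread): the Polsby-Popper compactness score, 4πA/P², is one of the simplest metrics used to flag sprawling district shapes. It equals π/4 for a square and approaches zero for long, thin shapes.

```python
# Polsby-Popper compactness of a simple polygon: 4*pi*Area / Perimeter^2.
# Coordinates below are invented for illustration.
import math

def polsby_popper(vertices):
    """Compactness of a simple polygon given as (x, y) vertices in order."""
    area = 0.0
    perimeter = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1                # shoelace formula
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 4 * math.pi * area / perimeter ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
snake = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]   # long thin "district"

print(polsby_popper(square))  # pi/4, about 0.785
print(polsby_popper(snake))   # far lower
```

Of course, as noted above, a single geometric score ignores the legal and demographic constraints on real maps, which is exactly why the ham-fisted solutions fall short.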
I’m completely unequipped to respond on the other formal methods ideas you propose, but looping back to my broader response to this post: it would be beneficial to have more concrete applications of these ideas for EA, as well as discussion of how they rank among priorities of things we could learn.
This is a pretty long response already, so will end by saying that this is definitely a topic I’d be interested in discussing more.
For example, I could envision trying to seek out specific EA problems that could benefit from recent hot topics in quant poli sci, like conjoint experiments (to name one example). Separately (this is more an intersection of my background as a political practitioner with quant poli sci), I’ve been pondering whether it’s a good use of time to produce general educational materials on understanding effective campaigning and how elections are won; it seems many EAs fall prey to the common misconceptions that typical well-educated but not politically experienced people fall into. To the extent there are folks who might try something like another Flynn campaign, or try to give effectively to influence the 2024 cycle, there seem to be some easy wins in providing better mental models.
Trying to write a response quickly before work starts at the end of a long week (working on Dem races, being EA-ish), so I’m open to being too hasty or needing to flesh out these ideas. Two immediate reactions:
I am concerned about the tone, timing, and therefore optics of this post. We’re in a moment where SBF seems quite plausibly to have fraudulently financially ruined people to fund EA ventures, including political ones. Not certain, of course, but that is a current (potentially most reasonable!) narrative. We’re also worried about a potential backlash in which EAs are perceived as much too willing to believe that “ends justify the means”, and we will have more critical eyes on the community. Given this, I think your post should much more clearly reflect the somberness here, clearly reject what SBF may have done unethically, and do everything it can to anticipate how this could be taken (potentially extremely!) the wrong way. To be clear, I’m not making any claims about your underlying stance, and I’m guessing you’re just trying to be action-oriented in a tough moment; I just think an edit to clarify might be prudent.
More broadly, I think any “politics coming out of EA” group you start needs to pretty seriously consider how you will manage such genuine issues and optics to be at all effective. Happy to talk more about this, but the “let’s all coordinate in private about our long-term political plans now that our special-interest money has declined” tone here feels incredibly concerning to my intuitions as a Dem political organizer.
A last quick clarifying thought- my claim isn’t just “external people looking might be concerned”, it’s “this is not the tone we should bring to doing politics as a community”.
This comment feels important, like something I’ve been considering trying to spin into a full post. Finding a frame has been hard, because it feels like I’m trying to translate what’s (unfortunately) a distinctively non-EA culture norm into reasoning that EAs will take more seriously.
One thought I do want to share, though: I don’t think it’s quite right to frame this as something that needs to be weighed against good epistemics. Prizing good epistemics should mean being able to reason clearly about, and adjust our reactions to, the tone/emotional tenor of people who (very understandably!) are speaking from a place of trauma and deep hurt.
The best frame I have so far for a post is reminding people of Julia Galef’s straw-Vulcan argument and working out what it implies for conversations on (understandably) incredibly emotionally heavy topics, and in tough times more generally. Roughly rehashing the argument, because I can’t find a good link: Spock frequently assumes that humans will be perfectly rational creatures under all circumstances, and when this leads him astray he essentially shrugs and responds “it’s not my fault that I did not predict their actions correctly; they were being irrational!”. Galef’s point, of course, is that this is horrible rationality: failing to reason about how emotions might affect people, and to adjust accordingly, means your epistemics are severely impoverished.
Setting aside the straw-Vulcan rationality argument, there also feels like there should be an argument along the lines of how (to me, incredibly!) obvious it should be that tone like this demands sympathy, and a willingness to take on the burden of being accommodating, from people serious about thinking of themselves as invested in altruism as a value. I’m still figuring out how to express this crisply (and, to be clear, without bitterness) so that it will resonate.
If you have thoughts on what the best frame would be here, I’d love to hear them or discuss more.
Edited to take out something unkind. Trying to practice what I preach here.
I agree with this, but to add on, since the post mentioned 3-4 courses:
If you’re picking 3: definitely econometrics and stats/probability to supplement analysis skills. For the third, probably development economics (both to visibly show interest in the topic and to have a professor you can try to build a relationship with for resources/recommendations in that network). Two potential caveats: if you think the behavioral econ professor’s network would be more valuable to leverage, or if that class builds substantially more research skills, it’s also a pretty good option. The other caveat is that, depending on the level of the course, econometrics could plausibly require, or at least benefit a lot from, stronger linear algebra skills; that’d suggest econometrics/stats/lin alg.
If you’re taking 4 to stand out to employers: the same logic as above probably applies. I’d also add that, if grad school is a possibility for you, many PhD programs require or strongly suggest linear algebra.
One final thought: I’m treating this as if you need to stay within that list. If there’s an option to go outside it (maybe to a CS or stats department?), learning programming/statistical computing skills might be among the highest-value options.
Trying this now, thank you for the timely heads up. One thing I wanted to elevate from the Giving Tuesday website, and one question.
First: it may be possible to set up recurring donations to multiple orgs and so get multiple matches. No guarantees, but that’s a possible reading of the meta-rules the Giving Tuesday website mentions. I’ll be trying this, and I’d encourage others to as well.
Second, do folks have recommendations for longtermist charities set up to receive more funds this way, especially those that might’ve been hit hard by the FTX fallout? I don’t immediately recognize any here: https://www.eagivingtuesday.org/eagtnonprofits. I would think these are good opportunities for people to be especially efficient given the FTX news; also, some people leaning more longtermist may be more likely to use this platform if options are made clear to them. I’d do some digging, but I have to go to work now.
I believe the linked text for existential catastrophe (in the second table) is broken; I get a page-not-found error.
Substantively, I realize this is probably not something you originally asked (nor am I asking for it since presumably this’d take a bunch of time), but I’d be super curious to see what kind of uncertainty estimates folks put in this, and how combining using those uncertainties might look. If you have some intuition on what those intervals look like, that’d be interesting.
The reason I’m curious about this is probably fairly transparent: given the pretty extensive broader community uncertainty on the topic, aggregating using those uncertainties might yield a different point estimate, but more importantly it might help people understand the problem better by showing the large degree of uncertainty involved. For example, it’d be interesting/useful to see how much probability people put outside a 10-90% range.
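As a hedged sketch of what that aggregation could look like (all numbers here are hypothetical, not the survey’s actual answers): one simple approach is to model each respondent’s estimate as a Beta distribution, pool them as an equal-weight mixture, and then read off both the pooled mean and the mass falling outside the 10-90% band.

```python
# Hypothetical pooling of subjective probability estimates.
# Each (alpha, beta) pair stands in for one respondent's reported
# uncertainty; these values are invented for illustration.
import random

random.seed(1)

respondents = [(2, 8), (5, 5), (1, 3), (8, 2), (3, 12)]

def sample_mixture(n=200_000):
    """Draw from the equal-weight mixture of the respondents' Betas."""
    draws = []
    for _ in range(n):
        a, b = random.choice(respondents)
        draws.append(random.betavariate(a, b))
    return draws

draws = sample_mixture()
pooled_mean = sum(draws) / len(draws)
outside = sum(d < 0.1 or d > 0.9 for d in draws) / len(draws)

print(round(pooled_mean, 3))  # pooled point estimate
print(round(outside, 3))      # share of pooled mass outside 10-90%
```

The interesting part is exactly the second number: two groups can share a pooled point estimate while putting very different amounts of probability in the tails, which is the information a bare average of point estimates throws away.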