Marisa told me last fall that she’d settled on she/her, so that’s what I’ve been using.
IanDavidMoss
FWIW, in the (rough) BOTECs we use for opportunity prioritization at Effective Institutions Project, this has been our conclusion as well. GCR prevention is tough to beat for cost-effectiveness even when considering only impacts on a 10-year time horizon, provided you are comfortable making judgments based on expected value with wide uncertainty bands.
I think people have a cached intuition that “global health is most cost-effective on near-term timescales,” but what’s really happened is that “a well-respected charity evaluator that researches donation opportunities with highly developed evidence bases has selected global health as the most cost-effective cause with a highly developed evidence base.” Remove the requirement for certainty about the floor of impact that your donation will have, and all of a sudden a lot of stuff looks competitive with bednets on expected-value terms.
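To make the argument above concrete, here is a toy comparison (all numbers hypothetical, chosen only to illustrate the shape of the reasoning): an intervention with a well-measured impact floor can be beaten on expected value by a speculative one that has no floor at all.

```python
# Toy illustration (all numbers hypothetical): a low-variance intervention
# with a well-evidenced floor vs. a high-variance, speculative one,
# compared on expected value alone.

# Intervention A: narrow uncertainty band, guaranteed floor
# (say, 30-50 units of impact per $1M, roughly uniform).
a_low, a_high = 30, 50
ev_a = (a_low + a_high) / 2  # midpoint = 40.0

# Intervention B: wide uncertainty band, no floor
# (say, 90% chance of ~0 impact, 10% chance of 1,000 units per $1M).
ev_b = 0.9 * 0 + 0.1 * 1000  # = 100.0

print(f"EV(A) = {ev_a}, EV(B) = {ev_b}")
# B dominates A on expected value despite offering no certainty of impact.
```

The point is not that these particular numbers are right, only that requiring a certain floor of impact filters out options like B before the expected-value comparison ever happens.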
(I should caveat that we haven’t yet tried to incorporate animal welfare into our calculations and therefore have no comparison there.)
Could you say more about what you see as the practical distinction between a “slow down AI in general” proposal vs. a “pause” proposal?
Research Manager for the Effective Institutions Project
Fun! I’m glad that you’re working with experts on administering this and applaud the intention to post lessons learned. If you haven’t already come across them, you might find these resources on participatory grantmaking helpful.
COO/Chief of Staff for the Effective Institutions Project
“a system of governance that has been shown repeatedly to lead to better organizational performance.”
This is a pretty strong empirical claim, and I don’t see documentation for it either in your comment or the original post. Can you share what evidence you’re basing this on?
Several years ago, 12 self-identified women and people of color in EA wrote a collaborative article that directly addresses what it’s like to be part of groups and spaces where conversation topics like this come up. It’s worth a read. Making discussions in EA groups inclusive
I’ll bite on the invitation to nominate my own content. This short piece of mine spent little time on the front page and didn’t seem to capture much attention, either positive or negative. I’m not sure why, but I’d love for the ideas in it to get a second look, especially by people who know more about the topic than I do.
Title: Leveraging labor shortages as a pathway to career impact? [note: question mark was added today to better reflect the intended vibe of the post]
Author: Ian David Moss
Why it’s good: I think it surfaces an important and rarely-discussed point that could have significant implications for norms and practices around EA community-building and career guidance if it were determined to be valid.
Introducing the Effective Institutions Project Innovation Fund, a new regranting option for donors
Hi David, thanks for your interest in our work! I need to preface this by emphasizing that the primary purpose of the quantitative model was to help us assess the relative importance of, and promise of engaging with, different institutions implicated in various existential risk scenarios. There was less attention given to the challenge of nailing the right absolute numbers, and so those should be taken with a super-extra-giant grain of salt.
With that said, the right way to understand the numbers in the model is that the estimates were about the impact over 100 years from a single one-time $100M commitment (perhaps distributed over multiple years) focusing on a single institution. The comment in the summary about $100 million/year was assuming that the funder(s) would focus on multiple institutions. Thus, the 100 basis points per billion figure is the “correct” one provided our per-institution estimates are in the right order of magnitude.
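The unit conversion described above can be sketched out explicitly. The per-institution figure below is hypothetical (the source only states the commitment size and the resulting per-billion rate); the sketch just shows how a per-institution estimate scales linearly to a per-billion figure.

```python
# Sketch of the unit conversion described above (illustrative numbers only).
# Suppose a one-time $100M commitment to a single institution is estimated
# to reduce existential risk by 10 basis points over 100 years
# (hypothetical per-institution estimate, not from the model itself).

commitment_usd = 100e6      # $100M per institution
bps_per_institution = 10    # hypothetical impact estimate, in basis points

# Scaling linearly, $1B spread across ten such institutions yields:
institutions_per_billion = 1e9 / commitment_usd  # 10 institutions
bps_per_billion = bps_per_institution * institutions_per_billion

print(bps_per_billion)  # 100.0, i.e. "100 basis points per billion dollars"
```

This is why the per-billion figure only holds if the per-institution estimates are in the right order of magnitude: the conversion is just multiplication, so any error passes through directly.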
We’re about to get started on our second iteration of this work and will have more capacity to devote to the cost-effectiveness estimates this time around, so hopefully that will result in less speculative outputs.
Dustin & Cari were also among the largest donors in 2020: https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valleys
Wow, I didn’t see it at the time but this was really well written and documented. I’m sorry it got downvoted so much and think that reflects quite poorly on Forum voting norms and epistemics.
I think it would have been very easy for Jonas to communicate the same thing in less confrontational language. E.g., “FWIW, a source of mine who seems to have some inside knowledge told me that the picture presented here is too pessimistic.” This would have addressed JP’s first point and been received very differently, I expect.
I understood the heart of the post to be in the first sentence: “what should be of greater importance to effective altruists anyway is how the impacts of all [Musk’s] various decisions are, for lack of better terms, high-variance, bordering on volatile.” While Evan doesn’t provide examples of what decisions he’s talking about, I think his point is a valid one: Musk is someone who is exceptionally powerful, increasingly interested in how he can use his power to shape the world, and seemingly operating without the kinds of epistemic guardrails that EA leaders try to operate with. This seems like an important development, if for no other reason than that Musk’s and EA’s paths seem more likely to collide than diverge as time goes on.
I agree this is an important point, but also think identifying top-ranked paths and problems is one of 80K’s core added values, so don’t want to throw out the baby with the bathwater here.
One less extreme intervention that could help would be to keep the list of top recommendations, but not rank them. Instead, 80K could list them as “particularly promising pathways” or something like that, emphasizing in the first paragraphs of text that personal fit should be a large part of choosing a career and that the identification of a top tier of careers is intended to help the reader judge where they might fit.
Another possibility, I don’t know if you all have thought of this, would be to offer something that’s almost like a wizard interface where a user inputs or checks boxes relating to various strengths/weaknesses they have, where they’re authorized to work, core beliefs or moral preferences, etc., and then the program spits back a few options of “you might want to consider careers x, y, and z—for more, sign up for a session with one of our advisors.” Then promote that as the primary draw for the website more than the career guides. Just a thought?
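The “wizard” idea described above could be prototyped very simply. Everything in this sketch is hypothetical (the career names, the tags, and the matching rule are invented for illustration); it just shows the basic shape of a checkbox-input, ranked-output tool.

```python
# Minimal sketch of the "wizard" interface idea (all career data and
# tags are hypothetical, invented for illustration).
# The user checks boxes for strengths/constraints, and the tool returns
# the careers whose requirements overlap most with that profile.

CAREERS = {
    "AI policy": {"writing", "us_work_authorization", "policy_interest"},
    "Biosecurity research": {"biology", "research", "quantitative"},
    "Operations at a nonprofit": {"organization", "generalist"},
}

def suggest_careers(user_tags, top_n=3):
    """Rank careers by how many of their required tags the user matches."""
    scored = sorted(
        CAREERS,
        key=lambda career: len(CAREERS[career] & user_tags),
        reverse=True,
    )
    return scored[:top_n]

print(suggest_careers({"writing", "policy_interest"}))
# "AI policy" ranks first for this profile, since it matches two tags.
```

A real version would obviously need richer inputs (moral preferences, work authorization as hard constraints rather than soft tags) and would hand off to an advising call, as described above, rather than treating the ranked list as an answer.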
I was also going to say that it’s pretty confusing that this list is not the same as either the top problem areas listed elsewhere on the site or the top-priority career paths, although it seems derived from the latter. Maybe there are some version control issues here?
I feel like this proposal conflates two ideas that are not necessarily that related:
1. Lots of people who want to do good in the world aren’t easily able to earn-to-give or do direct work at an EA organization.
2. Starting altruistically-motivated independent projects is plausibly good for the world.
I agree with both of these premises, but focusing on their intersection feels pretty narrow and impact-limiting to me. As an example of an alternative way of looking at the first problem, you might consider instead, or in addition, having on guests who work in high(ish)-impact jobs where there are currently labor shortages.
Overall, I think it would be better if you picked which of the two premises you’re most excited about and then went all-in on making the best podcast you could focused on that one.
Hmm, I guess I’m more optimistic about 3 than you are. Billionaires are both very competitive and often care a lot about how they’re perceived, and if a scaled-up and properly framed version of this evaluation were to gain sufficient currency (e.g. via the billionaires who score well on it), you might well see at least some incremental movement. I’d put the chances of that around 5%.
FYI, there’s a US-focused organization with a similar mission to CEAP’s called Unlock Aid that seems to be doing really good work. Open Philanthropy is also doing a lot of engagement with governments directly through its Global Aid Policy program.