I’m the CTO of Wave. We build financial infrastructure for unbanked people in sub-Saharan Africa.
Personal site (incl various non-EA-related essays): https://www.benkuhn.net/
Email: ben dot s dot kuhn at the most common email address suffix
Why do you think people think it’s unimportant (rather than, e.g., important but very difficult to achieve due to the age skew issue mentioned in the post)?
I agree that it’s downstream of this, but strongly agree with ideopunk that mission alignment is a reasonable requirement to have.* A (perhaps the) major cause of organizations becoming dysfunctional as they grow is that people within the organization act in ways that are good for them, but bad for the organization overall—for example, fudging numbers to make themselves look more successful, asking for more headcount when they don’t really need it, doing things that are short-term good but long-term bad (with the assumption that they’ll have moved on before the bad stuff kicks in), etc. (cf. the book Moral Mazes.) Hiring mission-aligned people is one of the best ways to provide a check on that type of behavior.
*I think some orgs maybe should be more open to hiring people who are aligned with the org’s particular mission but not part of the EA community—eg that’s Wave’s main hiring demographic—but for orgs with more “hardcore EA” missions, it’s not clear how much that expands their applicant pool.
Whoops! Fixed, it was just supposed to point to the same advice-offer post as the first paragraph, to add context :)
In addition to having a lot more on the line, other reasons to expect better of ourselves:
EA had (at least potential) access to a lot of information that investors may not have, in particular about Alameda’s early exodus in 2018.
EA had much more time to investigate and vet SBF—there’s typically a very large premium for investors to move fast during fundraising, to minimize distraction for the CEO/team.
Because of the second point, many professional investors do surprisingly little vetting. For example, SoftBank is pretty widely reputed to be “dumb money;” IIRC they shook hands on huge investments in Uber and WeWork on the basis of a single meeting, and their flagship Vision Fund lost 8% (~$8b) this past quarter alone. I don’t know about OTPP but I imagine they could be similarly diligence-light given their relatively short history as a venture investor. Sequoia is less famously dumb than those two, but still may not have done much vetting if FTX was perceived to be a “hot” deal with lots of time pressure.
Is it likely that FTX/Alameda currently have >50% voting power over Anthropic?
Extremely unlikely. While Anthropic didn’t disclose the valuation, it would be highly unusual for a company to take >50% dilution in a single funding round.
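To illustrate the dilution arithmetic (with purely hypothetical numbers, not Anthropic’s or FTX’s actual terms), here’s a minimal sketch:

```python
# Hypothetical illustration of single-round dilution math (made-up numbers,
# not Anthropic's actual terms): in a priced round, the new investors'
# ownership is investment / post-money valuation.
investment = 500e6   # hypothetical amount raised in the round
post_money = 4e9     # hypothetical post-money valuation
dilution = investment / post_money
print(f"New investors own {dilution:.0%}")  # 12% -- nowhere near 50%

# For a single round to hand over >50%, the investment would have to exceed
# the pre-money value of the whole company (investment > post_money - investment),
# which essentially never happens in a normal venture round.
```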
Definitely! In this case I appear to have your email so reached out that way, but for anyone else who’s reading this comment thread, Forum messages or the email address in the post both work as ways to get in touch!
In the “a case for hope” section, it looks like your example analysis assumes that the “AGI timeline” and “AI safety timeline” are independent random variables, since your equation describes sampling from them independently. Isn’t that really unlikely to be true?
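As a rough illustration of why the independence assumption matters (using made-up lognormal distributions, not the numbers from the post), here’s a quick Monte Carlo sketch:

```python
import numpy as np

# Rough sketch (made-up distributions, not the post's actual model):
# estimate P(safety is solved before AGI) by sampling both timelines.
rng = np.random.default_rng(0)
n = 200_000

# 1. Independent sampling, as the post's equation implies.
agi = rng.lognormal(mean=3.0, sigma=0.5, size=n)     # years until AGI
safety = rng.lognormal(mean=3.2, sigma=0.5, size=n)  # years until safety is solved
print("independent:", np.mean(safety < agi))

# 2. Correlated sampling: both timelines share a common driver
# (e.g. the overall pace of AI progress), so they tend to move together.
common = rng.standard_normal(n)
agi_c = np.exp(3.0 + 0.5 * (0.8 * common + 0.6 * rng.standard_normal(n)))
safety_c = np.exp(3.2 + 0.5 * (0.8 * common + 0.6 * rng.standard_normal(n)))
print("correlated: ", np.mean(safety_c < agi_c))
```

In this toy example the two estimates differ by several percentage points, which is just meant to show that treating the timelines as independent isn’t an innocuous modeling choice.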
Can someone clarify whether I’m interpreting this paragraph correctly?
Effective Ventures (EV) is a federation of organisations and projects working to have a large positive impact in the world. EV was previously known as the Centre for Effective Altruism but the board decided to change the name to avoid confusion with the organisation within EV that goes by the same name.
I think what this means is that the CEA board is drawing a distinction between the CEA legal entity / umbrella organization (which is becoming EV) and the public-facing CEA brand (which is staying CEA). AFAIK this change wasn’t announced anywhere separately, only in passing at the beginning of this post which sounds like it’s mostly intended to be about something else?
(As a minor point of feedback on why I was confused: the first sentence of the paragraph makes it sound like EV is a new organization; then the first half of the second sentence makes it sound like EV is a full rebrand of CEA; and only at the end of the paragraph does it make clear that there is intended to be a sharp distinction between CEA-the-legal-entity and CEA-the-project, which I wasn’t previously aware of.)
Sorry that was confusing! I was attempting to distinguish:
Direct epistemic problems: money causes well-intentioned people to have motivated cognition etc. (the downside flagged by the “optics and epistemics” post)
Indirect epistemic problems as a result of the system’s info processing being blocked by not-well-intentioned people
I will try to think of a better title!
Since someone just commented privately to me with this confusion, I will state for the record that this commenter seems likely to be impersonating Matt Yglesias, who already has an EA Forum account with the username “Matthew Yglesias.” (EDIT: apparently it actually is the same Matt with a different account!)
(Object-level response: I endorse Larks’ reply.)
Please note that the Twitter thread linked in the first paragraph starts with a highly factually inaccurate claim. In reality, at EAGxBoston this year there were five talks on global health, six on animal welfare, and four talks and one panel on AI (alignment plus policy). Methodology: I collected these numbers by filtering the official conference app agenda by topic and event type.
I think it’s unfortunate that the original tweet got a lot of retweets / quote-tweets and Jeff hasn’t made a correction. (There is a reply saying “I should add, friend is not 100% sure about the number of talks by subject at EAGx Boston,” but that’s not an actual correction, and it was posted as a separate comment so it’s buried under the “show more replies” button.)
This is not an argument for or against Jeff’s broader point, just an attempt to combat the spread of specific false claims.
This must be somewhat true but FWIW, I think it’s probably less true than most outsiders would expect—I don’t spend very much personal time on in-country stuff (because I have coworkers who are local to those countries who will do a much better job than I could) and so end up having pretty limited (and random/biased) context on what’s going on!
IIRC a lot of people liked this post at the time, but I don’t think the critiques stood up well. Looking back 7 years later, I think the critique that Jacob Steinhardt wrote in response (which is not on the EA forum for some reason?) did a much better job of identifying more real and persistent problems:
Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities.
Over-confident claims coupled with insufficient background research.
Over-reliance on a small set of tools for assessing opportunities, which leads many to underestimate the value of things such as “flow-through” effects.
I’m glad I wrote this because it played a part in inspiring Jacob to write up his better version, and I think it was a useful exercise for me and an interesting historical artifact from the early days of EA, but I don’t think the ideas in it ultimately mattered that much.
Interesting. It sounds like you’re saying that there are many EAs investing tons of time in doing things that are mostly only useful for getting particular roles at 1-2 orgs. I didn’t realize that.
In addition to the feedback thing, this seems like a generally very bad dynamic—for instance, in your example, regardless of whether she gets feedback, Sally has now more or less wasted years of graduate schooling.
Top and (sustainably) fast-growing (over a long period of time) are roughly synonymous, but fast growth is the upstream thing that makes a startup a good learning experience.
Note that billzito didn’t specify, but the important number here is userbase or revenue growth, not headcount growth; the former causes the latter, but not vice versa, and rapid headcount growth without corresponding userbase growth is very bad.
People definitely can see rapidly increasing responsibility in less-fast-growing startups, but it’s more likely to be because they’re over-hiring rather than because they actually need that many people, in which case:
You’ll be working on less important problems that are more likely to be “fake” or busywork
There will be less of a forcing function for you to be very good at your job (because it will be less company-threatening if you aren’t)
There will be less of a forcing function for you to prioritize correctly (again because nothing super bad will happen if you work on the wrong thing)
You’re more likely to experience a lot of politics and internal misalignment in the org
(I’m not saying these applied to you specifically, just that they’re generally more common at companies that are growing less quickly. Of course, they also happen at some fast-growing companies that grow headcount too quickly!)
It sounds like you interpreted me as saying that rejecting resumes without feedback doesn’t make people sad. I’m not saying that—I agree that it makes people sad (although on a per-person basis it does make people much less sad than rejecting them without feedback during later stages, which is what those points were in support of—having accidentally rejected people without feedback at many different steps, I’m speaking from experience here).
However, my main point is that providing feedback on resume applications is much more costly to the organization, not that it’s less beneficial to the recipients. For example, someone might feel like they didn’t get a fair chance either way, but if they get concrete feedback they’re much more likely to argue with the org about it.
I’m not saying this means that most people don’t deserve feedback or something—just that when an org gets 100+ applicants for every position, they’re statistically going to have to deal with lots of people who are in the 95th-plus percentile of “acting in ways that consume lots of time/attention when rejected,” and that can disincentivize them from engaging more than they have to.
Note that at least for Rethink Priorities, a human[1] reads through all applications; nobody is rejected just because of their resume.
I’m a bit confused about the phrasing here because it seems to imply that “Alice’s application is read by a human” and “if Alice is rejected it’s not just because of her resume” are equivalent, but many resume screen processes (including eg Wave’s) involve humans reading all resumes and then rejecting people (just) because of them.
I’m unfamiliar with EA orgs’ interview processes, so I’m not sure whether you’re talking about lack of feedback when someone fails an interview, or when someone’s application is rejected before doing any interviews. It’s really important to differentiate these, because providing feedback on someone’s initial application is a massively harder problem:
There are many more applicants (Wave rejects over 50% of applicants without speaking to them, and that’s with a relatively loose filter)
Candidates haven’t interacted with a human yet, so are more likely to be upset or have an overall bad experience with the org; this is also exacerbated by having to make the feedback generic due to scale
The relative cost of rejecting with vs. without feedback is higher (rejecting without feedback takes seconds, rejecting with feedback takes minutes = ~10x longer)
Candidates are more likely to feel that the rejection didn’t give them a fair chance (because they feel that they’d do a better job than their resume suggests) and dispute the decision; reducing the risk of this (by communicating more effectively + empathetically) requires an even larger time investment per rejection
I feel pretty strongly that if people go through actual interviews they deserve feedback, because it’s a relatively low additional time cost at that point. At the resume screen step, I think the trade-off is less obvious.
I don’t have research management experience in particular, but I have a lot of knowledge work (in particular software engineering) management experience.
IMO, giving insufficient positive feedback is a common, and damaging, blind spot for managers, especially those (like you and me) who expect their reports to derive most of their motivation from being intrinsically excited about their end goal. If unaddressed, it can easily lead to your reports feeling demotivated and like their work is pointless/terrible even when it’s mostly good.
People use feedback not just to determine what to improve at, but also as an overall assessment of whether they’re doing a good job. If you only give negative feedback, you’re effectively biasing this process towards people inferring that they’re doing a bad job. You can try to fight it by explicitly saying “you’re doing a good job” or something, but in my experience this doesn’t really land on an emotional level.
Positive feedback in the form “you are good at X, do more of it” can also be an extremely useful type of feedback! Helping people lean into their strengths often yields as much improvement as helping them shore up their weaknesses, or more.
I’m not particularly good at this myself, but every time I’ve improved at it I’ve had multiple reports say things to the effect of “hey, I noticed you improved at this and it’s awesome and very helpful.”
That said, I agree with you that shit sandwiches are silly and make it obvious that the positive feedback isn’t organic, so they usually backfire. The correct way to give positive feedback is to resist your default negativity bias by calling out specific things that are good when you see them.
Don’t forget Zenefits!
Zenefits was valued at $4.5b in 2015 and was all downhill after the incident; they did three rounds of layoffs in four years and were eventually acquired by a no-name company for an undisclosed price in 2022. It’s unclear how much of that decline was directly a result of the fraud, vs. the founder’s departure, vs. them always having had poor fundamentals and being overvalued at $4.5b due to hype.