Researcher of causal models and human-aligned AI at FHI | https://twitter.com/ryancareyai
RyanCarey
With the likes of FTX, Wave, etc., I agree that the EV of EA startups is now obviously high. But this is because EAs are much better at startups than average, to an extent that was not entirely obvious (at least not to everyone) a decade ago.
Yes, startup people always said we should start startups, but it wasn’t clear whether they were right.
On initial investigations, we could see that, given VC funding, founders exited with an average of $10M (and only ~1% of startups seemed to attain VC funding). Stanford-alumnus founders attained (unconditional on funding) a net worth of ~$10M; YC founders, around $20M. And then the first dozen EA and rationalist startups were false starts. Whereas folks in software or finance could make a million a year.
So yes, it seemed good, but exactly how good was not obvious at that time. One could argue it is a good case for more people exploring unusual things, for reasons of value of information.
If that’s a major concern, then one could instead call it “Harvard Square Longtermist Space”.
It’s taken!
You could call it the Harvard Square Effective Altruism Space. Relatedly, the New York one could be called the New York EA Space. Makes them easy to find!
No, you’re probably thinking of anthropogenic risk. AI is 1⁄10, whereas the total estimated x-risk is 1⁄6.
Just to rebut a few points there.
On (1) credentials/electives/workload:
A very small fraction of MDs are admitted to joint MD-PhDs. A medical degree is not the only one that can come with optional extras—in many other degrees, a similar fraction of students would be publishing papers with supervisors. And the PhD that a medic does will not necessarily be as relevant as that of a computer scientist. Basically, it seems like a way of avoiding an apples-with-apples comparison.
A 15% elective component is terribly little.
Note that the workload may skyrocket in the clinical years.
Regarding (2) transferability: I believe you’re overthinking it. From a zoomed out view, medical classes are approximately useless, and this talk of a specialised class becoming useful by being “embedded in a translational framework” is basically waffle.
You understate the case for the usefulness of useful subjects. If I’d studied computer science for undergrad, I could’ve got where I am now 5+ years earlier. Even dropping out of medical school could have accelerated things. In such a scenario, I could’ve been a somewhat more credible applicant for things like top professor positions than is currently the case. (Of course, skill is the main thing, but getting promptly educated, and building a stellar CV at a young age does help, vs studying irrelevant subjects).
Regarding (3) funding:
> my own prior is generally quite low when it comes to placing trust in getting funds without problems, even when being ‘smart enough to be admitted to a medical degree’.
What kind of evidence would make you update your prior? Many funders say they are willing to fund any person doing excellent longtermist work, and many orgs are continually growing and hiring. To take one extreme example, $50k fellowships are being given out to interested teenagers. It’s a movement that’s >10 years old, with its funding base growing by double digits per year. If you’re smart enough to get into a German medical degree, and dedicated, then it should be possible to do excellent work...
I agree with you that practising medicine is not very impactful, and that community building in medical schools in some countries (e.g. Australia/DE) is useful. However, I think you’re way overoptimistic about the value of a medical degree. Since that’s our disagreement, that’ll be the focus of my comment. Most of it will be line-by-line reactions to your arguments, and then I sum up a bit, and comment on outreach at the end.
Line-by-line comments
> Impact of EAs after med school is more robust to EA meta risks (e.g. cause X existing, EA running out of funding,...) than when specializing earlier during university on cause areas (80%)
Specialising into EA is very non-robust to new cause areas, because it is a severe narrowing that happens early. And how many more billions do EAs have to make before you stop worrying about it running out of funding?
> Is becoming a doctor a high impact career path? No, I personally don’t think so
I’m not sure it’s a matter of personal opinion. Greg has convincingly argued the opposite.
> In contrast to specific knowledge of pathologies, the fundamental knowledge treated in med school of natural sciences, biological sciences but also sociology/psychology will likely be highly relevant for all of the above cause areas.
A medical degree is about 1⁄4 as dense in fundamental science as a science degree: fundamental science only makes up about half of the non-clinical years, which in turn make up only about half of the program.
> Furthermore, it is possibly taught with better signal-to-noise ratio than in other degrees as med school has to focus on the parts of the knowledge relevant for translational (i.e. practical big-picture) applications in humans and medical systems. (again, with large variation)
Most of the EA roles need biomedical science, and there’s much less of this in med than in a science degree, because the medical degree is busy teaching you how to be a doctor. What’s not spent on clinical medicine is often spent on memorising things that are not very broadly relevant, like anatomy, pharmacology, maybe medical ethics, medical law. So I would say the signal-to-noise is much worse.
> This kind of big-picture thinking and problem-solving is fundamental in medicine and in my experience often overlooked in other hyperspecialized degrees, despite being highly relevant for any impactful EA endeavor.
Basically every degree can arguably give you special translational skills that help you to be a good EA. Math, law, and philosophy degrees “teach you how to think”. Med, engineering, and business “teach you how to solve problems”. But what is actually relevant is usually science and computer science.
> a PhD usually takes another 3-5 years after Bachelor/Master studies in other career paths.
This is not the right comparison. A medical degree doesn’t help you to do many of the things that a doctorate does.
> in some careers [a medical degree] will offer a notable amount of additional credential boost just by being considered ‘a physician’ or even ‘a doctor’ such as e.g. in public or political work
It’s not actually that useful in my experience, especially outside Australia/DE. People care a lot more about whether you can actually do the job.
> All three of these are largely independent of the specific choice of your degree if you are the type of person open to networking and meeting people outside of university classes, so I would say that the choice of the city/town that you will be studying in matters more than the degree itself.
But all things being equal, it’s much better to build a network of people who study higher-impact topics.
> ‘Standard EA advice’[4] for high school graduates (never go to med school, go do something more directly relevant for known x-risks) is currently based on a lot of assumptions:
> - we are right about the current setting of priorities and cause areas
> - we have not missed any further cause X
It definitely doesn’t rely on these assumptions. If we knew that the most impactful causes were currently unknown to us, I think they would be more likely to be far away from medicine, which is a relatively narrow field. Think of previous crucial considerations: AI risk, anthropics, the simulation argument. Not things that are particularly amenable to approach by a medic!
> - EA will continue being able to pay all people wishing to do so for work in niche and formerly neglected causes
It doesn’t assume EA will be able to pay everyone. But EA will be able to fund most people who are smart enough to be admitted to a medical degree in Australia or Germany. That worry was reasonable a decade ago, but the funding base has grown and diversified so much since then that it’s really not defensible anymore.
> - EA will continue being able to raise funds faster than people
Even if we gather a lot more people, we could allocate them to things like policy, which are funding-neutral and very useful, before we allocate anyone toward medicine.
> - indefinitely have quasi-unlimited funding by a handful of software dev philanthropists (which assumes:)
The crisis would have to reduce the holdings of half a dozen billionaires by 95% all at once. And even then, there are things like Founders Pledge, the GWWC pledge, and finance people, which collectively hold billions of dollars of wealth. Not plausible.
> While I generally remain cautious while optimistic when eyeing the above, I would propose that a medical degree and the breadth it provides can make your positive impact more robust...
No, once again, medicine is a very narrow degree.
> Furthermore, the incredible job safety of a medical degree always offering a comfortable plan B can be a source of confidence when later engaging in high risk high reward career paths such as EA-aligned charity or biotech entrepreneurship.
In my experience, all that plan B was useful for was earning a year’s runway of funds. I spent about two years doing that. Then I had to leave Australia to study. I maintained my medical registration for another year, by travelling back and practising briefly, but around that point it became clear that I was not going back to it. If I could have the time again, I would definitely have tried harder to apply for funds to drop out of my medical degree and study something useful. All up, this would have saved me about six years. Now that we are in an environment of such persistent and robust funding abundance, there is really no excuse!
Summing up on Medical Degrees
First, let’s look at your summary, then I’ll give mine.
> So—Is going to medical school generally a high impact choice? What I have been aiming to show is that it is not as clear-cut. Whether medical school prepares you for a high impact career or not will largely depend on the country, city and specific university that you are doing it in: the extent to which course content is transferable to high impact careers, the workload, the extent to which you will be forced to study irrelevant knowledge (largely specific pathologies), the degree which you can be getting in the end, the opportunities for extracurricular activities, the networking opportunities.
I would say the main advantages of medicine are that you’ll meet some smart students (some network) and get a steady income stream. But it’s far worse than the best courses. It’s a narrow course, with less-than-average transferable knowledge, so it provides you with less optionality than, for example, a science degree. The workload is among the highest, so you will have fewer opportunities for EA-related pursuits. The content usually has less elective material than other courses, so you will be forced to study more irrelevant material. Your network will suffer from these problems too, so they will be somewhat less likely to pursue high-impact roles than the smartest kids from a science degree.
For folks who are already in a medical degree and are very into longtermist EA, I think it depends to a fair degree on how suited you are to working on biosecurity. If you are, then you might want to finish the degree. If you’re not, you might not: you could instead pursue a direct-work role, a startup project, or a masters degree. Other caveats: you often need an undergraduate degree to enter a masters program, and to get a visa for some countries, so this is worth checking into, and exiting with a BMedSci, if possible, is often desirable.
Thoughts on Medical Student Outreach
I think outreach to medical students is not a bad idea. If you’re going to allocate 3% of EA’s outreach efforts to Harvard, then you probably should allocate at least 0.1% of efforts to medical students in Germany and Australia, or something. The point is that in these countries, there is a nationwide school-leaving grade, and the most competitive courses tend to be medical degrees, so they attract a lot of top talent—the sort of people who would go to Harvard or Oxford if they were born in those respective countries. Naturally, many of these folks will be interested in EA ideas, and could benefit from being connected to them. And biorisk is something they can help with (although by my reckoning it seems >30x less important than AI risk), along with perhaps other brain-related topics. But it’s tricky, because you’re naturally pitching these students on something that goes quite against the grain for their cohort—leaving their degree, leaving their field, and so on. Much more so than what we ask of computer scientists, for example. But still, it seems worth a shot.
If it’s so easy for a driven EA to become a billionaire then why do you spend your days podcasting (seriously)?
Whether you use Shapley values seems orthogonal to the question I’m asking—whether to price the shares non-uniformly, in order to estimate relative contributions.
I agree it’s a problem for the entire surplus to go to the seller. But that problem isn’t impactful people getting rich. It’s that if the certs are too expensive there’ll be too few buyers to clear the market. So I agree that payouts should probably be tuneable. If you want the actual impact of a project to still be known, then you could have the “percent of impact purchased” be the tuneable parameter, if buyers aren’t too sensitive to it.
Agree on affecting culture.
Sorry, I could have been clearer. Suppose we want to choose (B) on (2). And I ask for retro funding for a painting I drew.
Let’s consider three cases: a) it had $500 of impact, and I was 0% likely to make the painting anyway; b) it had $1k of impact, and was 50% likely anyway; c) it had $2k of impact, and was 75% likely anyway.
The “obvious” solution is that in each case, I can sell some certs, say 100 for $5 ea.
Alternatively, we could say that: in case a) certs 1-100 are each worth $5; in case b) certs 1-50 are worth $10 each and certs 51-100 $0; in case c) certs 1-25 are worth $20 each and certs 26-100 $0.
The second solution allows the world to know how useful the painting/project was, though at the cost of some complexity.
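To make the arithmetic concrete, here’s a minimal sketch of the two pricing schemes in Python. The function names, the 100-cert convention, and the `p_anyway` parameter are my own illustrative assumptions, not part of any actual cert contract.

```python
# Illustrative sketch only: names and the 100-cert convention are assumptions
# made for this example, not part of any proposed cert standard.

def uniform_prices(total_impact, p_anyway, n_certs=100):
    """Scheme 1: spread the counterfactual value evenly across all certs."""
    counterfactual_value = total_impact * (1 - p_anyway)
    return [counterfactual_value / n_certs] * n_certs

def numbered_prices(total_impact, p_anyway, n_certs=100):
    """Scheme 2: price every cert at the full per-cert impact, but zero out
    the certs beyond the counterfactual fraction."""
    cutoff = round(n_certs * (1 - p_anyway))  # certs up to `cutoff` are fully counterfactual
    per_cert = total_impact / n_certs
    return [per_cert if i < cutoff else 0.0 for i in range(n_certs)]

# The three painting cases above: (total impact, probability it happened anyway).
for impact, p in [(500, 0.0), (1000, 0.5), (2000, 0.75)]:
    u, n = uniform_prices(impact, p), numbered_prices(impact, p)
    # The seller receives $500 under both schemes in every case...
    assert sum(u) == sum(n) == 500
    # ...but only the numbered scheme exposes the total impact via the top price.
    print(f"impact=${impact}, p_anyway={p}: top numbered-cert price ${max(n)}")
```

Under both schemes the seller is paid $500 in every case, but only the second leaves the painting’s total impact readable from the price structure (the $5 / $10 / $20 per-cert prices), which is the complexity trade-off mentioned above.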
The bottom line, I think, is that the contract should be clear about which thing it purports to be doing.
I think Yes on (4) - make the founders rich, because most of the value will come from incentivising people to have an impact. Hence (B) on (7) - you auction the shares. The main disadvantage is that the greater cost might dissuade investors, which I think is mitigated a bit by making the certs cheaper via (B) on (2).
Two points on (2): i) I’m not convinced that you (an initial buyer) get to ultimately decide how impact certs are valued. You at most get to write something about the asset in the contract, but ultimately it is the buyers and sellers who will decide how to value it. I feel that if they are grantmakers seeking to incentivise good work at a low cost, they may be disinclined to buy the impact that would have happened anyway (even absent their existence as a funder). One way that preference could be satisfied is to give each share a number. Funders will value the first shares most, because they are fully “counterfactual”, but if half of the value comes from a thing that the founder would have done anyway, then shares beyond halfway will be worth nothing. ii) So far we’ve talked about what happens when a funder buys a cert from a founder. There’s also the question of what happens when the next guy buys the cert from the funder. Should you pay for the whole impact of the funder paying the founder, or only the impact of paying the funder (i.e. apply the solution recursively)? The latter would arguably be a somewhat distinct solution (c).
I tried to think about (2) a bit here - https://forum.effectivealtruism.org/posts/eb28pDHzZz2RWh9Fh/will-impact-certificates-value-only-impact. It hasn’t had much discussion, so probably I’m still missing a lot of considerations.
I agree there are diminishing returns; I think Ajeya’s report has done a bunch of what needed to be done. I’m less sure about timelines being decision-irrelevant. Maybe not for Miles, but it seems quite relevant for cause prioritisation, career-planning between causes, and prioritising policies. I also think better timeline-related arguments could on net improve, not worsen, reputation, because improved substance and polish will actually convince some people.
On the other hand, one argument I might add is that researching timelines could shorten them, by motivating people to make AI that will be realised in their lifetimes, so timelines research can do harm.
On net, I guess I weakly agree—we seem not to be under-investing in timelines research, on the current margin. That said, AI forecasting more broadly—that considers when particular AI capabilities might arise—can be more useful than examining timelines alone, and seems quite useful overall.
A public list of regranters makes the system very gameable and vulnerable to individual granters unilaterally funding negative value projects.
I think I agree with essentially all of this, though I would have preferred if you gave this feedback when you were reading the draft because I would have worded my comments to ensure they don’t give the impression you’re worried about.
If it seemed to you like I was raising different issues in the draft, then each to their own, I guess. But these concerns were what I had in mind when I wrote comments like the following:
> 2004–2008: Before I found other EAs
If you’re starting with this, then you should probably include “my” in the title (or similar), because it’s about your experience with EA, rather than just an impartial historical recounting… you allocate about 1⁄3 of the word count to autobiographical content that is only loosely related to the early history of EA...
> In general, EA emerged as the convergence from 2008 to 2012 of at least 4 distinct but overlapping communities
I think the “EA” name largely emerged from (4), and its core institutions mostly from (4) with a bit of (2). You’d be on more solid ground if you said that the EA community—the major contributors—emerged from (1-4), or if you at least clarified this somehow.
> dozens of people worked full-time on EA community-building and research since before 2012
This perhaps conflates “EA” community building with proto-EA community-building. There was plenty of the latter, but not much/any of the former.
> 2012 onward: Growing EA as EA
Again, I think you should be more explicit in both the title and intro that you’re just telling the story of your trajectory through the early history, rather than detailing how everything came to be. Because you’re largely concentrating on the bits you were involved in.
That’s very nice of you to say, thanks Michelle!
Regarding THINK, I personally also got the impression that Mark was a sole-founder, albeit one who managed other staff. I had just taken Jacy’s claim of co-founding THINK at face value. If his claim was inaccurate, then clearly Jacy’s piece was more misleading than I had realised.
Comments on Jacy Reese Anthis’ Some Early History of EA (archived version).
Summary: The piece could give the reader the impression that Jacy, Felicifia and THINK played a comparably important role to the Oxford community, Will, and Toby, which is not the case.
I’ll follow the chronological structure of Jacy’s post, focusing first on 2008-2012, then 2012-2021. Finally, I’ll discuss “founders” of EA, and sum up.
2008-2012
Jacy says that EA started as the confluence of four proto-communities: 1) SingInst/rationality, 2) Givewell/OpenPhil, 3) Felicifia, and 4) GWWC/80k (or the broader Oxford community). He also gives honorable mentions to randomistas and other Peter Singer fans. Great—so far I agree.
What is important to note, however, is the contributions that these various groups made. For the first decade of EA, most of its key community institutions came from (4), the Oxford community, including GWWC, 80k, and CEA, and secondly from (2), although Givewell seems to me to have been more of a grantmaking entity than a community hub. Although the rationality community provided many key ideas and introduced many key individuals to EA, the institutions that it ran, such as CFAR, were mostly oriented toward its own “rationality” community.
Finally, Felicifia is discussed at greatest length in the piece, and Jacy clearly has a special affinity for it, based on his history there, as do I. He goes as far as to describe the 2008-12 period as a history of “Felicifia and other proto-EA communities”. Although I would love to take credit for the development of EA in this period, I consider Felicifia to have had the third- or fourth-largest role in “founding EA” of the groups on this list. I understand its role as roughly analogous to the one currently played (in 2022) by the EA Forum, as compared to those of CEA and OpenPhil: it provides a loose social scaffolding that extends to parts of the world that lack any other EA organisation. It therefore provides some interesting ideas and leads to the discovery of some interesting people, but it is not where most of the work gets done.
Jacy largely discusses the Felicifia Forum as a key component, rather than the Felicifia group-blog. However, once again, this is not quite what I would focus on. I agree that the Forum contributed a useful social-networking function to EA. However, I suspect we will find that more of the important ideas originated on Seth Baum’s Felicifia group-blog and more of the big contributors started there. Overall, I think the emphasis on the blog should be at least as great as that of the forum.
2012 onwards
Jacy describes how he co-founded THINK in 2012 as the first student network explicitly focused on this emergent community. What he neglects to discuss is that the GWWC and 80k student networks already existed, focusing on effective giving and impactful careers. He also mentions that a forum post dated to 2014 discussed the naming of CEA, but fails to note that the events described in the post occurred in 2011, culminating in the name “effective altruism” being selected for that community in December 2011. So steps had already been taken toward having an “EA” moniker and an EA organisation before THINK began.
Co-founders of EA
To wrap things up, let’s get to the question of how this history connects to the “co-founding” of EA.
Some people including me have described themselves as “co-founders” of EA. I hesitate to use this term for anyone because this has been a diverse, diffuse convergence of many communities. However, I think insofar as anyone does speak of founders or founding members, it should be acknowledged that dozens of people worked full-time on EA community-building and research since before 2012, and very few ideas in EA have been the responsibility of one or even a small number of thinkers. We should be consistent in the recognition of these contributions.
There may have been more, but only three people come to mind who have described themselves as co-founders of EA: Will, Toby, and Jacy. For Will and Toby, this makes absolute sense: they were the main ringleaders of the main group (the Oxford community) that started EA, and they founded the main institutions there. The basis for considering Jacy among the founders, however, is that he was around in the early days (as were a couple of hundred others), and that he started one of the three main student groups—the latest and least important among them. In my view, it’s not a reasonable claim to have made.
Having said that, I agree that it is good to emphasise that as the “founders” of EA, Will and Toby only did a minority—perhaps 20%—of the actual work involved in founding it. Moreover, I think there is a related, interesting question: if Will and Toby had not founded EA, would it have happened otherwise? The groundswell of interest that Jacy describes suggests to me an affirmative answer: a large group of people were already becoming increasingly interested in areas relating to applied utilitarianism, and increasingly connected with one another, via GiveWell, academic utilitarian research, Felicifia, utilitarian Facebook groups, and other mechanisms. I lean toward thinking that something like an EA movement would have happened one way or another, although its characteristics might have been different.
Harvard Square Future Space? (Animals and meta can also be included in improving the future!)