How much would it cost to influence the film to make this happen?
devansh
For what it’s worth, the US higher education system is pretty stratified in terms of intelligence. The best universities are maybe a standard deviation above the 50th best university in SAT scores, and would probably be even higher if the SAT max wasn’t 1600; plus, a lot of the most ambitious and potentially successful students go to them. Moreover, top universities generally attract those students from every field; while, for example, UIUC is probably better than most Ivies at CS, the Ivies will still poach a lot of those students largely because of prestige/reputational effects. Those factors combine to make it pretty likely that the kind of people that can have the most impact in these fields are disproportionately concentrated at top universities.
I mean sure, but what’s important here isn’t really the absolute number of intelligent/ambitious people, but the relative concentration of them. One third of Nobel prizes going to people who didn’t complete their undergrad at a top 100 global university means that 2⁄3 of the Nobel prizes did. Out of ~30K global universities, 2⁄3 of Nobels are concentrated in the top 100. The talent exists outside top universities, but focusing on them with limited resources seems more tractable than spreading thin with lower average intelligence/ambition.
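The concentration claim above can be made concrete with a back-of-the-envelope calculation. The numbers here are just the rough figures from the comment (2⁄3 of Nobels from the top 100 of ~30,000 universities), not verified data:

```python
# Rough over-representation calculation, using the comment's figures.
top_share = 2 / 3            # fraction of Nobel laureates from top-100 schools
top_fraction = 100 / 30_000  # fraction of universities that are "top 100"

# How many times more Nobels per university the top 100 produce,
# relative to what a uniform spread across all universities would predict:
overrepresentation = top_share / top_fraction
print(round(overrepresentation))  # prints 200
```

That is, on these rough numbers, a top-100 university produces on the order of 200 times as many Nobel laureates as an average university, which is the tractability argument in a nutshell.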
Medium article throws a 404, FWIW.
This seems interesting, but I’m confused about what the point of this is over “work at an EA org.” It seems like most EA orgs are bottlenecked a lot more on talent than on money, and if you’re doing high-talent work for an EA organization, then your marginal hour is likely worth more than $60/hr. I wonder what subset of the population would benefit substantially from this advice; it seems like earning to give and direct work cover most of the space that earning to volunteer might.
What kind of person is this advice targeted at, and why do you think that this is better than direct work for those people?
(Note: this comment will probably draw heavily from The Precipice, because that’s by far the best argument I’ve heard against temporal discounting. I don’t have a copy of the book with me, so if this is close enough to that explanation you can just disqualify me :P)
In normal situations, it works well to discount things like money based on the time it takes to get them. After all, money is worth less as time goes on, due to inflation; something might happen to you, so you can’t collect the money later on; and there’s an inherent uncertainty about whether you’ll actually get the reward you’re promised. Human lives aren’t subject to inflation: pain and suffering are pain and suffering across time, whether or not there are more people. Something might happen to the world, and I agree that it’s important to discount based on that, but that discounting works out to be relatively small in the grand scheme of things. People in the long-term future are still inherently valuable because they’re people, and their collective value is very important; it should therefore be a major consideration for people living now.
There’s one thing I’ve been ignoring, and it’s something called “pure time preference,” essentially the inherent preference for having something earlier than later just because of its position in time. Pure time preference shouldn’t be applied to the long term future for one simple reason—if you tried to apply a reasonable discount rate based on it *back* to Ancient Rome, the consuls would conclude that one moment of suffering for one of their subjects was worth as much as the entire human race today suffering for their entire lives.
Basically, we should discount the moral value of future people based on catastrophe risk: the chance that the world ends between now and then, so that the gains we strove for won’t mean anything. (That works out to a relatively small discount, all things considered, leaving substantial value in the long-term future, and it gets directly reduced by working on existential risk.) But it’s not fair to people in the future to discount based on anything else, like pure time preference or inflation, because given no catastrophe between now and then, their lives, joys, pains, and sufferings are worth just as much as those of people today, or of people living in Ancient Rome.
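The Ancient Rome reductio can be checked with toy numbers. The rates below are made-up round figures for illustration, not claims from the comment or from The Precipice:

```python
# Toy comparison of a "pure time preference" discount vs. a
# catastrophe-risk discount. All rates here are illustrative.

def discount_factor(annual_rate: float, years: int) -> float:
    """Value today of one unit of welfare `years` from now."""
    return 1 / (1 + annual_rate) ** years

# A 1% pure-time-preference rate, run *backwards* the ~2,000 years to
# Ancient Rome: a Roman's welfare gets weighted hundreds of millions of
# times more heavily than ours.
rome_weight = 1 / discount_factor(0.01, 2000)
print(f"{rome_weight:.2e}")  # on the order of 4e8

# By contrast, a 0.1%/year catastrophe-risk discount over a full century
# barely reduces the value of future lives at all:
century_weight = discount_factor(0.001, 100)
print(f"{century_weight:.3f}")
```

The asymmetry is the whole argument: compounding any nonzero pure time preference over historical timescales produces absurd weightings, while a realistic catastrophe-risk discount leaves most of the long-term future’s value intact.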
As a sixteen year old, while I appreciate that this is being talked about and am a massive proponent of teens having more rights, I think your central point is fundamentally wrong—while we’re forced to go to school every day, we certainly don’t get death squads for minding our own business. To my knowledge, teens being sent to juvenile detention for habitual truancy is extremely rare, and it seems like a massive stretch to argue that teens are physically restrained and forced to go to school, especially by threat of death. With parental permission, for example, I can unenroll from school. Society giving parents ultimate authority over their kids is a different thing, and I agree that that is harmful; but I think you can combat that directly instead of railing loudly against the state’s potential for murder and destruction to teenagers.
While your other claims seem somewhat valid, the exceptional hyperbole on this is fundamentally driving people away from your argument, and I think you’d find potentially orders of magnitude more people willing to hear you out if you focused more on your argument that teen mental capacity is near-fully developed. (For example, it would be wonderful if you campaigned for no-fault emancipation without parental consent and the rights of teens to unenroll from school). Your book seems, at a glance, better for this than this post.
Immediate human suffering almost certainly gives way to larger geopolitical effects in moral weight. Weakening Russian efforts likely points in the direction of a lower chance of nuclear war, for example.
FWIW, I am currently running an EA org, and legal help would have been generally valuable to me both in the past and in the present. My impression is that scaling an EA law firm would involve a few people at the top being EAs, with the rest being perfectly fine as normal, non-aligned lawyers; this would capture many of the benefits of an EA law firm (primarily, I think, a general understanding of EA’s goals and a “cost-benefit analysis” style of thinking that tries to avoid being overly conservative, and is perfectly fine advising its clients to do things that are, e.g., in a legal grey area but not enforced).
So I’d say my answer is that this seems, at least to me personally, like a potentially very high-impact thing that would be incredibly helpful to me and to other people starting small- and medium-sized organizations who need legal advice and support. That said, I am probably not in the best position to answer that question (not having a bird’s-eye view of the EA ecosystem like some others do), so I’m very interested in other perspectives on this.
Yeah, this is a good consideration; if something like this ended up happening, it would be wonderful if Tyrone could get two or three lawyers to cover the major EA hubs (US, especially CA, the UK, and maybe the Bahamas) - either in physical location or in knowledge.
Go apply for 80K Advising - (Yes, right now)
Ah, good point. Done!
Yep, this is my understanding as well!
Agreed (and only 20% kidding). Having an 80k post pinned seems wonderful.
I don’t really see how the world is different whether you use the first or the second representation here. “Drop out and go work at a job” seems like a plan at a higher level of abstraction than “drop out and work in {area},” which is itself at a higher level of abstraction than “drop out and work in {area|position},” which is in turn at a higher level of abstraction than “drop out and work at ORG1.”
What’s the bright line between the first and the second?
Ah, I see. I guess I kind of buy this, but I don’t think it’s nearly as cut-and-dry as you argue, or something. Not sure how much this generalizes, but to me “staying in school” has been an option that conceals approximately as many major suboptions as “leaving school.” I’d argue that for many people, this is approximately true—that is, people have an idea of where they’d want to work or what they’d want to do given leaving school, but broadly “staying in school” could mean anything from staying on ~exactly the status quo to transferring somewhere in a different country, taking a gap year, etc.
>>after a tech company singularity, such as if the tech company develops safe AGI
I think this should be “after AGI”?
“It’s worth noting that the scale of the funding overhang isn’t absolute; there are”
Is this a typo?
Here’s a non-paywalled link available for the next 14 days.
On reading just the summary, the immediate consideration I had was that the EMH would imply that in the counterfactual where I don’t invest in Mind Ease, someone else will, and if I do invest in Mind Ease, someone else will not. After reading the post, it looks like you have two important points here against this—first, early-stage venture markets are not necessarily as subject to the EMH, and second, it’s different in this case because EA-aligned investors would be willing to take a lower financial return than they could get with the same risk otherwise in order to do good. Do you agree that impact investing in the broader financial market into established companies has very little counterfactual impact, or is there something I’m missing there? I’m interested in further research on this concept, and I’m not sure how much EA-aligned for-profits are already working on this.