I’m the CTO of Wave. We build financial infrastructure for unbanked people in sub-Saharan Africa.
Personal site (incl various non-EA-related essays): https://www.benkuhn.net/
Email: ben dot s dot kuhn at the most common email address suffix
I think it’s probably easier to directly find which cities have the highest salaries for your line of work, than to research which have the highest cost-of-living and hope that this correlates with a high salary for your line of work.
I don’t know how the cost-of-living calculator works, but I suspect if it gives that large of a difference it’s taking a multiplier (e.g. cost of living in Manhattan is 2x cost of living in Houston). If the market is efficient, cost-of-living differences should be additive, and hence not nearly that large. This is substantiated by this completely uncontrolled correlational study in a news article (sorry) in which the difference between cities looks more constant than proportional (e.g. if median starting pay differs by 5k, then so does median mid-career pay).
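The difference between the two models is easy to see with made-up numbers (a hypothetical sketch; none of these figures come from the article):

```python
# Two ways to model a cost-of-living adjustment between cities.
# All numbers are invented for illustration.
houston_start, houston_mid = 60_000, 100_000  # hypothetical Houston salaries

# Multiplicative model (what a "2x cost of living" calculator implies):
# the salary gap grows as your career progresses.
mult_start = houston_start * 2  # 120,000
mult_mid = houston_mid * 2      # 200,000 -> gap doubles from 60k to 100k

# Additive model (the efficient-market view: a fixed living-cost premium):
# the salary gap stays roughly constant, as in the news article's data.
premium = 30_000
add_start = houston_start + premium  # 90,000
add_mid = houston_mid + premium      # 130,000 -> gap stays at 30k
```

If cities differed by a fixed premium rather than a multiplier, you'd expect starting-pay and mid-career-pay gaps between cities to be about the same size, which is what the correlational data suggests.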
The other large thing this neglects is remote jobs, which give you the best of both worlds—salary adjusted for high CoL, but low expenses for yourself. One engineer at my company lives in Montana. He makes a Silicon Valley salary and the mortgage on his three-bedroom house on multiple acres is around what I’m paying for half a bedroom in Berkeley.
FQXi has had some success with essay contests in the past. The most recent one appears to have had a prize pool of $20,000 and generated 155 submissions. If you have some unsolved EA-related questions, this might be a good way to buy progress on them.
I think Max Tegmark was involved in the FQXi contests, so you could probably ask him how they went and if he had any advice for similar EA-related ones.
You could offer scholarships! As Jess Whittlestone remarked, as a community we’re quite good at recognizing and putting down ineffective altruism, and not so good at recognizing when our own community members do awesome stuff. Awards or scholarships to EA community members could be a great way to balance that out.
I quoted your post verbatim from FB, and it wasn’t in quotes there. I can add them back if you want.
This isn’t a concrete idea per se, but if you have a large enough income that the deduction cap on donations to registered charities might become an issue, you should consider “trading donations” with other folks to maximize tax deductibility.
For instance, if you want to donate 60% of your income to tax-deductible charities, and someone else wants to donate less than 40% deductibly and 20% non-deductibly, you should instead both donate 50% deductibly and 10% non-deductibly.
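Concretely, the gain comes from moving donations that would have exceeded one person's deduction cap under the other person's unused cap. A quick sketch, assuming a 50%-of-income cap and two donors with equal incomes (both assumptions mine; real caps vary by jurisdiction and donation type), and taking the second donor's giving as 40% deductible plus 20% non-deductible for concreteness:

```python
# Hypothetical illustration of "trading donations" under a deduction cap.
CAP = 0.50  # assumed cap: at most 50% of income is deductible

def deductible(fraction_of_income_donated):
    """Fraction of income that actually counts as a deduction."""
    return min(fraction_of_income_donated, CAP)

# Before trading: A gives 60% to deductible charities (10% is wasted
# above the cap); B gives 40% deductible + 20% non-deductible.
before = deductible(0.60) + deductible(0.40)  # 0.50 + 0.40 = 0.90

# After trading: each gives 50% deductibly and 10% non-deductibly.
# Everyone's total giving is unchanged, but none of it exceeds the cap.
after = deductible(0.50) + deductible(0.50)   # 0.50 + 0.50 = 1.00
```

Between them, the pair deducts an extra 10% of income without changing how much money goes to each charity.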
You could fund people to go to the EA retreat/summit, Good Done Right, CEA’s Weekend Away, etc. who wouldn’t otherwise go. I know someone who did this and it seems to have turned out fairly well so far. I don’t know if they want to be named publicly but I can probably refer people who are interested.
Even if it’s only a small amount of funding, small funders probably won’t want to do it (because it would take their entire budget and it’s high-variance).
I think you can give away an awful lot of money before hitting diminishing returns, either by providing more scholarships or by increasing the amount. The ultimate win condition for a scholarship might look something like the Thiel Fellowship, for instance, which can absorb a ton of money.
If you give people non-token sums of money it can also change their behavior. Again, the Thiel Fellowship is a good example of this. If someone had awarded me $100k over two years during college, I would probably be doing something pretty different (and maybe more awesome, though of course that’s hard to tell) right now.
EDIT: Downvoter, care to explain?
I agree that having a better affordance for this seems important. That said:
I don’t think the management problem is that large: for instance, I have a full-time job and also a number of EA side projects.
There are principal-agent problems involved with having someone else manage your funds, so you might lose some effectiveness going from managing the projects you want to fund yourself to having someone else do it.
Another random note: It might be hard to convince people that this is viable long-term without some kind of endowment (although I’m not sure).
Why did you invest in individual stocks? Did you have insider information about them?
Are the considerations you mention written up in more depth anywhere? I’m not that familiar with some of them. Pointers?
Nascent EA projects seem like entrepreneurship in that the low hanging fruit tends to be picked, so they consistently require an exceptional, devoted effort in order to actually make the desired impact. If the person who’s working on an EA project is not devoting their life to it, I would generally expect the project to deliver average or below-average returns.
I don’t think your analogy holds up, and I don’t think your conclusion is correct.
Five years ago the EA movement didn’t exist. Entrepreneurship has been around for centuries. This year, more businesses will be started than there are effective altruists in the world, probably by a factor of at least 100. I would be incredibly surprised if the low-hanging fruit depletion in EA projects were anywhere near comparable to that in entrepreneurship.
Furthermore, there’s plenty of evidence that non-full-time EA projects can have good returns. Harvard Effective Altruism has produced exceptionally good returns on an investment of less than 10% of its organizers’ productive time (on average) in college. Max Tegmark founded FLI and FQXi with much less than full-time effort (I don’t know precisely how much). I don’t think anyone at CSER is working on it full-time.
Some projects are good ideas and just aren’t big enough to need full-time management.
I suspect you are not “really trying” / putting in an exceptional effort into your side projects.
Obviously I can’t actually defend myself from this accusation until the results are in, but I think it’s pretty uncharitable based on my track record.
What do you think is the relevant disanalogy between a hypothetical EA fellowship and the Thiel Fellowship? Most Thiel Fellows have access to the same funding sources as EAs (perhaps more, because VC funding is so abundant). But it seems pretty clear to me that the Thiel Fellowship has accomplished things that Thiel couldn’t have done without spending as much money.
Has 80k convinced anyone to try high-variance things using these other sources you listed as financing, other than starting a startup (which is easier to get funding for than lots of other good EA ideas)? If not, it seems like 80k plan changes aren’t a good benchmark here, since they accomplish different things.
I agree that your framework and process can apply to opportunity-level decisions as well as field-level decisions—I just think that it isn’t emphasized in proportion to how useful I found it.
For instance, to me it looks like those pages are framed almost completely in terms of choosing broad career paths rather than choosing between individual opportunities. E.g., the heading on the framework page reads:
Our career evaluation framework helps you compare between different specific career options, like whether to go into consulting or grad school straight out of university; or whether to continue at your current for-profit job or leave to work for a non-profit.
To me this seems to emphasize the field-level use case for the framework but not the opportunity-level use case.
(Historical note: I didn’t actually start HEA, merely took it over after a complete leadership turnover. I do think that without my and John Sturm’s intervention it would have gone defunct though.)
I would weaken “effective” to “had impacts that we might want to replicate”, but yes. I actually think it operates quite similarly to an angel investor for projects (applicants are generally expected to come with an idea for what they’d work on), so we may be thinking of mostly the same thing!
In practice, Effective Altruism advocates for measurement and comparison.
I think I’m mostly convinced that the contentious claim is not about effectiveness and altruism, but I don’t think this is the only contentious claim that we do make! At least for some people, I think another contentious claim is that there is any normative force to e.g. the observation that the average American adult could likely save several lives a year if they spent less money on completely inessential things.
If Oxford faced a potential liability in the billions, I’m sure it would insure.
The managers of Harvard’s endowment circa 2008 would beg to differ, I think. (It lost about $10 billion, nearly a third of its value.)
It seems like for some of these institutions, how long of a view they take is substantially determined by contingent factors like who’s the university president at the time.
Scalability refers to how strongly you are limited by your available resources, as opposed to, e.g., demand for your service.
Websites are very scalable because it takes very few resources to serve an extra person. Building houses is not very scalable (currently).
I think that if the Standard EA Recommendation for middle- to low-income people is “come back when you make more money”, no middle- to low-income people (to a first approximation) will ever become interested in EA.
I think if I made 30k a year and asked someone what EA-related things I could do and they told me “you don’t make enough to worry about donating, try to optimize your income some more and then we’ll talk,” my reaction would be “Ack! I don’t want to upend my entire life! I just want to help some people! These guys are mean.” And then I would stop paying attention to effective altruism.
My general heuristic for stuff like this is that it’s more important for general recommendations to look reasonable than for them to be optimal (within reason). This is because by the time someone is wondering whether your policy is actually optimal, they care enough to be thinking like an effective altruist already, and are less likely to be scared off by a wrong answer than someone who’s evaluating the surface-reasonableness.
In 2010 they announced that they would “more than double” their vaccine spending to $10 billion total by 2020 (e.g. http://www.nytimes.com/2010/01/30/health/30gates.html). That puts the cost per life saved in the mid-to-high hundreds of dollars, which is about three times better than AMF. I wouldn’t call that “the same range” as the AMF estimates, especially since it’s no longer so clear that those estimates even apply to marginal dollars given to AMF.