Working on various aspects of Econ + AI.
Parker_Whitfill
Here is a counterargument: focusing on the places where there is altruistic alpha is ‘defecting’ against other value systems. See discussion here
I roughly buy that there is more “alpha” in making the future better, because most people are not longtermist but most people do want to avoid extinction.
Good point, but can’t this trade occur just through financial markets, without one-on-one trades among EAs? For example, if you have short timelines, you could take out a loan and donate it all to AI Safety.
Agreed with this. I’m very optimistic about AI solving a lot of incentive problems in science. I don’t know if the end state you mention (full audits) will happen, but I am very confident we will move in a better direction than where we are now.
I’m working on some software now that will help a bit in this direction!
[Question] Rank best universities for AI Safety
Since it seems a major goal of the Future Fund is to experiment and gain information about types of philanthropy, how much data collection and causal inference are you doing, or planning to do, on the grant evaluations?
Here are some ideas I quickly came up with that might be interesting.
If you decide whether to fund marginal projects by votes or some scoring system, you could later assess the impact of funding projects using a regression discontinuity design.
You mentioned that there is some randomness in whom you used as re-granters. This is similar to the random assignment of judges frequently exploited in applied economics. You could use it to infer whether certain features of grantmakers cause better grants (e.g. some grantmakers might tend to favor larger amounts of funding, so you could assess whether this more gung-ho attitude leads to better grants).
Explicitly introduce some randomness into whether you approve a grant or not.
In all these cases, you’d need to assess grant applications on impact ex post, a few years later, including the ones you didn’t fund. These strategies would then let you estimate the causal impact of your grants.
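As a quick sketch of the first idea, here is what the regression-discontinuity approach could look like. Everything below (the scores, the cutoff, the data-generating process) is made up for illustration, not actual Future Fund data:

```python
import numpy as np

# Hypothetical setup: each application gets a committee score, and
# grants with scores above a cutoff are funded (a sharp rule).
rng = np.random.default_rng(0)
n, cutoff, true_effect = 20_000, 0.0, 0.5

score = rng.uniform(-1, 1, n)   # application score, centered at the cutoff
funded = score >= cutoff        # sharp rule: fund iff score clears cutoff
# Ex-post impact depends smoothly on the score, plus a jump at the
# cutoff equal to the causal effect of being funded.
impact = 1.0 + 0.8 * score + true_effect * funded + rng.normal(0, 0.3, n)

# Sharp RDD: fit separate linear trends within a bandwidth on each side
# of the cutoff, then compare the two fitted values at the cutoff.
h = 0.3
left = (score < cutoff) & (score > cutoff - h)
right = (score >= cutoff) & (score < cutoff + h)
fit_left = np.polyfit(score[left], impact[left], 1)
fit_right = np.polyfit(score[right], impact[right], 1)
rdd_estimate = np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)
print(rdd_estimate)  # should land close to the true effect of 0.5
```

The identifying assumption is the usual RDD one: applications just below and just above the cutoff are comparable, so the jump in ex-post impact at the cutoff is attributable to funding.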
I’d say it’s close and depends on the courses you are missing from an econ minor instead of a major. If those classes are ‘economics of x’ classes (such as media or public finance), then your time is better spent on research. If those classes are still in the core (intermediate micro, macro, econometrics, maybe game theory) I’d probably take those before research.
Of course, you are right that admissions care a lot about research experience—but it seems the very best candidates have all those classes AND a lot of research experience.
I would say an ideal candidate is a math-econ double major, also taking a few classes in stats and computer science. All put together, that’s quite a few classes, but not an unmanageable amount.
One case where this doesn’t seem to apply is an economics Ph.D. For that, it seems taking very difficult classes and doing very well in them is largely a prerequisite for admissions. I am very grateful I took the most difficult classes and spent a large fraction of my time on schoolwork.
The caveat here is that research experience is very helpful too (working as an RA).
Is there a strong reason to close applications in January?
I’m only familiar with the deadlines for economics graduate school, but there you get decisions back in February–March along with the funding package. It would therefore be useful to apply for this depending on the funding package you receive (e.g. if you are fully funded you don’t need to apply, but if you are given little or no funding, applying would be important).
I highly recommend Cold Turkey Blocker, link here. It offers many of the features you listed above: scheduled blocking, blocking the whole internet, blocking specific URLs or search phrases (moreover, these can use regex, so you can make the search terms very general), and password-protected blocks. There are no current loopholes (if there are any, please don’t post them, I don’t want to know!), and the loopholes that used to exist (proxies) have been fixed.
Pricing seems better than Freedom’s, as it’s $40 for lifetime usage. My only complaint is that there is no phone version.
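To give a sense of how general a single regex search-phrase rule can be, here is a hypothetical sketch in plain Python `re` (my own example pattern and site names, not necessarily Cold Turkey’s exact syntax):

```python
import re

# One case-insensitive pattern covering several distracting sites at once,
# so a single rule catches many phrasings of the same search.
pattern = re.compile(r"(?i)\b(youtube|reddit|twitter)\b")

print(bool(pattern.search("best Reddit threads")))   # True
print(bool(pattern.search("econometrics lecture")))  # False
```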
I’d still agree that we should factor in cooperation, but my intuition is then that it’s going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation. I’d be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?
I think one point worth emphasizing is that if the cooperative portfolio is a Pareto improvement, then theoretically no altruist, including longtermist EAs, can be made worse off by switching to it.
Therefore, even if future generations are heavily neglected, the cooperative portfolio is better according to longtermist EAs (and thus for future generations) than the competitive equilibrium. It may still be too costly to move towards the cooperative portfolio, and it is non-obvious to me how the neglect of future generations changes either the cost of trying to move society towards the cooperative portfolio or the gain from defecting. But if the cost of moving society to the cooperative portfolio is very low, then we should probably cooperate even if future generations are very neglected.
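Here is a toy numerical sketch of that point (my own stylized numbers, using the log returns mentioned above): two altruists each value one private cause plus a shared cause; in the non-cooperative equilibrium both free-ride on the shared cause, and the cooperative portfolio leaves both strictly better off.

```python
import math

# Altruist A values causes X and Z; altruist B values Y and Z.
# Each has a budget of 1, and returns to total funding are logarithmic.
def u_A(x, z): return math.log(x) + math.log(z)
def u_B(y, z): return math.log(y) + math.log(z)

# Competitive (Nash) equilibrium: each free-rides on the shared cause Z.
# Solving the first-order conditions, each puts 2/3 into their private
# cause and 1/3 into Z, so cause totals are x = y = z = 2/3.
x_c = y_c = z_c = 2 / 3

# Cooperative portfolio: maximize u_A + u_B subject to the same total
# budget of 2, which gives x = y = 1/2 and z = 1.
x_p, y_p, z_p = 0.5, 0.5, 1.0

print(round(u_A(x_c, z_c), 2), round(u_A(x_p, z_p), 2))  # -0.81 -0.69
print(round(u_B(y_c, z_c), 2), round(u_B(y_p, z_p), 2))  # -0.81 -0.69
```

Both agents gain even though neither changed their values; the entire improvement comes from internalizing the free-riding on the shared cause.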
[Question] Chart showing effectiveness of schooling interventions
What piece of advice would you give to your 20-year-old self?
Strong upvote, as I find EA book recommendations very useful, and I’d like to encourage more people to post recommendations.
As an aside, it could be worth noting which books are available as audiobooks.
My vague impression is that this is referred to as pluralism in the philosophy literature, and there are a few philosophers at GPI who subscribe to this view.
Non-Consequentialist Considerations For Cause-Prioritization Part 2
Non-Consequentialist Considerations For Cause-Prioritization Part 1
Thanks for the summary and the entire sequence of posts. I thoroughly enjoyed them. In my survey of the broader literature, (c) is mostly true, and I’d certainly like to see more philosophical engagement with those issues.
Is the alignment motivation distinct from just using AI to solve general bargaining problems?