Good post. I would add a notion of idea pervasiveness in the public consciousness. What I mean is how often people think along EA-consistent lines, or make arguments around dinner tables that explicitly or implicitly draw upon EA principles. This will influence how EA-consistent government policy is. Ideas like democracy, impartial justice, and freedom of religion, have strong pervasiveness. You could measure it by surveying people about whether they have heard of EA, and if so, whether they would refer to it in casual conversations, or whether they think it would influence their actions. You could benchmark the responses by asking the same questions about democracy or some other ubiquitous idea.
Michael_Wulfsohn
Making impact researchful
Thanks for the post. I like Economical Writing by Deirdre McCloskey—entertaining as hell!
This is a nice idea. There’ll be a tradeoff: the less EA-aligned a source of funds is, the harder it is likely to be to convince them to change. For example, the probability of getting ISIS to donate to GiveWell is practically zero, so it’s likely better to target philanthropists who mean well but haven’t heard of EA. So the measure to pay attention to is [(marginal impact of EA charity) - (marginal impact of alternative use of funds)] * [probability of success for given fundraising effort]. This measure, or some more sophisticated version, should be equalised across potential funding sources to maximise impact.
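As a rough sketch of how that measure could be compared across sources (the source names, impact figures, and probabilities below are all hypothetical, in arbitrary "impact per dollar" units):

```python
# Compare fundraising targets using the measure
# (marginal impact of EA charity - marginal impact of alternative use) * P(success).
# All figures are invented for illustration.

sources = {
    # name: (impact_ea, impact_alternative, p_success)
    "well-meaning philanthropist": (10.0, 4.0, 0.10),
    "uninformed donor":            (10.0, 1.0, 0.02),
    "hostile actor":               (10.0, -5.0, 0.0001),
}

def score(impact_ea, impact_alt, p_success):
    """Expected gain per unit of fundraising effort."""
    return (impact_ea - impact_alt) * p_success

for name, (ea, alt, p) in sources.items():
    print(f"{name}: {score(ea, alt, p):.4f}")
```

With these made-up numbers the well-meaning philanthropist dominates, despite the hostile actor offering the largest gap in marginal impact.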
I have another possible reason why focusing on one project might be better than dividing one’s time between many projects. There may be returns to density of time spent. That is, an hour you spend on a project is more productive if you’ve just spent many hours on that project. For example, when I come back to a task after a few days, the details of it aren’t as fresh in my mind. I have to spend time getting back up to speed, and I miss insights that I wouldn’t have missed.
I haven’t seen much evidence about this, just my own experience. There might also be countervailing effects, like the time required for concepts to “sink in”, or synergies: insights for one project gleaned from involvement in another. It probably varies by task. My impression is that research projects feature very high returns to density of time spent.
This is a really excellent piece of work on bringing these concepts to a broader audience. I’m quite interested in long-term investment modelling so I’d like to offer my thoughts. Of course, the below isn’t advice, so please don’t make investment decisions purely on my comments below.
It’s great that you are thinking about how to adjust standard investing concepts based on the notion that it is the total altruistic portfolio that matters, which is formed in a decentralised way. I agree this adds to the rationale for being “overweight” the company that the investor founded, or investing in individual properties. This is not how a typical investor thinks, so there is likely scope to think further along these lines: either to improve coordination between EA investors, or to better implement a decentralised solution by departing from standard investment concepts.
I think your idea extends to alternative investments. Common wisdom in institutional investment is that it requires greater governance capabilities to invest in the more diversifying assets, such as infrastructure, some hedge funds, unlisted (commercial or residential) property, and private equity. That is, they require greater expertise, more time spent on investment processes, necessitate more careful cashflow management due to illiquidity, and potentially other challenges. And that greater governance capabilities are rewarded—see https://link.springer.com/article/10.1057/jam.2008.1. If an EA investor cares only about the overall altruistic portfolio and is capable of making/managing such investments, then it might make sense to overweight them. Some of them might be accessible through pooled funds.
In the article you rely on the standard deviation of annual returns as a measure of risk. But long term risk isn’t well captured by that. Taking a step back, risk should ultimately be defined based on altruists’ utility function over spending at different points in time. For example, there might be “hinge” moments when altruistic spending is especially effective. Imagine there is going to be a massive opportunity in 100 years to influence the creation of AGI by altruistic spending. In that case, we don’t really care if the annual standard deviation of returns is high. We care only about the probability distribution of the 100 year return.
There is a limit to the ability of leverage to magnify returns. This is partly because of the asymmetry of returns. For example, if you start with $100, then experience a −50% return followed by a +50% return, you end up with $75. Assuming you readjust your borrowing amount regularly alongside changes in the asset value, this effect is magnified by leverage and detracts from the overall return. See https://holygrailtradingstrategies.com/images/Leveraged-ETFs.pdf for more.
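A quick illustration of this “volatility drag”, using a hypothetical position that rebalances its leverage each period (return sequences invented):

```python
# A -10% move followed by +10% loses 1% unleveraged, but a
# period-rebalanced 2x position loses 4% -- more than twice as much.
# Leverage magnifies the asymmetry of compounded returns.

def compound(returns, leverage=1.0):
    """Final value of $100 after applying each period's return,
    rebalancing the leveraged exposure every period."""
    value = 100.0
    for r in returns:
        value *= 1 + leverage * r
    return value

print(compound([-0.10, 0.10]))        # 99.0  (down 1%)
print(compound([-0.10, 0.10], 2.0))   # 96.0  (down 4%, not 2%)

# The -50% / +50% example from the text:
print(compound([-0.50, 0.50]))        # 75.0
```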
Leverage has a strong role in the Capital Asset Pricing Model theory you’re using. The theory does however assume away various challenges to do with leverage, like the one above. In general, it is uncommon for institutional investors (pension funds, university endowments, charitable foundations, etc) to directly borrow to invest. However, they may outsource it to a money manager, e.g. a hedge fund, who can access a decent borrowing rate on their behalf and who has the expertise to manage it. I’m not saying that leverage should never be used by EA investors. Rather, I would be quite careful before deciding to use it.
When actuaries model (commercial) real estate, it’s normally assumed that both its risk and expected return are somewhere in between those of shares and bonds. Arguably, real estate has characteristics of each, as it is an asset used for productive enterprise, and since leases typically provide regular fixed rental payments. Nevertheless, I would look to property indices’ historical data for guidance.
Certainty equivalence may not be the right concept for measuring the value of moving all EA investments to a global market portfolio. I would instead compare the Sharpe ratios. If you want to put an expected dollar figure on it, one way would be to calculate the increase in expected return you could achieve while holding risk constant. This avoids needing to make an assumption about investor risk preferences, which the certainty equivalent concept relies on.
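One way to sketch that calculation, under the assumption that the investor can scale the higher-Sharpe portfolio (via cash or leverage) to any risk level along its capital allocation line (all inputs below are hypothetical):

```python
# Compare two portfolios by Sharpe ratio, then express the benefit of the
# higher-Sharpe one as EXTRA expected return at the SAME level of risk.
# All inputs are hypothetical.

def sharpe(expected_return, risk_free, volatility):
    return (expected_return - risk_free) / volatility

rf = 0.02
current = {"ret": 0.06, "vol": 0.15}   # current EA portfolio (made up)
market  = {"ret": 0.07, "vol": 0.12}   # global market portfolio (made up)

s_current = sharpe(current["ret"], rf, current["vol"])
s_market = sharpe(market["ret"], rf, market["vol"])

# Extra expected return achievable at the current portfolio's risk level,
# by holding the market portfolio scaled to the same volatility:
gain = current["vol"] * (s_market - s_current)
print(f"Sharpe current: {s_current:.3f}, market: {s_market:.3f}")
print(f"Extra expected return at constant risk: {gain:.2%}")
```

No assumption about the investor’s utility function is needed; only the risk-free rate and the two portfolios’ moments enter the comparison.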
I haven’t read all your footnotes, so perhaps some of the above is mentioned there. Nevertheless, I hope my comments are helpful, and I am glad people in EA are actively thinking about this. Happy to chat more if you are interested.
My interpretation of the argument is not that it is equating atoms to $. Rather, it invokes whatever computations are necessary to produce (e.g. through simulations) an amount of value equal to today’s global economy. Can these computations be facilitated by a single atom? If not, then we can’t grow at the current rate for 8200 years.
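The arithmetic behind the 8,200-year figure, as I understand it (the atom count is a rough standard estimate, and the 2% growth rate is the usual stylised assumption):

```python
import math

# ~2% annual growth sustained for 8,200 years multiplies the economy by
# roughly 10^70, while the Milky Way contains only ~10^67 atoms. So on
# average, each atom would have to support more economic value than
# today's entire world economy.

growth_factor = 1.02 ** 8200
atoms_in_galaxy = 1e67          # rough standard estimate

print(f"Growth factor: ~10^{math.log10(growth_factor):.0f}")
print(f"Required value per atom, relative to today's world economy: "
      f"~10^{math.log10(growth_factor / atoms_in_galaxy):.0f}")
```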
Sounds like a really interesting and worthwhile topic to discuss. But it’s quite hard to be sure I’m on the same page as you without a few examples. Even hypothetical ones would do. “For reasons that should not need to be said”—unfortunately I don’t understand the reasons; am I missing something?
Anyway, speaking in generalities, I believe it’s extremely tempting to assume an adversarial dynamic exists. Nine times out of ten, it’s probably a misunderstanding. For example, if a condition is given that isn’t palatable, it’s worth finding out the underlying reasons for the condition being given, and trying to satisfy them in other ways. Since humans have a tendency towards “us vs them” tribal thinking, there’s considerable value in making the effort to find common ground, establish mutual understanding, and reframe the interaction as collegial rather than adversarial.
This isn’t meant as an argument against what you’ve said.
EAs like to focus on the long term and embrace probabilistic achievements. What about pursuing policy reforms that are currently inconsequential, but might have profound effects in some future state of the world? That sort of reform will probably face little resistance from established political players.
I can give an example of something I briefly tried when I was working in Lesotho, a small, poor African country. One of the problems in poor countries is called the “resource curse”. This is the counter-intuitive observation that the discovery of valuable natural resources (think oil) often leads to worse economic outcomes. There are a variety of reasons, but one is that abundant natural resources often cause countries with already-weak institutions to become even more corrupt, as powerful people scramble to get control of the resource wealth, methodically destroying checks and balances as they go.
In Lesotho, non-renewable natural resources—diamonds—currently account for only a small portion of GDP (around 10%). I introduced the idea of earmarking such natural resource revenues received by the government as “special”, to be used only for infrastructure, education, and similar projects, instead of effectively just being consumed (for more info on this idea see this article or google “adjusted net savings”). Although this change would not have huge consequences right now, I thought that it might if there were a massive natural resource discovery in Lesotho in the future. Specifically, Lesotho might be able to avoid some of the additional corruption by already having a structure set up to protect the resource revenues from being squandered.
The idea I’m putting forward for a potential EA policy initiative is to pursue a variety of policy changes that seem painless, even inconsequential, to policymakers now, but have a small chance of a big impact in some hypothetical future. The idea is to get the right reforms passed before they become politically contentious. While it can be hard to get policymakers to pay attention to issues seen as small, there are plenty of examples of political capture that could have been mitigated by early action. And this kind of initiative is probably relatively neglected given humanity’s generally short-term focus. I think EAs are uniquely well placed to prioritize it.
I agree that EAs should pay more attention to systemic risk. Aside from exerting indirect influence on many concrete problems, it is also one of the few methods available to combat the threat of unknown risks (or equivalently increase our ability to capitalize on unknown opportunities). Achieving positive systemic change may also be more sustainable than relying on philanthropy.
In particular, I like the global governance example as a cause. This can be seen as improving the collective intelligence of humanity, and increasing the level of societal welfare we are able to achieve. Certain global public goods are simply not addressed, even despite much fanfare in the case of carbon emission abatement. Better global governance would thus create new possibilities for our species.
A full-fledged world government might be the endgame, but in the meantime small advances might be made to existing institutions like the UN and the EU, as you suggest. Unfortunately this can be very difficult; removing veto power in the UN Security Council is a case in point. Fundamentally, any advance on this front requires countries to sacrifice part of their own sovereignty, which seldom feels comfortable. But fortunately the general trend since WWII has been towards more global coordination, the recent visible setback of Brexit notwithstanding. My personal belief is that any acceleration of this trend could have huge positive consequences.
Thanks for posting this despite the current social incentives.
My initial reaction to the situation was similar to yours—wanting to trust SBF and believe that it was an honest mistake.
But there are two reasons I disagree with that position.
First, we may never know for sure whether it was an honest mistake or intentional fraud. EA should mostly not support people who cannot prove that they have not committed fraud. Many who commit fraud can claim they were making honest mistakes.
Second, when you are a custodian of that much wealth and bear that much responsibility, it’s not ok to have insufficient safeguards against mistakes. It’s immoral to fail in your duty of care when the stakes are this high.
I should clarify—I don’t mean a small amount of work, but a small conceptual adjustment. The example I give in the post is to adjust from fully addressing a specific application to partially addressing a more general question. And to do so in a way that is hopefully intellectually stimulating to other researchers.
In my own work, using a consumer intertemporal optimisation model, I’ve tried to calculate the optimal amount for humanity to spend now on mitigating existential risk. That is the sort of problem-solving question I’m talking about. A couple of possible ways forward for me: include multiple countries and explore the interactions between x-risk mitigation and global public good provision; or use the setting of existential risk to learn more about a particular type of utility function which someone pointed me to for that purpose.
Thanks for your detailed reply. Absolutely, there is some academic reward available from solving problems. Naively, the goal is to impress other academics (and thus get published, cited), and academics are more impressed when the work solves a problem.
You seem to encourage problem-solving work, and point out that governments are starting to push academia in that direction. This is great, and to me, it raises the interesting question of optimal policy in rewarding research. That is supremely difficult, at least outside of the commercialisable. My understanding is that optimal policy would pay each researcher something like the marginal societal benefit of their work, summed globally and intertemporally forever. How on earth do you estimate that for the seminal New Keynesian model paper? Governments won’t come close, and (I imagine) will tend to focus on projects whose benefits can be more easily measured or otherwise justified. So we are back to the problem of misaligned researcher incentives. But surely a government push towards impact is a step in the right direction.
Until our civilisation solves that optimal policy problem, I think academia will continue to incentivise the pursuit of knowledge at least partly for knowledge’s sake. I wrote the post because understanding the implications of that has been useful to me.
Sorry, this is going to be a “you’re doing it wrong” comment. I will try to criticize constructively!
There are too many arbitrary assumptions. Your chosen numbers, your categorization scheme, your assumption about whether giving now or giving later is better in each scenario, your assumption that there can’t be some split between giving now and later, your failure to incorporate any interest rate into the calculations, your assumption that the now/later decision can’t influence the scenarios’ probabilities. Any of these could have decisive influence over your conclusion.
But there’s also a problem with your calculation. Your conclusion is based on the fact that you expect higher utility to result from scenarios in which you believe giving now will be better. That’s not actually an argument for deciding to give now, as it doesn’t assess whether the world will be happier as a result of the giving decision. You would need to estimate the relative impact of giving now vs. giving later under each of those scenarios, and then weight the relative impacts by the probabilities of the scenarios.
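A minimal sketch of the corrected calculation (the scenario names, probabilities, and impact numbers are all invented):

```python
# Weight the RELATIVE impact of giving now vs. giving later by each
# scenario's probability, rather than summing the utility of scenarios
# in which giving now happens to look better. All numbers invented.

scenarios = [
    # (name, probability, impact_if_give_now, impact_if_give_later)
    ("stable growth",    0.5, 100, 120),
    ("near-term crisis", 0.3, 150,  80),
    ("value lock-in",    0.2, 200,  50),
]

ev_now = sum(p * now for _, p, now, later in scenarios)
ev_later = sum(p * later for _, p, now, later in scenarios)

print(f"Expected impact -- give now: {ev_now}, give later: {ev_later}")
print("Give now" if ev_now > ev_later else "Give later")
```

The decision then turns on the expected impact of each *action* across all scenarios, not on which scenarios feel most likely or most valuable in themselves.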
Don’t stop trying to quantify things. But remember the pitfalls. In particular, simplicity is paramount. You want to have as few “weak links” in your model as possible; i.e. moving parts that are not supported by evidence and that have significant influence on your conclusion. If it’s just one or two numbers or assumptions that are arbitrary, then the model can help you understand the implications of your uncertainty about them, and you might also be able to draw some kind of conclusion after appropriate sensitivity testing. However, if it’s 10 or 20, then you’re probably going to be led astray by spurious results.
Ok, so you’re talking about a scenario where humans cease to exist, and other intelligent entities don’t exist or don’t find Earth, but where there is still value in certain things being done in our absence. I think the answer depends on what you think is valuable in that scenario, which you don’t define. Are the “best things” safeguarding other species, or keeping the earth at a certain temperature?
But this is all quite pessimistic. Achieving this sort of aim seems like a second best outcome, compared to humanity’s survival.
For example, if earth becomes uninhabitable, colonisation of other planets is extremely good. Perhaps you could do more good by helping humans to move beyond earth, or to become highly resilient to environmental conditions? Surely the best way to ensure that human goals are met is to ensure that at least a few humans survive.
Anyway, turning to your actual question: how you should pursue it really depends on your situation, skill set, finances, etc, as well as your values. The philosophical task of determining what should be done if we don’t survive is one potential avenue. (By the way, who should get to decide on that?) Robotics and AI seem like another, based on the examples you gave. Whatever you decide, I’d suggest keeping the flexibility to change course later, e.g. by learning transferable skills, in case you change your mind about what you think is important.
Thanks, it does a bit.
What I was saying is that if I were Andrew, I’d make it crystal clear that I’m happy to make the cup of tea, but don’t want to be shouted at; there are better ways to handle disagreements, and demands should be framed as requests. Chances are that Bob doesn’t enjoy shouting, so working out a way of making requests and settling disagreements without the shouting would benefit both.
More generally, I’d try to develop the relationship to be less “transactional”, where you act as partners willing to advance each other’s interests and where there is more trust, rather than only doing things in expectation of reward.
Ah, you’re right about the hedonistic framework. On re-reading your intro I think I meant the idea of using pleasure as a synonym for happiness and taking pain and suffering as synonyms for unhappiness. This, combined with the idea of counting minutes of pleasure vs. pain, seems to focus on just the experiencing self.
Thanks for the post. I doubt the length is a problem. As long as you’re willing to produce quality analysis, my guess is that most of the people on this forum would be happy to read it.
My thoughts are that destruction of ecosystems is not justifiable, especially because many of its effects are probably irreversible (e.g. extinction of some species), and because there is huge uncertainty about its impact. The uncertainty arises because of the points you make, and because of the shakiness of even some of the assumptions you use, such as the hedonistic framework. (For example, in humans the distinction between the “experiencing” and “remembering” selves diminishes the value of this framework, and we don’t know the extent to which it applies to animals.) Additional uncertainty also exists because we do not know what technological capabilities we might have in the future to reduce wild animal suffering. So almost regardless of the specifics, I believe it would certainly be better to wait at least until we know more about animal suffering and humanity’s future capabilities before seriously considering taking the irreversible and drastic measure of destroying habitats. This might be just a different point of emphasis rather than something you didn’t cover.
Sure. When I say “arbitrary”, I mean not based on evidence, or on any kind of robust reasoning. I think that’s the same as your conception of it.
The “conclusion” of your model is a recommendation between giving now vs. giving later, though I acknowledge that you don’t go as far as to actually make a recommendation.
To explain the problem with arbitrary inputs, when working with a model, I often try to think about how I would defend any conclusions from the model against someone who wants to argue against me. If my model contains a number that I have simply chosen because it “felt” right to me, then that person could quite reasonably suggest a different number be used. If they are able to choose some other reasonable number that produces different conclusions, then they have shown that my conclusions are not reliable. The key test for arbitrary assumptions is: will the conclusions change if I assume other values?
Otherwise, arbitrary assumptions might be helpful if you want to conduct a hypothetical “if this, then that” analysis, to help understand a particular dynamic at play, like Bayesian probability. But this is really hard if you’ve made lots of arbitrary assumptions (say 10-20); it’s difficult to get any helpful insights from “if this and this and this and this and..., then that”.
So yes, we are in a bind when we want to make predictions about the future where there is no data. Who was it that said “prediction is difficult, especially about the future”? ;-) But models that aren’t sufficiently grounded in reality have limited benefit, and might even be counterproductive. The challenge with modelling is always to find ways to draw robust and useful conclusions given what we have.
This is an interesting idea. A few thoughts from a student of international financial macroeconomics.
Seignorage is essentially the profits that come from devaluing money holdings. That means your basic mechanism is to transfer value from holders of GLO to people who claim your UBI. This could work with the early enthusiasts, or with there being transactional value in holding GLO (e.g. if sellers accept GLO then buyers will keep some of it on hand). Since enthusiasts will be attracted if there is a strong prospect for transactional value, I’ll give a few comments on the prospect of GLO becoming a global currency. My comments are mostly issues, problems, and questions that you may have to answer to convince people that the GLO ambition has potential. But that shouldn’t detract from the value of the project.
Any currency needs to have its value continually supported in some way. Your summary contains a misconception: that, to maintain the $1 value, USD reserves won’t be required after some point. In fact, entire countries can fail to defend their national currencies’ pegs despite having billions in USD reserves. It’s similar to a bank run, and it can happen to stablecoins not backed 1-for-1.
Generating demand for GLO may be difficult. Since seignorage is a devaluation of money holdings, it would create a disincentive to hold GLO. For example, cryptocurrencies often constrain supply or burn tokens in a bid to get people to buy and hold. You’re proposing to do the opposite. That is why getting people to use GLO for transactions, or some other utility such as altruistic appeal, is vital. So generating demand is not impossible, but challenging.
Your ambition for GLO may not be consistent with a $1 peg, since your ambition is effectively for the dollar to become irrelevant. Of course, a $ peg would take you a long way at first. Nevertheless, a natural solution in the case of runaway GLO success may be to peg to a CPI-like weighted basket of prices. Perhaps CPI minus x%, to generate some value to transfer to UBI claimants.
The amount of seignorage revenue in a given period will depend on the growth of demand for GLO in that period. Demand may fluctuate, and with it the UBI income amount. The income will be zero in periods in which reserves are used to prop up the value. That is not a deal breaker, but you will have to dip into reserves to produce a steady UBI income stream, or accept income fluctuation.
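A toy illustration of how the UBI income stream would track demand growth (the demand path is invented):

```python
# Seignorage revenue per period = growth in GLO demand (new tokens issued
# to meet demand at the peg). When demand shrinks, reserves are spent to
# defend the peg and no UBI can be paid from issuance that period.

demand = [100, 120, 150, 140, 140, 160]  # total GLO demanded each period (invented)

for t in range(1, len(demand)):
    change = demand[t] - demand[t - 1]
    ubi_payout = max(change, 0)          # seignorage available for UBI
    reserves_used = max(-change, 0)      # reserves spent defending the peg
    print(f"period {t}: UBI {ubi_payout}, reserves used {reserves_used}")
```

Even in this simple setup, the payout stream is lumpy: nothing is distributable in periods 3 and 4, which is the smoothing problem described above.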
If GLO becomes a ubiquitous global currency, it will limit countries’ ability to use domestic monetary policy to stabilise the business cycle and unemployment. That would open the question of global monetary policy in a GLO world: whether the policy variables (e.g. GLO supply) should be used for macro stability as well as UBI, and who should make those decisions.
I hope my comments are constructive enough to be helpful. Best of luck!