Seems essentially fine. There’s a reason society converged on loss-limited companies as the right structure, even though they pair unlimited upside with limited downside: individuals tend to be far too risk averse. Exposing them to a risk to the rest of their portfolio should be more than sufficient to make this not a concern.
Might be a fair point, but remember, this is in the case where some project was predictably net negative and then actually was badly net negative. My guess is at least some funders would be willing to step in and disincentivise that kind of activity, and the threat of it would keep people off the worst projects.
If the distribution mismatch problem is not mitigated (and it seems hard to mitigate), investors are incentivized to fund high-stakes projects while regarding potential harmful outcomes as if they were neutral.
I thought a bunch more about this, and I do think there is something here worth paying attention to.
I am not certain the pool of projects in the category you’re worried about is notable enough to offset the benefits of impact markets, but we would be incentivising those that exist, and that has a cost.
If we’re limited to accredited investors, as Scott proposed, we have some pretty strong mitigation options. In particular, we can let oraculars pay to mark projects as having been strongly net negative, and have this detract from the ability of those who funded that project to earn on their entire portfolio. Since accounts will be hard to generate and only available to accredited investors, generating a new account for each item is not an available option.
I think I can make some modifications to the Awesome Auto Auction to include this fairly simply. AAA does not allow selling as an action, which removes the other risk of people dumping their money, and it provides a natural structure for limiting withdrawals (just cut off their automatic payments until the “debt” is repaid).
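To make the shape of that concrete, here is a minimal sketch (the names and numbers are my own illustration, not a spec) of how a net-negative marking could translate into withheld trickle payments across a funder’s whole portfolio:

```python
# Hypothetical sketch: an oracular marks a project as strongly net negative, a "debt"
# is recorded against everyone who funded it, and their automatic trickle payments
# across the whole portfolio are withheld until that debt is repaid.

from dataclasses import dataclass

@dataclass
class Funder:
    name: str
    debt: float = 0.0  # outstanding penalty from net-negative markings

def mark_net_negative(funders_of_project: list[Funder], penalty_per_funder: float) -> None:
    """Oracular marks a project as strongly net negative; each funder takes on a debt."""
    for funder in funders_of_project:
        funder.debt += penalty_per_funder

def pay_trickle(funder: Funder, payment: float) -> float:
    """Automatic payments first pay down any outstanding debt; only the rest reaches the funder."""
    repaid = min(funder.debt, payment)
    funder.debt -= repaid
    return payment - repaid  # amount the funder actually receives

# Example: a funder owed $50 of trickle income while carrying a $120 penalty receives
# nothing, and the outstanding penalty drops to $70.
alice = Funder("alice")
mark_net_negative([alice], penalty_per_funder=120.0)
assert pay_trickle(alice, 50.0) == 0.0 and alice.debt == 70.0
```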
Would this be sufficient mitigation? And if not, what might you still fear about this?
If you run a fully controlled (Web2) impact market for 6-12 months, and the market funds great projects/posts and there’s no sign of trouble, will you then launch a decentralized impact market that no one can control (in which people can sell the impact of recruiting additional retro funders, and the impact of establishing that very market)?
I don’t see much benefit to a Web3 one assuming we can do microtransactions on a Web2, so I’d be fine with either not doing the Web3 or only doing it after several years of having a Web2 without any of those restrictions and nothing going badly wrong (retaining the option to restrict new markets for it at any time).
I should have phrased differently: It’s not that hard to pick out highly risky projects retroactively, relative to identifying them prospectively. I also think that the reference class which is most worrying is genuinely not that hard to identify as strongly negative EV.
Impact markets don’t solve the problem of funders being able to fund harmful projects. But they don’t make it differentially worse (they empower funders generally, but I don’t expect you would argue that grantmakers are net negative, so this still comes out net-positive).
I would welcome attempts to cause the culture of big grantmakers to more reliably make sure the recipients stay focused on the major challenges, but that is a separate project.
The classes of problem you list are all important questions of what should be funded, and it would be great to have better models of the EV of funding them, but none of them are impact-market specific. It’s already true that the funder who is most enthusiastic about a project can fund it unilaterally, and that this will sometimes be EV-negative. People can already recruit new funders who are risk-tolerant.
We’re making grantmakers generally more powerful, and that’s not entirely free of negative effects[1], but it does seem very likely net-positive.[2]
I do think it makes sense not to rush into creating a decentralized, unregulatable system, on general principles of caution; we certainly should watch the operation of a more controllable one for some time before moving towards that.
The community as a whole cannot come to consensus on each of the huge number of important decisions to be made; the bandwidth of common knowledge is far, far too low, and we are faced with too many choices for that to be a viable option. Having several of the most relevant people strongly on board is about as good a sign as you could expect currently. I’m open to more opinions coming in, and would be very interested in seeing you debate with people on the other side and try to double crux on this or get more people on board with your position, but turning this into a committee is going to stall the project.
I think these concerns are largely incorrect, and avoiding implementing this technology due to them would be a tragedy. Here’s my reasoning:
It’s not that hard to see when a project was at risk of having large downsides; oraculars can avoid retrofunding these just as they would avoid funding them normally.[1]
If you assume oraculars who are fine with incentivising doing large amounts of expected harm in order to get credit for something if it goes well (which I think is a flawed assumption), well, they can do that with normal funding. Retrofunding does not add to it unless the early funders irrationally expect oraculars to buy up bad-EV bets which paid off.
As for “weak consensus”, we have Scott Alexander, Paul Christiano, and Eliezer Yudkowsky coming down on the side of “Yes, retrofunding is great”. I’m not sure how that could be seen as anything other than strong consensus of key thought leaders, not to mention the many other people who’ve thought carefully about this and decided it is one of the most important interventions to improve the future.
- ^
In reply to “hindsight bias”: retrofunders have strictly more information than normal funders, and can be explicitly made aware of this failure mode. I expect major oraculars to be capable of correcting for hindsight bias well enough to pass on any really risky projects. This is a mid-sized crux, necessary but not sufficient to reverse the sign of Impact Markets’ value due to this failure mode.
Quick opinions on the choice branches:
1: What Is The Basic Format Of The Market?
Custom: Awesome Auto Auction, but closest to D: Fractionalized Impact Shares With Assurance Contract.
B: Credit For Funding The Project—Funding a project retroactively should not be considered morally different from funding it in any other way; creating a novel kind of moral accounting seems to overcomplicate things and would be hard to get actual consensus / common knowledge on.
3: How Should The Market Handle Projects With Variable Funding Needs?
Custom: Awesome Auto Auction solves this elegantly, by letting the project absorb some fraction of the value of the cert if it continues to grow.
4: Should Founders Be Getting Rich Off Impact Certificates?
A: Yes, as this provides good incentives, but again with Awesome Auto Auction presenting a somewhat cleaner way than the default: the founders would not liquidate their shares and drop the price / eat a chunk of donor money, but would instead receive trickle income if their project kept climbing in value.
5: How Do We Kickstart The Existence Of An Impact Market?
B: Committed Pot Of Money (aka The Original ACX Grants Plan) plus common knowledge that other funders can easily join later and will do so if they like what’s happening
6: Should The Market Use Cryptocurrency?
The best-of-both-worlds proposal you mention seems clearly the best.
7: How Should The Market Price IPOs?
Custom: Awesome Auto Auction. Combines most of the advantages of both, I think?
8: How Should The Oracular Funder Buy Successful Projects?
A: By Buying A Certain Number Of Shares; with Awesome Auto Auction I think this removes the main concern of monopsony. Either way, B does not fit the structure.
9: What Should The Final Oracular Funder’s Decision Process Be?
A: Fund Based On Regular Charitable Decision-Making Procedure seems like a reasonable default, but letting the oracular decide seems correct.
10: Who Are We Expecting To Have As Investors?
Custom: People who are basically just okay with donating the money, like me, but don’t have infinite runway so would be happy to be topped up by major funders who like the bets we pick. I really want impact markets to exist for my own use case.
11: Conclusion: What Kind Of Impact Market Should We Have?
I really hate all these options, and I’m posting this partly to see if other people have better ideas.
You bet I have an idea! And I bet you can guess what it is! Awesome Auto Auction solves this neatly: the founder gets funding which scales with the value of the cert, but not in a form they can sell, so there is no risk of them losing the credit for their project.
Happy to go into far more detail on any of these if requested.
I did a bunch of mechanism design on the details of #1 alongside the Impact Markets people!
Key desiderata:
Low overhead for participants in the market—Incentivising these people to spend lots of brainpower trying to work out when to buy and sell is massively suboptimal. You want a system that is buy-and-forget and just works.
Minimize hard feelings around selling your cert at a loss or seeing equivalent ones go for a very low price—Related to your #II—C
Impact authors automatically get some fraction of the rewards of a very successful project[1]
Ability to buy into certificates for flexible amounts, with minimal hassle and handling that scales well
We converged on the model below if there is a payments platform like FTX to automate microtransactions (crypto or not), and a different one which has slightly less nice properties but requires fewer payouts per pay-in. I can write that one up too if wanted.
Awesome Auto-Auction (better name welcome)
The core innovation is the queue of impact slices, with a “head” of buyouts moving along the queue as new money comes in.
Ownership of certificates in an auto-auction is divided into a series of slices, from earliest (the initial entry creating the auction), through all previous holders of slices of ownership, to the current owners’ slices.[2]
There is no action required to sell and no decision about price; the only available move is to buy, and the only decision is how much you want to buy. When someone buys into a cert, the incoming money is split into three streams:
Creator—Directly to the impact creator
Capital repayment—Returning the capital to current holders of the cert and buying out their slice, with no opportunity for profit
Royalty—Giving profit to investors who have already been bought out, in proportion to the amount they raised price by (incentivising people to be early investors and raise the price)
Let’s look at an example of how this plays out with a $1000 purchase into a cert, with a 25% / 30% / 45% split.[3]
Some set % goes to the original creator (100% if they have not yet received their asking price, optionally held by an assurance contract until it is met), some goes to paying back the money current owners put in to buy their slices (with the oldest current owner getting priority), and some goes as trickle-income profit to already-bought-out investors who funded the cert earlier, in proportion to how much each raised the price (as a %) divided by the total amount that bought-out investors raised the price (the %s added together).
I made a spreadsheet which you can copy to play around with split proportions and buy events and see how the money ends up distributed among the three groups at different investment levels.
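For concreteness, here is a minimal sketch of that payout logic in Python. It is my own illustration rather than the spreadsheet’s exact formulas: I’m assuming the 25% / 30% / 45% split maps to creator / capital repayment / royalty in that order, that slices are bought out at cost in first-in-first-out order, and (as a simplification) that a slice’s capital stands in for the price rise it caused when computing royalties.

```python
# Sketch of the Awesome Auto-Auction payout queue (illustrative only; see caveats above).

def distribute_purchase(amount, split, current_slices, bought_out):
    """
    amount:         incoming purchase, e.g. 1000.0
    split:          (creator_frac, repayment_frac, royalty_frac), summing to 1
    current_slices: FIFO queue of [owner, capital_remaining] for current holders
    bought_out:     list of (owner, price_rise) for past holders already bought out
    Returns a dict of payouts per recipient for this purchase.
    """
    creator_frac, repay_frac, royalty_frac = split
    payouts = {"creator": amount * creator_frac}

    # Royalties go to investors who were already bought out *before* this purchase.
    prior_bought_out = list(bought_out)

    # Capital repayment: buy out the oldest current slices first, at cost (no profit).
    repay_pool = amount * repay_frac
    while repay_pool > 0 and current_slices:
        owner, capital = current_slices[0]
        paid = min(capital, repay_pool)
        payouts[owner] = payouts.get(owner, 0.0) + paid
        repay_pool -= paid
        if paid == capital:
            current_slices.pop(0)                # fully bought out; joins the royalty pool
            bought_out.append((owner, capital))  # simplification: capital stands in for price rise
        else:
            current_slices[0] = [owner, capital - paid]

    # Royalty: trickle profit to earlier investors, weighted by how much each raised the price.
    total_rise = sum(rise for _, rise in prior_bought_out) or 1.0  # avoid div-by-zero early on
    for owner, rise in prior_bought_out:
        payouts[owner] = payouts.get(owner, 0.0) + amount * royalty_frac * rise / total_rise
    return payouts

# A $1000 buy with a 25% / 30% / 45% split: the creator gets $250, the two current holders
# are repaid $200 and $100 of their capital, and the earlier (bought-out) investor gets $450.
print(distribute_purchase(
    1000.0, (0.25, 0.30, 0.45),
    current_slices=[["bob", 200.0], ["carol", 500.0]],
    bought_out=[("early_alice", 300.0)],
))
```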
This might sound complicated, but it can be made very straightforward to operate and understand with a UI which lets buyers predict how much would be paid out by their purchase at any future level of total investment in the cert.
This mechanism seems to thoroughly succeed at all the desiderata, and would I think make impact markets much lower friction than other proposed market mechanisms (and as a bonus transitions smoothly into #5E!).
Edit: Also might well help with some of the concerns raised in the reddit thread, as I think the action “sell” has weird emotional connotations in this domain which this mechanism might bypass.
I would be happy to have calls with anyone who might want to turn it into a reality.
- ^
I come down on the “doing good should be a route to increasing wealth” side of #4, with the caveat that I’d like to see this happen not as a big sell-off-type event where a final funder directly buys impact from someone who has already got plenty back, but as an ongoing royalty stream which any funder can top up if they liked the project, and which is split fairly among investors and creators.
- ^
If using crypto, each individual’s NFT represents the combination of all their blocks of ownership, and indicates both how large a share of the cert they currently own and their share of past ownership, visually, in an unobtrusive manner (likely a thin bar with two colors).
- ^
The split can either be determined by the creator as they add the cert to the market, or we could just pick reasonable defaults and everyone sticks to that.
This is great! It ties together the core threads of thought around this disagreement in a cohesive and well-referenced way, and is much more digestible than the originals.
I’d suggest cross-posting to LessWrong, and including links to the relevant posts on LW/EAF so that the “Mentioned in” feature of this forum generates backlinks to your work from those posts.
The Parable of the Talents, especially the part starting at:
But I think the situation can also be somewhat rosier than that.
Ozy once told me that the law of comparative advantage was one of the most inspirational things they had ever read. This was sufficiently strange that I demanded an explanation.
Ozy said that it proves everyone can contribute. Even if you are worse than everyone else at everything, you can still participate in global trade and other people will pay you money. It may not be very much money, but it will be some, and it will be a measure of how your actions are making other people better off and they are grateful for your existence.
Might prove reassuring. Yes, EA has lots of very smart people, but those people exist in an ecosystem which almost everyone can contribute to. People do and should give kudos to those who do the object level work required to keep the attention of the geniuses on the parts of the problems which need them.
As some examples of helpful things available to you:
Being an extra pair of hands at events
Asking someone who you think is aligned with your values and might have too much on their plate what you can help them with (if you actually have the bandwidth to follow through)
Making yourself available to on-board newcomers to the ideas in 1-on-1 conversations
Talk to HaonChan and Jehan on the Bountied Rationality Discord. They’re trying to build this.
Not sure if it’s the widely used definition, but I think of affectable as anything in our future light cone, and accessible as just the bits which we could physically get something (e.g. a von Neumann probe) to before the expansion of the universe takes them away from our reach. The accessible region is a smaller bubble because we can’t send matter at light speed, our probe would have to slow down at the other end, and the expansion of the universe robs our probe of velocity relative to distant galaxies over very long periods.
Edit: The FHI has a relevant paper: https://www.fhi.ox.ac.uk/wp-content/uploads/space-races-settling.pdf
I was considering writing something like this up a while back, but didn’t have enough direct evidence; I was mostly working off too few examples as a grantmaker plus general models. Glad this concern is being broadcast.
I did come up with a proposal for addressing parts of the problem over on the “Please pitch ideas to potential EA CTOs” post. If you’re a software dev who wants to help build a tool which might make the vultures less able to eat at least parts of EA, please read over the proposal and ping me if interested.
Additional layer: give the researchers a separate “Which infrastructure / support has been most valuable to you?” category of karma, and use that to help direct funding towards the parts of the infrastructure ecosystem most valuable for supporting alignment. This should be one-way: researchers can send this toward infrastructure, but not the reverse, since research is the goal.
The purpose of preserving alignment is not to get back to AI as quickly as possible, but to make it more likely that when we eventually do climb the tech tree we are more likely to be able to align advanced AIs. Even if we have to reinvent a large number of technologies, having alignment research ready represents a (slightly non-standard) form of differential technological development rather than simply speeding up the recovery overall.
Agreed that civilization restart manuals would be good, would be happy to have the alignment archives stored alongside those. Would prefer not to hold up getting a MVP of this much smaller and easier archive in place waiting for that to come together though.
My guess is these are great for longevity, but maybe prohibitively expensive[1] if you want to print out e.g. the entire alignment forum plus other papers.
Could be good for a smaller selected key insights collection, if that exists somewhere?
- ^
Likely reference class is gravestones. I’m getting numbers like “Extra characters are approximately $10 thereafter” and “It costs around £1.95 per letter or character”; even with a bulk discount that’s going to add up.
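As a rough back-of-the-envelope (the corpus size below is a made-up placeholder, and these are the gravestone per-character quotes above, not actual etching-plate prices):

```python
# Illustrative cost estimate for etching a large text corpus at per-character gravestone rates.
chars_in_corpus = 50_000_000  # placeholder assumption for a large archive of posts and papers
for label, price_per_char in [("$10 per character", 10.0), ("£1.95 per character", 1.95)]:
    print(f"{label}: {chars_in_corpus * price_per_char:,.0f}")
# Even a 100x bulk discount would leave the larger quote at roughly $5,000,000.
```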
Preserving and continuing alignment research through a severe global catastrophe
Funding for AI alignment is scaling rapidly, and this means grantmakers will be stretched thin and less able to reliably avoid giving money to people who are not mission aligned, are not producing useful research, or, worst of all, just want to extract money from the system. This has the potential to cause massive problems down the road, both by producing distracting low-quality research and by setting up expectations which will cause drama if someone is defunded later on while there’s still a lot of money flowing to others.
An attack-resistant EigenKarma[1]-like network for alignment researchers would, if adopted, allow the bottleneck of grantmaker time to be eased and quality of vetting to be improved, by allowing all researchers to participate in the process of vetting people and assessing the quality of their work. The ideal system would:
Allow grantmakers to view any individual’s score with an arbitrary initial trusted group, so they could analyze how promising someone seems from the perspective of any subgroup, with a clean UI.
Allow people to import their own upvote history from any of AF/LW/EAF to seed their outflowing karma, but adjust it manually via a clean UI.
Have some basic tools to flag suspicious voting patterns (e.g. two people only channeling karma to each other), likely by graphing networks of votes.
Maybe have some features to allow grants to be registered on the platform, so grantmakers can see what’s been awarded already?
Maybe have a split between “this person seems competent and maybe we should fund them to learn” vs “this person has produced something of value”?
Rob Miles has some code running on his Discord with a basic EigenKarma system, which is currently being used as the basis for a crypto project by some people from Monastic Academy, and could be used to start your project. I have some thoughts on how to improve the code and would be happy to advise.
I’m imagining a world where researchers channel their trust into the people they think are doing the most good work, which means that grantmakers can go “oh, conditioning on interpretability-focused researchers as the seed group, this applicant scores highly” or “huh, this person has been working for two years but no one trusted thinks what they’re doing is useful” rather than relying on their own valuable time to assess the technical details or their much less comprehensive and scalable sense of how they think the person is perceived.
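To make the core idea more tangible, here is a minimal sketch of seeded trust propagation over an upvote graph, in the spirit of EigenTrust. The names, damping factor, and normalization choices are my own illustration, not Rob Miles’s implementation or EigenTrust++:

```python
# Toy seeded trust propagation over a karma graph (illustrative, not the real implementation).
import numpy as np

def trust_scores(karma_given, seed, damping=0.85, iters=100):
    """
    karma_given[i, j]: karma user i has given to user j.
    seed[i]:           prior trust the chosen seed group places in user i.
    Returns a trust score per user, conditioned on that seed group.
    """
    # Row-normalize so each user distributes one unit of outgoing trust across their upvotes.
    row_sums = karma_given.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    transition = karma_given / row_sums

    seed = seed / seed.sum()
    trust = seed.copy()
    for _ in range(iters):
        # Each step: mostly follow the karma graph, partly snap back to the seed group.
        trust = damping * (transition.T @ trust) + (1 - damping) * seed
    return trust

# Example: 4 users; the seed group trusts only user 0. Users 2 and 3 only upvote each other
# and receive nothing from the trusted cluster, so they end up with ~zero trust from this
# seed's perspective (the "two people only channeling karma to each other" pattern).
karma = np.array([[0, 5, 0, 0],
                  [2, 0, 0, 0],
                  [0, 0, 0, 9],
                  [0, 0, 9, 0]], dtype=float)
print(trust_scores(karma, seed=np.array([1.0, 0.0, 0.0, 0.0])))
```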
Obviously some pre-research would be to make a sketch of how it’d work and ask grantmakers and researchers if they would use the system, but I for one would if it worked well (and might provide some seed funding to the right person).
- ^
This is Rob Miles’s description of his code. I hear the EigenTrust++ model is even better, but I have not read the paper yet to verify that it makes more sense here.
Yes: “Regrantors will be compensated for their work based on the quality and volume of their grantmaking.”
I’d be very interested in joining as a regranter, though it may make sense to wait a few years, by which point I will have donated most of my crypto pool and gained a bunch of experience. You can see my current strategy at Being an individual alignment grantmaker.
Edit: Does screening for conflicts of interest mean not allowing regranters to grant to people they know? If yes, I can see the reasoning, but if I was operating under this rule it would have blocked several of my most promising grants, which I found through personal connections. I would propose having these grants marked clearly and the regranter’s reputation being more strongly staked on those grants going well, rather than outright banning them.
Edit2: Will there be a network for regranters (e.g. Discord, Slack), and would it be possible for me to join as an independent grantmaker to share knowledge and best practices? Or maybe I should just apply now as I’m keen to learn, just not confident I am ready to direct $250k+/year.
Yes, and greater GDP maps fairly well to greater effectiveness of altruism. I think you’re focused too strongly on downside risks. They exist, and they are worth mitigating, but inaction due to fear of them will cause far more harm. Inaction due to a heckler’s veto is not a free outcome.
Companies not being loss-limited would not cause them to stop producing x-risks when the literal death of all their humans is an insufficient motivation to discourage them. It would reduce a bunch of other categories of harm, but we’ve converged to accepting that risk to avoid crippling risk aversion in the economy.