Hey there~ I’m Austin, currently building https://manifold.markets. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold !
+100 on this. I think the screening processes for these conferences overweight legible, in-groupy accomplishments like organizing an EA group in your local town/college, and underweight regular impressive people like startup founders who are EA-curious—and this is really really bad for movement diversity.
Yes, I might be salty because I was rejected from both EAG London and Future Forum this year…
But I also think the bar for introducing my EA-curious friends is higher, because there isn’t a cool thing I can invite them into. Anime conventions such as Anime Expo or Crunchyroll Expo are the opposite of this—everyone is welcome, bring your friends, have a good time—and it works out quite well for keeping people interested in the subject.
Haha, thanks for bringing this up. One correction, Rachel and I are married (as of last month).
A quick background on this is that around February, Scott Alexander of Astral Codex Ten asked Manifold to set up an impact market to be able to run the ACX Forecasting Minigrants round (which is the site you see now at https://manifund.org). At the time, our existing team at Manifold was already occupied, and I had seen Rachel’s work on various programming projects such as openbook.fyi. After careful consideration, and checking in with both Scott and Manifold for Charity’s board of advisors, I decided to bring her on for a 6-week consulting engagement, which we’ve since renewed and turned into a full-time offer.
Obviously, we recognize the potential conflicts of interest and didn’t make this decision lightly. My best judgement is that Rachel has done fantastically in this position so far, comparable to, e.g., what I would expect of a new grad at Google. (If you’re technical, I invite you to judge her commits on our open source repository.)
The $50k regrantor budgets that she and I each have are primarily to allow us to dogfood our own site. For the two of us to build a useful product for regrantors to use, it’s important that we have on-the-ground experience of making regrants ourselves. You’re also welcome to evaluate specifically the two grants she’s recommended so far (to Rachel Freedman and the Donations List Website)!
I have this impression of OpenPhil as being the Harvard of EA orgs—that is, it’s the premier choice of workplace for many highly-engaged EAs, drawing in lots of talent, with distortionary effects on other orgs trying to hire 😅
When should someone who cares a lot about GCRs decide not to work at OP?
Definitely appreciate the clarity provided here; I’m a huge fan of the Creative Commons licenses.
I’d put in my vote for dropping the Commercial clause; very biased, of course, but at Manifold we’ve really enjoyed pulling EA Forum content (such as the Criticism and Red Teaming Contest: https://manifold.markets/CARTBot) and setting up tournaments for them. We didn’t charge anyone to participate (and we’re actually paying out a bit for tournament prizes), but all the same Manifold is a commercial venture and we’re benefiting from the content—a noncommercial license might make us more reluctant to try cool things like this.
Lots of my favorite EA people seem to think this is a good idea, so I’ll provide a dissenting view: job security can be costly in hard-to-spot ways.
I notice that the places that provide the most job security are also the least productive per-person (think govt jobs, tenured professors, big tech companies). The typical explanation goes like “a competitive ecosystem, including the ability for upstarts to come in and senior folks to get fired, leads to better services provided by the competitors”.
I think respondents on the EA Forum may think “oh of course I’d love to get money for 3 years instead of 1”. But y’all are pretty skewed in terms of response bias—if a funder has $300k and gives it all to the senior EA person for 3 years, they are passing up on the chance to fund other potentially better upstarts for years 2 & 3.
Depending on which specific funder you’re talking about, they don’t actually have years of funding in the bank! Afaict, most funders (such as the LTFF and Manifund) get funds to disburse over the next year, and in fact get chastised by their donors if they seem to be holding on to funds for longer than that. Donors themselves don’t have years of foresight into how they would like to be spending their money (eg I’ve personally shifted my allocation from GHD to longtermist in-network opportunities)
OpenPhil may be the exception here, but it’s also unclear to me if the commitment to stay in an area for multiple years is good—cf Nuno’s critique of OpenPhil’s criminal justice reform.
One idea I’ve been toying with instead is the concept of default-recurring month-to-month grants, where a funder and grantee roughly outline what deliverables might look like, and then the grantee provides updates on what they’ve been up to. I generally like the concept of more feedback mechanisms/lower-stress touchpoints between funders and grantees than a full “grant application round”. To borrow a saying from agile software development: “if it hurts, do it more often”.
The Manifold Markets team participated in the program Joel ran; it was trajectory-changing. It felt more like YCombinator than YCombinator itself. We met a bunch of other teams working on adjacent things to us, collaborated on ideas and code, and formed actual friendships—the kind I still keep up with, more than half a year later. Joel was awesome, I would highly encourage anyone thinking of fellowships to heed his advice.
I was inspired afterwards to run a mini (2 week) program for our team + community in Mexico City. Beyond the points mentioned above, I would throw in:
Think very carefully about who comes; peer effects are the most important aspect of a fellowship program. Consider reaching out to people who you think would be a good fit, instead of just waiting for people to apply.
The best conversations happen during downtime. E.g. the 30m bus ride between the office and the hotel; late night after a kickback is officially over.
Casual repeated interactions lead to friendships; plan your events and spaces so that people run into people again and again.
Start off as a dictator when eg picking places to get dinner, rather than polling everyone and trying to get consensus. In the beginning, people just need a single Schelling point; as they get to know each other better they’ll naturally start forming their own plans.
Perhaps obvious, but maintain a shared group chat; have at least one for official announcements, and a lounge for more casual chatting. Slack or Discord are good for this.
Awesome writeup! I do think that hype for quadratic funding vastly exceeds its practicality, so really appreciate you calling out QF’s problems here. Notably Gitcoin, the poster child for QF, has also moved on more to provide funding infrastructure rather than emphasize the goodness of QF allocations.
We did end up implementing quadratic funding for Manifold’s first round of charitable distribution around June 2022 (link). We didn’t continue with it because it didn’t seem to be that useful for our goal of encouraging people to donate more, nor to allocate money to charities particularly well. We did also beta-test a mana-based mechanism just to make quadratic funding possible; it didn’t get much adoption, and we ended up removing the feature.
I do think there’s still something missing in the space of “letting lots of people agree on how to allocate resources together”; I’m especially interested in ones that let different voters have different weights (such as capitalism or liquid democracy). At present, if some funder is thinking about a fancy funding mechanism, I’d suggest they take a look at impact certs or the s-process.
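For readers unfamiliar with the mechanism being discussed: quadratic funding matches each project in proportion to the square of the sum of the square roots of its individual contributions, which favors broad support over single large donors. Here's a minimal sketch of that standard formula (the function name, example projects, and dollar amounts are my own illustration, not Manifold's actual implementation):

```python
import math

def quadratic_match(contributions_by_project, matching_pool):
    """Standard quadratic funding: each project's ideal match is
    (sum of sqrt(contributions))^2 minus the raw total contributed;
    ideal matches are then scaled down to fit the available pool."""
    ideal = {}
    for project, contribs in contributions_by_project.items():
        raw_total = sum(contribs)
        qf_total = sum(math.sqrt(c) for c in contribs) ** 2
        ideal[project] = qf_total - raw_total
    total_ideal = sum(ideal.values())
    scale = matching_pool / total_ideal if total_ideal > 0 else 0.0
    return {p: round(m * scale, 2) for p, m in ideal.items()}

# Both projects raise $100 raw, but the broadly-supported one
# captures essentially all of the matching pool:
matches = quadratic_match(
    {"broad_support": [1] * 100,  # 100 donors giving $1 each
     "single_whale": [100]},      # 1 donor giving $100
    matching_pool=1000,
)
```

The example shows the property that makes QF appealing in theory (rewarding breadth of support) and also hints at its practical problems: the mechanism is trivially gamed by one donor splitting their contribution across many sockpuppet accounts, which is much of why identity verification ends up being the hard part.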
Hrm, I strongly disagree with this post.
I don’t see that security/privacy is especially important as a feature of a messaging system, when compared to something like “easy to use” or “my friends are already on it”.
Basically all sensitive/important EA communication already happens over Slack or Gmail. This means that the case for switching isn’t especially relevant to “EA” specifically, vs just regular consumers.
This post reads as fairly alarmist about FB Messenger, but doesn’t do a good job explaining or quantifying what the harms of a possible security breach are, nor how likely such a breach might be.
I don’t think EA wants to be spending weirdness points convincing people to use a less-good system—switching costs are quite high!
Fwiw, I do agree that choosing good software is quite important—for example, I think EA orgs are way overindexed on Google Docs, and a switch to Notion would make any one org something like 10% more productive within 3 months.
How is the search going for the new LTFF chair? What kind of background and qualities would the ideal candidate have?
I really appreciated this list of examples and it’s updated me a bit towards checking in with LTFF & others a bit more. That said, I’m not sure adverse selection is a problem that Manifund would want to dedicate significant resources towards solving.
One frame: is longtermist funding more like “admitting a Harvard class/YC batch” or more like “pre-seed/seed-stage funding”? In the former case, it’s more important for funders to avoid bad grants; the prestige of the program and its peer effects are based on high average quality in each cohort. In the latter case, you are “black swan farming”; the important thing is to not miss out on the one Facebook that 1000xs, and you’re happy to fund 99 duds in the meantime.
I currently think the latter is a better representation of longtermist impact, but 1) impact is much harder to measure than startup financial results, and 2) having high average quality/few bad grants might be better for fundraising...
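The black-swan-farming arithmetic is worth making concrete. A hypothetical sketch (all numbers are illustrative, not from any real grants round):

```python
# Hypothetical portfolio: 100 grants of $10k each.
# 99 return nothing; one "black swan" returns 1000x its grant.
grant_size = 10_000
returns = [0] * 99 + [1000 * grant_size]

total_invested = grant_size * 100          # $1,000,000
total_returned = sum(returns)              # $10,000,000
multiple = total_returned / total_invested # 10x overall
```

Despite a 99% "failure" rate, the portfolio returns 10x—which is why a funder optimizing for hit rate (the Harvard-admissions frame) and a funder optimizing for not missing the outlier (the seed-stage frame) will make very different grants.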
Yeah, I think Rachel also herself feels a bit imposter-syndrome-y about her budget allocation and might end up delegating part of her remainder to another regrantor.
I just disagree with everyone here (Anon/Tyler/Linch/Rachel). $10k pays for like 1-2 months of salary post-tax, which is like… a single regrant. My claim is that “feedback loops from intense dogfooding are why the Manifold Markets user experience is notably better than similar EA efforts”, coupled with “the user experience of EA grantmaking has been awful to date, and we think we can do better” (excepting the parts that involved Linch funding us, we love you Linch). Not just software UX but the end-to-end feeling of what being a grantee is like, speed of response, quantity of feedback, etc.
I’m also pretty inclined to dismiss “optics are bad” arguments. I again invite anyone to judge, on the object level, 1) how do Rachel’s grants look? 2) how does the Manifund site UX feel? 3) how does her code look?. And as always, if you think you can make better regrants than us, audition for the role!
I think you’ve left out the most important point: the net positive effect of Amazon, which has generated trillions of dollars of value for its customers, suppliers, and employees.
Customers gain from having a streamlined reliable online ordering experience, with fast delivery times, large body of reviews, and friendly dispute resolution policies
Suppliers gain access to the huge market of said customers, as well as the infrastructure to deliver products and collect payment
Employees are offered a job opportunity that they may freely choose to leave
This doesn’t even touch upon the huge social value from the websites built on top of their cloud. It’s perhaps hard to appreciate without a background in tech, but briefly: before AWS (Amazon Web Services) and its competitors, every company had to build and manage their own servers—huge, hot physical computers that require dedicated IT people to oversee, and that break when too many people visit your website.
Zvi has a line that goes like “The world’s best charity is Amazon”.
Hm, naively—is this any different than the risks of net-negative projects in the for-profit startup funding markets? If not, I don’t think this is a unique reason to avoid impact markets.
My very rough guess is that impact markets should be at a bare minimum better than the for-profit landscape, which already makes it a worthwhile intervention. People participating as final buyers of impact will at least be looking to do good rather than generate additional profits; it would be very surprising to me if the net impact of that was worse than “the thing that happens in regular markets already”.
Additionally—I think the negative externalities may be addressed with additional impact projects, further funded through other impact markets?
Finally: on a meta level, the amount of risk you’re willing to take on new funding mechanisms with potential downsides should basically be proportional to the amount of risk you see in our society at the moment. Basically, if you think existing funding mechanisms are doing a good job, and we’re likely to get through the hinge of history safely, then new mechanisms are to be avoided and we want to stay the course. (That’s not my current read of our xrisk situation, but would love to be convinced otherwise!)
It becomes clear that there’s a lot of value in really nailing down your intervention the best you can. Having tons of different reasons to think something will work. In this case, we’ve got:
It’s common sense that not being bit by mosquitos is nice, all else equal.
The global public health community has clearly accomplished lots of good for many decades, so their recommendation is worth a lot.
Lots of smart people recommend this intervention.
There are strong counterarguments to all the relevant objections, and these objections are mostly shaped like “what about this edge case” rather than taking issue with the central premise.
Even if one of these fails, there are still the others. You’re very likely to be doing some good, both probabilistically and in a more fuzzy, hard-to-pin-down sense.
I really liked this framing, and think it could be a post on its own! It points at something fundamental and important like “Prefer robust arguments”.
You might visualize an argument as a toy structure built out of building blocks. Some kinds of arguments are structured as towers: one conclusion piled on top of another, capable of reaching tremendous heights. But: take out any one block and the whole thing comes crumbling down.
Other arguments are like those Greek temples with multiple supporting columns. They take a bit more time to build, and might not go quite as high; but they are less reliant on any one particular column to hold the entire weight. I call such arguments “robust”.
One example of a robust argument that I particularly liked: the case for cutting meat out of your diet. You can make a pretty good argument for it from a bunch of different angles:
Animal suffering
Climate/reducing emissions
Health and longevity
Financial cost (price of food)
By preferring robustness, you are more likely to avoid Pascalian muggings, more likely to work on true and important areas, more likely to have your epistemic failures be graceful.
Some signs that an argument is robust:
Many people who think hard about this issue agree
People with very different backgrounds agree
The argument does a good job predicting past results across a lot of different areas
Robustness isn’t the only, or even main, quality of an argument; there are some conclusions you can only reach by standing atop a tall tower! Longtermism feels shaped this way to me. But also, this suggests that you can do valuable work by shoring up the foundations and assumptions that are implicit in a tower-like argument, eg by red-teaming the assumption that future people are likely to exist conditional on us doing a good job.
Haha, I think you meant this sarcastically but I would actually love to find Republican, or non-college-educated, or otherwise non-”traditional EA” regrantors. (If this describes you or someone you know, encourage them to apply!)
In response to an emailed question about “are the regranting pots backed by FTX money?”:
Manifold for Charity (501c3) has received 3 main donations so far:
The aforementioned $1.5m from an anonymous individual donor, for regrants
~$400k from SFF (1/3rd of their last grant to us), unrestricted
$500k from the FTX Future Fund, for Charity Prediction Markets.
We intend to finance regrants out of the first 2 pots; the status of the last pot is in a bit of limbo; we’re still running the charity prediction market program in the meantime, but have only spent ~$120k of it so far, and haven’t committed to never using it for other purposes. (As you might imagine, the ethical questions here are somewhat thorny, and we’re mostly hoping to fundraise from other sources to avoid them; but also don’t want to unnecessarily tie our hands)
We’ve separately received a $1m regrant from Future Fund, structured as an investment in Manifold Markets (a C Corporation), for which Alameda received equity.
We’re happy to consider more diverse regrantors—if you have specific candidates in mind, please send them this launch post, or make an intro to us (austin@manifund.org)!
Thanks for the writeup, Nathan; I am indeed excited about the possibility of making better grants through forecasting/futarchic mechanisms. So I’ll start from the other direction: instead of reaching for futarchy as a hammer, start with, what are current major problems grantmakers face?
The problem that seems most important to solve: “finding projects that turn out to be orders of magnitude more successful/impactful than the rest”. Paul Graham describes funding seed-stage startups as “farming black swans”, which rings true to me. To look at two example rounds from ACX Grants, which I’ve been involved in:
ACX Grants: Many of the projects look good, but a handful seem to have gotten outlier success; I would count Lars and Will’s Valuebase, the Oxfendazole group, and our own Manifold as having gone on to raise millions in further funding.
ACX Forecasting Mini-grants: Still a bit early to tell, but OPTIC and BaseRateTimes (which we missed!) seem to have hit their goals and continue on to work on cool things.
So right now, I’m most interested in mechanisms that help us find such founders/projects. Just daydreaming here, is there any kind of prediction mechanism that can turn out a report as informative as the ACX Grants 1-year project update? The information value in most prediction markets is “% chance given by the market”, which misses out on the valuable qualitative sketches given by a retroactive writeup.
Other promising things:
Asking grantees to set up markets for their own outcomes, e.g. “If funded, will we successfully publish a paper that receives >10 citations within 1 year?” This might clarify exactly what goals the grantees are trying to hit.
Doing some kind of impact analysis for alignment work in past years; imagine a kind of “AI Safety Nobel Prizes” which identify what work turned out to be the most important. This would give future forecasting tools something concrete to predict on.
Hi Omega, I’d be especially interested to hear your thoughts on Apollo Research, as we (Manifund) are currently deciding how to move forward with a funding request from them. Unlike the other orgs you’ve critiqued, Apollo is very new and hasn’t received the requisite >$10m, but it’s easy to imagine them becoming a major TAIS lab over the next years!
Manifold Markets ran a prediction tournament to see whether forecasters would be able to predict the winners! For each Cause Exploration Prize entry, we had a market on “Will this entry win first or second place?”. Check out the tournament rules and view all predictions here.
I think overall, the markets did okay—they managed to get the first place entry (“Organophosphate pesticides and other neurotoxicants”) as the highest % to win, and one of the other winners was ranked 4th (“Violence against women and girls”). However, they did miss out on the two dark horse winners (“Sickle cell disease” and “shareholder activism”); catching those would have been one way for markets to outperform karma. Specifically, none of the Manifold forecasters placed a positive YES bet on either of the dark horse candidates.
I’m not sure that the markets were much better predictors than just EA Forum karma—and it’s possible that most of the signal from the markets was just forecasters incorporating EA Forum karma into their predictions. The top 10 entries by karma also included 2 of the 1st/2nd place winners.
And if you include honorable mentions in the analysis, EA Forum karma actually did somewhat better. Manifold Markets had 7/10 “winners” (first/second/honorable), while EA Forum karma had 9/10.
Thanks again to the team at OpenPhil (especially Chris and Aaron) for hosting these prizes and thereby sponsoring so many great essays! Would love to see that writeup about learnings; I’m especially curious what the decision process was that led to these winners and honorable mentions.