Hey there~ I’m Austin, currently building https://manifold.markets. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold !
+100 on this. I think the screening processes for these conferences overweight legible, in-groupy accomplishments like organizing an EA group in your local town/college, and underweight regular impressive people like startup founders who are EA-curious—and this is really, really bad for movement diversity.
Yes, I might be salty because I was rejected from both EAG London and Future Forum this year…
But I also think the bar for me to introduce EA-curious friends is higher, because there isn't a cool thing I can invite them into. Anime conventions such as Anime Expo or Crunchyroll Expo are the opposite of this—everyone is welcome, bring your friends, have a good time—and it works quite well for keeping people interested in the subject.
What We Owe the Past
Definitely appreciate the clarity provided here; I’m a huge fan of the Creative Commons licenses.
I'd put in my vote for dropping the NonCommercial clause; very biased, of course, but at Manifold we've really enjoyed pulling EA Forum content (such as the Criticism and Red Teaming Contest: https://manifold.markets/CARTBot) and setting up tournaments for it. We didn't charge anyone to participate (and we're actually paying out a bit for tournament prizes), but all the same, Manifold is a commercial venture and we're benefiting from the content—a noncommercial license might make us more reluctant to try cool things like this.
Predicting for Good: Charity Prediction Markets
The Manifold Markets team participated in the program Joel ran; it was trajectory-changing. It felt more like YCombinator than YCombinator itself. We met a bunch of other teams working on things adjacent to ours, collaborated on ideas and code, and formed actual friendships—the kind I still keep up with, more than half a year later. Joel was awesome; I would highly encourage anyone thinking of running a fellowship to heed his advice.
I was inspired afterwards to run a mini (2 week) program for our team + community in Mexico City. Beyond the points mentioned above, I would throw in:
Think very carefully about who comes; peer effects are the most important aspect of a fellowship program. Consider reaching out to people who you think would be a good fit, instead of just waiting for people to apply.
The best conversations happen during downtime: e.g. the 30-minute bus ride between the office and the hotel, or late at night after a kickback is officially over.
Casual repeated interactions lead to friendships; plan your events and spaces so that people run into people again and again.
Start off as a dictator when eg picking places to get dinner, rather than polling everyone and trying to get consensus. In the beginning, people just need a single Schelling point; as they get to know each other better they’ll naturally start forming their own plans.
Perhaps obvious, but maintain a shared group chat; have at least one for official announcements, and a lounge for more casual chatting. Slack or Discord are good for this.
Manifold for Good: Bet on the future, for charity
Hrm, I strongly disagree with this post.
I don’t see that security/privacy is especially important as a feature of a messaging system, when compared to something like “easy to use” or “my friends are already on it”
Basically all sensitive/important EA communication already happens over Slack or Gmail. This means the case for switching isn't especially relevant to "EA" specifically, vs. just regular consumers.
This post reads as fairly alarmist against FB messenger, but doesn’t do a good job explaining or quantifying what the harms of a possible security breach are, nor how likely such a breach might be
I don't think EA wants to spend weirdness points convincing people to use a less-good system—switching costs are quite high!
Fwiw, I do agree that choosing good software is quite important—for example, I think EA orgs are way overindexed on Google Docs, and a switch to Notion would make any one org something like 10% more productive within 3 months.
Create a prediction market in two minutes on Manifold Markets
I think you've left out the most important point: the net positive effect of Amazon, which has generated trillions of dollars of value for its customers, suppliers, and employees.
Customers gain from having a streamlined, reliable online ordering experience, with fast delivery times, a large body of reviews, and friendly dispute resolution policies
Suppliers gain access to the huge market of said customers, as well as the infrastructure to deliver products and collect payment
Employees are offered a job opportunity that they may freely choose to leave
This doesn't even touch upon the huge social value from the websites built on top of their cloud. It's perhaps hard to appreciate without a background in tech, but briefly: before AWS (Amazon Web Services) and its competitors, every company had to build and manage its own servers—huge, hot physical computers that required dedicated IT people to oversee, and that would break when too many people visited your website.
Zvi has a line that goes something like: "The world's best charity is Amazon."
Reminder: you can donate your mana to charity!
Hm, naively—is this any different from the risks of net-negative projects in for-profit startup funding markets? If not, I don't think this is a unique reason to avoid impact markets.
My very rough guess is that impact markets should be at a bare minimum better than the for-profit landscape, which already makes it a worthwhile intervention. People participating as final buyers of impact will at least be looking to do good rather than generate additional profits; it would be very surprising to me if the net impact of that was worse than “the thing that happens in regular markets already”.
Additionally—I think the negative externalities may be addressed with additional impact projects, further funded through other impact markets?
Finally: on a meta level, the amount of risk you’re willing to spend on trying new funding mechanisms with potential downsides should basically be proportional to the amount of risk you see in our society at the moment. Basically, if you think existing funding mechanisms are doing a good job, and we’re likely to get through the hinge of history safely, then new mechanisms are to be avoided and we want to stay the course. (That’s not my current read of our xrisk situation, but would love to be convinced otherwise!)
It becomes clear that there’s a lot of value in really nailing down your intervention the best you can. Having tons of different reasons to think something will work. In this case, we’ve got:
It's common sense that not being bitten by mosquitoes is nice, all else equal.
The global public health community has clearly accomplished lots of good for many decades, so their recommendation is worth a lot.
Lots of smart people recommend this intervention.
There are strong counterarguments to all the relevant objections, and these objections are mostly shaped like “what about this edge case” rather than taking issue with the central premise.
Even if one of these fails, there are still the others. You’re very likely to be doing some good, both probabilistically and in a more fuzzy, hard-to-pin-down sense.
I really liked this framing, and think it could be a post on its own! It points at something fundamental and important, like "Prefer robust arguments".
You might visualize an argument as a toy structure built out of building blocks. Some kinds of arguments are structured as towers: one conclusion piled on top of another, capable of reaching tremendous heights. But: take out any one block and the whole thing comes crumbling down.
Other arguments are like those Greek temples with multiple supporting columns. They take a bit more time to build, and might not go quite as high; but they're less reliant on any one column to hold the entire weight. I call such arguments "robust".
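To make the tower/temple picture concrete, here's a toy probability sketch (my own illustration, not from the original post; it assumes the steps and columns are independent, which real arguments rarely are):

```python
# Toy model: a "tower" argument needs every step to hold, while a
# "temple" argument stands as long as at least one column holds.

def tower_strength(step_probs):
    """P(conclusion) for a chain of steps: all must hold."""
    p = 1.0
    for step in step_probs:
        p *= step
    return p

def temple_strength(column_probs):
    """P(conclusion) with independent columns: any one suffices."""
    p_all_fail = 1.0
    for col in column_probs:
        p_all_fail *= 1 - col
    return 1 - p_all_fail

# Four fairly plausible steps/columns, each 80% likely to hold:
probs = [0.8, 0.8, 0.8, 0.8]
print(tower_strength(probs))   # ~0.41: the tower is fragile
print(temple_strength(probs))  # ~0.998: the temple is robust
```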
One example of a robust argument that I particularly liked: the case for cutting meat out of your diet. You can make a pretty good argument for it from a bunch of different angles:
Animal suffering
Climate/reducing emissions
Health and longevity
Financial cost (price of food)
By preferring robustness, you are more likely to avoid Pascalian muggings, more likely to work on true and important areas, and more likely to fail gracefully when your epistemics are off.
Some signs that an argument is robust:
Many people who think hard about this issue agree
People with very different backgrounds agree
The argument does a good job predicting past results across a lot of different areas
Robustness isn’t the only, or even main, quality of an argument; there are some conclusions you can only reach by standing atop a tall tower! Longtermism feels shaped this way to me. But also, this suggests that you can do valuable work by shoring up the foundations and assumptions that are implicit in a tower-like argument, eg by red-teaming the assumption that future people are likely to exist conditional on us doing a good job.
Manifund x AI Worldviews
Predict which posts will win the Criticism and Red Teaming Contest!
I think anime/gaming expos/conventions might be a good example actually—in those events, the density of high quality people is less important than just “open for anyone who’s interested to come”. Like, organizers will try to have speakers and guests lined up who are established/legit, but 98% of the people visiting are just fans of anime who want to talk to other fans.
Notably, it’s not where industry experts converge to do productive work on creating things, or do 1:1s; but they sure do take advantage of cons and expos to market their new work to audiences. By analogy, a much larger EA Expo would have the advantage of promoting the newest ideas to a wider subset of the movement.
Plus, you get really cool emergent dynamics when the audience size is 10x'd. For example, if there are 1-2 people in 1,000 who enjoy creating EA art, then at 10,000 people you can have 10-20 of them get together, meet up, and talk to each other.
Thank you so, so much for writing up your review & criticism! I think your sense of vagueness is very justified, mostly because my own post is more "me trying to lay out my intuitions" and less "I know exactly how we should change EA on account of these intuitions". I had just not seen many statements from EAs, and even fewer from my non-EA acquaintances, defending the importance of (1), (2), or (3) (great breakdown, btw). I put this post up in the hopes of fostering discussion, so thank you (and all the other commenters) for contributing your thoughts!
I actually do have some amount of confidence in this view, and do think we should think about fulfilling past preferences—but I totally agree that I have not made those counterpoints, alternatives, or further questions available. Some of this is: I still just don't know—and to that end your review is very enlightening! And some is: there's a tradeoff between post length and clarity of argument. On a meta level, EA Forum posts have been ballooning to somewhat hard-to-digest lengths as people try to anticipate every possible counterargument; I'd push for a return to shorter, Sequences-style chunks.
I think (2) is just false, if by utility we have in mind experiences (including experiences of preference-satisfaction), for the obvious reason that the past has already happened and we can’t change it. This seems like a major error in the post. Your footnote 1 touches on this but seems to me to conflate arguments (2) and (3) in my above attempted summary.
I still believe in (2), but I'm not confident I can articulate why (and I might be wrong!). Once again, I'd draw upon the framing of deceptive or counterfeit utility. For example, I feel that involuntary wireheading, or being tricked into staying in a simulation machine, is wrong, because the utility provided is not true utility. The person would not actually realize that utility if they were cognizant that this was a lie. So too would the conservationist laboring to preserve biodiversity feel deceived/not gain utility if they were aware of the future supplanting their wishes.
Can we change the past? I feel like the answer is not 100% obviously “no”—I think this post by Joe Carlsmith lays out some arguments for why:
Overall, rejecting the common-sense comforts of CDT, and accepting the possibility of some kind of “acausal control,” leaves us in strange and uncertain territory. I think we should do it anyway. But we should also tread carefully.
(But it's also super technical, and I'm at risk of having misunderstood his post in service of my own arguments.)
In terms of one specific claim: Large EA Funders (OpenPhil, FTX FF) should consider funding public goods retroactively instead of prospectively. More bounties and more “this was a good idea, here’s your prize”, and less “here’s some money to go do X”.
I’m not entirely sure what % of my belief in this comes from “this is a morally just way of paying out to the past” vs “this will be effective at producing better future outcomes”; maybe 20% compared to 80%? But I feel like many people would only state 10% or even less belief in the first.
To this end, I've been working on a proposal for equity for charities—it's still at a very early stage, but since you work as a fund manager, I'd love to hear your thoughts (especially your criticism!)
Finally (and to put my money where my mouth is): would you accept a $100 bounty for your comment, paid in Manifold Dollars aka a donation to the charity of your choice? If so, DM me!
I think this is a straightforwardly good idea; I would pay a $5k bounty to someone who makes “EA comms” as good as e.g. internal Google comms, which is IMO not an extremely high bar.
I think an important point (that Ozzie does identify) is that it's not as simple as just setting up a couple of systems; it's all the work that goes into shepherding a community and making it feel alive. Especially in the early days, there's a difference between a Slack that feels "alive" and one that feels "dead", and a single good moderator/poster who commits to posting daily can make the difference. I don't know that this needs to be a full-time person; my happy price for doing this myself would be something like $20k/year?
Regarding leaks: I don't think the value of better internal comms is in "guaranteed privacy of info". It's more in "reducing friction to communicate across orgs" and in "increasing the chance that your message is actually read by the people". And there's a big difference between "an ill-intentioned insider has the ability to screenshot and repost your message to Twitter" and "by default, every muckraker can scroll through your entire posting history".
Public venues like EA Forum and Facebook are a firehose that are very difficult for busy people to stay on top of; private venues like chat groups are too chaotically organized and give me kind of an ugh-field feeling.
Some random ideas:
Create the "One EA Slack/Discord to rule them all". Or extend an existing one, e.g. the Constellation chat.
Ask EAG attendees to use that instead of Swapcard messaging, so that all EAG attendees are thrown into one long-lived messaging system
Integrate chat into EA Forum (DMs feel too much like email at the moment)
Integrate chat into Manifold (though Manifold is much less of a Schelling point for EA than EAF)
Start lists of Google Groups (though this competes a bit against the EAF’s subforums)
“Do you have an intuition around when one should make a Donor-Advised Fund?”
The reason I, personally, opened a DAF was to make it dead simple to donate appreciated stock.
If you're not familiar: you can give a lot more to charity, at the same cost to you, if you gift stock that's gone up in price instead of cash. For example, say you bought stock for $1k that has appreciated to $10k. (Lucky you!) If you sold it to donate the proceeds to charity, you'd first have to pay capital gains tax on the $9k gain—at roughly 35%, that's about $3k. So the charity only gets $7k. If instead you gift the stock directly, you don't pay taxes, and neither does the charity. Basically, the US Govt matches your donation. Great deal, right?
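Here's a quick back-of-the-envelope version of that arithmetic (a sketch assuming a flat 35% combined capital-gains rate, as above; your actual rate depends on your bracket and state):

```python
# Compare donating cash proceeds vs. gifting appreciated shares directly.
cost_basis = 1_000      # what you paid for the stock
market_value = 10_000   # what it's worth now
cap_gains_rate = 0.35   # illustrative combined rate from the example

# Option A: sell the stock, pay tax on the gain, donate what's left.
gain = market_value - cost_basis         # $9,000
tax = gain * cap_gains_rate              # $3,150, i.e. "about $3k"
donation_from_cash = market_value - tax  # $6,850, i.e. "only ~$7k"

# Option B: gift the shares directly (e.g. via a DAF): no tax due.
donation_from_stock = market_value       # the full $10,000

print(donation_from_cash, donation_from_stock)
```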
The catch is: actually gifting stock is really annoying! When I was donating TSLA shares to GiveWell I had to literally fax a piece of paper telling them which shares to take out of my account. A DAF is much simpler; I just click some buttons from my Schwab investment account and the stock lands and gets sold in my Schwab Charitable DAF. There are other great reasons to open a DAF too—but making this tax optimization really easy is why I went for it.
Thanks for your responses!
I’m not sure that “uniqueness” is the right thing to look at.
Mostly, I meant: the for-profit world already incentivizes people to take high amounts of risk for financial gain. In addition, there are no special mechanisms to prevent for-profit entities from producing large net-negative harms. So asking that some special mechanism be introduced for impact-focused entities is an isolated demand for rigor.
There are mechanisms like pollution regulation, labor laws, etc which apply to for-profit entities—but these would apply equally to impact-focused entities too.
We should be cautious about pushing the world (and EA especially) further towards the “big things happen due to individuals following their local financial incentives” dynamics.
I think I disagree with this? I think people following local financial incentives is always going to happen, and the point of an impact market is to structure financial incentives to be aligned with what the EA community broadly thinks is good.
Agree that xrisk/catastrophe can happen via eg AI researchers following local financial incentives to make a lot of money—but unless your proposal is to overhaul the capitalist market system somehow, I think building a better competing alternative is the correct path forward.
Manifold Markets ran a prediction tournament to see whether forecasters would be able to predict the winners! For each Cause Exploration Prize entry, we had a market on "Will this entry win first or second place?". Check out the tournament rules and view all predictions here.
I think overall, the markets did okay: they gave the first-place entry ("Organophosphate pesticides and other neurotoxicants") the highest % to win, and another winner ("Violence against women and girls") was ranked 4th. However, they did miss the two dark horse winners ("Sickle cell disease" and "shareholder activism"); catching those would have been one way for the markets to outperform karma. Specifically, none of the Manifold forecasters placed a positive YES bet on either dark horse candidate.
I'm not sure the markets were much better predictors than EA Forum Karma alone—it's possible that most of the signal in the markets was just forecasters incorporating EA Forum Karma into their predictions. The top 10 predictions by Karma also had 2 of the 1st/2nd place winners.
And if you include honorable mentions in the analysis, EA Forum Karma actually did somewhat better: Manifold Markets had 7/10 "winners" (first/second/honorable mention) in its top 10, while EA Forum Karma had 9/10.
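For clarity, here's a minimal sketch of the scoring used above (the lists are placeholders, not the actual contest data): rank entries by market probability or by karma, then count how many of each top 10 were actual winners.

```python
def hits_at_10(ranked_entries, winners):
    """How many of a predictor's top-10 picks were actual winners."""
    return len(set(ranked_entries[:10]) & set(winners))

winners = [...]         # first/second place plus honorable mentions
by_market_prob = [...]  # entries sorted by Manifold market probability
by_karma = [...]        # entries sorted by EA Forum karma

# Per the numbers above: hits_at_10(by_market_prob, winners) == 7
#                        hits_at_10(by_karma, winners) == 9
```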
Thanks again to the team at OpenPhil (especially Chris and Aaron) for hosting these prizes and thereby sponsoring so many great essays! I would love to see that writeup about learnings—I'm especially curious what decision process led to these winners and honorable mentions.