Hey there~ I’m Austin, currently building https://manifold.markets. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold !
Consider Google internal communications (I used to work there). Google has ~100k full-timers, far more than the total number of full-time EA professionals. And internal communications can leak (e.g. the Damore memo). But only a small fraction of these internal messages actually get leaked, and the feeling of posting there is much less “posting on Twitter” and much more “posting in a private group chat”.
Being able to cold-message almost anyone in the company, with the expectation that they will see your message and respond, also leads to a norm of shared trust that communication will actually happen instead of getting ghosted.
I think this is a straightforwardly good idea; I would pay a $5k bounty to someone who makes “EA comms” as good as e.g. internal Google comms, which is IMO not an extremely high bar.
I think an important point (which Ozzie does identify) is that it’s not as simple as just setting up a couple of systems; it’s all the work that goes into shepherding a community and making it feel alive. Especially in the early days, there’s a real difference between a Slack that feels “alive” and one that feels “dead”, and a single good moderator/poster who commits to posting daily can make that difference. I don’t know that this needs to be a full-time person; my happy price for doing this myself would be something like $20k/year?
Regarding leaks: I don’t think the value of better internal comms is in “guaranteed privacy of info”. It’s more in “reducing friction to communicate across orgs” and in “increasing the chance that your message is actually read by the people you intend”. And there’s a big difference between “an ill-intentioned insider has the ability to screenshot and repost your message to Twitter” and “by default, every muckraker can scroll through your entire posting history”.
Public venues like the EA Forum and Facebook are a firehose that’s very difficult for busy people to stay on top of; private venues like chat groups are too chaotically organized and give me kind of an ugh-field feeling.
Some random ideas:
Create the “One EA Slack/Discord to rule them all”. Or extend an existing one, e.g. the Constellation chat.
Ask EAG attendees to use that instead of Swapcard messaging, so that all attendees end up in one long-lived messaging system.
Integrate chat into EA Forum (DMs feel too much like email at the moment)
Integrate chat into Manifold (though Manifold is much less of a Schelling point for EA than EAF)
Start Google Groups mailing lists (though this competes a bit with the EAF’s subforums)
In past years, I believed that donating to many causes was suboptimal, and was happy to just send money to GiveWell’s Top Charities fund. But I’ve diversified my donations this year, partly due to 2., 3. and 4. Some other considerations:
7. From the charity’s perspective, a diversified donor base might provide more year-over-year stability. A charity should be happier to have 100 donors paying $1k a year than 1 donor paying $100k, in terms of how beholden it is to its donors.
8. Relatedly, a small charity might have an easier time fundraising if it can point to a broad donor base as evidence to larger funders of the impact of its work.
9. Wisdom of the crowds/why capitalism is so good: there’s a lot of knowledge held in individual donors’ heads about which charities are doing the best work; diversifying lets more granular feedback (more bits of information) flow through the overall charitable system.
Haha, I wrote a similarly titled article sharing the premise that Sam’s actions seem more indicative of a mistake than of fraud: https://forum.effectivealtruism.org/posts/w6aLsNppuwnqccHmC/in-defense-of-sbf
I appreciated the personal notes about SBF’s interactions with the animal welfare community. I do think the EA tribalism element is very real as well. Also, I appreciate the point about trying to work on something intrinsically motivating—I’m not sure that’s possible for every individual, but I do feel like my own intrinsic love of work helps a lot with putting in a lot of time and effort!
Thanks for asking! Manifold has received a grant to promote charitable prediction markets, which we can regrant from. But otherwise, we could also fund these donations via mana purchases (some of our users buy more mana if they run out, or if they want to support Manifold.markets).
Thank you—I think you did a good job of capturing what I was trying to say. We shouldn’t go full fluffy rainbows, and we should directionally update against SBF compared to before FTX imploded; but what I’m seeing is a major overcorrection, and I’m trying to express why.
Yeah, perhaps I could have been clearer in my argumentation structure. Point 1 is a consideration on the object level: was it willful? But points 2 and 3 argue that even if it was willful, the community response goes too far in condemnation, and that condemnation without regard for loyalty/ambition might hurt the community’s ability to actually do good in the world.
Yeah, idk, it’s actually less of a personal note than a comment on decision theory among future and current billionaires. I guess the “personal” side is where I can confidently say “this set of actions feels very distasteful to me” because I get to make claims about my own sense of taste; and I’m trying to extrapolate that to other people who might become meaningful in the future.
Or maybe: This is a specific rant to the “EA community” separate from “EA principles”. I hold my association with the “EA community” quite loosely; I only actually met people in this space like this year as a result of Manifold, whereas I’ve been donating/reading EA for 6ish years. The EA principles broadly make sense to me either way; and I guess I’m trying to figure out whether the EA community is composed of people I’m happy to associate with.
Thanks for the shoutout! Super cool to have all this info collated in one place (it’ll be fun to set up certs on Manifold so people can invest in how they feel about different platforms 😉)
In case it matters, Manifold last raised at a $22m post-money valuation, with a total of about $2.4m in investment and $0.6m in grants: https://manifoldmarkets.notion.site/Manifold-Finances-0f9a14a16afe4375b67e21471ce456b0
Also, I think our growth has gotten significantly better in the last couple of months—and I would be extremely interested in estimates of DAU on other platforms, haha.
I think Manifold was experiencing an (unrelated) database outage when you posted this comment, and the markets should be up again; please let me know if this isn’t the case!
Definitely appreciate the clarity provided here; I’m a huge fan of the Creative Commons licenses.
I’d put in my vote for dropping the NonCommercial clause; very biased, of course, but at Manifold we’ve really enjoyed pulling in EA Forum content (such as the Criticism and Red Teaming Contest: https://manifold.markets/CARTBot) and setting up tournaments around it. We didn’t charge anyone to participate (and we’re actually paying out a bit for tournament prizes), but all the same, Manifold is a commercial venture and we’re benefiting from the content—a noncommercial license might make us more reluctant to try cool things like this.
I’m also not sure how much work is being done by the word “range” here—it’s true that e.g. Google would hire one engineer for $200k and another for $800k, but the roles, responsibilities, and day-to-day work of the two would look completely different.
Hm, so the subject of salary ranges is actually quite different from transparency—I mostly think large ranges would be good because they allow EA to pay market rates to highly skilled/in-demand workers. IMO the current ethos of artificial/sacrificial wages in EA is penny-wise, pound-foolish, and leads to the problem of having very mission-aligned but not extremely competent people in EA orgs. I think it’s a major reason EA struggles to attract mid-to-late-career talent, especially leadership/managers/mentors.
Re: adversarial, I don’t have the sense either that 1) employers care about having their pay ranges publicized on levels.fyi or similar services, or that 2) companies have a right to keep such information private.
Thanks for adding the context! I think your specific points about wide variations in pay in the software industry are factually correct, though I don’t think that actually refutes the point that salary ranges and comp expectations are well known and easy to look up—certainly much more so than in EA or most other for-profit sectors. (Government/academic work is the exception here, where you can often find e.g. specific teacher salaries posted publicly.)
If you look at e.g. https://www.levels.fyi/ for Google, you can directly see what an L3 (fresh grad), an L4 (1-2 years of experience), or an L5 (“Senior”, typically 3-4 years) makes in the industry. RSU/equity/vesting schedules do complicate things, but they are likewise shared on the site; here’s Stripe’s (note their L3 corresponds to Google’s L5)
I acknowledge the point made in Morrison’s comment, but I just think it’s a bad norm that favors employers, who tend to have an informational advantage over employees in the first place, and I’m unsure why EA orgs and especially EA individuals/employees should want to perpetuate this norm.
On an optics level, I think you should just be up-front and confident in your valuations. If someone asks why, you can mention something like “this ML researcher makes $1m in salary because they would otherwise make $2m at DeepMind”.
Link updated, sorry about that!
Inside the tech world, there’s a norm of fairly transparent salaries, driven by levels.fyi (and Glassdoor, to a lesser extent). I think this significantly reduces pay gaps caused by e.g. differential negotiating inclinations, and a similar gathering place for public EA salary data is one of my pet project proposals.
Manifold Markets takes the somewhat unusual step of just making all of our salaries public: https://manifoldmarkets.notion.site/Manifold-Finances-0f9a14a16afe4375b67e21471ce456b0
Manifold Markets ran a prediction tournament to see whether forecasters would be able to predict the winners! For each Cause Exploration Prize entry, we had a market on “Will this entry win first or second place?”. Check out the tournament rules and view all predictions here.
I think the markets did okay overall—they had the first-place entry (“Organophosphate pesticides and other neurotoxicants”) as the highest % to win, and one of the other winners ranked 4th (“Violence against women and girls”). However, they missed the two dark horse winners (“Sickle cell disease” and “Shareholder activism”); catching those would have been one hypothetical way for markets to outperform karma. Specifically, none of the Manifold forecasters placed a positive YES bet on either of the dark horse candidates.
I’m not sure the markets were much better predictors than just EA Forum karma—and it’s possible that most of the signal from the markets was just forecasters incorporating EA Forum karma into their predictions. The top 10 entries by karma also included 2 of the 1st/2nd place winners.
And if you include honorable mentions in the analysis, EA Forum karma actually did somewhat better: Manifold Markets had 7/10 “winners” (first/second/honorable), while EA Forum karma had 9/10.
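For concreteness, here’s a minimal Python sketch of the hit-rate calculation behind those numbers. Everything labeled “Hypothetical” below is a placeholder (only a few entries are named in this post); the real rankings live on the tournament page.

```python
# Minimal sketch: score each top-10 list against the set of actual "winners"
# (first/second place + honorable mentions). Placeholder data, not the real
# tournament results; with the real lists these come out to 7/10 (Manifold)
# and 9/10 (karma), per the numbers above.

def hit_rate(top10, winners):
    """Fraction of a top-10 list that won first/second place or an honorable mention."""
    return sum(entry in winners for entry in top10) / len(top10)

winners = {
    "Organophosphate pesticides and other neurotoxicants",  # 1st place
    "Violence against women and girls",
    "Sickle cell disease",       # dark horse winner
    "Shareholder activism",      # dark horse winner
    "Hypothetical honorable mention A",
    "Hypothetical honorable mention B",
}

# Top 10 by final market probability (placeholder ordering past the named entries).
manifold_top10 = [
    "Organophosphate pesticides and other neurotoxicants",
    "Hypothetical entry A",
    "Hypothetical entry B",
    "Violence against women and girls",  # ranked 4th by the markets
] + [f"Hypothetical entry {i}" for i in range(6)]

# Top 10 by EA Forum karma (placeholder ordering past the named entries).
karma_top10 = [
    "Violence against women and girls",
    "Organophosphate pesticides and other neurotoxicants",
] + [f"Hypothetical entry {i}" for i in range(8)]

print(f"Manifold hit rate: {hit_rate(manifold_top10, winners):.0%}")
print(f"Karma hit rate:    {hit_rate(karma_top10, winners):.0%}")
```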
Thanks again to the team at OpenPhil (especially Chris and Aaron) for hosting these prizes and thereby sponsoring so many great essays! Would love to see that writeup about learnings; I’m especially curious what the decision process was that led to these winners and honorable mentions.
I think anime/gaming expos and conventions might actually be a good example—at those events, the density of high-quality people is less important than just being open for anyone who’s interested to come. Like, organizers will try to line up speakers and guests who are established/legit, but 98% of the people visiting are just fans of anime who want to talk to other fans.
Notably, cons aren’t where industry experts converge to do productive work on creating things, or to do 1:1s; but they sure do take advantage of cons and expos to market their new work to audiences. By analogy, a much larger EA Expo would have the advantage of promoting the newest ideas to a wider subset of the movement.
Plus, you get really cool emergent dynamics when the audience size is 10x’d. For example, if there are 1-2 people in 1,000 who enjoy creating EA art, at 10,000 people you can have 10-20 of them get together, meet up, and talk to each other.
Haha thanks for the shoutout, Nathan! Our writeup and tournament announcement is now up at https://forum.effectivealtruism.org/posts/ktZCeDaMZgr9dCjsX/prediction-tournament-who-will-win-the-cause-exploration
The Manifold Markets team participated in the program Joel ran; it was trajectory-changing. It felt more like Y Combinator than Y Combinator itself. We met a bunch of other teams working on things adjacent to ours, collaborated on ideas and code, and formed actual friendships—the kind I still keep up with, more than half a year later. Joel was awesome; I would highly encourage anyone thinking of running fellowships to heed his advice.
I was inspired afterwards to run a mini (2-week) program for our team + community in Mexico City. Beyond the points mentioned above, I would throw in:
Think very carefully about who comes; peer effects are the most important aspect of a fellowship program. Consider reaching out to people who you think would be a good fit, instead of just waiting for people to apply.
The best conversations happen during downtime, e.g. the 30-minute bus ride between the office and the hotel, or late at night after a kickback is officially over.
Casual repeated interactions lead to friendships; plan your events and spaces so that people run into people again and again.
Start off as a dictator when e.g. picking places to get dinner, rather than polling everyone and trying to reach consensus. In the beginning, people just need a single Schelling point; as they get to know each other better, they’ll naturally start forming their own plans.
Perhaps obvious, but maintain a shared group chat; have at least one for official announcements, and a lounge for more casual chatting. Slack or Discord are good for this.