Hey there~ I’m Austin, currently building https://manifund.org. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold !
For example, Substack is a bigger deal now than a few years ago, and if the Forum becomes a much worse platform for authors by comparison, losing strong writers to Substack is a risk to the Forum community.
I’ve proposed to the LW folks and I’ll propose to y’all: make it easy to import/xpost Substack posts into EA Forum! Right now a lot of my writing goes from Notion draft ⇒ our Substack ⇒ LW/EAF, and getting the formatting exactly right (esp around images, spacing, and footnotes) is a pain. I would love the ability to just drop in our Substack link and have that automatically, correctly, import the article into these places.
I’m also not sure if this is what SWP is going for, but the entire proposal reminds me of Paul Christiano’s post on humane egg offsets, which I’ve long been fond of: https://sideways-view.com/2021/03/21/robust-egg-offsetting/
With Paul’s, the egg certificate solves a problem of “I want humane eggs; I can buy a regular egg + a humane cert = a humane egg”. Maybe the same would apply for stunned shrimp, eg a supermarket might say “I want to brand my shrimp as stunned for marketing or for commitments; I can buy regular shrimp + a stun cert = stunned shrimp”
Vote power should scale with karma
This gives EA Forum and LessWrong a very useful property of markets: more influence accrues to individuals who have a good track record of posting.
Really appreciate this post! I think it’s really important to try new things, and also have the courage to notice when things are not working and stop them. As a person who habitually starts projects, I often struggle with the latter myself, haha.
(speaking of new projects, Manifund might be interested in hosting donor lotteries or something similar in the future—lmk if there’s interest in continuity there!)
Hey! Thanks for the thoughts. I’m unfortunately very busy these days (including, with preparing for Manifest 2025!) so can’t guarantee I’ll be able to address everything thoroughly, but a few quick points, written hastily and without strong conviction:
re non sequitur, I’m not sure if you’ve been on a podcast before, but one tends to just, like, say stuff that comes to mind; it’s not an all-things-considered take. I agree that Hanania denouncing his past self is a great and probably more central example of growth; I just didn’t reference it because the SWP stuff was more top of mind (interesting, unexpected).
I know approximately nothing about HBD fwiw; like I’m not even super sure what the term refers to (my guess without checking: the controversial idea that certain populations/races have higher IQs?). It’s not the case that I’ve looked a bunch into HBD and decided I’ll invite these 6 speakers because of their HBD beliefs; I outlined the specific reasons I invited them, which is that they each had an interesting topic to talk about (none of which were HBD afaik). You could accuse me of dereliction of duty wrt researching the downstream effects of inviting speakers with controversy? idk, maybe, I’m open to that criticism, it’s just there’s a lot of stuff to juggle and it feels a bit like an isolated demand on my time.
I agree that racism directly harms people, beyond being offensive, and this can be very bad. It’s not obvious to me where and how racism is happening in my local community (broadly construed, ie the spaces I spend time in IRL and online), or what specific bad things are caused by this racism. Like, I think my general view of racism is that it’s an important cause area, alongside many other important causes to work on like AI safety, animal welfare, GHD, climate change, progress, etc—but it happens to be not very neglected or tractable for me personally to address.
No updates on ACX Grants to share atm; stay tuned!
Thank you Caleb, I appreciate the endorsement!
And yeah, I was very surprised by the dearth of strong community efforts in SF. Some guesses at this:
Berkeley and Oakland have been the historical nexus for EA and rationality, with a rich-get-richer effect where people migrating to the Bay choose the East Bay
In SF, there’s much more competition for talent: people can go work at startups, AI labs, FAANG, or VC firms
And also competition for mindshare: SF’s higher population and density means there are many other communities (eg climbing, biking, improv, yimby, partying)
Some are! Check out each project in the post; some have links to source code.
(I do wish we’d gotten source code for all of them; next time we might consider an open source hackathon!)
Thanks Angelina! It was indeed fun, hope to have you join in some future version of this~
And yeah definitely great to highlight that list of projects, many juicy ideas in there for any aspiring epistemics hacker, still unexplored. (I think it might be good for @Owen Cotton-Barratt et al to just post that as a standalone article!)
I agree that the post is not well defended (partly due to brevity & assuming context); and also that some of the claims seem wrong. But I think the things that are valuable in this post are still worth learning from.
(I’m reminded of a Tyler Cowen quote I can’t find atm, something like “When I read the typical economics paper, I think “that seems right” and immediately forget about it. When I read a paper by Hanson, I think “What? No way!” and then think about it for the rest of my life”. Ben strikes me as the latter kind of writer.)
Similar to the way Big Ag farms chickens for their meat, you could view governments and corporations as farming humans for their productivity. I think this has been true throughout history, but accelerated recently by more financialization/consumerism and software/smartphones. Both are entities that care about a particular kind of output from the animals they manage, with some reasons to care about their welfare but also some reasons to operate in an extractive way. And when these entities can find a substitute (eg plant-based meat, or AI for intellectual labor), the outcomes may not be ideal for the animals.
I’m a bit disappointed, if not surprised, with the community response here. I understand veganism is something of a sacred cow (apologies) in these parts, but that’s precisely why Ben’s post deserves a careful treatment—it’s the arguments you least agree with that you should extend the most charity to. While this post didn’t cause me to reconsider my vegetarianism, historically Ben’s posts have had an outsized impact on the way I see things, and I’m grateful for his thoughts here.
Ben’s response to point 2 was especially interesting:
> If factory farming seems like a bad thing, you should do something about the version happening to you first.
And I agree about the significance of human fertility decline. I expect that this comparison, of factory farming to modern human lives, will be a useful metaphor when thinking about how to improve the structures around us.
It’s a good point about how it applies to founders specifically—under the old terms (3:1 match up to 50% of stock grant) it would imply a maximum extra cost from Anthropic of 1.5x whatever the founders currently hold. That’s a lot!
Those bottom-line figures don’t seem crazy optimistic to me, though—like, my guess is a bunch of folks at Anthropic expect AGI inside of 4 years, and Anthropic is the go-to example of “founded by EAs”. I would take an even-odds bet that the total amount donated to charity out of Anthropic equity, excluding matches, is >$400m in 4 years’ time.
Anthropic’s donation program seems to have been recently pared down? I recalled it as 3:1; see eg this comment from Feb 2023. But right now on https://www.anthropic.com/careers:
> Optional equity donation matching at a 1:1 ratio, up to 25% of your equity grant

Curious if anyone knows the rationale for this—I’m thinking through how to structure Manifund’s own compensation program to tax-efficiently encourage donations, and was looking at the Anthropic program for inspiration.
I’m also wondering whether existing Anthropic employees still get the 3:1 terms, or the program has been changed for everyone going forward. Given the rumored $60b raise, Anthropic equity donations are set to be a substantial share of EA giving, so the precise mechanics of the giving program could change funding considerations by a lot.
One (conservative imo) ballpark:
If founders + employees broadly own 30% of outstanding equity
50% of that has been assigned and vested
20% of employees will donate
20% of their equity within the next 4 years
then $60b x 0.3 x 0.5 x 0.2 x 0.2 / 4 = $90m/y. And the difference between a 1:1 and a 3:1 match is the difference between $180m/y and $360m/y of total giving.
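For concreteness, here’s a minimal sketch of that arithmetic (all the inputs are just the rough guesses above, not confirmed figures, and the variable names are mine):

```python
# Ballpark of annual giving out of Anthropic equity, using the guesses above.
valuation = 60e9           # rumored raise valuation, $60b
employee_ownership = 0.30  # founders + employees own ~30% of outstanding equity
vested_fraction = 0.50     # ~50% of that assigned and vested
donor_fraction = 0.20      # ~20% of employees donate
donated_share = 0.20       # ~20% of their equity, over the next 4 years
years = 4

donated_per_year = (valuation * employee_ownership * vested_fraction
                    * donor_fraction * donated_share / years)
print(f"Donated equity: ${donated_per_year / 1e6:.0f}m/y")      # ~$90m/y
print(f"With 1:1 match: ${2 * donated_per_year / 1e6:.0f}m/y")  # ~$180m/y
print(f"With 3:1 match: ${4 * donated_per_year / 1e6:.0f}m/y")  # ~$360m/y
```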
Thanks for the recommendation, Benjamin! We think donating to Manifund’s AI Safety regranting program is especially good if you don’t have a strong inside view on the different orgs in the space, but trust our existing regrantors and the projects they’ve funded; or if you are excited about providing pre-seed/seed funding for new initiatives or individuals, rather than later-stage funding for more established charities (as our regrantors are similar to “angel investors for AI safety”).
If you’re a large donor (eg giving >$50k/year), we’re also happy to work with you to sponsor new AI safety regrantors, or suggest to you folks who are particularly aligned with your interests or values. Reach out to me at austin@manifund.org!
This makes sense to me; I’d be excited to fund research or especially startups working to operationalize AI freedoms and rights.
FWIW, my current guess is that the proper unit to extend legal rights to is not a base LLM like “Claude Sonnet 3.5” but rather a corporation-like entity with a specific charter, context/history, economic relationships, and accounts. Its cognition could be powered by LLMs (the way eg McDonald’s cognition is powered by humans), but it is fundamentally a different entity due to its structure/scaffolding.
Thanks for cross-posting, this got Shapley values to “click” for me!
No concrete timelines at the moment, almost definitely more than a few months from now.
That’s good to know—I assume Oli was being somewhat hyperbolic here. Do you (or anyone else) have examples of right-of-center policy work that OpenPhil has funded?
I’m not aware of any projects that aim to advise what we might call “Small Major Donors”: people giving away perhaps $20k-$100k annually.
We don’t advertise very much, but my org (Manifund) does try to fill this gap:
Our main site, https://manifund.org/, allows individuals and orgs to publish charitable projects and raise funding in public, usually for projects in the range of $10k-$200k
We generally focus on: good website UX, transparency (our grants, reasoning, website code and meeting notes are all public), moving money fast (~1 week rather than months)
We are more self-serve than advisory; we mostly expect our donors to find projects they like themselves, which they can do because the grant proposals include large amounts of detail, plus they can chat directly with the project creators in our comments section
Though, we have experimented with promoting good projects via things like impact certs & quadratic funding rounds, or just posting recommendations on our blog
In the EA space, we’re particularly open to weird arrangements; beyond providing lightweight fiscal sponsorship to hundreds of individuals and experimenting with funding mechanisms, we have eg loaned money to aligned orgs and invested in for-profit enterprises
If you’re interested in donating medium-sized amounts in unusual ways, reach out to me at austin@manifund.org!
Thanks, we’ll definitely consider that option for future pieces!