The pledge, for me, is not just about donating the money but about the spiritual hygiene of parting with the money and affirming my priorities, so it’s very important to me to actually give money I was in possession of. It could work for donated hours, but I’d need to have that same awareness of making the sacrifice as it was happening. I’m not saying this is the correct or necessary way to view the pledge, and I approve of other people using the pledge in whatever way best helps them stay in line with their altruistic values.
It is actual salary. Since I’m an exempt, salaried employee, it’s not clear that I could claim pro bono hours, and unless that were very clearly written into my hire letter, I feel that doing things that way wouldn’t be sufficiently in line with the spirit of the pledge. It’s possible we could get the tax benefits and deal with my qualms in the future.
I didn’t receive the salary I was owed before the org was officially formed (I was waiting for the appropriate structures to pay myself with a W-2), all of which is still an account payable to me, and I’ve forgone additional salary when the org couldn’t afford it, which is owed to me as backpay. In order to donate any of the money that’s owed to me, we have to process it through payroll and pay payroll tax on it.
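To give a feel for what that payroll step costs, here’s a rough sketch. The 7.65% FICA rates for the employee and employer shares are my assumed standard US rates, not figures from our books, and income tax withholding and wage-base caps are ignored:

```python
# Rough sketch: why donating backpay still costs payroll tax.
# ASSUMPTION: 7.65% FICA (Social Security + Medicare) for both the
# employee and employer shares; income tax withholding and wage-base
# caps are ignored for simplicity.

def backpay_donation_cost(backpay: float, fica_rate: float = 0.0765) -> dict:
    """Estimate the cost of running backpay through payroll to donate it."""
    employee_fica = backpay * fica_rate   # withheld from my check
    employer_fica = backpay * fica_rate   # paid on top by the org
    return {
        "gross_backpay": backpay,
        "org_pays_total": backpay + employer_fica,
        "donatable_after_fica": backpay - employee_fica,
    }

# Donating $10,000 of backpay costs the org $10,765 and leaves $9,235
# to actually give (before any income tax effects).
print(backpay_donation_cost(10_000))
```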
At this point, I have many years of 10% donations in backpay. Some of it I’m reserving the right to still claim one day. But I’m processing some as a donation for my year-end giving (when I do the bulk of my giving) this year.
Well, if someone has a great suggestion, that’s the objection it has to overcome.
No offense to forecasting, which is good and worthwhile, but I think trying to come up with a bet in this case is a guaranteed time suck that will muddy the waters instead of clarifying them. Unfortunately, there are very few crisp, falsifiable hypotheses that would make good bets while also getting at the cruxes of whether it’s better to donate to PauseAI or to animal welfare, given that that isn’t already clear to Vasco.
https://x.com/ilex_ulmus/status/1776724461636735244
Hmm, I wonder what we would bet on. PauseAI has no official timeline or p(doom), and our community is all over the map on that. Our case for you donating to pausing AI is not about exactly how imminently doom is upon us, but about how much a grassroots movement would help in concentrating public sentiment and swaying crucial decisions by demanding safety and accountability.
My personal views on AI Doom (https://forum.effectivealtruism.org/posts/LcJ7zoQWv3zDDYFmD/cutting-ai-safety-down-to-size) are not as doomy as Greg’s. I just still think this is the most important issue in the world, even at a lower chance of extinction or with longer timelines, and that the crucial time to act is as soon as possible. I don’t think the timeline prediction is really the crux.
Doesn’t he abstain from voting on at least SFF grants himself because of this? I’ve heard that, but you’d know better.
The minimal PauseAI US is me making enough to live in the Bay Area. As long as I’m keeping the organization alive, much of the startup work will not be lost. Our 501(c)(3) and c4 statuses would come through in the next year, I’d continue to use the systems set up by me and Lee, and I’d be able to keep a low level of programming going while fundraising. I intend to keep PauseAI US alive unless I absolutely can’t afford it or I become convinced I would never be able to effectively carry out our interventions.
It’s well known that Tallinn is an investor in AGI companies, and this conflict of interest is why he appoints others to make the actual grant decisions. But those others may be more biased in favor of industry than they realize (as I happen to believe most of the traditional AI Safety community is).
I struggled with how to handle the 10% pledge when I first started seeking donations. I did find it a little hinky to donate to my own org, but also kind of wrong to ask people for donations that end up funding other stuff, even though it’s 100% the employee’s business what they do with the salary they receive, and that doesn’t change just because they do charitable work, etc.
But circumstances have made that decision for me as I’ve ended up donating a considerable amount of my salary to the org to get it through the early stages. Let’s just say I’m well ahead on my pledge!
Holly, you’re a 10% Pledger, does that mean that some of the money we give you ends up with different charities?
-
I know bigger orgs like recurring donations for a few reasons (people likely give more total money this way, and the fundraising case is often based on need, so it’s good not to be holding all your future funding at once), but I think we are currently too small to prefer uncertain future money over a lump sum. Also, because we are fiscally sponsored until we get our own 501(c)(3) status, setting up systems for recurring donations is a bit hairy, and we’ll just have to redo them in a few months. So perhaps in the future we will prefer recurring; for now, a lump sum is great. If it’s easier for you to do recurring, we could set up a recurring Zelle transfer now.
-
Manifund is also our fiscal sponsor, so we would owe them 5% of our income anyway. In our case, it makes no difference financially, and the platform is more convenient.
-
I was going on my memory of that post and I don’t have the spoons to go through it again, so I’ll take your word for it.
I am not making a case for who should be referred to as “the AI Safety community”, and if you draw a larger circle, you get a lot more people with lower p(doom)s. You still get mostly people focused on x-risk as opposed to other risks, and people who think, implicitly if not explicitly, that only x-risk justifies intervening.
> To address another claim: “The Dial of Progress” by Zvi, a core LessWrong contributor, makes the case that technology is not always good (similar to the “Technology Bucket Error” post) and the comments overwhelmingly seem to agree.

This post is a great example of my point. If you pay attention to the end, Zvi says that the dial thinking is mostly right and is morally sympathetic to it, just that AGI is an exception. I wanted it to mean what you thought it meant :(
Since we passed the speculation round, we will receive feedback on the application, but haven’t yet. I will share what I can here when I get it.
What does a Trump 2 admin mean for PauseAI US?
We are an avowedly bipartisan org and we stan the democratic process. Our messaging is strong because of its simplicity and its appeal to what the people actually think and feel. But our next actions remain the same no matter who is in office: protest to share our message and lobby for the PauseAI proposal. We will revise our lobbying strategy based on who has what weight, as we would with any change of the guard, and different topics and misconceptions than before will likely dominate the education side of our work.
The likely emphasis on defense and natsec and China competition seems to make Pause lobbying harder
This is why it’s all the more important that we be there.
The EA instinct is to do things that are high leverage and to quickly give up causes that are hard or that involve tugging the rope against an opponent, in favor of something easier (“higher leverage”). There is no substitute for doing the hard work of grassroots growth and lobbying here. There will be a fight for hearts and minds, with conflicts between moneyed industry interests and the population at large, and shortcuts in that kind of work are called “astroturfing”. Messaging getting harder is not a reason to leave; it’s a crucial reason to stay.
If grassroots protesting and lobbying were impossible, we would do something else. But this is just what politics looks like, and AI Safety needs to be represented in politics.
Yes, very much so. PauseAI US is a coalition of people who want to pause frontier AI training, for whatever reason they may have. This is the great strength of the Pause position: it’s simply the sensible next step when you’re playing with a powerful unknown and don’t know what you’re doing, regardless of which feared outcome is most salient to you. The problem is just how much could go wrong with AI (that we can and can’t predict), not only one particular set of risks, and Pause is one of the only general solutions.
Our community includes x-risk motivated people, artists who care about abuse of copyright and losing their jobs, SAG-AFTRA members whose primary issue is digital identity protection and digital provenance, diplomats whose chief concern is equality across the Global North and Global South, climate activists, anti-deepfake activists, and people who don’t want an AI Singularity to take away all meaningful human agency. My primary fear is x-risk, ditto most of the leadership across the PauseAIs, but I’m also very concerned about digital sentience and think that Pause is the only safe next step for their own good. Pause comfortably accommodates the gamut of AI risks.
And the Pause position accommodates this huge set of concerns without conflict. The silly feud between AI ethics and AI x-risk doesn’t make sense through the lens of Pause: both issues would be helped by not making even more powerful models before we know what we’re doing, so they aren’t competing. Similarly, with Pause, there’s no need to choose between near-term and long-term focus.
We are fiscally sponsored by Manifund and just waiting for the IRS to process our 501(c)(3) application (which could still take several more months). So, for the donor it’s all the same—we have 501(c)(3) status via Manifund, and in exchange we give 5% of our income to them. Sometimes these arrangements are meant to be indefinite, and the fiscal sponsor does a lot of administration and handles the taxes and bookkeeping. PauseAI US has its own bookkeeper and tax preparer and we will end the fiscal sponsor relationship as soon as the IRS grants us our own 501(c)(3) status.
Additionally, we’ve applied for 501(c)(4) status for PauseAI US Action Fund, which will likely take even longer. Because Manifund (and PauseAI US, in our c3 application) have made the 501(h) election, we are able to do lobbying as a c3 as long as it doesn’t exceed ~20% of our expenditures (the actual formula is more complicated), so we probably will not need the c4 for the lobbying money for a while, but the structure is being set up now so we can raise unrestricted lobbying money.
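For the curious, here’s a rough sketch of that “more complicated” formula. The brackets follow my understanding of the IRS 501(h) sliding scale for the lobbying nontaxable amount; treat it as illustrative, not tax advice:

```python
# Sketch of the 501(h) expenditure test. The sliding-scale brackets
# below are my understanding of the IRS formula for the "lobbying
# nontaxable amount"; illustrative only, not tax advice.

def lobbying_limit(exempt_purpose_expenditures: float) -> float:
    """Max total lobbying spend for a 501(h)-electing charity."""
    e = exempt_purpose_expenditures
    limit = 0.20 * min(e, 500_000)                         # 20% of first $500k
    limit += 0.15 * max(min(e, 1_000_000) - 500_000, 0)    # 15% of next $500k
    limit += 0.10 * max(min(e, 1_500_000) - 1_000_000, 0)  # 10% of next $500k
    limit += 0.05 * max(e - 1_500_000, 0)                  # 5% of the rest
    return min(limit, 1_000_000)                           # hard cap at $1M

# Only at small budgets does the limit work out to a flat 20%:
print(lobbying_limit(300_000))    # 60000.0  (20% of $300k)
print(lobbying_limit(2_000_000))  # 250000.0 (12.5% effective)
```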
1. Our lobbying is more “outside game” than the others’ in the space. Rather than getting our lobbying authority from prestige or expense, we get it from our grassroots support. Our message is simpler and clearer, pushing harder on the Overton window. (More on the radical flank effect here.) Our messages can complement more constrained lobbying from aligned inside gamers by making their asks seem more reasonable and safe, which is why our lobbying is not redundant with those other orgs’ but synergistic.
2. Felix has experience on climate campaigns and climate canvassing and was a leader in U Chicago EA. He’s young, so he hasn’t had many years of experience at anything, but he has the relevant kinds of experience I wanted and is demonstrably excellent at educating, building bridges, and juggling a large network. He has the tact and sensitivity you want in a role like this while also being very earnest. I’m very excited to nurture his talent and have him serve as the foundation for our lobbying program going forward.
Wait, is that an explanation? Can new accounts downvote this soon?
Oh sorry, “exempt employee” is a legal term, referring to being exempt from limits on hours, overtime, mandatory lunch breaks, etc. What I meant was I’m not an hourly employee.
https://www.indeed.com/hire/c/info/exempt-vs-non-exempt-employee