I should also add the context that Open Phil will not fund us for political reasons. We have no big funders—it’s all down to people like you!
Holly Elmore ⏸️ 🔸
Fixed it—thank you!
Oh sorry, “exempt employee” is a legal term, referring to being exempt from limits on hours, overtime, mandatory lunch breaks, etc. What I meant was I’m not an hourly employee.
https://www.indeed.com/hire/c/info/exempt-vs-non-exempt-employee
The pledge, for me, is not just about donating the money but about the spiritual hygiene of parting with the money and affirming my priorities, so it’s very important to me to actually give money I was in possession of. It could work for hours, but I’d need to have that same knowledge of making the sacrifice as it was happening. I’m not saying this is the correct or necessary way to view the pledge, and I approve of other people using the pledge in the way that best helps them stay in line with their altruistic values.
It is actual salary. Since I’m an exempt, salaried employee, it’s not clear that I could claim pro bono hours, and unless that was very clearly written into my hire letter I feel that doing things that way wouldn’t be enough in line with the spirit of the pledge. It’s possible we could get the tax benefits and deal with my qualms in the future.
I didn’t receive salary I was owed before the org was officially formed (waiting for the appropriate structures to pay myself with a W2), all of which is still an account payable to me, and I’ve foregone additional salary when the org couldn’t afford it, which is owed to me as backpay. In order to donate any of the money that’s owed to me, we have to process it through payroll and pay payroll tax on it.
At this point, I have many years of 10% donations in backpay. Some of it I’m reserving the right to still claim one day. But I’m processing some as a donation for my year-end giving (when I do the bulk of my giving) this year.
Well, if someone has a great suggestion, that’s the objection it has to overcome.
No offense to forecasting, which is good and worthwhile, but I think trying to come up with a bet in this case is a guaranteed time suck that will muddy the waters instead of clarifying them. Unfortunately, there are very few crisp, falsifiable hypotheses that both get at the cruxes of whether it’s better to donate to PauseAI or to animal welfare (given that this isn’t already clear to Vasco) and would make good bets.
https://x.com/ilex_ulmus/status/1776724461636735244
Hmm, I wonder what we would bet on. There’s no official timeline or p(doom) of PauseAI, and our community is all over the map on that. Our case for you donating to pausing AI is not about exactly how imminently doom is upon us, but about how much a grassroots movement would help in concentrating public sentiment and swaying crucial decisions by demanding safety and accountability.
My personal views on AI Doom (https://forum.effectivealtruism.org/posts/LcJ7zoQWv3zDDYFmD/cutting-ai-safety-down-to-size) are not as doomy as Greg’s. I just still think this is the most important issue in the world even at a lower chance of extinction or with longer timelines, and that the crucial time to act is as soon as possible. I don’t think the timeline prediction is really the crux.
Doesn’t he abstain from voting on at least SFF grants himself because of this? I’ve heard that, but you’d know better.
The minimal PauseAI US is me making enough to live in the Bay Area. As long as I’m keeping the organization alive, much of the startup work will not be lost. Our 501(c)(3) and 501(c)(4) status would come through in the next year, I’d continue to use the systems set up by me and Lee, and I’d be able to keep a low amount of programming going while fundraising. I intend to keep PauseAI US alive unless I absolutely can’t afford it or I become convinced I would never be able to effectively carry out our interventions.
It’s well-known to be true that Tallinn is an investor in AGI companies, and this conflict of interest is why Tallinn appoints others to make the actual grant decisions. But those others may be more biased in favor of industry than they realize (as I happen to believe most of the traditional AI Safety community is).
I struggled with how to handle the 10% pledge when I first started seeking donations. I did find it a little hinky to donate to my own org, but also kind of wrong to ask people for donations that end up funding other stuff, even though it’s 100% the employee’s business what they do with the salary they receive and that doesn’t change just because they do charitable work, etc.
But circumstances have made that decision for me as I’ve ended up donating a considerable amount of my salary to the org to get it through the early stages. Let’s just say I’m well ahead on my pledge!
Holly, you’re a 10% Pledger, does that mean that some of the money we give you ends up with different charities?
-
I know bigger orgs like recurring donations for a few reasons (people likely give more total money this way, and the fundraising case is often based on need, so it’s good not to be holding all your future money at once), but I think we are currently too small to prefer uncertain future money over a lump sum. Also, because we are fiscally sponsored until we get our own 501(c)(3) status, setting up systems for recurring donations is a bit hairy and we’ll just have to redo them in a few months. So, perhaps in the future we will prefer recurring; for now, lump sum is great. If it’s easier for you to do recurring, we could set up a recurring Zelle transfer now.
-
Manifund is also our fiscal sponsor, so we would owe them 5% of our income anyway. In our case, it makes no difference financially, and the platform is more convenient.
-
I was going on my memory of that post and I don’t have the spoons to go through it again, so I’ll take your word for it.
I am not making a case for who should be referred to as “the AI Safety community”, and if you draw a larger circle then you get a lot more people with lower p(doom)s. You still get mostly x-risk people as opposed to other risks and people thinking that it’s only x-risk that justifies intervening, implicitly if not explicitly.
> To address another claim: “The Dial of Progress” by Zvi, a core LessWrong contributor, makes the case that technology is not always good (similar to the “Technology Bucket Error” post) and the comments overwhelmingly seem to agree.

This post is a great example of my point. If you pay attention to the end, Zvi says that the dial thinking is mostly right and he is morally sympathetic to it, just that AGI is an exception. I wanted it to mean what you thought it meant :(
Since we passed the speculation round, we will receive feedback on the application, but haven’t yet. I will share what I can here when I get it.
What does a Trump 2 admin mean for PauseAI US?
We are an avowedly bipartisan org and we stan the democratic process. Our messaging is strong because of its simplicity and appeal to what the people actually think and feel. But our next actions remain the same no matter who is in office: protest to share our message and lobby for the PauseAI proposal. We will revise our lobbying strategy based on who has what weight, as we would with any change of the guard, and the education side of our work will likely be dominated by different topics and misconceptions than before.
The likely emphasis on defense and natsec and China competition seems to make Pause lobbying harder
This is why it’s all the more important that we be there.
The EA instinct is to do things that are high leverage and to quickly give up causes that are hard or involve tugging the rope against an opponent to find something easier (higher leverage). There is no substitute for doing the hard work of grassroots growth and lobbying here. There will be a fight for hearts and minds, conflicts between moneyed industry interests and the population at large, and shortcuts in that kind of work are called “astroturfing”. Messaging getting harder is not a reason to leave—it’s a crucial reason to stay.
If grassroots protesting and lobbying were impossible, we would do something else. But this is just what politics looks like, and AI Safety needs to be represented in politics.
Yes, very much so. PauseAI US is a coalition of people who want to pause frontier AI training, for whatever reason they may have. This is the great strength of the Pause position— it’s simply the sensible next step when you don’t know what you’re doing playing with a powerful unknown, regardless of what your most salient feared outcome is. The problem is just how much could go wrong with AI (that we can and can’t predict), not only one particular set of risks, and Pause is one of the only general solutions.
Our community includes x-risk motivated people, artists who care about abuse of copyright and losing their jobs, SAG-AFTRA members whose primary issue is digital identity protection and digital provenance, diplomats whose chief concern is equality across the Global North and Global South, climate activists, anti-deepfake activists, and people who don’t want an AI Singularity to take away all meaningful human agency. My primary fear is x-risk, ditto most of the leadership across the PauseAIs, but I’m also very concerned about digital sentience and think that Pause is the only safe next step for their own good. Pause comfortably accommodates the gamut of AI risks.
And the Pause position accommodates this huge set of concerns without conflict. The silly feud between AI ethics and AI x-risk doesn’t make sense through the lens of Pause: both issues would be helped by not making even more powerful models before we know what we’re doing, so they aren’t competing. Similarly, with Pause, there’s no need to choose between near-term and long-term focus.
Individual donors can make a big difference to PauseAI US as well, if you are so inclined (more here: https://forum.effectivealtruism.org/posts/YWyntpDpZx6HoaXGT/please-vote-for-pauseai-us-in-the-donation-election)
We’re the highest-voted AI risk contender in the donation election, so vote for us while there’s still time!