For what it’s worth, I upvoted and disagree-voted: I think you’re wrong, but you clearly put thought and effort into your writing and produced the sort of content I think we should generally have more of. That said, I’m locally annoyed that “don’t do either” is a much easier comment to write than “here’s the analysis you asked for”, leaving the only serious comments on the post being people stating your view.
This is a good discussion, but I think that you’re missing the strongest argument, in this context, against donation-splitting: we’re not in a one-shot game. Within the context of a one-shot, it makes much more sense to donation-split. But there’s a charity that I donate to in part because I am convinced that they are very financially constrained (iteration impact). Furthermore, donors can respond to each other within a year. If most EAs give around December/January, OpenPhil and GWWC can distribute in February, after hearing from relevant orgs how much funding they received. If a charity is credible in saying that they’re under their expected funding, again, people can donate in response. So in practice I don’t expect donation splitting to have that positive an effect on charity financing uncertainty, particularly compared to something like multi-year commitments.
Nonlinear, thank you. Edited.
Looking at this comment after Nonlinear, I think it holds up. There exists a point at which an org loses the (moral, not legal) right to see questions / a writeup in advance, and Nonlinear was past it. Legal threats, contacting the people you spoke with, and contacting your employer are classic examples of this. I am also sympathetic to journalists covering industries that are known to react strongly, such as oil and tobacco. But the items in the list you provide do not come close to the bar of the org being untrustworthy, and that is the bar I think must be cleared.
I think that distinguishing between 1-8 hours (preferably paid), up to 40 hours, and 1-6 months, is very important here. I am happiest about the shortest ones, particularly for people who have to leave a job (part of why I think that OP is talking about the latter sort).
Correct: I’m vaguely aware of Kat Woods posting on FB, but haven’t investigated Nonlinear in any depth before. Having an explicit definition of “what information I’m working with” seemed useful.
Yes, Nonlinear is smaller than expected.
I outlined a bad org with problems, even after adjusting for a hostile reporter and a vengeful ex-employee. I think that the evidence is somewhat weaker than what I expected (not counting that I trust you personally), and the allegations are stronger/worse. Overall, it was a negative update about Nonlinear.
I think part of the disconnect, from my perspective, is that I have experience with small scrappy conventions that deliver good talks, an enjoyable time, and a large central room where people can mingle. The scrappier science-fiction conventions seem to charge in the range of $60-$120, usually on the lower side, and, while relying very heavily on volunteer labor and physical assets, about break even. The fancier ones might charge $250/person/weekend. That’s not the true price, since it excludes what dealers pay for access, advertising, etc. But my sense of con budgets is that the attendee fee is at least half of the true price.
Obviously a large chunk of that is the $240 on food that you’re spending and they’re not. Another chunk of the difference is location: said cons tend to be out in the boonies of their relevant cities, passing travel costs or increased hotel prices along to attendees.
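To make that bound explicit (my arithmetic, backed out from the figures above rather than from any con’s actual books): if attendees pay $250 and that fee covers at least half of the true cost, then

\[
\text{true cost} \;\le\; \frac{\$250}{0.5} \;=\; \$500 \text{ per person per weekend.}
\]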
The context that non-profit conventions tend to be $400+ is helpful: thank you. I really appreciate the transparency.
I don’t think that this is a good state of affairs. I think that the points I raise range from “this should be completely unacceptable” (4, 6) to “if this is the worst credible information that can be shared, the org is probably doing very well” (3, 5). This is not a description of an org that I would support! But if a friend told me they were doing good work there and they felt the problems were blown out of proportion or context by a hostile critic and a vengeful ex-employee with an axe to grind, I would take them seriously and not say “you have to leave immediately. I can cover two months’ salary for you while you find another job, but I believe that strongly that you should not work here.”
As always, context is important: “the head of the org is a serial harasser with no effective checks” and “we fired someone when their subordinate came forward with a sexual harassment allegation that, after a one-week investigation, we validated and found credible: the victim is happily employed by us today” are very different states of affairs. If someone is sharing the worst credible information, then “we were slow to update on X” and “they knew X was false from the report by A, but didn’t change their marketing materials for another six months” can be hard to distinguish.
Running an org is complicated and hard, and I think many people underestimate how much negative spin a third party with access to full information can add. I am deliberately not modelling “Ben Pace, whom I have known for almost a decade”, and am instead modelling “a hostile journalist looking for clicks”, which I think is the appropriate frame of reference.
Worst credible information about a charity that I would expect based on the following description (pulled from Google’s generative AI summary, which may or may not be accurate, but which seemed like the best balance between speed and engaging with some information):
Nonlinear is an organization that funds and researches AI safety interventions. They also offer an incubation program that provides seed funding and mentorship. The Nonlinear Library is a podcast that uses text-to-speech software to convert the best writing from the rationalist and EA communities into audio.
The Nonlinear Fund is an organization that aims to research, fund, and seed AI safety interventions. Their incubation program provides seed funding and mentorship. The seed funding is for a year’s salary, but you can also use it for other things, such as hiring other people.
The Nonlinear Library is a podcast that uses text-to-speech software to convert the best writing from the rationalist and EA communities into audio. You can listen to the podcast on Apple Podcasts and Spotify.
I am not describing a charity with ideal management practices, but envisioning one with 25 employees, active for 5 years, with poor but not shockingly or offensively bad governance by the standards of EA orgs. Someplace where I wouldn’t be worried if a friend worked, but where I would sympathetically listen to their complaints, and which I would consider not the best use of my marginal dollar.
1. Credible accusations of sexual harassment by at least one current or former employee.
2. One or more incidents of notable financial mismanagement.
3. Promised use of donor funds that did not materialize into a finished project (less than 10% of one year’s budget in scope).
4. Credible evidence of evading employment or tax law in some way that, when framed by a hostile observer, looks “pretty bad”: I do not expect sweatshops, but encouraging employees to violate the terms of visas, or preferentially hiring donors, in a way that can be made to sound scary.
5. Multiple stories of funding going to friends and personal contacts rather than “objectively better” candidates who did not have personal contacts.
6. Credible evidence that a moderately important claim they fundraised on continued to be propagated after it stopped being true or the evidence for it turned out to be much weaker than previously thought.
Maybe I am excessively cynical about what bad things happen at small charities, but this feels like a reasonable list to me. There may be other events of similar badness.
From what I can tell, Harris has impressively low name recognition and is fairly unpopular with voters. That doesn’t mean that party elites won’t object to an outside group sponsoring a candidate who doesn’t have their blessing.
A few points.
There were expensive, time-consuming efforts to get political allies, elect friendly candidates, etc. Then FTX collapsed. That infrastructure would need to be rebuilt first.
Presidential candidates are Big Deals. You get candidates who are single-issue on climate, or maybe on trade in years when that is particularly salient. There might be a Republican challenger this year who’s notably pro-choice relative to current Republican policy positions.
On the Democratic side, challenging Biden is a way to make yourself Very Unpopular with party elites. Challenging Harris, if she is his chosen successor, would be That But Worse. It might be worth it, but there are serious costs.
On the Republican side, you need a candidate who can compete with DeSantis and Trump. A single issue that most people don’t care about won’t cut it. Generalized anti-tech sentiment, maybe?
With the same resources, it’s probably easier and more effective to try to persuade candidates who are more successful.
You’re massively underestimating your ROI, probably by an order of magnitude. $10 billion in charitable contributions per year, even with a very steep discount rate of 20%, would be an ROI of, not 18-fold, but closer to 90-fold (with a net present value of $50 billion). With a more reasonable discount rate of 10% (would have said 5%, but then the Fed happened), you’re talking about 180-fold returns.
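To spell out the arithmetic (my reconstruction; I’m assuming the original 18-fold figure counted roughly one year of contributions against the cost): a stream of $C$ per year treated as a perpetuity and discounted at rate $r$ is worth

\[
\mathrm{NPV} \;=\; \frac{C}{r} \;=\; \frac{\$10\text{B/yr}}{0.20} \;=\; \$50\text{B},
\]

so the fold figure scales by $1/r$: $18/0.20 = 90$-fold at a 20% discount rate, and $18/0.10 = 180$-fold at 10%.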
Of course, this falls apart under sufficiently short timelines.
I don’t think that any of those justify not sending either your questions or a writeup of the post to the org in advance. They have a public email address. It’s at the bottom of their home page. I don’t think it’s a particularly excessive burden to send a copy once you’re done and give them a week. Perhaps two if they apologize and ask for a bit more time. I understand why people might be suspicious at the moment, but forcing people to scramble while on vacation is not a good norm. As you say, this post clearly wasn’t that time-sensitive. I don’t think that the Forum should have taken your post down, but that’s a much higher bar.
For comparison, when I posted a piece that was somewhat critical of CEA’s admissions and transparency policies, it was after I had asked in a more private Slack channel and gotten an answer I was not satisfied with. You can see that they clarified that they did inform people, and that others chimed in to thank me for informing them with the post.
I am not speaking for the DoD, the US government, or any of my employers.
I think that your case against technological inevitability is premised on the willingness of states to regulate key technologies, sometimes mediated by public pressure. All of the examples listed were blocked for decades by regulation, sometimes supplemented with public fear, soft regulation, etc. That works so long as governments don’t consider advancements in the field a core national interest. The US and China do, and often in an explicitly securitized form.
Quoting CNAS:
China’s leadership – including President Xi Jinping – believes that being at the forefront in AI technology is critical to the future of global military and economic power competition.
English-language coverage of the US tends to avoid such sweeping statements, because readers have more local context, because political disagreement is more public, and because readers expect it.
But the DoD, in the most recent National Defense Strategy, identified AI as a secondary priority. Trump and Biden both identified it as an area in which to maintain and advance national leadership. And, of course, with the US in the lead, the government doesn’t need to do as much in the way of directing people, since the existing system is delivering adequate results.
Convincing the two global superpowers not to develop a militarily useful technology while tensions are rising would, if accomplished, be a first in history.
That’s not to say that we can’t slow it down. But AI very much is inevitable if it is useful, and it seems like it will be very useful.
Someone might be out about being bi at an after-party with friends, but not want to see that detail being confirmed by a fact-checker for a national paper. This doesn’t seem particularly unusual.
This isn’t the only thing that could go wrong, but it’s a straightforward example. Perhaps they don’t want their full name blatantly linked to their online account. There are lots of reasons that people might want privacy. Unless your life is at risk, I would not assume that you have privacy from a journalist who isn’t a personal friend unless they have an explicit commitment. I trust journalists who are also community members to not take harmful advantage of access.
Something that is not always obvious to people unused to dealing with journalists: “off the record” sometimes means “I can’t officially tell you this, so please find another source who can corroborate it”. It’s not remotely the same as the expectation of privacy and good sense that one would have with a friend.
Without getting too far into the specifics, I think that this is a good attitude to have across a wide range of policy concerns, and that similar issues apply to other policy areas EAs are interested in.
Bay Area 2023. Will edit.
Some post-EAG thoughts on journalists
For context, CEA accepted at EAG Bay Area 2023 a journalist who has at times written critically of EA and individual EAs, and who is very much not a community member. I am deliberately not naming the journalist, because they haven’t done anything wrong and I’m still trying to work out my own thoughts.
On one hand, “journalists who write nice things get to go to the events, journalists who write mean things get excluded” is at best ethically problematic. It’s also very, very normal: political campaigns do it, industry events do it, individuals do it. “Access journalism” is the norm more than the exception. But that doesn’t mean we should do it. One solution is to be very careful that the line we maintain is “community member or not”, not “critical or not”. Dylan Matthews is straightforwardly an EA and has reported critically on a past EAG: if he were excluded for this, I would be deeply concerned.
On the other hand, I think that, when hosting an EA event, an EA organization has certain obligations to the people at that event. One of them is protecting their safety and privacy. EAs who are journalists can, I think, generally be relied upon to be fair and to respect the privacy of individuals. That is not a trust I extend to journalists who are not community members: the linked example is particularly egregious, but tabloid reporting happens.
EAG is a gathering of community members. People go to advance their goals: see friends, network, be networked at, give advice, get advice, learn interesting things, and more. In a healthy movement, I think that EAGs should be a professional obligation, good for the individual, or fun for the individual. It doesn’t have to be all of them, but it shouldn’t harm them on any axis.
Someone might be out about being bi at an after-party with friends, but not want to see that detail being confirmed by a fact-checker for a national paper. This doesn’t seem particularly unusual. They would be right to trust community members, but might not realize that there could be journalists at the after-party. Non-community journalists will not necessarily share norms about privacy or have particularly strong incentives to follow any norms that do exist.
On the gripping hand, it feels more than a little hypocritical to complain about the low quality of criticism of EA and also complain when a journalist wants to attend an EA event to get to know the movement better.
One thing I’m confident of is that I wish that this had been more clearly disclosed. “This year we are excited to welcome X, who will be providing a critical view on EA” is good enough to at least warn people that someone whose bio says that they are interested in
how the wealthiest people in society spend their money or live their lives
(emphasis mine)
is attending.
I’m still trying to sort out the rest of my views here. Happy to take feedback. It’s very possible that I’m missing some information about this.
P.S.
I have been told by someone at CEA that all attending journalists have agreed that everything at EAG is off the record by default. I don’t consider this to be an adequate mitigating factor for accepting non-community journalists and not mentioning this to attendees or speakers.
P.P.S.
And no, I’m not using a pseudonym for this. I think that that is a bad and damaging trend on the Forum, and I don’t, actually, believe that anyone at CEA will retaliate against me for posting this.
That seems almost aggressively misleading. “Some of this category of debt may have been held by these descendants, therefore it should have been invalidated”, as you seem to be implying, proves far too much.
One point that occurs to me is that firms run by senior employees are reasonably common in white-collar professions: certainly not all of them, but many doctors function under this system, it’s practically normative for lawyers, it operates in theory for university professors, and, I believe, it holds to a lesser extent for accountants and financiers. There is likely to be a managing partner, but that person serves with the consent of the senior partners.
A democracy to which new members must be voted in, in which they are socialized for a number of years and buy in their own stake, seems to have substantial advantages over one where everyone gets a vote the moment they join. I also suspect that the participants not understanding that they’re engaged in a political experiment is helpful for reducing certain types of distractions.
With that in mind, expanding coops among the white-collar elite seems relatively practical, and elite persuasion is always a powerful tool.