I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
Jason
And one should probably give some weight to limitations imposed by the medium—a 3-minute video on a platform whose users are on average not known for having long attention spans.
Late 2021 is the date of the article, not the website: “They started the project in May of 2020 for their own use, and within a few months, created a version for the public.”
The U.S. government advice was pretty bad, but I don’t think this was from lack of knowledge. I think it was more a deliberate attempt to downplay the effectiveness of masks to mitigate supply issues.
I also wouldn’t expect the government to necessarily perform well on getting the truth out there quickly, or on responding well to low-probability / high-impact events by taking EV+ actions that cause significant disruption to the public. Government officials have to worry about the risk of stoking public panic and similar indirect effects much more than most private individuals, including rationalist thinkers. For example, @Denkenberger🔸 mentions some rationalists figuring out who they wanted to be locked down with on the early side; deciding that the situation warrants this kind of behavior—like deciding to short the stock market, or most other private-actor stuff—doesn’t require consideration of indirect effects the way government statements do. Nor are a political leader’s incentives aligned to maximize expected value in these sorts of situations.
So I’d consider beating the government to be evidence of competence, but not much evidence of particularly early or wise performance by private entities.
For balance, the established authorities’ early beliefs and practices about COVID did not age well. Some of that can be attributed to governments doing government things, like downplaying the effectiveness of masks to mitigate supply issues. But, for instance, the WHO fundamentally missed on its understanding of how COVID is transmitted . . . for many months. So we were told to wash our groceries, a distraction from things that would have made a difference. Early treatment approaches (e.g., being too quick to put people on vents) were not great either.
The linked article shows that some relevant experts had a correct understanding early on but struggled to get acceptance. “Dogmatic bias is certainly a big part of it,” one of them told Nature later on. So I don’t think the COVID story would present a good case for why EA should defer to the consensus view of experts. Perhaps it presents a good case for why EA should be very cautious about endorsing things that almost no relevant expert believes, but that is a more modest conclusion.
Do you think the EA tendency toward many smaller-to-midsize organizations plays a role in this? I’m not in the industry at all, but the “comms-focused” roles feel more fundamental in a sense than the “digital growth” roles. Stated differently, I can imagine an org having the former but not the latter, but find it hard to envision an org with only the latter. If an org only has a single FTE available for “marketing-related” work, it wouldn’t surprise me to learn that the job description for that role is often going to lean in the comms-focused direction.
Although I think Yarrow’s claim is that the LW community was not “particularly early on covid [and did not give] particularly wise advice.” I don’t think the rationality community saying things that were not at the time “obvious” undermines this conclusion as long as those things were also being said in a good number of other places at the same time.
Cummings was reading rationality material, so it had the chance to change his mind. He probably wasn’t reading (e.g.) the r/preppers subreddit, so its members could not get this kind of credit. (Another example: Kim Kardashian got Donald Trump to pardon Alice Marie Johnson and probably had some meaningful effect on his first administration’s criminal-justice reforms. This is almost certainly a reflection of her having access, not evidence that she is a first-rate criminal justice thinker or that her talking points were better than those of others supporting Johnson’s clemency bid.)
Thanks! I may be thinking about it too much from the consumer perspective of owning a condo in a 100-year-old building, where the noise of filtration is a major drawback and the costs of a broader modernization of HVAC systems would be considerable.
I haven’t polled grocery store owners, but an owner would bear all the costs of improving air quality yet may capture few of the economic benefits. Although customers would care a lot in a pandemic, they probably wouldn’t otherwise care in a way that increases profits—and managers are incentivized toward short-term results. Cynically, most of their employees may not have paid sick time, so the owner may not even realize most of the benefit from reduced employee illness. (Of course, regulators could require compliance—but that’s not an awareness problem. So maybe the candidate intervention is lobbying?)
This is one of those scenarios in which I think it’s easier to capture ~the full costs than the full benefits:
Would you assign value to the indirect protective effect on those you live with (if any), friends, and family members? Apparently the flu household attack rate can be all over the place depending on strain and other factors, but 15-20% may be reasonable guesses in general (source: AI overview on Google search, very low confidence).
This gets into some tricky situations with housemates; you’re likely to all be better off if you mutually agree to consider the indirect protective effects on housemates when making your own decisions. But that effect is likely to be significantly greater with unvaccinated housemates than vaccinated ones. If you live with three other people, the first vaccination may have significant household spillover effects; the fourth not so much.
Most people would pay to avoid the discomfort of having the flu (above and beyond the loss in productivity) or would demand payment to willingly undergo that discomfort. Maybe you could consider willingness to pay for pleasurable leisure activities, and then decide how many of those activities you’d be willing to forego to avoid enduring one average case of the flu?
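To make the household-spillover point a bit more concrete, here is a minimal sketch of the expected-value arithmetic. Only the 15-20% attack rate comes from the discussion above (and that figure is itself low-confidence); the baseline flu risk, vaccine effectiveness, and household size are made-up placeholders chosen just to show the structure of the calculation, not to be definitive.

```python
# Rough expected-value sketch of the household spillover effect of one flu shot.
# Only the 15-20% household attack rate comes from the discussion above (itself
# low-confidence); every other number is an illustrative placeholder.

p_flu_if_unvaccinated = 0.10     # assumed chance of catching the flu in a season
vaccine_effectiveness = 0.40     # assumed reduction in my own risk from the shot
household_attack_rate = 0.175    # midpoint of the 15-20% range mentioned above
other_household_members = 3      # assumed housemates who could catch it from me

# Expected flu cases avoided in myself
own_cases_avoided = p_flu_if_unvaccinated * vaccine_effectiveness

# Each avoided case also removes expected secondary cases among housemates
spillover_cases_avoided = own_cases_avoided * household_attack_rate * other_household_members

print(f"Own expected cases avoided:       {own_cases_avoided:.3f}")
print(f"Household expected cases avoided: {spillover_cases_avoided:.3f}")
```

With those made-up numbers, the household spillover adds roughly half again on top of the direct personal benefit, which is why it seems worth including somewhere on the benefits side of the ledger.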
On the costs side:
1.5 hr is a lot to get a flu vaccine by US standards; they are available on a walk-in basis at pharmacies everywhere. That’s not a critique of your analysis, of course.
Could you call ahead and ensure that where you were going to get the vaccine used Influvac or Vaxigrip? (I assume fewer places would stock Fluenz anyway, due to cost.)
For most people, the hours of their day do not have equal value or utility. I can’t—at least not on a regular basis—realistically use the 14th most valuable hour of my day for remunerative work, but I could use it to get a vaccine. In other words, there’s a limit on how many hours of higher-demand activity I can sustain. In contrast, when I get the flu, I think the loss in productivity hits the relevant time slots more evenly.
I don’t know if those adjustments would flip the end result for you—but I think accounting for them would make it a close call and would show how modest differences in the factors (e.g., personal circumstances that make getting the vaccine less time-consuming) would flip the outcome.
To clarify, does our “crazy” vote consider all possible causes of crazy, or just crazy that is caused by / significantly associated with AI?
If advocating now is a prerequisite to advocating later, advocating now is part of the cost. By opting not to pay it, you aren’t increasing the overall cost-effectiveness of the LGBT rights movement; you’re just juicing your own numbers.
I think that relies on a certain model of the effects of social advocacy. Modeling is error-prone, but I don’t think our activist in 1900 would be well served spending significant money without giving some thought to their model. More often, I think the model for getting stuff done looks like a more complicated version of: Inputs A and B are expected to produce C in the presence of a sufficient catalyst and the relative absence of inhibiting agents.
Putting more A into the system isn’t going to help produce C if the rate limit is being caused by the amount of B available, the lack of the catalyst, or the presence of inhibiting agents. Although money is a useful input that is often fungible at various rates to other necessary inputs, and sometimes can influence catalyst & inhibitor levels, sometimes it cannot (or can do so very inefficiently and/or at levels beyond the funder’s ability to meaningfully influence).
Sometimes for social change, having the older generation die off or otherwise lose power is useful. There’s not much our hypothetical activist could do to accelerate that. One might think, for instance, that a significant decline in religiosity and/or the influence of religious entities is a necessary reagent in this model. While one could in theory put money into attempting to reduce the influence of religion in 1900s public life, I think there would be good reasons not to pursue this approach. Rather, I think it could make more sense for the activist to let the broader cultural and demographic changes do some of the hard work for them.
There’s also the reality that efforts often decay if there isn’t sufficient forward momentum—that was the intended point of the Pikachu welfare example. Ash doesn’t have the money right now to found a perpetual foundation for the cause that will be able to accomplish anything meaningful. If he front-loads the money—say on some field-building, some research grants, some grants to graduate students—and the money runs out, then the organizations will fold, the research will grow increasingly out of date, and the graduate students will find new areas to work in.
You can say you only care about providing free hologram entertainment to disadvantaged children, but since holograms are very expensive today, you’ll wait until they’re much cheaper. But shouldn’t you be responsible for making them cheaper? Why are you free-riding and counting on others to do that for you, for free, to juice your philanthropic impact?
The more neutral-to-positive way to cast free-riding is employing leverage. I’m really not concerned about free-riding on for-profit companies, or even much governmental work (especially things like military R&D, which has led to various socially useful technologies).
That’s not an accounting trick in my book—there are clear redistributive effects here. If I spend my money on basic science to promote hologram technology, the significant majority of the future benefits of my work are likely going to flow to future for-profit hologram companies, future middle-class+ people in developed countries, and so on. Those aren’t the benefits I care about, and Big Hologram isn’t likely to pay it forward by mailing a bunch of holograms to disadvantaged children (in your terminology, they are going to free-ride off my past efforts).
As a society, we give corporations and similar entities certain privileges to incentivize behavior because a lot of value ends up leaking out to third parties. For example, the point of patents is “To promote the Progress of Science and useful Arts” with the understanding that said progress becomes part of the commons after a specified time has passed. Utilizing that progress after the patent period has expired isn’t some sort of shady exploitation of the researcher; it is the deal society made in exchange for taking affirmative actions to protect the researcher’s IP during the patent period.
I like this comment. This topic is always at risk of devolving into a generalized debate between rationalists and their opponents, creating a lot of heat but not much light. So it’s helpful to keep a fairly tight focus on potentially action-relevant questions (of which the comment identifies one).
I think Joseph is pointing out the ’s in the first example and the added “the” in the second.
Do you think the general superiority of filtration over Far-UVC is likely inherent to the technologies involved, or would the balance be reasonably likely to change given further development of Far-UVC technologies? In other words, is it something like solar, which used to be rather expensive for the amount of output but improved dramatically with investment, economies of scale, and technological progress?
(Of course, we could improve filter technology as well, although it strikes my uninformed eyes as having less potential room to improve.)
The scope of what could be considered “patient philanthropy” is pretty broad. My comment doesn’t apply to all potential implementations of the topic.
To start with, I’ll note the distinction between whether society should allow for patient philanthropy and whether it makes sense for an individual philanthropist attempting to advance their own altruistic goals. For what it is worth, I think there should be some significant limits roughly in line with US law on private foundations, and I would close what I see as some loopholes on public charity status (e.g., that donors can evade the intent of the public-charity rules by donating through a DAF, which is technically a public charity and so counts as public support).
But it’s not logically inconsistent to favor tightening the rules for everyone and to also think that if society chooses not to do so, then I shouldn’t unilaterally disadvantage my preferred cause areas while (e.g.) the LDS church increases its cash hoard.
A Cause Area in Which Yarrow’s Arguments Don’t Work Well for Me
I think some of these arguments depend significantly on what the donor is trying to do. I’m going to pick non-EA cause areas for the examples to keep the focus somewhat abstract (while also concrete enough for examples to work).
Let’s suppose Luna’s preferred cause area is freeing dogs from shelters and giving them loving homes. The rational preference argument doesn’t work here, and I know of no reason to think that the cost of freeing dogs will increase nearly as fast as the rate of return on investments. I also don’t have any clear reason to think that there are shovel-ready interventions today that will have a large enough effect on future shelter populations in 50 years to justify spending a much smaller sum of money now. (Admittedly, I didn’t research either; please do your own research if you are interested in funding canine rescue.)
Luna does face risk from “operational, legal, political, or force majeure” considerations, as well as the risk of technological or social changes making her original goal ineffective or inefficient. But many of these considerations happen over time, meaning that Luna (or her successors) should be able to sense them and start freeing dogs if their risk of disruption over the next 10-20 years gets too high. More broadly, I think this is an answer to some criticisms—the philanthropist doesn’t have to cabin the discretion of the fund to act as circumstances change (although there are also costs to allowing more discretion).
Donors can invest their own money and deploy it when it is most appropriate.
This sounds like patient philanthropy lite—with an implied time limit of the rest of the donor’s life and a restriction on buying/selling assets, both coming from tax considerations. That addresses some valid concerns with patient philanthropy, but we lose the advantage of having the money irrevocably committed to charitable purposes. I’m not sure how to weigh those considerations.
The Anti-PP Argument Calls for Faith in Future Foundations, Donors, and Governments
For the reserve-fund variants of PP: the patient philanthropist may not want to trust other foundations and governments to react strongly enough to future developments. There’s at least some reason to hold such a view, although it may not be enough to justify the practice. I suspect most people think the government generally doesn’t do a great job with its funding priorities (although they would have diverging and even contradictory opinions on why that is the case). I am not particularly impressed by the big foundations that have a wide scope of practice (and thus are potentially flexible). While the tendency of foundations to ossify and become bloated is an argument against patient philanthropy, it also counts as an argument against trusting big foundations to move swiftly and decisively in the face of an emerging threat or opportunity. Still, I think this premise would need to be developed and supported further to update my views on reserve-fund PP.
For other forms of PP: The assertion that the future world should rely on current-income donors, traditional foundations, and/or governments may rest on an assumption that the amount of need / opportunity in a given cause area tracks fairly well with the amount of funding available. If something happens in cause area X and it needs 1-2 orders of magnitude more money this year, will the money be forthcoming in short order? I don’t have a clear opinion on that (and it may depend on the cause area).
Patient Philanthropy May Work Particularly Well in Some Use Cases
Timing: John, in 1900, wants to promote LGBT rights. Deploying his funds in 1900 probably isn’t going to work very well from an effectiveness standpoint. Putting the money away and waiting for the right moment for the cultural winds to shift, and then pumping money to try to sustain/reinforce the wind change, sounds like a more effective strategy. In addition to causes that require cultural shifts, this argument could work for causes that need technological development in a broad sense.
Critical Mass: Ash is passionate about Pikachu welfare and has $1MM to spend. Few people (and few other potential donors) care about Pikachus right now, so the $1MM is unlikely to be enough to catalyze the field of Pikachu welfare studies. I’m writing this bullet point too early in the morning for me to do math, but AI tells me that would be $29.4MM in current dollars in 50 years at a 7% real rate of return. Ash can reasonably think that he has a better shot of creating a self-sustaining field of Pikachu welfare in 2075 than he has today. With several decades to build cash first, the field could survive for much longer on his funding before securing public support and wins that will probably be necessary to find other funding.
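As a quick sanity check on that compounding arithmetic, here is the standard future-value calculation with the 7% real return assumed above; nothing in it is specific to Pikachu welfare.

```python
# Quick sanity check of the compounding figure: future value of $1MM
# over 50 years at an assumed 7% real rate of return.
principal = 1_000_000
real_rate = 0.07
years = 50

future_value = principal * (1 + real_rate) ** years
print(f"${future_value:,.0f}")  # prints a bit under $29.5MM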
But the exercise of pasting and reading the results is carrying ~the entire argument here. The first two paragraphs basically say that you think we’re missing something obvious; the post-prompt material links some reference materials without commentary. The prompt itself conveys instructions to an AI, not your argument to a human reader.
To the extent that the reader is discerning and evaluating your argument, they can only do so through running the prompt and reading the raw AI output. So the content that actually carries the argument is not, in my view, “your content” which you have merely “use[d] an AI to help you compose . . . .” Without the use of the raw AI content, what argument do the four corners of the post convey?
I would frown on someone running a prompt, and then pasting the unedited output as the main part of one’s post. Posting a prompt and then asking the user to run the prompt and read the output strikes me as essentially the same thing. At least in the first scenario, the nominal user-author has probably at least read the specific output in question.
Although the norms allow users to employ AI assistance in producing content, [1] this exercise goes too far for me. (In my view, heavy reliance on AI can sometimes be okay in the context of comments if the use is disclosed.)
- ^ “If you, as a human, use an AI to help you compose your content, which you then post under your own name, that's fine.”
You’re right to be concerned about the incentives of cooperators who had their own legal exposure. But those witnesses stood up to days of cross-examination about specifics by SBF’s lawyers. Those attorneys had access to documentary evidence with which to try to impeach the witness testimony—so it’s not like the witnesses could just make up a story here.
Those who lost money are being repaid in cash based on the value of their crypto when the bankruptcy filing was made. The market was down at that time and later recovered. The victims are not being put in the same place they would have been in absent the fraud.
“Intentional fraud” is redundant since fraud requires intent to defraud. It does not, however, require the intent to permanently deprive people of their property. So a subjective belief that the fraudster would be able to return monies to those whose funds he misappropriated due to the fraudulent scheme is not a defense here.
“[F]raud is a broad term, which includes false representations, dishonesty and deceit.” United States v. Grainger, 701 F.2d 308, 311 (4th Cir. 1983). SBF obtained client monies through false, dishonest, and deceitful representations that (for instance) the funds would not be used as Alameda’s slush fund. He knew the representations made to secure client funds were false, dishonest, and deceitful. That’s enough for the convictions.
Thanks for restarting this conversation!
Relatedly, it’s also time to start focusing on the increased conflicts of interest and epistemic challenges that an influx of AI industry insider cash could bring. As Nathan implies in his comment, proximity to massive amounts of money can have significant adverse effects in addition to positive ones. And I worry that if and when a relevant IPO or cashout is announced, the aroma of expected funds will not improve our ability to navigate these challenges well.
Most people are very hesitant to bite the hand that feeds them. Orgs may be hesitant to do things that could adversely affect their ability to access future donations from current or expected donors. We might expect that AI-insider donors will disproportionately choose to fund charities that align fairly well with—or at least are consonant with—their personal interests and viewpoints.
(I am aware that significant conflicts of interest with the AI industry have existed in the past and continue to exist. But there’s not much I can do about that, and the conflict for the hypothesized new funding sources seems potentially even more acute. I imagine that some of these donors will retain significant financial interests in frontier AI labs even if they cash out part of their equity, as opposed to old-school donors who have a lesser portion of their wealth in AI. Also, Dustin and Cari donated their Anthropic stake, which addresses their personal conflict of interest on that front (although it may create a conflict for wherever that donation went)).
For purposes of the rest of this comment, a significantly AI-involved source has a continuing role at a frontier AI lab, or has a significant portion of their wealth still tied up in AI-related equity. The term does not include those who have exited their AI-related positions.
What Sorts of Adverse Effects Could Happen?
There are various ways in which the new donors’ personal financial interests could bias the community’s actions and beliefs. I use the word bias here because those personal interests should not have an effect on what the community believes and says.
Take stop/pause advocacy as an obvious example. Without expressing a view about the merits of such advocacy, significantly AI-involved sources have an obvious conflict of interest that creates a bias against that sort of work. To be fair, it is their choice how to spend their money.
But—one could imagine the community changing its behavior and/or beliefs in ways that are problematic. Maybe people don’t write posts and comments in support of stop/pause advocacy because they don’t want to irritate the new funders. Maybe grantmakers don’t recommend stop/pause advocacy grants for their other clients because their AI-involved clients could view their money as indirectly supporting such advocacy via funging.
There’s also a risk of losing public credibility—it would not be hard to cast orgs that took AI-involved source funds as something like a lobbying arm of Anthropic equity holders.
What Types of Things Could Be Done to Mitigate This?
This is tougher, but some low-hanging fruit might include:
Orgs could commit to identifying whether, and how much of, their funding comes from significantly AI-involved sources.
Many orgs could have a limit on the percentage of their budget they will accept from significantly AI-involved sources. Some orgs—those with particular sensitivity on AI knowledge and policy—should probably avoid any major gifts from AI-involved sources at all.
Particularly sensitive orgs could be granted extended runways and/or funding agreements with some sort of independent protection against non-renewal.
Other donors could provide more funding for red-teaming AI work, especially work that potentially affects AI-involved source donors.
Anyway, it is this sort of thing that concerns me more than (e.g.) some university student scamming a free trip to some location by simulating interest in EA.
In fact, they were largely building off the efforts of recognized domain experts. See, e.g., this bibliography from 2010 of sources used in the “initial formation of [its] list of priority programs in international aid,” and this 2009 analysis of bednet programs.