I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
Jason
This is more a copyright law question than a First Amendment one, at least under current law. E.g., https://www.trails.umd.edu/news/ai-imitating-artist-style-drives-call-to-rethink-copyright-law.
I believe it is unclear at present whether the 1A requires this outcome. Of course, there’s a lot of activity protected by the 1A that is nevertheless horrible to do.
So I think we may have a crux—are “independent experiences” necessary for work to be transformative enough to make the use of existing art OK? If so, do the experiences of the human user(s) of AI count?
Here, I suspect Toby contributed to the Bulby image in a meaningful way; this is not something the AI would have generated itself or on bland, generic instructions. To be sure, the AI did more to produce this masterpiece than a camera does to produce a photograph—but did Toby do significantly less than the minimum we would expect from a human photographer to classify the output as human art? (I don’t mean to imply we should treat Bulby as human art, only as art with a human element.)
That people can prompt an AI to generate art in a way that crosses the line of so-called “stylistic forgeries” doesn’t strike me as a good reason to condemn all AI art output. It doesn’t undermine the idea that an artist whose work is only a tiny, indirect influence on another artist’s work has not suffered a cognizable injury because that is inherent in how culture is transmitted and developed. Rather, I think the better argument there is that too much copying from a particular source makes the output not transformative enough.
Also, we’d need to consider the environmental costs of creating Bulby by non-AI means. Even assuming they are lower than AI generation now, I could see the argument flipping into a pro-AI art argument with sufficient technological advancement.
How different is the process of how AIs “learn” to draw from how humans learn for ethical purposes? It seems to me that we consciously or unconsciously “scrape” art (and writing) we encounter to develop our own artistic (or writing) skills. The scraping student then competes with other artists. In other words, there’s an element of human-to-human appropriation that we have previously found unremarkable as long as it doesn’t come too close to being copying. Moreover, this process strikes me as an important mechanism by which culture is transmitted and developed.
Of course, one could try to identify problematic ways in which AI learning from images it encounters differs from the traditional way humans learn. But for me, there needs to be that something more, not just the use in training alone.
Most art is, I think, for “decorations”—that way of characterizing most art is a double-edged sword for your argument, in my view. It reduces the cost of abstaining from AI art, but it also makes me think protecting human art is less important.
That’s what I did for my recent critical review of one of Social Change Lab’s reports.
One of the challenges here is defining what “criticism” is for purposes of the proposed expectation. Although the definition can be somewhat murky at the margin, I think the intent here is to address posts that are more fairly characterized as critical of people or organizations, not those that merely disagree with intellectual work product like an academic article or report.
For what it’s worth, I think your review was solidly on the “not a criticism of a person or organization” side of the ledger.
Second: A big reason to reach out to people is to resolve misunderstandings. But it’s even better to resolve misunderstandings in public, after publishing the criticism. Readers may have the same misunderstandings, and writing a public back-and-forth is better for readers.
That’s consistent with reaching out, I think. My recollection is that people who advocate for the practice have generally affirmed that advance notification is sufficient; the critic need not agree to engage in any pre-publication discourse.
(A $1 test donation worked for me a minute ago.)
individual critic should be responsible for navigating these factors and others and deciding when these things (reaching out, allowing a reply) would be appropriate and make sense.
I think that’s half complete. No one is having their posts deleted for not reaching out, so the choice is ultimately up to the critic. But the community also has a role to play here. If community members believe the critic failed to provide appropriate advance notice, and has not demonstrated sufficient cause for that omission, they can elect to:
Downvote the criticism, and/or
Decline to engage with the criticism, at least until the organization has had a reasonable amount of time to reply (even though they may not remember to come back to it later).
This would benefit from one of those polls, I think. Unfortunately, I don’t think they are available in comments. E.g.,
Giving advance notice of critical Forum posts to EA organizations should be:
seen as optional in almost all cases
done in almost all cases
(with at least one footnote to define “critical”)
Based on prior discussions, my guess is that the median voter would vote about 70% toward “done in almost all cases” . . . so this would be evidence for a community-supported norm, albeit one that is more flexible than Toby advocates for here.
I’ll use “cloture voting” as shorthand for the role voters play in voting posts/comments below zero, off the frontpage, or into collapsed state in comment threads. As the name implies, I see that role as cutting off or at least curtailing discussion—which is sometimes a necessary function, of course.
Scaled voting power is part of why moderation on the Forum is sustainable. When I see posts downvoted past zero I agree the majority of the time.
While I agree that cloture voting serves a pseudo-moderation function, is there evidence that the results are better with heavily scaled voting power than they would be with less-scaled power?
~~
As applied to cloture voting, I have mixed feelings on the degree of scaling in the abstract. In practice, I think many of the downsides come from (1) the ability of users to arbitrarily decide when and how often to cast strongvotes and (2) net karma being the mere result of adding up votes.
On point 1, I note that someone with 100 karma could exercise more influence over vote totals than I do with a +9 strongvote, simply by strongvoting significantly more often than I do. This would be even easier with 1000 karma, because the voter would have the same standard vote as I do. In the end, people can nominate themselves for greater power merely by increasing their willingness to click-and-hold (or click twice on mobile). I find that more concerning than the scaling issue.
On point 2, the following sample equations seem generally undesirable to me:
(A) three strongvotes at −9, −8, and −7, combined with nine +2 standard votes = −6 net karma
(B) five strongvotes at −6, combined with four strongvotes at +6 = −6 net karma
There are reasons why cloture requires a supermajority vote under most parliamentary manuals. And those reasons may be even more pronounced here, where the early votes are only a fraction of the total potential votes—and, I sense, are not always representative either!
In (A), there appears to be a minority viewpoint whose adherents are using strongvotes to hide content that a significant majority of voters believe to be a positive contribution. Yes, those voters could respond with strongvotes of their own. But they don’t know they are in the majority, or that their viewpoint is being overridden because each of their votes carries a third or less of the weight of a strongvoter’s.
In (B), the community is closely divided and there is no consensus for cloture. But the use of strongvotes makes the karma total come out negative enough to hide a comment (IIRC).
One could envision encoding special rules to mitigate these concerns (roughly sketched in code after this list), such as:
A post or comment’s display is governed by its cloture-adjusted karma, in which at most one-third of the votes on either side count as strong. So where the only downvotes are −9, −8, −7, they would count as −9, −2, −2.
In addition to negative karma, cloture requires a greater number of downvotes than upvotes, the exact fraction varying a bit based on the total votes cast. For example, I don’t think 4-3 should be enough for cloture, but 40-30 would be.
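To make the shape of those two rules concrete, here is a rough Python sketch. It rests on assumptions the comments above leave open: each vote is modeled as a (standard_value, strong_value) pair, purely standard voters just repeat their standard value, the largest-magnitude strongvotes are the ones that keep full strength under the one-third cap, and the thresholds in the second rule are placeholder numbers of my own, chosen only so that 4-3 fails and 40-30 passes.

```python
# A rough sketch only; the vote representation and the rule-2 thresholds are
# my own assumptions, not anything from the Forum's actual codebase.

def capped_side_total(votes):
    """Rule 1: sum one side (all upvotes or all downvotes), letting at most
    one-third of the votes on that side count at strong strength."""
    # Let the largest-magnitude strongvotes be the ones that keep full strength.
    votes = sorted(votes, key=lambda v: abs(v[1]), reverse=True)
    allowed_strong = len(votes) // 3
    return sum(strong if i < allowed_strong else standard
               for i, (standard, strong) in enumerate(votes))

def cloture_adjusted_karma(upvotes, downvotes):
    """Display decisions would look at this capped total rather than raw karma."""
    return capped_side_total(upvotes) + capped_side_total(downvotes)

def downvote_margin_met(n_down, n_up, min_margin=3, excess_ratio=0.2):
    """Rule 2, with placeholder thresholds: downvotes must exceed upvotes by at
    least 3 votes or by 20%, whichever is larger, so 4-3 fails but 40-30 passes."""
    return n_down - n_up >= max(min_margin, excess_ratio * n_up)

# The example from the comment: the only votes are downvotes of -9, -8, and -7,
# cast by voters whose standard votes are assumed to be -2 each.
downs = [(-2, -9), (-2, -8), (-2, -7)]
print(capped_side_total(downs))      # -13, i.e. counted as -9, -2, -2
print(downvote_margin_met(4, 3))     # False: not enough for cloture
print(downvote_margin_met(40, 30))   # True
```

The sort step just encodes one way to decide which votes keep strong status; the real Forum presumably has richer vote metadata to draw on.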
Should there be an option for the poll results not to link responses to individual voters? I think there are some questions for which a confidential poll would be preferable. On the other hand, I imagine the voter-vote identity would still be known to CEA and potentially hackable (which IIRC is a reason why there is no “make an anonymous comment/post” function).
I think you can probably do better locally with an EA mindset than by donating through GWWC—and this isn’t a criticism of GWWC!
As a practical matter, a potential intervention needs to have enough room for additional funding and be long-term enough to justify the evaluation and transaction costs, and then the opportunity needs to actually come to the attention of GWWC or another organization.
I suspect you’d have access to some highly effective microscale and/or time-sensitive opportunities that GWWC and people like me do not. You are also likely to have local knowledge that people like me lack for evaluating those opportunities.
I directionally agree, but the system does need to also perform well in spikes like FTX, the Bostrom controversy, critical major news stories, and so on. I doubt those are an issue on r/excel.
To steelman this:
Even assuming OP funding != EA, one still might consider OP funding to count as funding from the AI Safety Club (TM), and for the Mechanize critics to be speaking in their capacity as members of the AISC rather than of EA. Being upset that AISC money supported development of people who are now working to accelerate AI seems understandable to me.
Epoch fundraised on the Forum in early 2023 and solicited applications for employment on the Forum as recently as December 2024. Although I don’t see any specific references to the AISC in those posts, it wouldn’t be unreasonable to assume some degree of alignment from its posting of fundraising and recruitment asks on the Forum without any disclaimer. (However, I haven’t heard a good reason to impute Epoch’s actions to the Mechanize trio specifically.)
If the data were available, the average amount a CE charity could raise from funders other than highly aligned ones might work better as an input if someone were deploying your analysis for a different decision about whether to found a CE charity vs. earn to give. You’ve mentioned that you were “satisfied that Kaya Guides had minimal risk of substantial funding displacement in a success scenario,” so it makes sense that you wouldn’t adjust for this when making your specific decision.
(The rough working assumption here is that the average CE charity can put a dollar to use roughly as well as the average GiveWell grantee or ACE-recommended charity—so moving $1 from the latter to the former produces neither a net gain nor a net loss. That’s unlikely to be particularly accurate, but it’s probably closer to the actual effect than not adjusting at all for where the money would have gone counterfactually.)
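A toy illustration of that working assumption, with entirely hypothetical numbers:

```python
# Every number here is hypothetical; it only illustrates the adjustment logic.
raised = 100_000    # total raised by a hypothetical CE charity
displaced = 30_000  # portion assumed to come from other effective funders
v_new = 1.0         # value per dollar at the new charity (arbitrary units)
v_old = 1.0         # value per dollar at the displaced grantee (assumed equal)

net_value = raised * v_new - displaced * v_old
print(net_value)  # 70000.0: the displaced dollars net to zero, so only the
                  # genuinely new $70k counts when evaluating the founding decision
```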
Wasn’t me, but accidentally upgrading one’s vote to a strongvote on mobile isn’t difficult. So we should consider the possibility that the karma drop came from someone reverting an accidental strong upvote rather than from a strong downvote.
There are reasons why rejected ideas were rejected
I don’t think it would be accurate to classify most of the ideas here as rejected, at least not without qualification. My recollection is that there was substantial support for many of these propositions in addition to voices in opposition. On the whole, if I had to sum up the prior discussion on these topics in a single word, I would probably choose inconclusive.[1] That there was no real action on these points suggests that those with the ability to most effectively act on them weren’t convinced, or that they had more important issues on their plate, but that only tells us the reaction from a small part of the community.
And I think that matters from the standpoint of what we can reasonably expect from someone in Maxim’s shoes. If the ideas had been rejected by community consensus on their merits, then the argument that proponents need new arguments/evidence or changed circumstances would be stronger in my book. The prior rejection would be at least some evidence that the ideas were wrong on the merits.
Of course, posting the same ideas every month would just be annoying. But I don’t think there’s been a ton of discussion on these ideas as of late, and there are a significant number of new people each year (plus some people who may be in a better position to act on the ideas than they were in the past).
[1] I do recognize that some specific ideas on the topic of democracy appear to have been rejected by community consensus on the merits.
Did you adjust for the likelihood that some of the funding secured by the charity would have gone to other effective charities in the same cause area?
A new organization can often compete for dollars that weren’t previously available to an EA org—such as government or non-EA foundation grants that are only open to certain subject areas.
I agree that there are no plausible circumstances in which anyone’s relatives will benefit in a way not shared with a larger class of people. However, I do think groups of people differ in ways that are relevant to how important fast AI development vs. more risk-averse AI development is to their interests. Giving undue weight to the interests of a group of people because one’s friends or family are in that group would still raise the concern I expressed above.
One group that—if they were considering their own interests only—might rationally be expected to accept somewhat more risk than the population as a whole is people who are ~50-55+. As Jaime wrote:
For some of my older relatives, it might make a big difference to their health and wellbeing whether AI-fueled explosive growth happens in 10 vs 20 years.
A similar outcome could also happen if (e.g.) the prior generation of my family had passed on, I had young children, and as a result of prioritizing their interests I didn’t give enough weight to older individuals’ desire to have powerful AI soon enough to improve and/or extend their lives.
I think that depends a lot on the specifics of the organization in question. For example: I think defining the electorate is a hard problem if the organization is devoted to spending lots of donor money. In that scenario, there are good reasons for people to seek a vote for reasons other than membership in the community.
But beyond that, most institutions in civil society do not impose demanding entry requirements. The US Chess Federation grants membership to anyone who pays a fee (and hasn’t been banned for misconduct), without any concerns that the checkers crowd will stage a hostile takeover. To join a church with congregationalist governance (where the risk of hostile takeover is greater), you might need to attend a few classes, sign a statement agreeing with some core principles, and attend an interview with a group leader.
It’s not clear to me why the techniques that work for the rest of civil society would fail for EA. Most candidates would pass muster based on Forum karma, EAG/EAGx attendance, or other easily verifiable criteria.