I am an attorney in a public-sector position not associated with EA, though I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
Jason
Assuming there will continue to be three EAG-like conferences each year, these should all be replaced by conferences framed around specific cause areas/subtopics rather than about EA in general (e.g., by having two conferences on x-risk or AI-risk and a third one on GHW/FAW)
Some but not all should be replaced (low confidence)
Agree that the appropriate amount of time depends, but I also think there needs to be some sort of semi-clear safe harbor for critics here. Otherwise we are going to get excessively tied up in the meta debate of whether the critic gave the org enough advance notice.
Yeah, I suspect most people (including myself) think it depends. I conceptualize the right side of the scale roughly as "there's a presumption of advance notice, and where you place your icon on the right side reflects how strongly or weakly the case-specific factors need to favor non-notice to warrant a departure from the presumption."
Giving meaningful advance notice of a post that is critical of an EA person or organization should be
I think it’s a good default rule, but think there are circumstances in which that presumption is rebutted.
My vote is also influenced by my inability to define “criticism” with good precision—and the resultant ambiguity and possible overinclusion pushes my vote toward the midpoint.
Done! The wording was trickier than I expected, but I decided it was better to post than not.
However, I think the cost of this position is non-negligible. Given the power-law distribution of impact among people, and given the many rounds of tests that employees at EA organizations allegedly undergo, a democratic vote would probably yield a much less discerning choice (as most people wouldn't spend more than 30 minutes picking a candidate). I'm not sure to what extent the wisdom of the crowd might apply here.
Important characteristics of the ambassador include that the community trusts this person and that this person is aligned with the community's interests and concerns. A community vote is ~authoritative on the first question and awfully probative on the second. If someone independent of the community picked the evaluator, in a real sense they wouldn't be the community's ambassador.
You could also do a two-step selection process here; the community selects a committee (and perhaps does approval voting for candidates), and the committee selects the ambassador after more thought. That would allow the more detailed evaluation for finalists while maintaining at least indirect community selection.
I think that depends a lot on the specifics of the organization in question. For example, I think defining the electorate is a hard problem if the organization is devoted to spending lots of donor money. In that scenario, people have incentives to seek a vote for reasons other than membership in the community.
But beyond that, most institutions in civil society do not impose demanding entry requirements. The US Chess Federation grants membership to anyone who pays a fee (and hasn’t been banned for misconduct), without any concerns that the checkers crowd will stage a hostile takeover. To join a church with congregationalist governance (where the risk of hostile takeover is greater), you might need to attend a few classes, sign a statement agreeing with some core principles, and attend an interview with a group leader.
It’s not clear to me why the techniques that work for the rest of civil society would fail for EA. Most candidates would pass on Forum karma, EAG/EAGx attendance, or other easily verifiable criteria.
This is more a copyright law question than a First Amendment one, at least under current law. E.g., https://www.trails.umd.edu/news/ai-imitating-artist-style-drives-call-to-rethink-copyright-law.
I believe it is unclear at present whether the 1A requires this outcome. Of course, there's a lot of activity protected by the 1A that is horrible to do.
So I think we may have a crux—are “independent experiences” necessary for work to be transformative enough to make the use of existing art OK? If so, do the experiences of the human user(s) of AI count?
Here, I suspect Toby contributed to the Bulby image in a meaningful way; this is not something the AI would have generated on its own or from bland, generic instructions. To be sure, the AI did more to produce this masterpiece than a camera does to produce a photograph. But did Toby do significantly less than the minimum we would expect from a human photographer for the output to qualify as human art? (I don't mean to imply we should treat Bulby as human art, only as art with a human element.)
That people can prompt an AI to generate art in a way that crosses the line into so-called "stylistic forgery" doesn't strike me as a good reason to condemn all AI art output. Nor does it undermine the idea that an artist whose work is only a tiny, indirect influence on another artist's work has not suffered a cognizable injury; that kind of influence is inherent in how culture is transmitted and developed. Rather, I think the better argument there is that too much copying from a particular source makes the output insufficiently transformative.
Also, we’d need to consider the environmental costs of creating Bulby by non-AI means. Even assuming they are lower than AI generation now, I could see the argument flipping into a pro-AI art argument with sufficient technological advancement.
For ethical purposes, how different is the process by which AIs "learn" to draw from the way humans learn? It seems to me that we consciously or unconsciously "scrape" art (and writing) we encounter to develop our own artistic (or writing) skills. The scraping student then competes with other artists. In other words, there's an element of human-to-human appropriation that we have previously found unremarkable as long as it doesn't come too close to outright copying. Moreover, this process strikes me as an important mechanism by which culture is transmitted and developed.
Of course, one could try to identify problematic ways in which AI learning from images it encounters differs from the traditional way humans learn. But for me, there needs to be something more than the use in training alone.
Most art is, I think, "decoration." To my mind, that way of characterizing most art is a double-edged sword for your argument: it reduces the cost of abstaining from AI art, but it also makes me think protecting human art is less important.
That’s what I did for my recent critical review of one of Social Change Lab’s reports.
One of the challenges here is defining what “criticism” is for purposes of the proposed expectation. Although the definition can be somewhat murky at the margin, I think the intent here is to address posts that are more fairly characterized as critical of people or organizations, not those that merely disagree with intellectual work product like an academic article or report.
For what it’s worth, I think your review was solidly on the “not a criticism of a person or organization” side of the ledger.
Second: A big reason to reach out to people is to resolve misunderstandings. But it’s even better to resolve misunderstandings in public, after publishing the criticism. Readers may have the same misunderstandings, and writing a public back-and-forth is better for readers.
That’s consistent with reaching out, I think. My recollection is that people who advocate for the practice have generally affirmed that advance notification is sufficient; the critic need not agree to engage in any pre-publication discourse.
(A $1 test donation worked for me a minute ago.)
individual critic should be responsible for navigating these factors and others and deciding when these things (reaching out, allowing a reply) would be appropriate and make sense.
I think that's half complete. No one is having their posts deleted for not reaching out, so the choice is ultimately up to the critic. But the community also has a role to play here. If community members believe the critic failed to provide appropriate advance notice, and has not shown sufficient cause for that failure, they can elect to:
Downvote the criticism, and/or
Decline to engage with the criticism, at least until the organization has had a reasonable amount of time to reply (even though they may not remember to come back to it later).
This would benefit from one of those polls, I think. Unfortunately, I don’t think they are available in comments. E.g.,
Giving advance notice of critical Forum posts to EA organizations should be:
seen as optional in almost all cases
done in almost all cases
(with at least one footnote to define “critical”)
Based on prior discussions, my guess is that the median voter would land about 70% of the way toward "done in almost all cases" . . . so this would be evidence for a community-supported norm, albeit one that is more flexible than Toby advocates for here.
I’ll call the role of voters in voting posts/comments below zero / off the frontpage / to be collapsed in comments “cloture voting” for short. As the name implies, I see that role as cutting off or at least curtailing discussion—which is sometimes a necessary function, of course.
Scaled voting power is part of why moderation on the Forum is sustainable. When I see posts downvoted past zero I agree the majority of the time.
While I agree that cloture voting serves a pseudo-moderation function, is there evidence that the results are better with heavily scaled voting power than they would be with less-scaled power?
~~
As applied to cloture voting, I have mixed feelings on the degree of scaling in the abstract. In practice, I think many of the downsides come from (1) the ability of users to arbitrarily decide when and how often to cast strongvotes and (2) net karma being the mere result of adding up votes.
On point 1, I note that someone with 100 karma could exercise more influence over vote totals than I do with a +9 strongvote, simply by strongvoting significantly more often than I do. This would be even easier with 1,000 karma, because that voter would have the same standard vote as I do. In the end, people can nominate themselves for greater power merely by increasing their willingness to click-and-hold (or click twice on mobile). I find that more concerning than the scaling issue.
On point 2, the following sample equations seem generally undesirable to me:
(A) three strongvotes at −9, −8, and −7, combined with nine +2 standard votes = −6 net karma
(B) five strongvotes at −6, combined with four strongvotes at +6 = −6 net karma
There's a reason cloture requires a supermajority vote in most parliamentary manuals. And those concerns may be even more pronounced here, where the early votes are only a fraction of the total potential votes (and, I sense, are not always representative either!).
In example (A), there appears to be a minority viewpoint whose adherents are using strongvotes to hide content that a significant majority of voters believe to be a positive contribution. Yes, those voters could respond with strongvotes of their own. But they don't know they are in the majority, or that their viewpoint is being overridden by strongvoters numbering a third or less of their own count.
In example (B), the community is closely divided and there is no consensus for cloture. But the use of strongvotes makes the karma total come out negative enough to hide a comment (IIRC).
One could envision encoding special rules to mitigate these concerns (a rough sketch follows the list), such as:
A post or comment's display is governed by its cloture-adjusted karma, in which at most one-third of the votes on either side count as strong. So where the only downvotes are −9, −8, −7, they would count as −9, −2, −2, for an adjusted total of −13.
In addition to negative karma, cloture requires a greater number of downvotes than upvotes, the exact fraction varying a bit based on the total votes cast. For example, I don’t think 4-3 should be enough for cloture, but 40-30 would be.
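To make the two rules concrete, here is a minimal Python sketch of how they might be encoded. Everything in it is an illustrative assumption rather than anything the Forum actually implements: the function names, the representation of votes as signed integers, the ±2 standard-vote cap, and the ratio schedule (2:1 for small samples, 4:3 for larger ones) are all just one way to fill in the details so that 4-3 fails and 40-30 passes.

```python
# Illustrative sketch only: vote representation, thresholds, and the ratio
# schedule below are assumptions for discussion, not the Forum's actual rules.

def cloture_adjusted_karma(votes, standard_magnitude=2):
    """Rule 1: on each side, at most one-third of votes count at full
    strong-vote weight; the rest are capped at a standard vote's magnitude."""
    def adjust(side):
        side = sorted(side, key=abs, reverse=True)   # strongest votes first
        strong_slots = len(side) // 3                # at most one-third strong
        capped = [max(-standard_magnitude, min(standard_magnitude, v))
                  for v in side[strong_slots:]]
        return sum(side[:strong_slots]) + sum(capped)
    downs = [v for v in votes if v < 0]
    ups = [v for v in votes if v > 0]
    return adjust(downs) + adjust(ups)

def meets_cloture_fraction(n_down, n_up):
    """Rule 2: cloture needs downvotes to outnumber upvotes by a margin that
    shrinks as more votes come in (assumed schedule: 2:1 under 20 total
    votes, 4:3 at 20 or more, so 4-3 fails but 40-30 passes)."""
    total = n_down + n_up
    if total == 0:
        return False
    required_ratio = 2.0 if total < 20 else 4 / 3
    return n_up == 0 or n_down / n_up >= required_ratio

def should_collapse(votes):
    """A post or comment is collapsed only if both rules agree."""
    n_down = sum(1 for v in votes if v < 0)
    n_up = sum(1 for v in votes if v > 0)
    return (cloture_adjusted_karma(votes) < 0
            and meets_cloture_fraction(n_down, n_up))

# Example (A): strongvotes at -9, -8, -7 plus nine +2 standard votes.
# Adjusted karma is -13 + 18 = +5, so the content stays visible.
print(should_collapse([-9, -8, -7] + [2] * 9))   # False

# Example (B): five -6 strongvotes vs. four +6 strongvotes. Adjusted karma
# is -14 + 12 = -2, but 5-4 falls short of the required margin: no cloture.
print(should_collapse([-6] * 5 + [6] * 4))       # False
```

Under these assumed numbers, both sample equations from above come out non-hidden, which matches the intuition that neither reflects a community consensus for cloture.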
Should there be an option for poll results not to link responses to individual voters? I think there are some questions for which a confidential poll would be preferable. On the other hand, I imagine the link between voters and their votes would still be known to CEA and potentially hackable (which, IIRC, is a reason there is no "make an anonymous comment/post" function).
I think you can probably do better locally with an EA mindset than by donating through GWWC—and this isn’t a criticism of GWWC!
As a practical matter, a potential intervention needs to have enough room for additional funding and be long-term enough to justify the evaluation and transaction costs; then the opportunity needs to actually come to the attention of GWWC or another organization.
I suspect you'd have access to some highly effective microscale and/or time-sensitive opportunities that GWWC and people like me do not. You're also likely to have local knowledge to evaluate those opportunities that people like me lack.
I directionally agree, but the system does need to also perform well in spikes like FTX, the Bostrom controversy, critical major news stories, and so on. I doubt those are an issue on r/excel.
but no, I don’t know what it is (or have a clear and viable plan for finding it)