The core argument of Nick Bostrom’s bestselling book Superintelligence has also aged quite poorly: In brief, the book mostly assumed we will manually program a set of values into an AGI, and argued that since human values are complex, our value specification will likely be wrong, and will cause a catastrophe when optimized by a superintelligence. But most researchers now recognize that this argument is not applicable to modern ML systems which learn values, along with everything else, from vast amounts of human-generated data.
For what it’s worth, the book does discuss value learning as a way for an AI to acquire values—you can see chapter 13 as being basically about this.
I would describe the core argument of the book as the following (going off of my notes of chapter 8, “Is the default outcome doom?”):
It is possible to build AI that’s much smarter than humans.
This process could loop in on itself, leading to a takeoff that could be slow or fast.
A superintelligence could gain a decisive strategic advantage and form a singleton.
Due to the orthogonality thesis, this superintelligence would not necessarily be aligned with human interests.
Due to instrumental convergence, an unaligned superintelligence would likely take over the world.
Because of the possibility of a treacherous turn, we cannot reliably check the safety of an AI on a training set.
There are things to complain about in this argument (a lot of “could”s that don’t necessarily cash out to high probabilities), but I don’t think it (or the book) assumes that we will manually program a set of values into an AGI.
OP strikes me as hyperbolic in a way that makes me disinclined to trust it.
I can’t deny this, in the sense that I don’t know that it’s false, but OP gives no evidence for this beyond the bare claims. OP doesn’t provide any details that people could investigate to verify, and OP writes anonymously on a one-off account, so that people can’t check how trustworthy OP has been in the past or on similar topics.
Now, I don’t think there’s anything wrong with saying things without proof or evidence—and in fact, it wouldn’t shock me to hear that there were 30 incidents of rape or prolonged abuse in EA circles in something like a 6-year period (I’ve had friends tell me of some sexual infractions, and I don’t see why I would have heard about all of them)—but I think one should own that they’re doing that.
That link shows an anonymous commenter saying that they reported people to CEA community health, and Julia Wise agreeing, thanking that commenter, and saying that the reports helped CEA keep the accused out of some CEA spaces. Assuming the anonymous commenter is OP, I think it’s misleading to summarize this as the CH team “tr[ying] to take credit for some of [OP’s] work”.
I have no idea what these contradictory statements are, although I admit to not having followed discussion of this topic on the EA forum carefully. The fact that OP didn’t link them, and the questionable representations elsewhere in the post, make me disinclined to trust that there is such a contradiction.
As far as I can tell from what OP writes, the situation summarized here as “EA… us[ing OP] for free work for six years” is that people who have been sexually assaulted have contacted OP, OP has reported the incidents to CEA, and people from CEA have had conversations with OP. OP also refers to CEA community health as asking or telling OP to ask people who have accused others of sexual assault to contact CEA community health. I guess this is compatible with something like “We would appreciate it if you asked people to report accusations to us” as well as “Tell people to report accusations to us”, but neither strikes me as asking OP to do “free work”. Unless “EA” is meant to refer to the people who have been sexually assaulted, or unless there’s more OP hasn’t said, I don’t see how OP’s summary is at all fair.
Not knowing who OP is, it’s hard to tell whether this is right—but guessing at what username OP used in that discussion, I see one person recommending working with them, which is fewer endorsements than I would have guessed from taking this sentence straightforwardly.
Julia Wise’s LinkedIn lists degrees in sociology and social work, as well as experience in social work and mental health clinics. She is on the community health team at CEA, and in fact I think leads it? [EDIT: I’ve now heard that she doesn’t lead the community health team]
Disclaimer: I don’t have any particular inside information about CEA community health, and in particular I don’t know how good a job they do at things.