There is an “exploit” against anonymity on LessWrong
On a recent popular LessWrong post, the OP, Gwern, receives an “anonymous” critique and then chooses to deanonymize his critic, very likely by using a trivial API call:
The above call produces the identity of the commenter, who turns out to be Mark Friedenbach.
Technical comment for onlookers:
The reason this happened is that LessWrong, like all websites, represents the information you see (posts, authorship) with underlying data/variables. This data pops up in a lot of places, and the API is one of them.
Guaranteeing anonymity is hard, because there are often many dependencies or interlocking systems involved. It’s really hard to scrub all occurrences of identity, not just because occurrences are numerous, but because masking them often requires editing or creating new “systems”, often affecting a lot of other things, sort of like dominoes. (A concrete sketch of the failure mode follows this section.)
Also, my guess is that, for principled reasons, the specific API framework LessWrong uses isn’t native or interesting to LessWrong developers, so it’s understandable that something like anonymity isn’t polished there.
I wrote this section to say that I don’t think this is a “fire” or major defect. I don’t think it reflects badly on LW, and I’m not sure it’s a good use of skilled EA/LW developer time to tighten this issue down.
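For readers who want the general failure mode made concrete, here is a minimal sketch. Everything in it is hypothetical (made-up IDs and names, no real API calls, and not LessWrong’s actual data model); it only illustrates the mechanism described later in the thread: if an “anonymized” comment still carries a stable user ID, and old ID-to-name pairs were ever cached or archived anywhere, anyone holding that cache can re-link the name.

```python
# Illustrative sketch only: hypothetical IDs and names, no real API calls,
# and not LessWrong's actual data model.

# Hypothetical archive of (user ID -> display name) pairs, scraped or cached
# before the account was made "anonymous" (e.g. an old mirror or web archive).
archived_id_to_name = {
    "u_123": "Alice Example",
    "u_456": "Bob Example",
}

# Hypothetical "anonymized" comment as served today: the display name is
# masked, but the underlying user ID field was never scrubbed.
anonymized_comment = {
    "comment_id": "c_789",
    "display_name": "[anonymous]",
    "user_id": "u_456",
}

def relink(comment, archive):
    """Join the still-present user ID against any old ID/name cache."""
    return archive.get(comment["user_id"])

print(relink(anonymized_comment, archived_id_to_name))  # prints: Bob Example
```

The fix is correspondingly awkward: either scrub or rotate the ID everywhere it has ever appeared, or accept that anyone with an old copy can do this join, which is the “dominoes” problem described above.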
The moderators feel that several comments in this thread break Forum norms. In particular:
Charles He points out that Gwern has doxed someone on a different website, LessWrong, seemingly in response to criticism. We’re not in a position to address this because it happened outside the EA Forum and isn’t about a Forum user, but we do take this seriously and wouldn’t have approved of this on the EA Forum.
However, we feel that Charles’s comment displays a lack of care and further doxes the user in question since the comment lists the user’s full name (which Gwern never listed). Moreover, Charles unnecessarily shares a vulnerability of LessWrong.
We’ve written to Charles about this, and we’re discussing further action. We’ve also explicitly added to our norms that doxing is not allowed or tolerated on the EA Forum, although we think this behavior was already banned or heavily discouraged as a corollary of our existing “Strong norms.”
I agree with this comment, and it seems I should be banned; I encourage you to apply the maximum ban. This is because:
The moderator comment above is correct
Additionally, in the comment that initiated this issue, I claimed I was protecting an individual. Yet, as the moderator pointed out, I seemed to be “further doxxing” him. So it seems my claims are a lie or hypocritical. I think this is a severe fault.
In the above, and in other incidents, it seems like I am the causal factor—without me, the incidents wouldn’t exist.
Also, this has taken up a lot of time:
For this event, at least one moderator meeting has occurred and several messages have been sent notifying me (which seems like a lot of effort).
I have gotten warnings in the past, such as from two previous bans (!)
One moderately senior EA moderator has now reached out for a call.
I think this use of time (including that of very senior EAs) is generous. While I’m not confident I understand the nature of the proposed call, I’m unsure my behavior or choices will change. Since the net results may not be valuable to these EAs, I declined this call.
I do not promise to remedy my behavior, and I won’t engage with these generous efforts at communication.
So, in a way requiring the least amount of further effort or discussion, you should apply a ban, maybe a very long or permanent one.
Instead of talking about me or this ban anymore, while you are here, I really want to encourage consideration of some ideas that I wrote in the following comments:
Global health and poverty should have stories and media that show work and EA talent
The sentiment that “bednets are boring” is common.
This is unnecessary, as the work in these areas is fascinating and involves great skill and unique experiences that can be exciting and motivating.
These stories have educational value to EAs and others.
They can cover skills and work like convincing stakeholders and governments, and carrying out complex logistical and scientific implementations in many different countries or jurisdictions.
They express skills not currently present or visible in most EA communications.
This helps the communication and presentation of EA.
To be clear, this would be something like an EA journalist continually creating stories about these interventions: 80,000 Hours, but with a different style or approach.
Examples of stories (found in a few seconds)
https://www.nytimes.com/2011/10/09/magazine/taken-by-pirates.html
https://www.nytimes.com/2022/05/20/world/africa/somalia-free-ambulance.html
(These don’t have the 80K sort of long-form, in-depth content, or cover perspectives from the founders, which seems valuable.)
Animal welfare
There is a lack of forum discussion on effective animal welfare
This can be improved with the presence of people from the main larger EA animal welfare orgs
Welfarism isn’t communicated well.
Welfarism observes the fact that suffering is enormously unequal among farmed animals, with some experiencing very bad lives
It can be very effective to alter this and reduce suffering, compared to focusing on removing all animal products at once
This idea is well understood and agreed upon by animal welfare EAs
While welfarism may need critique (which it will withstand, as it’s as substantive as impartialism), its omission distorts and wastes thinking, in the same way the omission of impartialism would.
Anthropomorphism is common (discussions contain emotionally salient points that are different from what fish and land animal welfare experts focus on).
Reasoning about prolonged, agonizing experiences is absent (it’s understandably very difficult), yet is probably the main source of suffering.
Patterns of communication in wild animal welfare and other areas aren’t ideal.
It should be pointed out that this work involves important foundational background research. Addressing just the relevant animals in human affected environments could be enormously valuable.
In conversations that are difficult or contentious with otherwise altruistic people, it might be useful to be aware of the underlying sentiment where people feel pressured or are having their morality challenged.
Moderation of views and exploration is good, and pointing out one’s personal history in more regular animal advocacy and other altruistic work is good.
Sometimes it may be useful to avoid heavy use of jargon, or applied math that might be seen as undue or overbearing.
A consistent set of content would help (web pages seem to be a good format).
Showing upcoming work in Wild Animal Welfare would be good, such as explaining foundational scientific work.
Weighting suffering by neuron count is not scientific—resolving this might be EA cause X
EAs often weight by neuron count as a way to calculate suffering. This has no basis in science. There are reasons (unfortunately not settled or concrete) to think smaller animals (mammals and birds) can have levels of pain or suffering similar to humans. (An illustrative calculation follows this list.)
To calibrate, I think most or all animal welfare EAs, as well as many welfare scientists would agree that simple neuron count weighting is primitive or wrong
Weighting by neuron count has been necessary because it’s very difficult to deal with the consequences of not weighting
Weighting by neuron counts is almost codified—its use turns up casually, probably because omitting it is impractical (emotionally abhorrent)
Because it’s blocked for unprincipled reasons, this could probably be “cause X”
The alleviation of suffering may be tremendously greater if we remove this artificial and maybe false modifier, and take appropriate action with consideration of the true experiences of the sentient beings.
The considerations about communication and overburdening people apply, and a conservative approach would be good
Maybe driving this issue starting from prosaic, well known animals is a useful tactic
Many new institutions in EA animal welfare, which have languished from lack of attention, should be built.
(There are no bullet points here.)
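As referenced above, here is a small illustrative calculation showing why the weighting scheme largely determines where the suffering appears to be. All population and neuron figures are rough, made-up orders of magnitude for illustration only, not real welfare estimates.

```python
# Purely illustrative numbers: rough orders of magnitude, not real estimates.
# Compares crude "suffering totals" under neuron-count weighting vs. equal
# per-individual weighting.

animals = {
    # name:    (individuals affected, neurons per individual)
    "human":   (1e6,  8.6e10),
    "chicken": (1e9,  2.2e8),
    "fish":    (1e11, 1.0e7),
}

HUMAN_NEURONS = animals["human"][1]

def totals(scheme):
    """Sum crude suffering units per species under a given weighting scheme."""
    out = {}
    for name, (count, neurons) in animals.items():
        if scheme == "neuron_count":
            weight = neurons / HUMAN_NEURONS  # moral weight as neuron fraction
        else:  # "equal": every individual counts the same
            weight = 1.0
        out[name] = count * weight
    return out

print(totals("neuron_count"))  # fish ~1e7, chicken ~3e6, human 1e6
print(totals("equal"))         # fish 1e11, chicken 1e9, human 1e6
```

Under equal weighting the fish total exceeds the human total by about five orders of magnitude; under neuron-count weighting the gap shrinks to about one. So the (scientifically unsupported) choice of scheme drives most of the prioritization.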
AI Safety
Dangers from AI are real, and moderate timelines are real
AI alignment is a serious issue; AIs can be unaligned and dominate humans, for the reasons most EA AI safety people give
One major objection, that severe AI danger correlates highly with intractability, is powerful
Some vehement neartermists actually believe in AI risk but don’t engage because of tractability
This objection is addressed by this argument, which seems newer and should be an update to all neartermist EAs
Another major objection to AI safety concerns, which seems very poorly addressed, is AI competence in the real world. This is touched on here and here.
This seems important, but relying on a guess that AGI can’t navigate the world is bad risk management
Several lock-in scenarios fully justify neartermist work.
Some considerations in AI safety may even heavily favor neartermist work (if AI alignment tractability is low, lock-in is likely, and this can occur fairly soon)
There is no substance behind “nanotech” or “intelligence explosion in hours” narratives
These are good as theories/considerations/speculations, but their central place is very hard to justify
They expose the field to criticism and dismissal by any number of scientists (skeptics and hostile critics outnumber senior EA AI safety people, which is bad and recent trends are unpromising)
This slows progress. It’s really bad these suboptimal viewpoints have existed for so long, and it damages the rest of EA
It is remarkably bad that there hasn’t been any effective effort to recruit applied math talent from academia (even good students from top 200 schools would be formidable talent)
This requires relationship building and institutional knowledge (relationships with field leaders, departments, and established professors in applied math/computer science/math/economics/others)
Taste and presentation are a big factor
Probably the current choice of math and theories around AI safety or LessWrong is quaint and basically academic poison
For example, acausal work is probably pretty bad
(Some chance they are actually good, tastes can be weird)
Fixation on current internal culture is really bad for general recruitment
The talent pool may be 5x to 50x greater with an effective program
A major implementation of AI safety is very highly funded new EA orgs, and this is close to an existential issue for some parts of EA
Note that (not yet stated) critiques of these organizations, like “spending EA money” or “conflict of interest” aren’t valid
They are even counterproductive, for example, because closely knit EA leaders are the best talent, and they actually can make profit back to EA (which, however, produces another issue)
It’s fair to speculate that they will exhibit two key traits/activities, probably not detailed in their usually limited public communications:
They will often attempt to produce AGI outright or find/explore conditions related to it
They will always attempt to produce profit
(These are generally prosocial)
Because these orgs are principled, they will hire EAs for positions whenever possible, with extremely high compensation, agency, and culture
This has major effects on all EA orgs
A concern is that they will not achieve AI safety or AGI, and the situation becomes one where EA gets caught up creating rather prosaic tech companies
This could result in a bad ecosystem and bad environment (think of a new season of The Wire, where ossified patterns of EA jargon cover up pretty regular shenanigans)
So things just dilute down to profit seeking tech companies. This seems bad:
In one case, an AI safety person I was speaking to brought up donations from their org, casually by chance. The donation amount was large compared to EA grants.
It’s problematic if ancillary donations by EA orgs are larger than EA grantmakers.
The “constant job interview” culture of some EA events and interactions will be made worse
Leadership talent from one cause area may gravitate to middle management in another—this would be very bad
All these effects can actually be positive, and these orgs and cultures can strengthen EA.
I think this can be addressed by monitoring of talent flows, funding and new organizations
E.g. a dedicated FTE, with a multi-year endowment, monitors talent in EA and hiring activity
I think this person should be friendly (pro AI safety), not a critic
They might release reports showing how good the orgs are and how happy employees are
This person can monitor and tabulate grants as well, which seems useful.
Sort of a census taker or statistician
EA Common Application seems like a good idea
I think a common application seems good, and to my knowledge no one I know is working on a very high-end, institutional version
See something written up here
EA forum investment seems robustly good
This is one example (“very high quality focus posts”)
This content empowers the moderator to explore any relevant idea, and to cause thousands of people to learn and update on key EA thought and develop object-level views of the landscape. They can stay grounded.
This can justify a substantial service team, such as editors and artists, who can illustrate posts or produce other design work
I find it unseemly that Gwern made the choice to both find and publicly reveal Mark’s identity.
Additionally, I find Gwern’s presentation of this knowledge glib and unbecoming, which calls back to the very issues that Mark objects to.
I echo Mark’s views in his critique. I often find that Gwern’s allusions in his post contribute little. I also find his use of them overbearing.
EA and its thinkers should be aiming at a very high tier. I think this tier should be aimed at, for example, by a high-quality writer contest (which cites Gwern).
Writers and EA may not get second chances in these exclusive and opinionated spaces. I think that for writers or thinkers aiming to be influential in the ways EAs think are important, acts like this one, or cloudy aesthetics with the truth, could be enough to exclude them.
To explain: I did no API hacking. This was so trivial a bug that it was found entirely by accident simply browsing the page. Someone happened to be reading the page via the popular GreaterWrong mirror and noticed that I mentioned an ‘anonymous’ comment but that I was clearly responding to a “Mark_Friedenbach” and puzzled, checked the LW2 version, and loled. Oops. (Not that it came as too much of a surprise to me. I remember his comments from before he chose to go anonymous… Bullshit about ‘Engrish’ is par for the course.)
This was not intentional on the part of GW or saturn2, it’s simply that GW has always cached the user ID & name (because why wouldn’t it) and whoever implemented the ‘anonymous’ feature apparently didn’t think through the user ID part of it. So, this entire time, for however many years the ‘anonymous’ thing has been there, it’s been completely broken (and it would be broken even if GW was not around, because anyone with any way to access old user ID/name pairs, such as via the Internet Archive, would be able to link them).
Since the horses which left the barn have long since broken a leg and been sent to the glue factory, and it’s obvious once you start looking (you didn’t spot GW, but you did see the API problem immediately), I felt no particular hurry to disclose it when it served as such an incredible example for a comment claiming, among other things, that it is so absurd that anyone would ever make a stupid design choice that it constitutes grounds for ignoring anything I say. That is not a gift horse I will look in the mouth. Like a good magic trick, it works best when the viewer can’t wave it away by coming up with a patch for it. (“I would simply not write a cryptocurrency which made any mistakes.”)
Nor did I deanonymize him, contra your other comment. I was deliberate about not using his full username and using just “Mark”; had I wanted to use it, I would have, but I just wanted to prove to him that he was not anonymous, due to a stupid bug. There are many Marks on LW, even considered strictly as usernames containing the term and ignoring the real possibility they might use a username not containing ‘[Mm]ark*’. (Mark Xu & Mark Otaris just off the top of my head.)
If anyone ‘deanonymized’ him (considering one can just read it right there on the GW page and many people already have), it would be you. I do hope we’re not going to hear any preaching about responsible disclosure coming from the person who rushed to publicly post all the details and the full user name? (What sort of ‘high tier’ would we put you on or how would one describe ‘acts like this one’?)
I find Mark’s comments glib and unbecoming, and a good example of why we might not want to have anonymous comments at all. If he wants to post comments about how a character thinking something in a story is wildly “unprofessional” or make up numbers, he can register a pseudonym and have a visible history tying his comments together like anyone else.
You called attention to the existence of a hack and said his name; that could be enough for some people to uncover his identity. (Agreed that people posting the full name were not very considerate either.) Did it even occur to you that saying some things in some countries is illegal and your doxxing victim could go to prison for saying something that looks innocuous to you? Do you know where Mark is from and what all his country’s speech laws are? I am so completely disappointed that you would notice a leak like this and not quietly alert people to fix it and PM Mark about it, but instead doxx someone over an internet argument.
If Mark is in such a situation (which he was not, and I knew he was not), then the real culprit is whoever implemented such a completely broken and utterly unfixable ‘anonymous’ comment, and himself for being a security researcher and yet believing that retroactively making comments ‘anonymous’ on a publicly-scrape-able website would protect him against nation-state actors when anonymity was neither the goal nor a documented promise of the account deletion feature he was abusing and then crying ‘dox!’ about it not doing what it wasn’t supposed to do and didn’t do.
I did think of it! But having documents without ownership sure requires a substantial rewrite of a lot of LW code in a way that didn’t seem worth the effort. And any hope for real anonymity for historical comments was already lost with lots of people scraping the site. If we ever had any official “post anonymously” features, I would definitely care to fix these issues, but this is a deleted account, and posting from a deleted account is itself more like a bug and not an officially supported feature (we allow deleted accounts to still login so they can recover any content from things like PMs, and I guess we left open the ability to leave comments).
I would strongly advise closing the commenting loophole then, if that was never intended to be possible. The only thing worse than not having security/anonymity is having the illusion of security/anonymity.
While I agree that total privacy/anonymity is almost impossible, “pretty good” privacy in practice can be achieved through obscurity. For example, you could find my full name by following two links, but most people won’t bother. (If you do, please don’t post it here.)
Absolutely. But you know you are relying on obscurity and relatively modest cost there, and you keep that in mind when you comment. Which is fine. Whereas if you thought that it was secure and breaking it came at a high cost (though it was in fact ~5 seconds of effort away), you might make comments you would not otherwise. Which is less fine.
Yeah, that seems reasonable. Just made a PR for it.
Gwern’s rhetoric elides the consideration that my message is extremely unlikely to be consequential against Mark, as he himself explains.
I point out that it is a reasonable characterization that all the effects/benefits of calling out Mark accrue to Gwern through the device of using Mark’s first name, yet by that same device he can escape a charge of “doxxing”.
I call on readers to consider the substance of what my thread is about, and what the various choices I’ve made, and the consequent content, might reveal.
Yes, he does claim it. So, why did you do it? Why did you post his whole username, when I did not and no one could figure out who it was from simply ‘Mark’?
Absolutely. I did not dox him, and I neither needed nor wanted to. I did what illustrated my point with minimum harm and I gained my desired benefits that way. This is good, and not bad.
I did not post screenshots explaining how to do it and who it was, which were unnecessary and potentially did some harm. So, why did you dox Mark?
I am proud of the work of many people who built the community of LessWrong and I hope to read the interesting contributions of talented people like you in the future.