Software engineer, blogging and editing at /nickai/. PM me your fluid-g-increasing ideas. (Formerly President/VP of the EA Club at RIT (NY, USA)).
NicholasKross
I feel both held back and out of my depth in this, so this and the comments have helped my perspective. Thank you for writing this!
Effective Altruism, Before the Memes Started
[Question] Is it crunch time yet? If so, who can help?
Devin’s reply/summary:
“Thanks for the comments. Sorry, I wrote a good deal of this stream of consciousness, so it isn’t really structured as an argument. It’s more a way for me to connect some personal thoughts/experiences together in a hopefully productive way. I can see how that wouldn’t be super accessible. The basic argument embedded in it, though, is:
- Effective Altruism, like many idealistic movements, started out taking critics very, very seriously and trying to reach out to/be charitable to them as much as possible, which is a good thing.
- Effective Altruism, like most movements that grow older, is not quite like that anymore; it seems to respond with less frequency and generosity to critics than it used to, which is unfortunate but understandable.
- Understandable as it is, we should at least take a bit more notice of it if that’s the path we are going down, because...
- Many movements move on from here to ridiculing criticisms, treating common criticisms as though they were obviously, memeably false and everyone in the know gets that (I didn’t use examples, but the one most on my mind was the midwit meme format, which requires only the argument being ridiculed, your stated, undefended position, and some cartoons to make it look like you’ve made a point). This is bad, and we should be careful not to start doing it.”
Done!
The LOTR analogy was intriguing to me, thank you!
Devin’s response (also to DavidNash): “Sorry, there might be a misunderstanding here. The William MacAskill example is supposed to be more a framing device and specific case I’ve been thinking about, not any sort of proof that there’s a problem. As I mention in my epistemic status section, the overall claims I make about EA aren’t defended here; I rely on readers to just share this same impression of current fatigue with critics relative to early EA on reflection. If you don’t, that’s fine, but this piece isn’t going to try to convince you otherwise. On MacAskill more specifically, I agree that he isn’t at all obligated to respond, but my point in bringing him up is that, given his earlier behavior, if there hadn’t been a change in him between then and now, I would have expected him to respond. There are plenty of explanations other than a simple fatigue story — I’m intrigued by barkbellowroar’s comment below, for instance — but my theory here is that it may be in part related to this broader trend in the movement.”
Devin’s response: “I would be careful about calling this a bad-faith attack. It may seem low quality or biased, but low quality is very different from bad faith, and bias is probably something most of our defenders are guilty of to a decent degree as well. I’m not an expert on this case, but my own understanding is basically that Torres wrote a more academic, EA-targeted version of this before, got no responses or engagement he found adequate despite reaching out to try to get it, and decided to take his case to a broader audience. I think there’s a ton wrong with his analysis, including stuff a more balanced view of his subjects should have easily caught, but I see every indication he was trying to criticize in good faith. Then again, I am not super familiar with this case, and maybe I’m totally wrong. But one of the broader points of my piece is something like this: we can’t engage with all critics without being overwhelmed; indeed, we can’t even engage with all the critics who really deserve some engagement without being overwhelmed. It is much, much better to just admit this than to act like we are engaging with everyone who deserves it by getting trigger-happy with accusations of bad faith and unreasonableness. Even when each of these is true, they are far too tempting an excuse once they enter your arsenal.”
Devin’s response:
“The white supremacy part doesn’t have this effect for me. Yes, there is a use of this term to refer to overt, horrible bigotry, but there is also a use meaning something closer to ‘structures that empower, or maintain the power of, white people disproportionately in prominent decision-making positions’. It is reasonable to say that this latter definition may be a bad way of wording things — you could even argue a terrible way — but since this use has academic, and more recently some mainstream, usage, it hardly seems fair to assume bad faith because of it. Some of the other stuff in this thread is more troubling; it seems there is a deep rabbit hole here, and it’s possible that Torres is generally a bad actor. Again, I don’t want to be too confident in this particular case. Although it seems we have very different ways of viewing these criticisms even when we are looking at the same thing, I will allow that you seem to have more familiarity with them.”
I mostly agree with the AI risk worldview described in footnote 5, but this is certainly an interesting analysis! (Although not super-useful for someone in a non-MIT/non-Jane-Street/not-elite-skilled reference class, but I still wonder about the flexibility of that...)
Devin’s response:
“Yeah, I was wondering when that might come up. I have a general resistance to making extraneous accounts, especially if they are anything like social media accounts. I find it stressful and think I would over-obsessively check/use them in a way that would wind up being harmful. Even just having this post up and the ability to respond through Nick has occupied my attention and anxiety a good deal the last few days, or I might do more cross-posts/enable comments on our blog. That said, I did consider it. EA forum seems like it would not be so bad if I was going to have an account somewhere, and there’s still a decent chance that I will make one at some point. When I asked Nick about the issue, he said he already had an account and was very willing to post it for me (by the way, thanks again Nick!). I still considered making one because I thought it might seem weird if it was posted by him instead, but for better or worse I wound up taking him up on it.”
Devin’s reply:
“Thanks for the response, reading your posts was one of the biggest inspirations for me writing this, its overall demeanor reminded me of what I see as this older strain of EA public interface in a way I hadn’t thought of in a while. On the point of MacAskill responding, I think the information you’ve given is helpful, but I do think there would have been some value in public commentary even if Torres personally wasn’t going to change his mind because of it, for instance it would have addressed concerns the piece gave outsiders who read it, and it would have both legitimized and responded to the concerns of insiders who might have resonated with some of what Torres said. As it happens, I think the community did respond to it somewhat significantly, but in a pretty partial, snubbish way. Robert Wiblin for instance appeared to subtweet the piece like twice:
https://mobile.twitter.com/robertwiblin/status/1422213998527799307
https://mobile.twitter.com/robertwiblin/status/1438883980351361030
Culminating in his recent 80k interview which he strongly advertised as a response to these concerns (again, without naming the article):
https://mobile.twitter.com/robertwiblin/status/1445817240008355843
A similar story can be said of MacAskill himself, shortly after the piece came out he made some comments on EA Forum apparently correcting misconceptions about longtermism the piece brought up without engaging with the piece directly:
Maybe Torres doesn’t deserve direct engagement even if some of his concerns do (or maybe he does), but it seems hard to deny that its publication had some non-trivial impact on the internal conversations of the movement, including in some ways there was already an appetite for. Though again I can’t expect more direct engagement (especially from those personally attacked), it does seem to me more thorough, direct engagement from prominent figures would have been better in many ways than most of the actual reaction.”
[Question] No, really, can “dead” time be salvaged?
BOUNTY IDEA (also sent in the form): Exploring Human Value Codification.
Offered to a paper or study that demonstrates a mathematical (or otherwise engineering-ready) framework to measure humans’ real preference-orderings directly. Basically a neuroscience experiment, or a proposal thereof.
End goal: Using this framework / results from experiment(s) based on it, you can generate novel stimuli that seem similar to each other and reliably predict which ones human subjects will prefer more. (Gradients of pleasure only, of course; no harm being done.) And, of course, a neuroscientific understanding of how this preference ordering came about.
Prize amount: $5–10k for the proposal; more to fund a real experiment (that order of magnitude is probably in the right ballpark).
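To make the “predict which stimuli subjects will prefer” goal concrete, here is a minimal sketch of one way the behavioral half could be operationalized: fit a Bradley–Terry model to pairwise choice data to recover latent preference scores, then predict preferences over held-out pairs. This is purely an illustrative assumption on my part, not part of the bounty spec; all items and data below are hypothetical.

```python
# Hypothetical sketch: recover a preference ordering from pairwise choices
# with a Bradley-Terry model, then predict preferences over new pairs.
import numpy as np


def fit_bradley_terry(n_items, comparisons, iters=300, lr=0.1):
    """Estimate latent preference scores by gradient ascent on the
    Bradley-Terry log-likelihood.

    comparisons: list of (winner, loser) index pairs from choice trials.
    """
    scores = np.zeros(n_items)
    for _ in range(iters):
        grad = np.zeros(n_items)
        for winner, loser in comparisons:
            # P(winner beats loser) under the current scores
            p_win = 1.0 / (1.0 + np.exp(scores[loser] - scores[winner]))
            grad[winner] += 1.0 - p_win
            grad[loser] -= 1.0 - p_win
        scores += lr * grad
        scores -= scores.mean()  # scores are only defined up to a constant
    return scores


def predict_preference(scores, a, b):
    """Predicted probability that a subject prefers item a over item b."""
    return 1.0 / (1.0 + np.exp(scores[b] - scores[a]))


# Toy choice data: item 0 usually beats 1, item 1 beats 2, item 0 beats 2.
trials = [(0, 1)] * 8 + [(1, 0)] * 2 + [(1, 2)] * 8 + [(0, 2)] * 8
scores = fit_bradley_terry(3, trials)
print(scores[0] > scores[1] > scores[2])  # recovered ordering: 0 > 1 > 2
```

The bounty’s harder demand — predicting preferences over *novel* stimuli — would additionally require a model mapping stimulus features (or neural measurements) to these latent scores, which is the part a real proposal would need to supply.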
Can confirm
I post one article by a friend about memes, look away for 5 seconds, and now this!
“I am naturally an angsty person, and I don’t carry much reputational risk”
Relate! Although you’re anonymous, I’m just ADD.
Point 1 is interesting to me:
- Longtermist/AI safety orgs could require a diverse ecosystem of groups working based on different approaches. This would mean the “current state of under-funded-ness” is in flux, uncertain, and leaning towards “some lesser-known group(s) need money”.
- Lots of smaller donations could indicate/signal interest from lots of people, which could help evaluators or larger donors with something.
Another point: since I think funding won’t be the bottleneck in the near future, I’ve refocused my career somewhat to balance more towards direct research.
(Also, partly inspired by your “Irony of Longtermism” post, I’m interested in intelligence enhancement for existing human adults, since the shorter timelines don’t leave room for embryo whatevers, and intelligence would help in any timeline.)
IMHO, some kind of /r/EffectiveMemes would be the best bet.
Thank you for putting this (and solutions) in clear words
I feel like I’m on both sides of this, so I’ll take the fast.ai course and then immediately jump into whatever seems interesting in PyTorch