Gordon Seidoh Worley (aka G Gordon Worley III)
Illegible impact is still impact
I can’t speak for any individual, but being careful in how one engages with the media is prudent. Journalists often have a larger story they are trying to tell over the course of multiple articles and they are actively cognitively biased towards figuring out how what you’re saying confirms and fits in with that story (or goes against it such that you are now Bad because you’re not with whatever force for Good is motivating their narrative). This isn’t just an idle worry either: I’ve talked to multiple journalists and they’ve independently told me as much straight out, e.g. “I’m trying to tell a story, so I’m only interested if you can tell me something that is about that story”.
Keeping quiet is probably a good idea unless you have media training and so know how to interact with journalists. Otherwise you function like a random noise generator that might accidentally generate noise confirming what the journalist wanted to believe anyway, and if you don’t endorse whatever the journalist believes, you’ve just done something that works against your own interests and probably didn’t even realize it!
I’m not sure where this falls exactly between importance and tractability, but one concern is that any work we do on space governance now is likely to be washed out by later, more powerful forces.
My thinking on this is by analogy to previous developments in frontier governance. For example, in the history of the United States, it was common to form treaties with native peoples and coexist with them relatively peacefully right up until the native peoples had resources the colonists/settlers/government wanted badly enough that they found expedient excuses to ignore the treaties, such as fabricating treaty violations or just outright using force against a weaker entity.
And that’s just to consider how governance becomes fluid when one entity far outpowers another. Equally powered entities have their own methods of renegotiating for what is presently desirable, past agreements be damned.
On the other hand some things have stuck well. For example, even if actors sometimes violate them, international rules of war are often at least nominally respected and effort is put into punishing (some of) those who violate those rules. As ever, exceptions are made for those powerful enough to be beyond the reach of other actors to impose their will by force.
All of this makes me somewhat pessimistic that we can expect to do much to have a strong, positive influence on space governance.
Active Hope for EA Burnout
I would expect this not to be very neglected, hence I would expect EAs to be able to have much impact here only if, for example, it’s effectively neglected because the existing people pushing for an end to the drug war are unusually ineffective.
For example, there’s already NORML, which has been working the cannabis angle of this since the 1970s to decent success; Portugal has already ended the drug war locally; and Oregon recently decriminalized possession of drugs for personal use.
Getting involved feels a bit like getting involved in, say, marriage equality in the 2000s: the change was already clearly in motion, plenty of people were working to push for it, and it’s not clear EAs could have brought much additional to the table.
The downvotes are probably because, indeed, the claims only make sense if you look at the level of something like “has Scott ever said anything that could be construed as X”. I think a complete engagement with SSC doesn’t support the argument, and it’s specifically the fact that SSC is willing to address issues in their entirety without flinching away from topics that might make a person “guilty by association” that makes it a compelling blog.
Announcing the Buddhists in EA Group
As best I can tell, you don’t address the main reasons most organizations choose not to outsource:
- additional communication and planning friction
- principal-agent problems
You could of course hand-wave here and try to say that, since you propose an EA-oriented agency to serve EA orgs, this would be less of an issue, but I’m skeptical: if such a model worked I’d expect, for example, never to have had a job at a startup, and instead to have worked for a large firm that specialized in providing tech services to startups. Given that there’s a lot of money at stake in startups, it’s worth considering whether these sorts of challenges will cause your plan to remain unappealing in reality, since, to continue with the example, most startups that succeed have in-house tech, not outsourced tech.
Amazon closing AmazonSmile to focus its philanthropic giving to programs with greater impact
Personally I downvoted this post for a few reasons:
- insufficient detail to evaluate the claims
- claims not stated clearly
- various abbreviations used with no explanation, which made it hard to follow
- assumes the reader has context that is not presented in the post
To me this reads more like publicly posting content that was written only with an audience of folks working at CEA or similar orgs in mind. So I downvoted because it doesn’t seem worth a lot of people reading it, since it’s unclear what value there is in it for them. This isn’t to say the intended message isn’t worthwhile, only that the presentation in this particular post is insufficient.
I’d very much like to read a post providing evidence that there were many instances of sexual assault within the community if that’s the case, especially if it’s above the baseline of the surrounding context (whether that be people of similar backgrounds, living in similar places, etc.). And if CEA has engaged in misconduct I’d like to know about that, too. But I can’t make any updates based on this post because it doesn’t provide enough evidence to do so.
Although this is getting downvotes, I do find it interesting in that it points out that at least one local group (and so probably more) is operating in ways that turn off interested folks. Unfortunately we don’t know which group, but I encourage the poster to reach out to someone at CEA; maybe they can look into it and see if there is anything they can do to help this group improve (if that is indeed appropriate) as part of their community-building efforts.
But I think it’s worth highlighting that here we have someone who cares enough about EA that they came here to make a post about how frustrated they are with their experience of it! That suggests there is likely some opportunity to do better embedded in this!
The statement is almost certainly intentionally ambiguous. That’s kind of how a lot of PR works: say things directionally and let people read in their preferred details.
Comparing the Effect of Rational and Emotional Appeals on Donation Behavior
There’s a lot I disagree with in this post, but there’s one part I super agree with:
Be much, much less accepting of any intersection between romance and office/network
Traditionally dating happens with your 2nd and 3rd order connections, not your first. Also, dating in a professional (or related) setting is very likely to lead to bad outcomes. We know this.
I realize people want to date like-minded people. There are lots of them out there who aren’t in EA! You just have to look for them.
One possibility, if this theory is correct, is that cluster headaches are a spandrel, i.e. a (very unfortunate) unintended side effect of the pain system accidentally firing in a case where firing isn’t beneficial, one that doesn’t get selected out because it has little impact on differential reproduction rates.
Another is that the causality is slightly different: pain is amped up in some cases to elicit altruism, but the mechanisms of pain are “lower in the stack” and so can be triggered by things other than those considered here. That would put cluster headaches outside the bounds of what this model needs to explain, since many things can accentuate pain; the cases considered here are one such thing, and the situation with cluster headaches is another.
Having spent significant time around both the EA and the LW community and having written several controversial posts and then subsequently talked with folks who downvoted those posts, I now have strong reason to believe that most downvotes are in fact “boos” rather than anything more substantive. When people have substantive disagreements with posts they more often post comments indicating that and just don’t vote on a post either way.
I’m sure this is not universally true, but it’s been my experience. So when I see downvotes on a post that isn’t obviously spam, trolling, or otherwise clearly low-quality (as opposed to, in this case, just not containing much content, a kind of post that is clearly not universally downvoted, since many low-content posts get neutral or positive responses, which, given their lack of content, I must assume is a function of agreement with the idea presented), I find it reasonable to ask “why ‘boo’ at this?”. Hence my comment as a possible explanation for more “boos” than “yays”.
I agree it would be preferable if people didn’t use votes as “boos” and “yays”, and I think we could fix this. One option is only allowing people who comment on a post to vote on it, though that risks creating lots of meaningless comments from people who just want to vote, so there is probably some other solution that would work better. Unfortunately, my experience suggests that’s exactly how most people vote on posts and comments.
TAISU 2019 Field Report
This is basically my own experience. I worked a bunch on independent AI research, but now I don’t really, because it just doesn’t make sense: in my estimation I have far more opportunity to do good by making money than through any direct work I could do, so I just double down on that.
(For context I’m on the higher end of technical talent now: 12 years of work experience, L7-equivalent, in a group tech lead role, and if I can crank up to L8 the potential gains are quite large in terms of comp that I can then donate.)
One place where EAs paying taxes in the US can probably have differential impact is with donations below the standard deduction. Unless you’re donating enough (together with your other itemized deductions) to exceed your standard deduction, you get little or no tax benefit from giving to registered charities; your donations are effectively post-tax either way. That gives you a unique opportunity to fund EA-aligned causes that larger donors neglect because those donors can’t get the tax benefits.
Some examples would include giving small (less than $10k USD) “angel” donations to not-yet-fully-established causes that are still organizing themselves and do not or will not ever have charitable tax status and participating in a donor lottery.
Plenty of caveats to this of course, like if you have employer matching that makes it worthwhile to give to registered charities even if you yourself won’t reap any tax benefits, and state-level standard deductions are smaller than federal ones so it’s often worth itemizing charitable giving on state returns even if it’s not on federal returns.
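To make the logic above concrete, here’s a minimal sketch in Python of the simplified federal model I’m describing, ignoring real-world complications like AGI limits, phase-outs, and state returns; the specific numbers (e.g. the standard deduction figure) are illustrative, not tax advice:

```python
def marginal_tax_benefit(donation, other_itemized, standard_deduction, marginal_rate):
    """Estimate the federal tax benefit of a charitable donation.

    You deduct the larger of the standard deduction or your itemized total,
    so a donation only reduces taxable income once itemizing beats the
    standard deduction.
    """
    deduction_without = max(other_itemized, standard_deduction)
    deduction_with = max(other_itemized + donation, standard_deduction)
    return (deduction_with - deduction_without) * marginal_rate

# A small donor with few other deductions gets no federal benefit:
print(marginal_tax_benefit(5_000, 2_000, 13_850, 0.24))   # 0.0
# A large donor past the itemization threshold benefits at their marginal rate:
print(marginal_tax_benefit(30_000, 2_000, 13_850, 0.24))  # 4356.0
```

In the first case the donor’s giving is post-tax whether or not the recipient is a registered charity, which is exactly the situation where giving to causes without charitable tax status or entering a donor lottery costs nothing extra.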
You are, of course, right: effective altruism is an ideology by most definitions of ideology, and you give a persuasive argument of that.
But I also think this misses the most valuable part of saying that it is not one.
I think what Helen wrote resonates with many people because it reflects a sentiment that effective altruism is not about one thing, about having the right politics, about saying the right things, about adopting groupthink, or any of the many other things we associate with ideology. Effective altruism stays away from the worst tribalism of other -isms by being able to continually refresh itself by asking the simple question, “how can I do the most good?”
When we ask this question we don’t get so tied up in what others think, what is expected of us, and what the “right” answer is. We can simply ask, right here and right now, given all that I’ve got, what can I do that will do the most good, as I judge it? Simple as that, we create altruism through our honest intention to consider the good, and effectiveness through our willingness to ask “most?”.
Further, thinking of effective altruism as more question than ideology is valuable on multiple fronts. When I talk to people about EA, I could talk about Singer or utilitarianism or metaethics, and sometimes for some people those topics are the way to get them engaged, but I find most people resonate most with the simple question “how can we do the most good?”. It’s tangible, it’s a question they can ask themselves, and it’s a clear practice of compassion that need not come with any overly strong preconceived notions, and so everyone feels they can ask themselves the question and find an answer that may help make the world better.
When we approach EA this way, even if it doesn’t connect for someone or even if they are confused in ways that make it hard for them to be effective, they still have the option to engage in it positively as a practice that can lead them to more effectiveness and more altruism over time. By contrast, if they think of EA as an ideology that is already set, they see themselves outside it and with no path to get in, and so leave it off as another thing they are not part of or is not a part of them—another identity shard in our atomized world they won’t make part of their multifaceted lives.
And for those who choose not to consider the most good, seeing that there are those who ask this question may seem silly to them, but hardly threatening. An ideology can mean an opposing tribe you have to fight against so your own ideology has the resources to win. A question is just a question, and if a bunch of folks want to spend their time asking a question you think you already know the answer to, so much the better that you can offer them your answer, and so much the less of a threat those silly people pose, wasting time asking a question. EA as question is flexibility and strength and pliancy to overcome those who would oppose and detract from our desire to do more good.
And that I think is the real power of thinking of EA as more question than ideology: it’s a source of strength, power, curiosity, freedom, and alacrity to pursue the most good. Yes, it may be that there is an ideology around EA, and yes that ideology may offer valuable insights into how we answer the question, but so long as we keep the question first and the ideology second, we sustain ourselves with the continually renewed forces of inquiry and compassion.
So, yes, EA may be an ideology, but only by dint of the question that lies at its heart.