You have a set amount of “weirdness points”. Spend them wisely.

I’ve heard of the concept of “weirdness points” many times before, but after a bit of searching I can’t find a definitive post describing the concept, so I’ve decided to make one. As a disclaimer, I don’t think the evidence backing this post is all that strong and I am skeptical, but I do think it’s strong enough to be worth considering, and I’m probably going to make some minor life changes based on it.


Chances are, if you’re reading this post, you’re a bit weird in some way.

No offense, of course. In fact, I actually mean it as a compliment. Weirdness is incredibly important. If people weren’t willing to deviate from society and hold weird beliefs, we wouldn’t have had the important social movements that ended slavery and pushed back against racism, that created democracy, that expanded social roles for women, and that made the world a better place in numerous other ways.

Many of the things we now take for granted as part of what makes our society great were once… weird.

Joseph Overton theorized that policy develops through six stages: unthinkable, then radical, then acceptable, then sensible, then popular, then actual policy. We can see this happening with many policies: same-sex marriage is currently making its way from popular to actual policy, but not too long ago it was merely acceptable, and not too long before that it was pretty radical.

Some good ideas are currently in the radical range. Effective altruism itself is a collection of beliefs that typical people would consider pretty radical. Many people think donating 3% of their income is a lot, let alone the 10% that Giving What We Can asks for, or the 50%+ that some people in the community give.

And that’s not all. Others suggest that everyone become vegetarian, advocate for open borders and/or a universal basic income, abolish gendered language, put more resources into mitigating existential risk, focus on research into Friendly AI, pursue cryonics and a cure for death, and so on.

While many of these ideas might make the world a better place if made into policy, all of these ideas are pretty weird.

Weirdness, of course, comes at a cost: people take weird opinions less seriously.

The absurdity heuristic is a real bias that people, even you, have. If an idea sounds weird to you, you’re less likely to believe it, even when there’s overwhelming evidence. Social proof matters too: if fewer people believe something, others will be less likely to believe it. And don’t forget the halo effect: if one part of you seems weird, the rest of you will seem weird too!

(Update: apparently this concept is, itself, already known to social psychology as idiosyncrasy credits. Thanks, Mr. Commenter!)

...But we can use this knowledge to our advantage. The halo effect can work in reverse: if we’re normal in many ways, our weird beliefs will seem more normal too. If we treat weirdness as a kind of currency that we have in limited supply, we can spend it wisely without looking like cranks.

All of this leads to the following actionable principles:

Recognize that you only have a few “weirdness points” to spend. Trying to convince all your friends to donate 50% of their income to MIRI, become vegan, get a cryonics plan, and demand open borders will be met with a lot of resistance. But, I hypothesize, if you pick one of these ideas and push it, you’ll have a lot more success.

Spend your weirdness points effectively. Perhaps it’s really important that people advocate for open borders. But perhaps getting people to donate to developing-world health would do more good overall. In that case, I’d focus on moving donations to the developing world and leave open borders alone, even though it is really important. Triage your weirdness the same way you would triage your donations.

Clean up and look good. Lookism is a problem in society, and I wish people could look “weird” and still be socially acceptable. But if you’re a guy wearing a dress in public, or a punk-rocker vegan advocate, recognize that you’re spending your weirdness points fighting lookism, which leaves fewer weirdness points to spend promoting veganism or something else.

Advocate for more “normal” policies that are almost as good. Of course, allocating your “weirdness points” to a few issues doesn’t mean you have to stop advocating for other important issues; just consider being less weird about it. Perhaps universal basic income truly would be a very effective policy for helping the poor in the United States. But reforming the earned income tax credit and relaxing zoning laws would also do a lot to help the poor in the US, and those suggestions aren’t weird.

Use the foot-in-the-door technique and the door-in-the-face technique. The foot-in-the-door technique starts with a small ask and gradually builds it up, such as suggesting people donate a little bit effectively and then gradually getting them to take the Giving What We Can Pledge. The door-in-the-face technique makes a big ask (e.g., join Giving What We Can) and then retreats to a smaller one, like the Life You Can Save pledge or Try Out Giving.

Reconsider effective altruism’s clustering of beliefs. Right now, effective altruism is associated strongly with donating a lot of money and donating effectively, and less strongly with impactful career choice, veganism, and existential risk. Of course, I’m not saying we should drop any of these memes completely. But maybe EA should disconnect a bit more and compartmentalize: for example, leaving AI risk to MIRI and not talking about it much on, say, 80,000 Hours. And maybe instead of asking people both to give more AND to give more effectively, we could focus more exclusively on asking people to give what they already donate more effectively.

Evaluate the above with more research. While I think the evidence base behind this is decent, it’s not great and I haven’t spent much time developing it. I think we should look into this more, with a review of the relevant literature and some careful, targeted market research on the individual beliefs within effective altruism (how weird are they?) and how they should be connected or left disconnected. Maybe this has already been done some?


Also discussed on LessWrong and on the EA Facebook group.

