
# MichaelStJules

Karma: 1,859

Animal welfare research intern at Charity Entrepreneurship and organizer for Effective Altruism Waterloo.

Earned to give for 2 years doing deep learning at a startup, donating to animal charities, and now trying to get into effective animal advocacy research. Curious about s-risks.

Antispeciesist, antifrustrationist, prioritarian, consequentialist. Also, I like math and ethics.

• For anyone who might doubt that clock speed should have a multiplying effect (assuming linear/additive aggregation): if it didn't, then I think how good it would be to help another human being would depend on how fast they are moving relative to you, and whether they are in an area of greater or lower gravitational "force" than you, due to special and general relativity. That is, if they are in relative motion or under stronger gravitational effects, time passes more slowly for them from your point of view, i.e. their clock speed is lower, but they also live longer. Relative motion goes both ways: time passes more slowly for you from their point of view. If you don't adjust for clock speed by multiplying, there are two hypothetical identical humans in different frames of reference (relative motion or gravitational potential or acceleration; one frame of reference can be your own) with identical experiences and lives from their own points of view who should receive different moral weights from your point of view. That seems pretty absurd to me.
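A minimal numerical sketch of the invariance argued for above, using hypothetical numbers and only special relativity: from your frame, the other being's clock speed is divided by the Lorentz factor γ while their lifetime is multiplied by γ, so the two adjustments cancel under multiplication.

```python
import math

C = 299_792_458.0  # speed of light in m/s


def gamma(v):
    """Lorentz factor for relative speed v (special relativity)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)


# Hypothetical being, described in its own (proper) frame:
proper_clock_speed = 1.0   # subjective experience per unit of proper time
proper_lifetime = 80.0     # years of proper time

v = 0.6 * C                # speed of its frame relative to ours
g = gamma(v)               # = 1.25 for v = 0.6c

# From our frame: its clock runs slower by a factor of gamma,
# but its life spans gamma times more of our coordinate time.
observed_clock_speed = proper_clock_speed / g
observed_lifetime = proper_lifetime * g

# Multiplying welfare by clock speed makes total experience frame-invariant:
total_experience = observed_clock_speed * observed_lifetime
assert abs(total_experience - proper_clock_speed * proper_lifetime) < 1e-9
print(round(total_experience, 9))  # → 80.0
```

The same cancellation holds for any v < c, which is why the multiplicative adjustment avoids assigning frame-dependent moral weights to identical lives.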

• If an objective list theory is true, couldn't it be the case that there are kinds of goods unavailable to us that are available to some other nonhuman animals? Or that they are available to us, but most of us don't appreciate them, so they aren't recognized as goods? How could we find out? Are objective list theories therefore doomed to anthropocentrism and speciesism? How do objective list theories argue that something is or isn't one of these goods?

• Thanks for the answer. Makes sense!

I'm also still pretty confused about why the S&P 500 is so high now.

Some possible insight: the NASDAQ is doing even better, at its all-time high, and wasn't hit as hard initially, and the equal-weight S&P 500 is doing worse than the regular S&P 500 (which weights by market cap). This tells me that disproportionately large companies (and tech companies) are still growing pretty fast. Some of these companies may even have benefited in some ways, like Amazon (online shopping and streaming) and Netflix (streaming).

20% of the S&P 500 is Microsoft, Apple, Amazon, Facebook and Google. Relative to their pre-crash February peaks, only Google is still down; the rest are up 5-15%, except Amazon (4% of the S&P 500), which is up 40%!
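To see how cap-weighting drives this, here is a toy comparison with made-up market caps and returns (purely illustrative, not data for the actual constituents): when a few giant constituents rally, the cap-weighted index can rise even while most constituents, and hence the equal-weighted index, fall.

```python
# Hypothetical 10-stock index: made-up market caps (in $B) and returns.
caps = [1500, 1200, 1100, 700, 500, 100, 80, 60, 40, 20]
rets = [0.15, 0.10, 0.40, 0.05, 0.08, -0.20, -0.15, -0.25, -0.10, -0.30]

total_cap = sum(caps)

# Cap-weighted return: each stock contributes in proportion to its size.
cap_weighted = sum(c / total_cap * r for c, r in zip(caps, rets))

# Equal-weighted return: every stock contributes 1/N.
equal_weighted = sum(rets) / len(rets)

print(f"cap-weighted:   {cap_weighted:+.1%}")    # positive: the big names rallied
print(f"equal-weighted: {equal_weighted:+.1%}")  # negative: most names fell
```

In this toy example the five largest names hold about 94% of the weight, so their gains dominate the cap-weighted return, while the equal-weighted index reflects the typical (falling) constituent.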

• Are you using or do you plan to use your forecasting skills for investing?

• Maybe more but smaller prizes? I think karma works as a better signal to internalize, since the feedback is more consistent and granular, but if more prizes were given out, this gap would be smaller. Maybe reduce the post prizes to $150 to $300 and give out more of them and/or more comment prizes?

If someone is working on a post, I don't think $200 vs $500 makes much difference in motivation beyond the karma on the post and recognition with the prize, but $0 vs $200 could (although I don't know that either would make much difference for me beyond the karma).

# [Question] Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering?

22 Jun 2020 16:41 UTC · 8 points · 9 comments · 1 min read · EA link

• Does it make sense to combine leveraged standard stock ETFs (UPRO, TQQQ, SOXL, TECL) with leveraged bonds (TMF, TYD; these are US treasury bonds) and leveraged gold (UGLD)? The bonds and gold can reduce your risk and maximum drawdown, although I suppose your overall long-term returns will be lower, while still higher than the same portfolio without leverage. UPRO lost 75% from February 19 to the bottom of the pandemic crash, and is still down 50%. If you're relatively risk-neutral and investing long-term with this part of your portfolio, maybe it makes sense to just skip the bonds and gold (in this part of your portfolio)?

• It reduces their value compared to a theoretical benchmark, not compared to investing in the same things without leverage over the long run, although the gap is smaller than you'd expect, and you're taking on more risk. Also, there are management fees with leveraged ETFs, too. Not rebalancing frequently seems riskier if you're using leverage, since you can go negative.

• One consideration: how do your choices affect your chances of getting elected? Have you considered non-political work for the Australian government? Maybe policy work related to your PhD? Is that compatible with running (and having run) as a political candidate? EDIT: Ah, I see the Australian Space Agency in your list.
Maybe also the UK Civil Service (as a Commonwealth citizen), but that might actually hurt your chances of getting elected in Australia if it signals "allegiance" to another country. Have you ruled out earning to give? What about working at a plant-based or cultured substitutes company? See GFI's list.

• Just so you know, FRI is now CLR. Another EA org doing work for animals is Charity Entrepreneurship. You could also start a charity through their incubation program, although doing so might not leave you much time to run with the Animal Justice Party.

• Petrov's family received the award a bit over a year after he passed away, though, in this case. Of course, I'd imagine he would have wanted his family to live in comfort, too, or maybe the decision was made before his passing.

• My thought is that we assume humans have the same capacity on average, because while there might be differences, we don't know which way they'll go, so they should 'wash out' as statistical noise.

In another comment, I mentioned that I think this is actually only fair to assume while we don't know much about the individual humans. We could break this symmetry pretty easily. FWIW, the analogue to my response here would be to say we can expect all chickens to have approximately the same capacity as each other, even if individual chickens differ.

The claim isn't about humans per se, but about similarities borne out of genetics. Since humans also differ from each other genetically, isn't the distinction here just a matter of degree?

• Furthermore, besides equality*, GDoT* and FDoT being pretty contrived, the dominance principles discussed in Tarsney's paper are all pretty weak: to imply that we should choose x over y, we must have exactly 0 credence in all theories that imply we should choose y over x.
How can we justify assigning exactly 0 credence to any specific moral claim and positive credence to others? If we can't, shouldn't we assign them all positive credence? How do we rule out ethical egoism? How do we rule out the possibility that involuntary suffering is actually good (or a specific theory which says to maximize aggregate involuntary suffering)? If we can't rule anything out, these principles can never actually be applied, and the wager fails. (This sets aside the problem of more than countably many mutually exclusive claims: they can't all be assigned positive credence, since then the sum of credences would exceed 1, so at most countably many can be.)

We also have reason to believe that a moral parliament approach is wrong, since it ignores the relative strengths of claims across different theories, and as far as I can tell, there's no good way to incorporate the relative strengths of claims between theories, either, so it doesn't seem like there's any good way to deal with this problem. And again, there's no convincing positive reason to choose any such approach at all anyway, rather than reject them all. Maybe you ought to assign them all positive credence (and push the problem up a level), but this says nothing about how much, or why I shouldn't assign equal or more credence to the "exact opposite" principles, e.g. if I have more credence in x > y than in y > x, then I should choose y over x. Furthermore, Tarsney points out that GDoT* undermines itself for at least one particular form of nihilism in section 2.2.

• One response to Infectiousness is that expected value is derivative of more fundamental rationality axioms plus certain non-rational assumptions, and those rationality axioms on their own can still work fine to lead to the wager if used directly (similar to Huemer's argument).
From Rejecting Supererogationism by Christian Tarsney:

Strengthened Genuine Dominance over Theories (GDoT*) – If some theories in which you have credence give you subjective reason to choose x over y, and all other theories in which you have credence give you equal* subjective reason to choose x as to choose y, then, rationally, you should choose x over y.

and

Final Dominance over Theories (FDoT) – If (i) every theory in which an agent A has positive credence implies that, conditional on her choosing option O, she has equal* or greater subjective reason to choose O as to choose P, (ii) one or more theories in which she has positive credence imply that, conditional on her choosing O, she has greater subjective reason to choose O than to choose P, and (iii) one or more theories in which she has positive credence imply that, conditional on her choosing P, she has greater subjective reason to choose O than to choose P, then A is rationally prohibited from choosing P.

Here, "equal*" is defined this way: '"x is equally as F as y" means that [i] x is not Fer than y, and [ii] y is not Fer than x, and [iii] anything that is Fer than y is also Fer than x, and [iv] y is Fer than anything x is Fer than' (Broome, 1997, p. 72).

If nihilism is true, then all four clauses in Broome's definition are trivially satisfied for any x and y and any evaluative property F (e.g. 'good,' 'right,' 'choiceworthy,' 'supported by objective/subjective reasons'): if nothing is better than anything else, then x is not better than y, y is not better than x, and since neither x nor y is better than anything, it is vacuously true that for anything either x or y is better than, the other is better as well.
Furthermore, by virtue of these last two clauses, Broome's definition distinguishes (as Broome intends it to) between equality and other relations like parity and incomparability in the context of non-nihilistic theories.

Of course, why should we accept GDoT* or FDoT or any kind of rationality/dominance axioms in the first place?

• Combination effects seem challenging, as you point out. I think it's often taken for granted that weighting things should be done linearly, but there really isn't any reason to believe this would approximate the moral truth, or what we'd want to care about upon reflection in this domain, although it's useful for its simplicity, interpretability and transparency.

Another specific challenge is whether we should first apply some (usually monotonic) transformation to a feature that comes in degrees. For example, if the degree of some feature matters, say neuron count or neuron count in a particular part of the brain, should we use the raw count, some transformation of it (e.g. its logarithm), or something else? There are infinitely many degrees of freedom here.

• You might also think you can generalize between you and me using a symmetry argument, but this is only by willful ignorance. We could learn more about each other in a way that would suggest one of us experiences certain things more intensely than the other (e.g. based on the sizes of the parts of our brains used for processing emotion, or our personalities or experiences), and ignoring these differences would be the same philosophically as ignoring the differences between humans and chickens. We might learn differences that go in each direction for you and me, resulting in a complex moral cluelessness, but the same can actually happen with nonhuman animals, too: there are reasons to believe some nonhuman animals could typically experience some things more intensely than us, e.g.
our better awareness of the context around an experience can reduce its intensity, and some animals have faster processing times. It's plausible enough to me that dogs have higher highs in practice than me (although maybe I'm capable of higher highs; they just don't happen).

• I share your overall pessimism about arriving at an answer that will actually be philosophically satisfying, but I do think research in this area is still important and useful. Our ultimately subjective judgements can be better informed.

We assume you and I have the same capacity for happiness.

I think the same problem applies here too, because of the uniqueness of humans (our nervous systems, the density of nerve endings, the thickness of our skin, etc.), although it's much more reasonable to generalize from one human to another than between species, because of similarity. Still, I don't think it's actually reasonable by the same standard; I might as well be a talking alien. And we have no way of objectively quantifying how reasonable this approximation is, or whether one human's welfare capacity is greater or lower than another's.

That being said, I don't think you always need this assumption for humans anyway, e.g. if you're randomly sampling humans to survey from the same distribution that you're generalizing to (or sampling humans to generalize to), since the estimator can be chosen to be statistically unbiased, regardless of how well it measures what we actually care about. (However, in practice, the distributions often aren't the same, and we know of generalizability issues due to that, e.g. WEIRD. You can adjust/match/control for certain characteristics, but you can never really eliminate all bias.
And for something subjective like welfare, we can't bound the bias from the underlying concept we care about, either, even if it were possible to bound the statistical bias, for the same reason we can't bound how different my experience of a toe stub is from yours.) On the other hand, we can't do this with nonhuman animals, since we're sampling from humans and generalizing beyond humans. The distributions are definitely not the same.

• I wonder what the point of giving them $50K is. Are the kind of people who would do these kinds of things motivated by this kind of money? What's the extra benefit of the money over just cashless recognition? Extra publicity? Are there cheaper ways to get that extra publicity? What about naming things after them?

Or do we need to give them cash to get them to accept the award, or for the public not to look down on it?

• Many of the arguments are of the form "philosopher X thinks that Y is true", but without appropriate arguments for Y.

I'd appreciate some examples (or just one) of this. :-)

I think 3.2 Intra- and Interpersonal Claims and the discussion of Parfit's compensation principle, Mill's harm principle and Shiffrin's consent principle just before, in 3.1, are examples. You don't discuss how they defend these views/principles.

(I only started reading last night, and this is about where I am now.)