Do you have any recommendations on how to avoid wasting time updating the current activity on Toggl?
Max_Carpendale
Another reason it might make sense to ignore flow-through effects is when you don’t know whether they would be positive or negative. If you were absolutely unsure about the flow-through effects, and figuring them out seemed impossible, then it seems right that they would balance out and that you can expect zero value from them. Insofar as this is the case, you should ignore them.
I think it’s somewhat stronger than “doing work on one philosophical question is relevant to all other philosophical questions.”
I guess if you were particularly sceptical about the possibility of digital sentience then you might focus on things like the Chinese room thought experiment, and that wouldn’t have that much overlap with invertebrate sentience research. I’m relatively confident that digital sentience is possible so I wasn’t really thinking about that when I made the claim that there is substantial overlap in all sentience research.
Some ways in which I think there is overlap are that looking at different potential cases of sentience can give us insight into which features give the best evidence of sentience. For example, many people think that mirror self-recognition is somehow important to sentience, but reflecting on the fact that you can specifically design a robot to pass something like a mirror test can give you perspective on which aspects, if any, of other tests are actually suggestive of sentience.
Getting a better idea of what sentience is and which theories of it are most plausible is also useful for assessing sentience in any entity. One way of getting a better idea is to research cases of sentience that we are more confident in, such as humans and, to a lesser extent, other vertebrates.
Thanks for the link! I’m a pretty big fan of that book.
I think you may be right that I should pivot more in that direction.
Research on degrees of sentience (including if that idea makes sense) and what degree of sentience different invertebrates have might still be relevant despite the argument that you’re quoting.
Thank you!
Great, I hadn’t noticed that article. Reading it now.
Re: 1) I’m not sure. I would say that the number of people who might be considered experts on the subject of invertebrate consciousness is very low.
I can’t remember reading anything by these experts about who they consider to be the leading experts on the subject.
Re: 2) I haven’t talked to him about it yet, but may do so at some point in the future. I doubt that anyone else has.
Thanks Jamie!
Nice article. Thanks for the link.
I don’t think I agree with your claim in the article that the existence of degrees of sentience has been scientifically demonstrated. Is there a source you have in mind for that? I’ve been looking at the literature on the topic, and it seems like the arguments that degrees of sentience do exist are based in philosophy, and none of them are that strong.
I guess the reason you are using sentientism rather than hedonistic utilitarianism is because you think the term sounds better/has a better framing?
This might not seem like the most natural post for the EA forum, but I think it makes sense given the number of EAs I know who have some problems with repetitive strain injuries.
Thanks! Hopefully it’s not too derivative of your work. I want to look into this more in the future and hopefully be able to say some more novel and insightful things.
I mainly relied on the FAO sourcebook on edible insects which claims higher efficiency for crickets. It seems like most articles on the subject claim higher efficiency, but I haven’t looked into it deeply enough to be able to determine that. I should probably have just relied on your article on that subject.
Yeah, I’m not sure about freezing. I mostly think we just don’t know enough about it and the Wikipedia page seems pretty sceptical about freezing as a method of killing.
Sometimes when it’s cold and I’m trying to sleep (like when I’m camping) I will manage a sort of sleep state, but one where I’m still feeling an unpleasant amount of cold. I guess I imagine that an insect’s response to freezing could be like that for some portion of the time.
I guess it wouldn’t make sense for the nervous system to send “avoid this” messages to the animal while the animal wasn’t able to avoid the situation because it was too cold, but the nervous system can’t get everything right in all circumstances.
Thank you!
Yes, I remember hearing in the 80K podcast about how you prefer it, and I was quite interested in that. I still find it quite frustrating to use sometimes because of crashes and software incompatibility, but I guess if you can choose when to use Dragon and when to use a keyboard, you can just stop using it when it’s being problematic.
I’m a bit reserved in my recommendation of it because I worry that it might take people too long to become good enough at it. People might either recover or quit in frustration before they reach a competitive speed.
Yeah, fair enough. I wish you good luck with your group and project :)
I think he may be answering the question in terms of sensory pain rather than affective pain. I was mainly interested in affective pain; I probably should have specified that in the question. In terms of sensory pain, his answer seems right to me, because it makes sense that more nociceptors would give you a richer and more complex sensory pain. But it doesn’t make sense in terms of affective pain.
I agree with Siebe that he is using ‘suffering’ in a nonstandard way. He seems to be using ‘pain’ to refer to ‘acute pain’ and ‘suffering’ to refer to ‘long-lasting, non-acute pain.’
Yeah, I think this is a worry for his view. I do also personally assign a somewhat higher likelihood to invertebrate consciousness than modern AI consciousness because of evolutionary relatedness, greater structural homology, and because they probably satisfy more of the criteria for consciousness that I would use.
You might be interested in my next interview on this subject which will be with someone who discusses modern AI and robotics findings in the context of invertebrate consciousness, and comes to a more sceptical conclusion based on that.
Haha, oh, I didn’t know you wrote that page :) That’s good enough for the future.
My impression is that experts are divided as to whether or not insects have phenomenal consciousness. Some people seem to have strong intuitions one way, and others have strong intuitions the other way. Ultimately I don’t think we know enough about the subject for anyone to be too confident one way or the other, and given this uncertainty we should take precautions.
I didn’t think it was worth getting into the question of how likely it is that insects are conscious because it’s something that I’ve written about extensively elsewhere (mostly in a forthcoming report). And there are other posts on the question. In hindsight, maybe a paragraph on it would have been good.
It’s true that their minds are more divergent from ours, but I think that tends to mean there is more uncertainty about what they feel stress in response to, not that they feel less environmentally induced stress. Also, as I say in the post, the uncertainty makes it harder to improve their welfare.
I probably should have paid more attention to arguments about how they could have net positive welfare to make the post more balanced. Though I have seen a real bias in favour of eating insects (at least outside the EA community), so I still see this post as contributing to a more balanced discussion of the issue. And for the reasons I give in the post, I still view it as unlikely that they have net positive welfare.
Thanks, I fixed those typos.
I guess my basic reason for thinking so is that there is around a six-order-of-magnitude difference between how much meat a cow provides and how much meat a cricket provides. But if you think about which attributes provide evidence of consciousness, I don’t think you’ll find that cows have vastly more of these than crickets do, and cricket consciousness seems like a reasonable hypothesis.
Hi Tofan, I’m glad you got relief from that! That must be amazing for you! Sorry if this comment is a bit caustic; in general I’m critical of, though undecided about, Sarno. I tried it and it hasn’t worked for me. I’m definitely aware of it, and I’ve read Sarno’s books. Sarno insists that you might have to fully believe his theory to get the results, and it’s possible I haven’t succeeded in doing that, though I have ‘tried on’ the hypothesis. I’ve also tried out the “Curable” app and found that they advocate a less extreme and more plausible version of the psychosomatic pain hypothesis than Sarno does.
I was planning on adding a section on investigating the possibility that your pain is psychosomatic, but I’ve left that out for now because I didn’t feel I had a settled opinion on the subject or knew what to recommend.
Sarno says some things that I view as deeply problematic, like when he says that lifting technique doesn’t matter or when he recommends discontinuing physical therapies. His theory that unconscious rage is responsible for chronic pain is also Freudian, and Freud is quite discredited.
My leading hypothesis about why he gets the results that he does in some cases is that his treatment gets people to return to activity and helps remove the psychological contribution to pain. Some people are probably actually recovered enough that returning to activity is fine and even helpful. I also imagine that for a lot of people (myself included), the secondary psychological reaction to the pain (such as viewing yourself as crippled and feeling helpless) is more significant than the pain itself.
What makes you think there is more scientific backing to the TMS theory than the RSI theory? It seems to be true that there is a lot that isn’t understood about how chronic pain and RSI work, but TMS seems to me even more mysterious.
I like Paul Ingraham’s analysis of Sarno here.
I’m Max and I’m from Vancouver, Canada. I am interested in far-future causes and animal causes. I think that in terms of improving near-term well-being, animal causes dominate in expectation, but I am still unsure whether they have strong enough long-run effects or whether more specifically far-future causes would be better.
I’m finishing up a philosophy BA right now and I am still trying to decide what to do after. It seems like my comparative advantage is definitely in philosophy or something else that uses verbal reasoning, but I’m not sure if the options are good enough in those areas. I am also interning at ACE and I may end up continuing to do direct work for EA charities if it seems like I would be good enough at it.
I’m looking to Skype with people about productivity, career choice, and cause selection, and to expand my comfort zone, so contact me if you’d be interested in Skyping as well.