I have my own sense of what EA is doing all wrong
Have you written that up anywhere? Would be interesting to read.
Thanks! Now we need to hide the evidence to avert EA having no billionaire funders.
Fair.
TBH, this has put me off of utilitarianism somewhat. Those silly textbook counter-examples to utilitarianism don’t look quite so silly now.
Nice.
Have you tried grantee-reachout@googlegroups.com?
I have an Anki deck devoted to memories I want to savour.
I really like this idea. But surely the specialness wears off over time? Do you have a lot of churn?
Thanks! I have added your contribution here.
Thanks!
You are right about SBF's personal bankruptcy. I was confused.
The felony conviction comes from a Manifold market embedded within the Nathan post. I have added a link directly to Manifold to make this clearer.
Thanks!
I have incorporated information from Molly’s post.
Thanks! I have added this to the “Where can I get help” section.
Can you be more specific? Which disabilities? Blindness? Colour blindness? Is this like an HTML issue? What exactly needs to be changed? I’m independently working on EA communications and might possibly be able to help in some capacity.
“interpersonal harm” link broken
why?
May be of interest:
I redid that funding plot (same data) in a way that is hopefully clearer.
TL;DR:
Reasons to think that “neuron count” correlates with “moral weight”:
Neuron counts correlate with our intuitions of moral weights
“Pains, for example, would seem to minimally require at least some representation of the body in space, some ability to quantify intensity, and some connections to behavioral responses, all of which require a certain degree of processing power.”
“There are studies that show increased volume of brain regions correlated with valenced experience, such as a study showing that cortical thickness in a particular region increased along with pain sensitivity.” (But the opposite is also true. See 6. below.)
Reasons to think that “neuron count” does NOT correlate with “moral weight”:
There’s more to information processing capacity than neuron count. There’s also:
Number of neural connections (synapses)
Distance between neurons (more distance → more latency)
Conduction velocity of neurons
Neuron refractory period (“rest time” between neuron activations)
“There’s no consensus among people who study general intelligence across species that neuron counts correlate with intelligence”
“It seems conceptually possible to increase intelligence without increasing the intensity of experience”
Within humans, we don’t think that more intelligence implies more moral weight. We don’t generally give less moral weight to children, the elderly, or the cognitively impaired.
Top-down cognitive influences on pain suggest that intelligence may actually mitigate suffering.
There are “studies showing that increased pain is correlated with decreased brain volume in areas associated with pain”
Hundreds of brain-imaging experiments haven’t uncovered any simple relationship between the number of neurons firing and the “amount of pain”
Bees have small brains, but have “cognitive flexibility, cross-modal recognition of objects, and play behavior”
There are competing ideas for correlates of moral weight/consciousness/self-awareness:
Thanks for feedback.
Not sure I agree with the “TL” part haha
Well, yeah. Maybe. It’s also about making the structure more legible.
there are some additional arguments in the report that aren’t included in the summary.
Anything specific I should look at?
oh that guy
Thanks for the feedback!
I guess it would’ve made more sense to do multiple low-effort illustrations rather than one high-effort illustration. But I really wanted to do the Spongebob reference...