Re: more neurons = more valenced consciousness, does the full report address the hidden qualia possibility? (I didn’t notice it at a quick glance.) My sense was that people who argue for more neurons = more valenced consciousness are typically assuming hidden qualia, but your objections involving empirical studies are presumably assuming no hidden qualia.
lukeprog
- Nov 30, 2022, 8:36 PM; 2 points; comment on “Why Neuron Counts Shouldn’t Be Used as Proxies for Moral Weight”
I really appreciate this format and would love to see other inaccurate articles covered in this way (so long as the reviewer is intellectually honest, of course).
I suspect this is because there isn’t a globally credible/legible consensus body generating or validating the forecasts, akin to what the IPCC provides for climate forecasts, which are made over even longer time horizons.
Cool, I might be spending a few weeks in Belgrade sometime next year! I’ll reach out if that ends up happening. (Writing from Dubrovnik now, and I met up with some rationalists/EAs in Zagreb ~1mo ago.)
Re: Shut Up and Divide. I haven’t read the other comments here but…
For me, effective-altruism-like values are mostly second-order, in the sense that a lot of my revealed behavior shows that much of the time I don’t want to help strangers, animals, future people, etc. But I think I “want to want to” help them, and sometimes the more goal-directed, rational side of my brain wins out and I act on my second-order desires, doing something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will MacAskill). But I don’t detect in myself a symmetrical second-order desire to NOT want to help strangers. So that’s one thing “shut up and multiply” has over “shut up and divide,” at least for me.
That said, I realize now that I’m often guilty of ignoring this second-orderness when e.g. making the case for effective altruism. I will often appeal to my interlocutor’s occasional desire to help strangers and suggest they generalize it, but I don’t symmetrically appeal to their clearer and more common disinterest in helping strangers and suggest they generalize THAT. To be more honest and accurate while still making the case for EA, I should be appealing to their second-order desires, though of course that’s a more complicated conversation.
FWIW I generally agree with Eli’s reply here. I think maybe EAG should 2x or 3x in size, but I’d lobby for it to not be fully open.
Not sure it’s worth the effort, but I’d find the charts easier to read if you used a wider variety of colors.
As someone with a fair amount of context on longtermist AI policy-related grantmaking that is and isn’t happening, I’ll just pop in here briefly to say that I broadly disagree with the original post and broadly agree with [abergal’s reply](https://forum.effectivealtruism.org/posts/Xfon9oxyMFv47kFnc/some-concerns-about-policy-work-funding-and-the-long-term?commentId=TEHjaMd9srQtuc2W9).
Thanks, Anna!
FWIW I don’t use “theory of victory” to refer to 95th+ percentile outcomes (plus a theory of how we could plausibly have ended up there). I use it to refer to outcomes where we “succeed / achieve victory,” whether I think that represents the top 5% of outcomes or the top 20% or whatever. So e.g. my theory of victory for climate change would include more likely outcomes than my theory of victory for AI does, because I think succeeding re: AI is less likely.
FWIW, I wouldn’t say I’m “dumb,” but I dropped out of a University of Minnesota counseling psychology undergrad degree and have spent my entire “EA” career (at MIRI then Open Phil) working with people who are mostly very likely smarter than I am, and definitely better-credentialed. And I see plenty of posts on EA-related forums that require background knowledge or quantitative ability that I don’t have, and I mostly just skip those.
Sometimes this makes me insecure, but mostly I’ve been able to just keep repeating to myself something like “Whatever, I’m excited about this idea of helping others as much as possible, I’m able to contribute in various ways despite not being able to understand half of what Paul Christiano says, and other EAs are generally friendly to me.”
A couple things that have been helpful to me: comparative advantage and stoic philosophy.
At some point it would also be cool if there was some kind of regular EA webzine that published only stuff suitable for a general audience, like The Economist or Scientific American but for EA topics.
- Jul 12, 2022, 12:18 PM; 241 points; comment on “EA for dumb people?”
Since this exercise is based on numbers I personally made up, I would like to remind everyone that those numbers are extremely made up and come with many caveats given in the original sources. It would not be that hard to produce numbers more reasonable than mine, at least re: moral weights. (I spent more time on the “probability of consciousness” numbers, though that was years ago and my numbers would probably be different now.)
- Nov 22, 2023, 3:55 PM; 20 points; comment on “Open Phil Should Allocate Most Neartermist Funding to Animal Welfare”
- Nov 28, 2023, 7:08 AM; 16 points; comment on “Open Phil Should Allocate Most Neartermist Funding to Animal Welfare”
Despite ample need for materials scientists in pandemic prevention, electrical engineers in climate change, civil engineers in civilisational resilience, and bioengineers in alternative proteins, EA has not yet built a community fostering the talent needed to meet these needs.
Also engineers who work on AI hardware, e.g. to help develop the technologies and processes needed to implement most compute governance ideas!
Very exciting!
+1 to the question. I tried to figure this out a couple of years ago, and all the footnotes and citations kept bottoming out without providing much information.
Thanks for this! I looked into this further and tweaked the final paragraph of the post and its footnote as a result.
Thank you for everything you’re doing!
Yeah, bummer, not happy about this.
How many independent or semi-independent abolitionist movements were there around the world during the period of global abolition, vs. one big one that started with Quakers+Britain and then was spread around the world primarily by Europeans? (E.g. see footnote 82 here.)