See also Enthea’s post (a) on the paper.
+1 to “The Media Training Bible” being good.
+1 to doing something with Sci-Hub.
Sci-Hub has had a huge positive impact. Finding ways to support it / make it more legal / defend it from rent-seeking academic publishers would be great.
We largely chose not to do this because we mostly just agree with what Luke wrote and didn’t think we would be able to meaningfully improve upon it.
fwiw I found your comment really helpful & I think the RP content would benefit from including a sketch like this.
Got it, thanks!
Shameless plug for my essay on cluelessness: 1, 2, 3, 4
I give some examples here; the “stratospheric aerosol injection to blunt impacts of climate change” example is an x-risk reduction one.
It’s pretty straightforward to tell a story about how any well-intentioned action could have unintended, negative consequences in the long run. Lots of sci-fi uses this premise.
This doesn’t mean the stories are always plausible (though note that “plausibility” here is usually assessed by intuition), and it’s not the same as generating a comprehensive catalog of stories about how an action could go (the state space here is too large to generate such a catalog).
I guess I’m desiring more of a common vocabulary here, maybe something like “here are some open questions about consciousness that are cruxy, here’s where [our organization] ended up on each of those questions, here are some things that could change our mind.”
Luke did a good job of this in his report. From a quick look at Rethink Priorities’ consciousness stuff, I’m not sure what they concluded about the important open questions. (e.g. Where do they land on IIT? Where do they land on panpsychism? What premises would I have to hold to agree with their conclusions?)
Thanks for highlighting; I had only thought a little about RP’s work on consciousness. I’ll take a closer look. (This essay seems especially relevant.)
I’d love to see an independent dive into consciousness & moral patienthood.
Luke Muehlhauser did a thorough report (a) on this a couple years ago. As far as I know, that work is informing a lot of EA prioritization. It’s quite opinionated, and I haven’t seen too much discussion of its conclusions (there’s some in the AMA; the topic definitely warrants more).
Consciousness and its relationship to morality is complicated enough & important enough that an independent pass seems high value.
Potential entry point: Integrated Information Theory is currently pretty prominent in neuroscience; I’d love to see an EA steelman of it. (Luke on IIT, after giving a brief explainer: “let me jump straight to my reservations about IIT.”)
Also would be great to see an EA steelman of panpsychism, which is considered plausible by a bunch of philosophers and some scientists.
Wow, I didn’t know about this. Thank you for drawing attention to it.
The EA funding map I’d most want to see would focus on current funding volumes and potential funding volumes:
Giant circle for Open Phil
Small circles for Jaan, Thiel, and Ben Delo (and maybe Vitalik?)
Cloud of tiny circles representing everyone else
Yeah, at a glance the current presentation really makes AI safety look like Patrick’s empire.
Ah, Oli beat me to it: Survival and Flourishing Fund grant applications open until October 4th
Update: looks like the Survival and Flourishing Fund (a) is running some of Jaan’s organizational grant-making now.
fwiw when I donate to many charities in the same cycle, a lot of the reason is for the fuzzies. Probably a similar dynamic is at play for lots of other people too.
Wow, this is a great point.
The standard critique that “academic knowledge generation usually isn’t tooled towards focusing on the most important stuff” probably applies here.
Here’s a preprint (archive).