(b), perhaps with a dash of (a) too
… I’m not sure this is such a bad setup overall.
Yeah, it doesn’t seem terrible. It probably misses a lot of upside, though.
(This might be the closest thing I’ve seen to that so far.)
Whoa, I didn’t know about this one. Thanks for the link!
Thanks, I think I overstated this in the OP (added a disclaimer noting this). I still think there’s a thing here but probably not to the degree I was holding.
In particular, it felt strange that there wasn’t much engagement with the trauma argument or the moral uncertainty / moral hedging argument (“psychedelics are plausibly promising under both longtermist & short-termist views, so the case for psychedelics is more robust overall”).
There was also basically no engagement with the studies I pointed to.
All of this felt strange (and still feels strange), though I now think I was too strong in the OP.
You asked for the best arguments against psychedelics, not for counter-arguments to your specific arguments in favour, so this doesn’t seem that surprising.
Fair enough. I think I felt surprised because I’ve spent a long time thinking about this & tried to give the best case I could in support, and the submissions for “best case against” didn’t seem to engage much with my “best case for.”
1. I like the originality of it. (It’s not just saying “the evidence base isn’t strong enough!”)
2. The objection better accords with my current worldview.
If you instead look at CFAR as a funnel for people working on AI risk, the “evidence base” seems clearer.
Do you know if there are stats on this, somewhere?
e.g. Out of X workshop participants in 2016, Y are now working on AI risk.
I agree, if for CFAR you’re looking at the metric of how rational its alumni are. If you instead look at CFAR as a funnel for people working on AI risk, the “evidence base” seems clearer.
Sure, I was pointing to the evidence base for the techniques taught by CFAR & other rationality training programs.
CFAR could be effective at recruiting people into AI risk due to Schelling-point dynamics, without the particular techniques it teaches being efficacious. (I’m not sure that’s true, just pointing out an orthogonality here.)
Why are popularity-contest dynamics harmful, precisely?
A similar sort of thing is a big part of the reason why Eliezer had difficulty advocating for AI safety, back in the 2000s.
Easy money :-)
New this month: Two users who have a recent history of strong posts and comments (Larks and Khorton)
Could you say more about the process by which Larks & Khorton were added to the roster of people who have a vote?
(I’m pretty sure I’ve been commenting & posting at roughly the same cadence as them. No one approached me about this, so I’m curious about the process here.)
My sense is that there’s a lot of causal / top-down planning in EA.
My quick thought here is that EA currently has a very strong “evaluative” function (i.e. it’s strong at assessing the pros & cons of existing ideas), and a weak “generative” function (i.e. it’s weak at coming up with new ideas).
I’m bullish on increasing EA’s generativity at the current margin.
Thanks, this is helpful.
I am talking about obligations in this Introduction (rather than ‘opportunities’)
Could you say a bit more about why you chose to go with the ‘obligations’ framing?
From my quick read of your Norton Introduction, it seems like you’re arguing that moral realism is a prerequisite for EA. (Words like “duty” and “command” make me think this.)
Is that right?
Thanks, super helpful!
Do you happen to know how promising it might be to work on new methods for discovering and tracking objects like Damocloids?
An unknown number of those guys being out there is scary :-/
Thanks! I haven’t thought about this enough to say with confidence, but it seems plausible that many-worlds implies determinism, in which case this is really a question about determinism (i.e. about living in a deterministic system).