The AngelList link is broken by that trailing ‘.’; without it, the link works: https://angel.co/l/2vTgdS
- valence · 8 Sep 2021 15:31 UTC · 7 points · 0 ∶ 0 · in reply to: Abby Babby’s comment on: A Primer on the Symmetry Theory of Valence
> 4. Why can’t you just ask people if they’re suffering? What’s the value of quantifying the degree of their suffering using harmonic coherence?
Why can’t you just observe that objects fall towards the ground? What’s the value of quantifying the degree of their falling using laws of motion?
How much do newborns suffer? Whales? Ants?
- valence · 8 Sep 2021 4:43 UTC · 17 points · 0 ∶ 0 · in reply to: Holly Elmore ⏸️ 🔸’s comment on: A Primer on the Symmetry Theory of Valence
I agree that honesty is more important than weirdness. Maybe I’m being taken, but I see miscommunication and not dishonesty from QRI.
I am not sure what an appropriate standard of rigor is for a preparadigmatic area. I would welcome more qualifiers and softer claims.
- valence · 8 Sep 2021 1:56 UTC · 7 points · 0 ∶ 0 · in reply to: MichaelPlant’s comment on: A Primer on the Symmetry Theory of Valence
I’ll take a shot at these questions too; being only partially familiar with QRI may even be useful here.
> 1. Which question is QRI trying to answer?
Is there a universal pattern to conscious experience? Can we specify a function from the structure and state of a mind to the quality of experience it is having?
> 2. Why does that all matter?
If we discover a function from mind to valence, and develop the right tools of measurement and intervention (big IFs, for sure), we can steer all minds towards positive experience.
Until recently we only had intuitive physics, which was useful for survival but not enough for GPS. In the same way, we can make some predictions today about what will make humans happy or sad, but we don’t understand depression very well; we can guess at how other animals feel, but it gets murky as you consider more and more distant species; and we’re in the dark on whether artificial minds experience anything at all. A theory of valence would let us navigate phenomenological space with new precision, across a broad domain of minds.
- valence 6 Sep 2021 22:50 UTC11 points1 ∶ 0in reply to: Abby Babby’s comment on: A Primer on the Symmetry Theory of Valence
> This is a post from an organization trying to raise hundreds of thousands of dollars.
> ...
> If the Qualia Research Institute was a truth seeking institution, they would have either run the simple experiment I proposed themselves, or had any of the neuroscientists they claim to be collaborating with run it for them.

This reads to me as insinuating fraud, without much supporting evidence.

> This is a bad post and it should be called out as such. I would have been more gentle if this was a single misguided researcher and not the head of an organization that publishes a lot of other nonsense too.
I appreciate that in other comments you followed up with more concrete criticisms, but this still feels against the “Keep EA Weird” spirit to me. If we never spend a million or two on something that turns out to be nonsense, we aren’t applying hits-based giving very well.
(Despite the username, I have no affiliation with QRI. I’ll admit to finding the problem worth working on.)
It’s not catchy, but conceptually I like Hans Rosling’s classification into Levels 1, 2, 3, & 4, with breakpoints around $2, $8, and $32 per day. It’s also useful to be able to say “Country X is largely at Level 2, but a significant population is still at Level 1 and would benefit from Intervention Y.”
A short review of Factfulness: https://www.gatesnotes.com/books/factfulness
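For concreteness, here is a minimal sketch of that classification in Python. The function name and signature are my own illustrative choices; the only numbers it relies on are the ~$2, ~$8, and ~$32 per day breakpoints mentioned above.

```python
def rosling_level(income_per_day_usd: float) -> int:
    """Classify daily income into Hans Rosling's four levels.

    Breakpoints (~$2, ~$8, ~$32 per day) follow the figures above;
    the function name and input are illustrative assumptions.
    """
    if income_per_day_usd < 2:
        return 1
    elif income_per_day_usd < 8:
        return 2
    elif income_per_day_usd < 32:
        return 3
    else:
        return 4

# Example: rosling_level(1.50) -> 1, rosling_level(10.0) -> 3
```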
- valence · 19 Jun 2020 15:59 UTC · 11 points · 0 ∶ 0 · in reply to: valence’s comment on: Against opposing SJ activism/cancellations
This post describes related concerns, and helpfully links to previous discussions in Appendix 1.
- valence · 19 Jun 2020 14:23 UTC · 15 points · 0 ∶ 0 · in reply to: ChichikoBendeliani’s comment on: Against opposing SJ activism/cancellations
I am also highly uncertain of EAs’ ability to intervene in cultural change, but I do want us to take a hard look at it and discuss it. It may be a cause that is tractable early on, but hopeless if ignored.
You may not think Hsu’s case “actually matters”, but how many turns of the wheel is it before it is someone else?
Peter Singer has taken enough controversial stances to be “cancelled” from any direction. I want the next Singer(s) to still feel free to try to figure out what really matters, and what we should do.
We needn’t take on reputational risk unnecessarily, but if it is possible for EAs to coordinate to stop a Cultural Revolution, that would seem to be a Cause X candidate. Toby Ord describes a great-power war as an existential risk factor, since it would hurt our odds on AI, nuclear war, and climate change all at once. I think losing free expression would also qualify as an existential risk factor.
You might browse Intro to Brain-Like-AGI Safety, or check back in a few weeks once it’s all published. Towards the end of the sequence, Steve intends to include “a list of open questions and advice for getting involved in the field.”
DeepMind takes a fair amount of inspiration from neuroscience.
Diving into their related papers might be worthwhile, though the emphasis is often on capabilities rather than safety.
Your personal fit is a huge consideration when evaluating the two paths (80,000 Hours might be able to help you think through this). But if you’re on the fence, I’d lean towards the more technical degree.