Forum? I’m against ’em!
utilistrutil
Apply for MATS Winter 2023-24!
I’ll be looking forward to hearing more about your work on whistleblowing! I’ve heard some promising takes about this direction. Strikes me as broadly good and currently neglected.
This is so well-written!
I’m cringing so hard already fr
A Love Letter to EA
Thanks for such a thorough response! I am also curious to hear Oscar’s answer :)
When applicants requested feedback, did they do that in the application or by reaching out after receiving a rejection?
Is that lognormal distribution responsible for
the cost-effectiveness is non-linearly related to speed-up time.
If yes, what’s the intuition behind this distribution? If not, why is cost-effectiveness non-linear in speed-up time?
Something I found especially troubling when applying to many EA jobs is the sense that I am p-hacking my way in. Perhaps I am never the best candidate, but the hiring process is sufficiently noisy that I can expect to be hired somewhere if I apply to enough places. This feels like I am deceiving the organizations that I believe in and misallocating the community’s resources.
There might be some truth in this, but it’s easy to take the idea too far. I like to remind myself:
The process is so noisy! A lot of the time the best applicant doesn’t get the job, and sometimes that will be me. I ask myself, “do I really think they understand my abilities based on that cover letter and work test?”
A job is a high-dimensional object, and it’s hard to screen for many of those dimensions. This means that the fact that you were rejected from one job might not be very strong evidence that you are a poor fit for another (even superficially similar) role. It also means that you can be an excellent fit in surprising ways: maybe you know that you’re a talented public speaker, but no one ever asks you to prove it in an interview. So conditional on getting a job, I think you shouldn’t feel like an imposter but rather eager to contribute your unique talents. My old manager was fond of saying “in a high-dimensional sphere, most of the points are close to the edge,” by which he meant that most people have a unique skill profile: maybe I’m not the best at research or ops or comms, but I could still be the best at (research x ops x comms).
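As a rough numerical gloss on that aphorism (just a sketch; the 90% cutoff and the dimensions are arbitrary choices), the fraction of a d-dimensional ball lying within 90% of its radius is 0.9^d, which collapses as d grows:

```python
# Fraction of a d-dimensional ball that lies within 90% of its radius is 0.9**d,
# so as d grows, almost all of the volume sits near the boundary ("the edge").
for d in (2, 10, 50, 200):
    print(f"d = {d:3d}: inner 90% of radius holds {0.9 ** d:.2%} of the volume")
```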
Thanks for the references! Looking forward to reading :)
Wild Animal Welfare Scenarios for AI Doom
Fantastic, thanks for sharing!
Thanks for this! I would still be interested to see estimates of eg mice per acre in forests vs farms and I’m not sure yet whether this deforestation effect is reversible. I’ll follow up if I come across anything like that.
I agree that the quality of life question is thornier.
Under CP and CKR, Zuckerberg would have given higher credence to AI risk purely on observing Yudkowsky’s higher credence, and/or Yudkowsky would have given higher credence to AI risk purely on observing Zuckerberg’s lower credence, until they agreed.
Should that say lower, instead?
Decreasing the production of animal feed, and therefore reducing crop area, which tends to: Increase the population of wild animals
Could you share the source for this? I’ve wondered about the empirics here. Farms do support wild animals (mice, birds, insects, etc), and there is precedent for farms being paved over when they shut down, which prevents the land from being rewilded.
Suppose someone is an ethical realist: the One True Morality is out there, somewhere, for us to discover. Is it likely that AGI will be able to reason its way to finding it?
What are the best examples of AI behavior we have seen where a model does something “unreasonable” to further its goals? Hallucinating citations?
What are the arguments for why someone should work in AI safety over wild animal welfare? (Holding constant personal fit etc)
If someone thinks wild animals live positive lives, is it reasonable to think that AI doom would mean human extinction but maintain ecosystems? Or does AI doom threaten animals as well?
Does anyone have BOTECs on numbers of wild animals vs numbers of digital minds?
At least we can have some confidence in the total weight of meat consumed on average by a Zambian per year and the life expectancy at birth in Zambia.
We should also think about these on the margin. Ie the lives averted might have been shorter than average and consumed less meat than average.
I imagine a proof (by contradiction) would work something like this:
Suppose you place > 1/x probability on your credence increasing by a factor of x. That outcome alone contributes > prior * x * 1/x = prior to the expectation of your future beliefs. With the remaining probability mass, can we anticipate some evidence in the other direction, such that our beliefs still satisfy conservation of expected evidence? The lowest our credence can go is 0, but even if we place the remaining < 1 − 1/x probability on 0, we would still find E[future beliefs] > prior * x * 1/x + 0 * [remaining probability] = prior. So we would necessarily violate conservation of expected evidence, and we conclude that Joe’s rule holds.
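To write that out a bit more compactly (a sketch, writing p for the prior and reading "moving by a factor of x" as the posterior reaching at least xp):

$$\mathbb{E}[\text{posterior}] \;\ge\; \Pr[\text{posterior} \ge xp]\cdot xp \;+\; \Pr[\text{posterior} < xp]\cdot 0 \;>\; \tfrac{1}{x}\cdot xp \;=\; p,$$

which contradicts conservation of expected evidence, i.e. $\mathbb{E}[\text{posterior}] = p$.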
Note that all of these comments apply, symmetrically, to people nearly certain of doom. 99.99%? OK, so less than a 1% chance that you ever drop to 99% or lower?
But I don’t think this proof works for beliefs decreasing (because we don’t have the lower bound of 0). Consider this counterexample:
prior = 10%
probability of decreasing to 5% (a factor of 2) = 60% > 1/2 -> violates the rule
probability of increasing to 17.5% = 40%
Then, expectation of future beliefs = 5% * 60% + 17.5% * 40% = 10%
So conservation of expected evidence doesn’t seem to imply Joe’s rule in this direction? (Maybe it holds once you introduce some restrictions on your prior, like in his 99.99% example, where you can’t place the remaining probability mass any higher than 1, so the rule still bites.)
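A quick numerical check of that counterexample (a sketch, taking conservation of expected evidence to mean E[posterior] = prior):

```python
# Posterior credences and the probability of ending up at each one.
prior = 0.10
outcomes = {0.05: 0.60, 0.175: 0.40}  # 60% chance of halving, 40% chance of rising to 17.5%

expected_posterior = sum(post * prob for post, prob in outcomes.items())
print(expected_posterior)  # 0.10 == prior, so conservation of expected evidence holds

# Yet the downward move is a factor-of-2 drop assigned probability 0.60 > 1/2,
# so the analogue of Joe's rule for decreases is violated.
```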
This asymmetry seems weird?? Would love for someone to clear this up.
Update: We have finalized our selection of mentors.