I've been in awe of your team these last couple of years. Thank you for your great work. It meant so much to me that I got to meet with you personally when I took the pledge and became a GWWC Ambassador in 2022! Best of luck with your next steps.
Spencer R. Ericson
Template: Strategic Funding Approach for an Effective Foundation
Thank you for checking it out! I'll check the settings on this. I haven't been able to find a way to make this visible yet, and I think the best that Guided Track has to offer might be to click the previous section headings...
Guided moral weights clarification tools: How much do you value saving a life vs. direct giving or increasing wellbeing?
Thanks, I largely agree with this, but I worry that a Type I error could be much worse than is implied by the model here.
Suppose we believe there is a sentient type of AI, and we train powerful (human or artificial) agents to maximize the welfare of things we believe experience welfare. (The agents need not be the same beings as the ostensibly-sentient AIs.) Suppose we also believe it's easier to improve AI wellbeing than our own, either because we believe they have a higher floor or ceiling on their welfare range, or because it's easier to make more of them, or because we believe they have happier dispositions on average.
Being in constant triage, the agents might deprioritize human or animal welfare to improve the supposed wellbeing of the AIs. This is like the paperclip-maximizer problem, but with the additional issue that extremely moral people who believe the AIs are sentient might not see a problem with it and may not attempt to stop it, or may even try to help it along.
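To make the triage worry concrete, here is a minimal numeric sketch. Everything in it (the welfare-per-dollar figures, the sentience flags, the greedy allocation rule) is an illustrative assumption of mine, not anything from the model in the post:

```python
# Toy model of welfare triage under a Type I error: the agents falsely
# believe the AIs are sentient AND believe AI welfare is cheapest to improve.
believed_welfare_per_dollar = {"humans": 1.0, "animals": 2.0, "ais": 10.0}
actually_sentient = {"humans": True, "animals": True, "ais": False}
budget = 100.0

# Greedy triage: the whole budget goes to whichever group the agents
# *believe* benefits most per dollar.
target = max(believed_welfare_per_dollar, key=believed_welfare_per_dollar.get)
believed_gain = budget * believed_welfare_per_dollar[target]
realized_gain = believed_gain if actually_sentient[target] else 0.0

print(target)         # ais  -- humans and animals are deprioritized entirely
print(believed_gain)  # 1000.0 welfare units, as the agents see it
print(realized_gain)  # 0.0  -- the entire budget produced no welfare at all
```

The point of the sketch: the better the AIs look on believed cost-effectiveness, the more completely triage starves everyone else, so the downside of the false positive scales with exactly the beliefs that make helping the AIs seem attractive.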
SecureBio – Notes from SoGive
Thank you, Philippe. A family member has always described me as an HSP, but I hadn't thought about it in relation to EA before. Your post helped me realize that I hold back from writing as much as I can/bringing maximum value to the Forum because I'm worried that my work being recognized would be overwhelming in the HSP way I'm familiar with.
It leads to a catch-22 in that I thrive on meaningful, helpful work, as you mentioned. I love writing anything new and useful, from research to user manuals. But I can hardly think of anything as frightening as "prolific output, eventually changing the course of … a discipline." I shudder to think of being influential as an individual. I'd much rather contribute to the influence of an anonymous mass. I'm not yet sure how to tackle this. Let me know if this is a familiar feeling.
I'm also wondering whether the butcher shop and the grocery store gave different answers not because of the name you gave the store, but because you gave the quantity in pounds instead of in items.
You previously told ChatGPT "That's because you're basically taking (and wasting) the whole item." ChatGPT might not have an association between "pound" and "item" the way a "calzone" is an "item," so it might not use your earlier mention of "item" as something that should affect how it predicts the words that come after "pound."
Or ChatGPT might have a really strong prior association between pounds → mass → [numbers that show up as decimals in texts about shopping] that overrode your earlier lesson.
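If you wanted to test the "strong prior" hypothesis directly, here's a minimal sketch of how one might do it. It assumes GPT-2 via the HuggingFace transformers library as a stand-in, since ChatGPT's token probabilities aren't inspectable; the prompts and candidate tokens are my own illustrative choices:

```python
# Probe whether a weight context ("pounds") pulls next-token probability
# toward decimals (a "." after the digit) more than an item context does.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token_probs(prompt, candidates):
    """Probability of each candidate's first token as the next token."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    return {c: probs[tokenizer.encode(c)[0]].item() for c in candidates}

# If the pounds -> mass -> decimals association is strong, "." should be
# much likelier after "2" in the butcher prompt than in the pizzeria one.
print(next_token_probs("At the butcher shop I asked for 2", [".", " pounds"]))
print(next_token_probs("At the pizzeria I asked for 2", [".", " calzones"]))
```

A big gap between the two "." probabilities would be weak evidence for the prior-association story over the store-name story.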
To successfully reason in the way it did, ChatGPT would have needed a meta-representation for the word "actually," in order to understand that its prior answer was incorrect.
What makes this a meta-representation instead of something next-word-weight-y, like merely associating the appearance of "Actually," with a goal that the following words should be negatively correlated in the corpus with the words that were in the previous message?
Thank you for your integrity, and congratulations on your successful research into the cost-effectiveness of this intervention!
So true! From the 80k article, it looks like I'd fit well with ops, but these are two important executive-function traits that make me pretty bad at a lot of ops work. I'm great at long-term system organization/evaluation projects (hence a lot of my past ops work on databases), but day-to-day firefighting is awful for me.
StrongMinds (5 of 9) - Depression's Moral Weight
StrongMinds (4 of 9) - Psychotherapy's impact may be shorter lived than previously estimated
What's the effect size of therapy? (StrongMinds 3 of 9)
SoGive launches expanded advising and custom research service: Feel more confident in your giving, across cause areas
Was coming here to do the same thing!
You might like these articles:
Deep Report on Hypertension by Joel Tan
Intermediate Report on Hypertension by Joel Tan
Hypertension is Extremely Important, Tractable, and Neglected by Marshall
Unfortunately, I'm not available during the time period specified, but I'm interested in hearing how this goes and whether you open up a cohort later with different timezone availability.
Interesting point about how extinction timelines shorter than a human lifespan change the thresholds we should be using for neartermism as well! Thank you, Greg. I'll read what you linked.
Thank you, Vasco! This seems hard to model, but worthwhile. I'll think on it.
Hi Peter, I found this old post in my bookmarks! I went through your post history and couldn't find the point when you clearly became more supportive of x-risk research, but you run IAPS now. I'm still sympathetic to a lot of what you say in this old post, so I was wondering if you could describe when you became more supportive of x-risk work, and why?