I've been in awe of your team these last couple years. Thank you for your great work. It meant so much to me that I got to meet with you personally when I took the pledge and became a GWWC Ambassador in 2022! Best of luck with your next steps.
Spencer R. Ericson
Thank you for checking it out! I'll check the settings on this. I haven't been able to find a way to make this visible yet, and I think the best that Guided Track has to offer might be to click the previous section headings...
Thanks, I largely agree with this, but I worry that a Type I error could be much worse than is implied by the model here.
Suppose we believe there is a sentient type of AI, and we train powerful (human or artificial) agents to maximize the welfare of things we believe experience welfare. (The agents need not be the same beings as the ostensibly-sentient AIs.) Suppose we also believe it's easier to improve AI wellbeing than our own, either because we believe they have a higher floor or ceiling on their welfare range, or because it's easier to make more of them, or because we believe they have happier dispositions on average.
Being in constant triage, the agents might deprioritize human or animal welfare to improve the supposed wellbeing of the AIs. This is like a paperclip maximizing problem, but with the additional issue that extremely moral people who believe the AIs are sentient might not see a problem with it and may not attempt to stop it, or may even try to help it along.
Thank you Philippe. A family member has always described me as an HSP, but I hadn't thought about it in relation to EA before. Your post helped me realize that I hold back from writing as much as I can/bringing maximum value to the Forum because I'm worried that my work being recognized would be overwhelming in the HSP way I'm familiar with.
It leads to a catch-22 in that I thrive on meaningful, helpful work, as you mentioned. I love writing anything new and useful, from research to user manuals. But I can hardly think of something as frightening as "prolific output, eventually changing the course of … a discipline." I shudder to think of being influential as an individual. I'd much rather contribute to the influence of an anonymous mass. Not yet sure how to tackle this. Let me know if this is a familiar feeling.
I'm also wondering whether the butcher shop and the grocery store gave different answers for a reason other than the name you gave the store. Maybe it was because you gave the quantity in pounds instead of in items?
You previously told ChatGPT "That's because you're basically taking (and wasting) the whole item." ChatGPT might not have an association between "pound" and "item" the way a "calzone" is an "item," so it might not use your earlier mention of "item" as something that should affect how it predicts the words that come after "pound."
Or ChatGPT might have a really strong prior association between pounds → mass → [numbers that show up as decimals in texts about shopping] that overrode your earlier lesson.
To successfully reason in the way it did, ChatGPT would have needed a meta-representation for the word "actually," in order to understand that its prior answer was incorrect.
What makes this a meta-representation instead of something next-word-weight-y, like merely associating the appearance of "Actually," with a goal that the following words should be negatively correlated in the corpus with the words that were in the previous message?
Thank you for your integrity, and congratulations on your successful research into the cost-effectiveness of this intervention!
So true! When I read the 80k article, it looks like I'd fit well with ops, but these are two important executive function traits that make me pretty bad at a lot of ops work. I'm great at long-term system organization/evaluation projects (hence a lot of my past ops work on databases), but day-to-day fireman stuff is awful for me.
Was coming here to do the same thing!
You might like these articles:
Deep Report on Hypertension by Joel Tan
Intermediate Report on Hypertension by Joel Tan
Hypertension is Extremely Important, Tractable, and Neglected by Marshall
Unfortunately, I'm not available during the time period specified, but I'm interested in hearing how this goes and whether you open up a cohort later with different timezone availability.
Interesting point about how any extinction timelines less than the length of a human life change the thresholds we should be using for neartermism as well! Thank you, Greg. I'll read what you linked.
Thank you Vasco! This seems hard to model, but worthwhile. I'll think on it.
Good to know, thanks! I've only been to EAGxNYC and EAGxBerkeley so far, so this is useful to help me calibrate.
I did feel like it was fancier than we needed it to be. I loved it, it was a great experience! But now that I know how great it is to have Listerine at conferences, I feel like I can bring my own for cheap. I'd also be happy enough to see like, instant oatmeal next to a kettle for breakfast. "Bring your own lunch/dinner," especially if the venue was down the road from a market. I'm a foodie for sure, and there is something important about showing people that vegan catering can be awesome. Good food is a big part of what turned me vegan. But it also makes me feel weird to see the EA community pampering me.
It was, however, important to me that it was in a central location. Living in Canada, any conference that I go to is probably going to be a travel situation. I don't have a license (in any country), so I wouldn't be able to rent a car if it was like, in the suburbs.
Having just gone to EAGxNYC, I'd be really alarmed if I walked into an EAG and it had higher production value than that. The chairs were so many different-but-coordinated styles. There was Listerine and contact lens fluid in the bathrooms. The soap was from a perfume house!
Cool, thanks! My bookmarks include AAC and 80k, which you have on there, as well as Tom Wein and EA Opportunity Board mentioned by other commenters. I also have:
https://charityvillage.com/
https://gfi.org/vocation/
https://www.facebook.com/groups/1062957250383195/
https://www.effectivejobsboard.org/
https://www.eawork.club/
Edit: And to tree out even more in the vegan space, GFI's alt protein career portal includes links to the Tälist, Alt Protein Careers, and Blue Horizon job boards.
(Can't say enough how much I appreciate it when people take my words of uncertainty like "could" literally!) Indeed, in most situations I can think of, I'd prefer a quantitative model. Especially by an experienced expert! Would that it were always available. Thanks for your comment!
GiveDirectly is a great option for people who put a high value on beneficiary autonomy and are open to giving anywhere in the world! This post is more about including people in the effective giving conversation who want to give back to their own community, maybe because they already live in one of the communities in the world with extreme poverty, or maybe because they're not all the way EA and that's just how they prefer to give.
Amazing. Maybe I'll see you there!
Thanks Vasco! This helps my understanding.
Hi Peter, I found this old post in my bookmarks! I went through your post history and couldn't find the time when you clearly became more supportive of x-risk research, but you run IAPS now. I am still sympathetic to a lot of what you say in this old post, so I was wondering if you could describe when you became more supportive of x-risk work and why?