Thanks for the detailed reply. So rather than only informing people who already value altruism about the best ways to help, we should also try to elevate the importance of altruism within people’s values. Am I understanding correctly?
Next steps after AGISF at UMich
List of technical AI safety exercises and projects
6-paragraph AI risk intro for MAISI
Awesome, I added a link to your spreadsheet.
Thank you!
Great! Added.
Thanks! 1st link is my doc, 2nd + 3rd + 4th are on Stafforini’s list of EA syllabi, and the 5th link is in my doc.
Awesome! Yours is better than mine. I’ll just add to yours from now on.
List of lists of EA syllabi
Big list of AI safety videos
Big list of EA videos
Big list of icebreaker questions
I wonder whether this project could also provide merch for various EA groups. E.g., a university group could request a design specific to their university. A centralized merch system would be less work overall than having each EA group make its own merch.
Similar to some of the other comments, I think EA groups should pay for this service.
Besides Global Health / Biomedical Innovation, some of the tags that correspond to EA problem profiles or related problems are: “Climate”, “Healthcare”, “Biotech”, “Digital Health”, “Health Tech”, “Mental Health Tech”, “Telemedicine”, “GovTech”, “AI-powered Drug Discovery”, “Electric Vehicles”, “Telehealth”, “Diagnostics”, “Carbon Capture and Removal”, “Drug Discovery”, “Synthetic Biology”, “Sustainable Fashion”, “Genomics”, “Agriculture”, “NeuroTechnology”, “SleepTech”, “Fertility Tech”, “Oncology”, “Cellular Agriculture”, “Civic Tech”, “FemTech”, “Medical Robotics”, “Alternative Battery Tech”, “Cultivated Meat”, “Cultured Meat”, “3D Printed Foods”, “NanoMedicines”, and “Rocketry”.
Worth noting that some technologies in these areas could easily be misused or repurposed to cause harm.
If the data is available, for each of these 5 iterations could you please list the following?
number of people who applied
number of people you interviewed (done)
number of people you admitted
number of people who completed the program
This is because the current paradigm for community building emphasises finding talented and ambitious people who want to tackle the world’s most pressing problems, not creating them.
Can you clarify this? This could mean different things:
1. We should make EA people more talented
2. We should make EA people more ambitious
3. We should make people more talented
4. We should make people more ambitious
5. We should make people want to tackle the world’s most pressing problems
and other stuff. I assume you mean (3)+(4)+(5) all at once via a new strategy targeting the “four reasons why people are not joining your Introductory EA Program.” IMO current community building is already trying to do (5). And there seem to be efforts to make people more productive (e.g. some office spaces provide food and bring people together so they can share ideas).
Summary of 80k’s AI problem profile
Thanks for your reply. I think the biggest cruxes are about how quickly humans can adapt to change and how quickly AI capabilities can grow.
To my original point in (2), I’d also add something like “crossing the finish line” or “reaching the end”: within the next few decades, I expect AI to be capable of automating nearly all knowledge work. By “all knowledge work,” I mean all thinking-related tasks, including both 2022 jobs and post-2022 jobs. I worry that this capability level (or a level reasonably close to it) might arrive quickly, before we’re prepared to deal with the ensuing unemployment spike.
I was about to comment this too. From a brief skim, I can’t find any clarification of what the term “fellowship programs” refers to.