This looks so cool! Good luck!!!
The US AISI is looking for public comments on best practices to manage model misuse risk
This course sounds cool! Unfortunately there doesn’t seem to be too much relevant material out there.
This is a stretch, but I think there’s probably some cool computational modeling to be done with human value datasets (e.g., 70,000 responses to variations on the trolley problem). What kinds of universal human values can we uncover? https://www.pnas.org/doi/10.1073/pnas.1911517117
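Not from the linked study itself, but here is a toy sketch of what that kind of modeling could look like: cluster countries by their average moral preferences and look for factors that stay stable across clusters. All column names and data below are invented for illustration (the real analysis would use the study's released response data).

```python
# Toy sketch: hierarchical clustering of country-level moral preferences
# to look for candidate "universal" values. Data here is synthetic.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Stand-in for per-country average endorsement of each moral factor
# (e.g., "spare the young", "spare more lives", "prefer inaction").
countries = [f"country_{i}" for i in range(42)]
factors = ["spare_young", "spare_many", "prefer_inaction", "spare_lawful"]
prefs = pd.DataFrame(
    rng.uniform(0, 1, size=(len(countries), len(factors))),
    index=countries, columns=factors,
)

# Group countries with similar preference profiles.
Z = linkage(prefs.values, method="ward")
prefs["cluster"] = fcluster(Z, t=3, criterion="maxclust")

# Factors whose means barely differ across clusters are candidates
# for values that are closer to universal.
print(prefs.groupby("cluster")[factors].mean())
print(prefs[factors].std().sort_values())
```

With the actual dataset, a factor with high endorsement and low variance across clusters would be weak evidence for a near-universal value, while high between-cluster variance would point to culturally specific ones.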
For digestible content on technical AI safety, Robert Miles makes good videos. https://www.youtube.com/c/robertmilesai
US Secretary of Commerce releases strategic vision on AI Safety, announces plan for global cooperation among AI Safety Institutes
I think NOVAH may have been inspired by Betsy Levy Paluck’s research into using radio dramas to reduce racial prejudice. https://sparq.stanford.edu/solutions/radio-soaps-stop-hate
Thanks for the clarification, too many Carnegies!
From what I understand, the MacArthur Foundation was one of the main funders of nuclear security research, including at the Carnegie Endowment for International Peace, but they massively reduced their funding of nuclear projects and no large funder has replaced them. https://www.macfound.org/grantee/carnegie-endowment-for-international-peace-2457/
(I’ve edited this comment; I originally got confused between the MacArthur Foundation and the various Carnegie philanthropic efforts.)
I am not more informed on NIST than you are, but I would offer the following framework:
1. If your comment is taken into account, FANTASTIC.
2. If your comment is not taken into account, how much do you learn from deeply engaging with US policy and generating your own ideas about how to improve it? If you’re considering pivoting into AI governance/evals, this might be a great learning opportunity. If that’s not relevant to you, then maybe commenting has less value.
Sounds like we need some people to make some comments!!!!!
NIST Seeks Comments on Draft AI Guidance Documents, Announces Launch of New Program to Evaluate and Measure GenAI Technologies
Wow, this seems like really great news!
I think we’re still the youngest parents at daycare, a year and a half after I initially posted this.
CNN reports that the US fertility rate has dropped to its “lowest in a century”. Seems bad: https://www.cnn.com/2024/04/24/health/us-birth-rate-decline-2023-cdc/index.html
I first got interested in Effective Altruism in 2011, before CEA or Anthropic existed. Over the past 13 years, I’ve been rejected from jobs at Open Philanthropy, GiveWell, DeepMind, and the Forethought Foundation. I work at a core EA org now, so I don’t know if my perspective is what you’re looking for. But it still might be useful to think about the EA community from a historical perspective.
Back in ye olden days, EA was a philosophy more than a career plan: you could agree with the core concepts (that we should care about how efficiently we can convert resources into helping people, and that we should care about all people equally, even people we will never meet), but there were very few EA orgs/jobs. So many of the super hardcore EAs were just doing normal things in their daily lives, then thinking hard about where to donate relatively small amounts of money.
This seems great. People got to meet their personal obligations/follow their passions, and then make a difference via donations. Some people took EA principles extremely seriously by deciding to go vegan, massively cutting back on personal consumption to donate more, or totally changing their career to optimize for earning to give. But none of this was necessary to remain a member of good standing in the community. I myself didn’t really change my career trajectory until about 10 years after I first heard about EA. None of my EA friends seemed to judge me for this.
If you also care about people across the world (not just those in your tribe), consider the effectiveness of different charitable programs, and take weird ideas seriously if they’re logically sound, then I think you too qualify as a valued member of the EA community, if you want that affiliation.
To be honest, I am actually excited for people who share these values to be active participants in the normal/real world, instead of all sequestered away in insular EA orgs. Your career path could be: “I do normal things at my normal job, but I vote and donate in ways guided by my principles, and I talk to my social network about problems that I think are really important in the world. I raise the sanity level of my company and social network, and make it easier for the world to coordinate around important issues by signaling that citizens care about this stuff and will support policies that protect the future. I save a life in expectation every year via my donations.” That seems pretty great! People who do stuff like that are welcome in the community.
I think maybe this sort of “normal” trajectory seems disappointing because there are more EA opportunities available now than there used to be. But I think the “normal” route is still the right path for many (most?) people who agree with EA principles.
Not sure what the inclusion criteria are for conferences, but I thought it was interesting that the Cognitive Neuroscience Society made it onto the list you linked. I would consider the Society for Neuroscience conference, just because it has tens of thousands of attendees, so somebody there will be presenting on whatever neuro topic you’re interested in: https://www.sfn.org/
This is so, so, so wonderful! Thanks for organizing such a fantastic event, and for sharing all this analysis/feedback/reflection. I want to go next year!!!!
EA Funds That Exist 2024 (Linkpost)
So glad somebody is finally fixing Swapcard!
Any plans to have this printed on t-shirts?
Would the new CEA be considered EA adjacent?
This is a really complex space with lots of moving parts; very cool to see how you’ve compiled/analyzed everything! Haven’t finished going through your report yet, but it looks awesome :)