Gaurav Yadav
Nice! I’m down in Sheffield at points, would love to visit when I’m around!
[Video] - How does the EU AI Act Work?
[Video] Why SB-1047 deserves a fairer debate
The Bill has passed the appropriations committee and will now move on to the Assembly floor. Some changes were made to the Bill. From the press release:
Removing perjury – Replace criminal penalties for perjury with civil penalties. There are now no criminal penalties in the bill. Opponents had misrepresented this provision, and a civil penalty serves well as a deterrent against lying to the government.
Eliminating the FMD – Remove the proposed new state regulatory body (formerly the Frontier Model Division, or FMD). SB 1047’s enforcement was always done through the AG’s office, and this amendment streamlines the regulatory structure without significantly impacting the ability to hold bad actors accountable. Some of the FMD’s functions have been moved to the existing Government Operations Agency.
Adjusting legal standards – The legal standard under which developers must attest they have fulfilled their commitments under the bill has changed from a “reasonable assurance” standard to a standard of “reasonable care,” which is defined under centuries of common law as the care a reasonable person would have taken. We lay out a few elements of reasonable care in AI development, including whether they consulted NIST standards in establishing their safety plans, and how their safety plan compares to other companies in the industry.
New threshold to protect startups’ ability to fine-tune open sourced models – Established a threshold to determine which fine-tuned models are covered under SB 1047. Only models that were fine-tuned at a cost of at least $10 million are now covered. If a model is fine-tuned at a cost of less than $10 million, the model is not covered and the developer doing the fine-tuning has no obligations under the bill. The overwhelming majority of developers fine-tuning open sourced models will not be covered and therefore will have no obligations under the bill.
Narrowing, but not eliminating, pre-harm enforcement – Cutting the AG’s ability to seek civil penalties unless a harm has occurred or there is an imminent threat to public safety.
I’d like to get opinions on something. I’m planning to experiment with making YouTube videos on AI Governance over the next month or two. Ideally, I want people to see these videos so I can get feedback or get told that I’ve said something incorrect, which is helpful for correcting my own model around things.
I’d share these videos by posting on the EA Forum, but I’m unsure about the best approach:

a) Posting on the frontpage feels like seeking attention or promoting for views, especially since I’m new to video-making and don’t expect high quality initially.
b) Posting as personal blog posts seems less intrusive, as only those who opt to see personal posts will see them. This feels like I have “permission” to make noise and is less intimidating.
c) Putting them in my quick takes section, which is currently my default, would be even more out of the way.

Given my account’s karma, my posts typically start with 4 or 5 karma and stay on the frontpage for a few hours by default. I think the forum has improved a lot recently—there’s less volume of posts and more interesting discussions. I don’t want to create noise each time I make a video.
However, each video is relevant to the EA community. If people don’t like a video, it’ll naturally move off the front page fairly soon. I’m more likely to get views if I don’t make it a personal blog post or update my quick takes. These views are important to me because they mean more interesting feedback and a higher likelihood that I’ll improve at making videos. (Also given I am only human, more views and engagement means more motivation to keep making things).
I’d appreciate others’ opinions on this. I recognise that part of my hesitation probably stems from a lack of confidence and fear of others’ opinions, but I don’t think these are necessarily good justifications for my decision.
I am now trying to make YouTube videos explaining AI Governance. Here is a video on RSPs. The video has a few problems, and the editing is sometimes choppy, but this can be a fun hobby, and I could improve on skills that seem useful to have—the first being the confidence to talk to a camera. If you have feedback, here is a form.
I run frequently, and it would be nice to eventually see more GiveWell-recommended charities represented at marathon events in the UK. For example, I didn’t get a place through the ballot for the London Marathon, but I could still obtain a charity place. However, I don’t find any of the available charities particularly appealing to fundraise for, and I wish orgs like the Against Malaria Foundation were offered instead.
I knew Marissa briefly while they were running EA Anywhere; it was one of my first points of contact with the EA community, given I was living somewhere without much of an EA presence at the time. This is painful news to hear. May they rest in peace.
OpenAI’s Superalignment team has opened Fast Grants
Open questions on a Chinese invasion of Taiwan and its effects on the semiconductor stock
(Report) Evaluating Taiwan’s Tactics to Safeguard its Semiconductor Assets Against a Chinese Invasion
(I am mostly articulating feelings here. I am unsure about what I think should change).
I am somewhat disappointed with the way Manifund has turned out. This isn’t a critique of the Manifund team, nor am I saying that regranting is a bad idea, but after a few months of excitement and momentum, things have somewhat decelerated. While you get the occasional cool project, most of the projects on the website don’t seem particularly impressive to me. I also feel like some of the regrantors seem slow to move money, though it could be that the previous problem is feeding into this.
Maybe you’re referring to this—https://forum.effectivealtruism.org/posts/56CHyqoZskFejWgae/ea-is-a-global-community-but-should-it-be?
Hmm, I’d be very keen to see what an answer to this might look like. I know some people I work with are interested in making a similar kind of switch.
It might be helpful to also think about China’s compute access in a world where it invades Taiwan. I don’t think this should be weighted heavily, but it still seems personally useful to work through.
I’d recommend the following reading for people interested in this—https://www.foreignaffairs.com/china/illusion-chinas-ai-prowess-regulation
I assume the actions you’ve taken can’t be shared? (No pressure if they can’t be.)
Hmm, I disagree. I’m not a fan of making it a norm for someone to reply to me—it feels icky, and I don’t think anyone has a responsibility to message me back unless we’ve scheduled something beforehand.
I empathise with the awkwardness of trying to reach out again, but something like ‘Hey, I tried reaching you at EAG: London but didn’t get a response. No pressure, but if you have time at this EAGx for a chat, I’d love that. Some things I’d like to get out of our conversation: X, Y, Z…’ could be a reasonable way of dealing with this.
PauseAI seem funding-constrained—they probably need more runway before returns on their work can be seen.