Throwaway81
Most legislation is written broadly enough that it won't need to be repealed, because it's rare for legislation to be repealed. For example, take their current definition of frontier AI model, which is extremely prescriptive and uses a hard 10^26 threshold in some cases. To be future-proof, these definitions are usually written broadly enough that the executive can update the technical specifics as the technology advances; regulations are the sorts of instruments that would include such details, not the legislation itself.

I can imagine a future where models are all over 10^26 and meet the other requirements of the model act's definition of frontier AI model. The reason to govern the frontier in the first place is that you don't know what's coming; it's not as though we know dangerous capabilities emerge at 10^26, so there's no reason to use this threshold to keep models under regulatory scrutiny forever. We might also eventually achieve algorithmic efficiency breakthroughs such that the most capable (and therefore most dangerous) models no longer need as much compute, and so might not even qualify as frontier AI models under the Act anymore.

So I see the risk of this bill first capturing a bunch of models it doesn't mean to cover, and then possibly not covering any models at all, all because it isn't written in a future-proof way. The bill reads more like a regulation written at the executive level than legislation written at the legislative level.
Sure. I'm not going to be able to respond any more in this thread, but the prescribed methods of governance themselves are not future-proof (AI governance may need to change as the tech or landscape changes), and neither are the definitions.
This contains several inaccuracies and misleading statements that I won't fully enumerate, but here are at least two:
The Nucleic Acid Synthesis Act does not at all “require biolabs that receive federal funding to confirm the real identity of customers who are buying their synthetic DNA.” It empowers NIST to create standards and best practices for screening.
It's not the case that "The particular bills that we edited did not pass Congress, but this is because almost nothing passed out of the 118th Congress." Lots of bills passed in the CR and other packages. But it was a historically dysfunctional and slow year.
Personal gripe: the model legislation is overly prescriptive in a way that does not future-proof the statute against the fast-moving nature of AI and the ways governance may need to shift and adapt.
I disagree-voted because I think that withholding private info should be a strong norm, and that it's not the poster's job to please the community with privileged info that could hurt them when they are already doing a service by posting. I also think it could serve as an indicator of some sort (e.g., if people searched the forum for comments like this, it might point towards a trend of posters worrying about how much blowback they'd get from funders/other EA orgs if actual criticism of them/backdoor convos were revealed, whether that's a warranted worry or not).

I also think that leaking private convos will hurt a person, because every time someone interacts with that person afterwards they will think their convo might get leaked online, and will not engage with that person. It seems mean to ask someone to do that for you just so you can have more data to judge them on; they are trying to communicate something real to you but obviously can't say more. I don't have any reason to doubt the poster unless they've lied before, and I have a strong norm of trusting unless there is a reason not to.

But I double-liked because most of the rest of your comment was good :-)
Ahhh got it, thanks! Funny how most of the comments there are trying to rationalize his affiliation with EA as “not EA” lol.
Hadn't seen it mentioned anywhere yet that Luigi Mangione (the US man charged with killing a health insurance executive) was interested in EA. "He suggested I schedule group video calls as he really wanted to meet my other founding members and start a community based on ideas like rationalism, Stoicism, and effective altruism."
Throwaway81's Quick takes
GiveDirectly
*Shrug* I think it would be helpful to me, and like I said, the reader can take it or leave it. Them's the breaks. I think commenting from a throwaway account, providing the data, and letting the reader decide is better than not commenting and not providing data.
Thanks again! I guess I'm just trying to understand why these metrics are important or how they are important. Why does it matter how many people in the US have heard of EA or how they feel about it? What is the underlying question the survey and its year-over-year follow-ups are trying to get at? E.g., is it trying to measure how well CEA is performing in terms of whether its programs are making a difference in the populace?
The reader can take it or leave it given these facts, but imo it serves as a data point that someone in US policy is pointing to this real thing.
Very detailed and thorough response, thank you!
Last question if you have time: what questions was this survey trying to answer?
If you are trying to get a US policy job, then probably no, but it also depends on the part of US policy.
What questions was this survey trying to answer? I kind of feel like the most important version of a survey like this would focus on certain subsets of people (e.g., tech, policy, animal welfare).
Also, why didn't you call out that the more people know what EA is, the less they seem to like it? Or was that difference not statistically significant?
("Sentiment towards EA among those who had heard of it was positive (51% positive vs. 38% negative among those stringently aware, and 70% positive vs. 22% negative among those permissively aware).")
That’s just completely false. Sorry I can’t say more.
I can try to answer 3 for Marcus. Imagine that AI policy is a soccer game for professional soccer players. You’ve put in a lot of practice, know the rules, and know how to work well with your teammates. You’re scoring some goals.
Then someone from an intramural/pick-up league who is just learning to play soccer comes along and tries to be on the team, or, in this case, isn't even aware there is a team. If we let them on the team, not only do we look bad to the other team, but since policy is a team sport, they drive our overall impact down: they're kind of dead weight that we now have to guard against, for the things they do that they think are helpful but are not, depleting energy and resources better spent on scoring goals.
Ah, formerly CE. No, I think that formerly-CE is not well suited for US policy-focused spinouts. There aren't any people on staff who can advise on that well (I've been involved in a couple of policy consultation projects for them, and it seemed that the advisors just had no grasp of what was going on in US policy/advocacy). I think their classic charities are good though!
I’m not familiar with that program, sorry.
I think it would be helpful to be able to see the number of applications to EA Global over time compared to attendance.