Head of Lightcone Infrastructure. Wrote the forum software that the EA Forum is based on. Often helping the EA Forum with various site issues. If something is broken on the site, there's a good chance it's my fault (sorry!).
Habryka
Your opening line seems to be trying to mimic the tone of mocking someone obnoxiously. Then you follow up with an exaggerated telling of events. Then another exaggerated comparison.
Weird bug. But it only happens when someone votes and unvotes multiple times, and then the count resets when you vote again, so this is unlikely to skew anything by much.
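To illustrate the kind of failure mode that could produce this, here is a minimal sketch assuming a non-atomic read-modify-write on the vote count. It is purely illustrative (not the actual ForumMagnum code), and all names are made up:

```typescript
// Hypothetical illustration only: a vote handler that reads the stored count
// and writes back count ± 1 without any locking. Two rapid vote/unvote
// requests can interleave, leaving the stored count out of sync with the
// user's actual vote state.

let storedCount = 0;
let hasVoted = false;

async function handleToggleVote(): Promise<void> {
  const current = storedCount; // read the count before applying the toggle
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulated DB latency
  if (hasVoted) {
    storedCount = current - 1; // write based on a possibly stale read
    hasVoted = false;
  } else {
    storedCount = current + 1;
    hasVoted = true;
  }
}

// Firing a vote and an unvote without awaiting the first reproduces the drift:
// both handlers read storedCount = 0, so the final count ends up at -1
// instead of the correct net value of 0.
void handleToggleVote();
void handleToggleVote();
```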
Given that I just got a notification for someone disagree-voting on this:
This is definitely no longer the case in the current EA funding landscape. It used to be the case, but various changes in the memetic and political landscape have made funding gaps much stickier and much less anti-inductive (mostly because the cost-effectiveness prioritization of the big funders got a lot less comprehensive, so there is low-hanging fruit again).
I’m not making any claims about whether the thresholds above are sensible, or whether it was wise for them to be suggested when they were. I do think it seems clear with hindsight that some of them are unworkably low. But again, advocating that AI development be regulated at a certain level is not the same as predicting with certainty that it would be catastrophic not to. I often feel that taking action to mitigate low probabilities of very severe harm, otherwise known as “erring on the side of caution”, somehow becomes a foreign concept in discussions of AI risk.
(On a quick skim, and from what I remember from what the people actually called for, I think basically all of these thresholds were not for banning the technology, but for things like liability regimes, and in some cases I think the thresholds mentioned are completely made up)
You’re welcome, and that makes sense. And yeah, I knew there was a period where ARC avoided getting OP funding for COI reasons, so I was extrapolating from that to them not having received OP funding at all, but it does seem like OP had still funded ARC back in 2022.
Thanks! This does seem helpful.
One random question/possible correction:
Is Kelsey an OpenPhil grantee or employee? Future Perfect never listed OpenPhil as one of its funders, so I am a bit surprised. Possibly Kelsey received some other OP grants, but I had a bit of a sense that Kelsey and Future Perfect more generally cared about having financial independence from OP.
Relatedly, is Eric Neyman an Open Phil grantee or employee? I thought ARC was not being funded by OP either. Again, maybe he is a grantee for other reasons.
(I am somewhat sympathetic to this request, but really, I don’t think posts on the EA Forum should be that narrow in scope. Clearly modeling important society-wide dynamics is useful to the broader EA mission. To do the most good you need to model societies and how people coordinate and such. Those things seem much more useful to me than the marginal random fact about factory farming or malaria nets.)
I don’t think this is true, or at least I think you are misrepresenting the tradeoffs and diversity here. There is some publication bias here because people are more precise in papers, but honestly, in the discussion sections of their papers scientists are also not more precise than many top LW posts, especially when covering wider-ranging topics.
Predictive coding papers use language incredibly imprecisely, analytic philosophy often uses words in really confusing and inconsistent ways, and economists (especially macroeconomists) throw around various terms quite imprecisely.
But also, as soon as you leave the context of official publications and instead look at lectures, books, or private letters, you will see people use language much less precisely, and those contexts are where a lot of the relevant intellectual work happens. Especially when scientists start talking about the kind of stuff that LW likes to talk about, like intelligence and philosophy of science, there is much less rigor (and also, I recommend people read A Human’s Guide to Words as a general set of arguments for why “precise definitions” are really not viable as a constraint on language).
AI systems modeling their own training process is a pretty big deal for modeling what AIs will end up caring about, and how well you can control them (cf. the latest Anthropic paper)
For most cognitive tasks, there does not seem to be a particularly fundamental threshold at human-level performance (the jury is still out on this one in many ways, but we are seeing more evidence for it on an ongoing basis as we reach superhuman performance on many measures)
Developing “contextual awareness” does not require some special grounding insight (i.e. training systems to be general-purpose problem solvers naturally causes them to optimize themselves and their environment and become aware of their context, etc.). Back in 2020, 2021, and 2022 this was one of the recurring disagreements between me and many ML people.
(In general, the salaries which I will work for in EA go up with funding uncertainty, not down, because indeed it means future funding is more likely to dry up, and I would have to pay the high costs of a career transition or self-fund for many years)
You are right! I had mostly paid attention to the bullet points, which didn’t pull out the parts of the linked report that addressed my concerns, but you are right that it links to the same report, which does address them!
Sure, I don’t think it makes a difference whether the chicken grows to a bigger size in total or grows to a bigger size more quickly; both would establish a prior that you need fewer years of chicken-suffering for the same amount of meat, and as such that this would be good (barring other considerations).
No, those are two totally separate types of considerations? In one you are directly aiming to work against the goals of someone else in a zero-sum fashion, the other one is just a normal prediction about what will actually happen?
You really should have very different norms about how you are dealing with adversarial considerations and how you are dealing with normal causal/environmental considerations. I don’t care about calling them “vanilla” or not, I think we should generally have a high prior against arguments of the form “X is bad, Y is hurting X, therefore Y is good”.
Thank you! This is the kind of analysis I was looking for.
Huh, yeah, seems like a loss to me.
Correspondingly, while the OP does not engage in “literally lying”, I think sentences like “In light of this ruling, we believe that farmers are breaking the law if they continue to keep these chickens.” and “The judges have ruled in favour on our main argument—that the law says that animals should not be kept in the UK if it means they will suffer because of how they have been bred.” strike me as highly misleading, or at least willfully ignorant, based on your explanation here.
- Plus #1: I assume that anything the animal industry doesn’t like would increase costs for raising chickens. I’d correspondingly assume that we should want costs to be high (though it would be much better if it could be the government getting these funds, rather than just decreases in efficiency).
I think this feels like a very aggressive zero-sum mindset. I agree that sometimes you want to have an attitude like this, but at least at present I think that acting with the attitude of “let’s just make the animal industry as costly as possible” would understandably cause backlash, make it harder to come to agreements, and I think a reasonable justice system would punish people who do such things (even if they think they are morally in the right).
Wow, yeah, I was quite misled by the lead. Can anyone give a more independent assessment of what this actually means legally?
It’s been confirmed that the donation matching still applies to early employees: https://www.lesswrong.com/posts/HE3Styo9vpk7m8zi4/evhub-s-shortform?commentId=oeXHdxZixbc7wwqna