Oh whoops, I was looking for a tweet they wrote a while back and confused it with the one I linked. I was thinking of this one, where he states that "slowing down AI development" is a mistake. But I'm realizing that this was also only in January, when the OpenAI funding thing came out, so it doesn't necessarily tell us much about historical values.
I suppose you could interpret some tweets like this or this in a variety of ways, but it now reads as consistent with "don't let AI fear get in the way of progress" type views. I don't say this to suggest that EA funders should have been able to tell ages ago, btw; I'm just trying to see if there's any way to get additional past data.
Another fairly relevant thing to me is that their work is on benchmarking and forecasting potential outcomes, something that doesn't seem directly tied to safety and which is also clearly useful to accelerationists. As a relative outsider to this space, it surprises me much less that Epoch would be mostly made up of folks interested in AI acceleration, or at least neutral towards it, than if I found out that some group researching something more explicitly safety-focused had those values. Maybe the takeaway there is that if someone is doing something that is useful both to acceleration-y people and to safety people, check the details? But perhaps that's being overly suspicious.
And I guess also more generally, again from a relatively outside perspective, it's always seemed like AI folks in EA have been concerned with both gaining the benefits of AI and avoiding x-risk. That kind of tension was at issue when this article blew up here a few years back, and it seems to be a key part of why the OpenAI thing backfired so badly. It just seems really hard to combine building the tool and making it safe into the same movement; if you do, I don't think stuff like Mechanize coming out of it should be that surprising, because your party will have guests who only care about one thing or the other.