+1 to Cory Doctorow
[Question] Which aid charity should I give to for people in Gaza?
tamgent’s Quick takes
Come across this? https://aistandardshub.org/ai-standards-search/
Interesting that you don’t think the post acknowledged your second collection of points. I thought it mostly did.
1. The post did say it was not suggesting shutting down existing initiatives. So where people disagree on (for example) which evals to do, they can just do the ones they think are important, and then both kinds get done. I think the post was identifying a third set of things we can do together: not specific evals, but more of a broad narrative alliance when influencing large/important audiences. The post also suggested some other areas of collaboration, on policy and regulation, and some of these may relate to evals, so there could be room for collaboration there; but I’d guess that more demand, funding and infrastructure for evals helps both kinds of evals.
2. Again, I think the post addresses this issue: it talks about a specific set of things the two groups can work on together that it is in both of their interests to do. It doesn’t mean that all people from each group will only work on this new third thing (coalition building), but if a substantial number do, it’ll help. I don’t think the OP was suggesting a full merger of the groups. They acknowledge the ‘personal and ethical problems with one another; [and say] that needn’t translate to political issues’. The call is specifically for political coalition building.
3. Again I don’t think the OP is calling for a merger of the groups. They are calling for collaborating on something.
4. OK, the post didn’t do this much, but I don’t think every post needs to, and I personally really liked that this one made its point so clearly. I would read a response post with some counterarguments with interest, so maybe that implies I think this one would have benefited from them too. But I wouldn’t want a rule or social expectation that every post lists counterarguments, as that can raise the barrier to entry for posting, and people are free to disagree in the comments and write counter-posts.
Nice paper on the technical ways you could monitor compute usage, but governance-wise, I think we’re extremely behind on anything making an approach like this remotely plausible (unless I’m missing something, which I may well be).
Let’s put aside question (b) from the abstract, getting international compliance, and just focus on (a), national governments regulating this for their own citizens. This likely requires some kind of regulatory authority with the remit and the powers to do it, including information-gathering powers that require companies by law to give specified information to the regulator. Such powers are common in regulation. However, we do not have AI regulators or even tech regulators (with the exception of data protection, whose remit is more specific). We have a bunch of sector regulators and some cross-sectoral ones (data protection, competition etc.). The closest regulatory regime I’m aware of that could legally do something like this is the EU’s, via the EU AI Act, still in draft. This horizontal (i.e. not sector-specific) legislation will regulate all high-risk AI systems (the annexes stipulate examples of what is considered high-risk). However, it has not defined compute as a relevant risk parameter (to my knowledge; I think there is a new provision on General Purpose AI systems which could bring this in, so you might want to influence that, but I’m not sure what their capacity to enforce will look like).
No other western government has a comparable AI regulation plan. The US has a voluntary risk management framework. The UK has a largely voluntary policy framework under development (although it is starting to introduce more tech regulation, some of which will include AI regulation).
Of course, there are other parts of government than regulators, and I’d really like the ‘compute monitoring’ conversation to start paying attention to how differently these different parts might use such a capability. One advantage of regulators is that they have clear, specified remits and transparency requirements, which they routinely balance against confidentiality obligations. Other government departments may have more latitude and less transparency.
On the competition vs caution framing: I think people often assume government is a homogeneous entity, when in fact there are very different parts of government with very different remits, and some remits are naturally aligned with a caution approach while others align with a competition approach.
This was discussed here too.
I don’t think it’s obvious that Google alone is the engine of competition here; it’s hard to expect any company to simply do nothing when its core revenue generator is threatened (I’m not justifying them here). They’re likely to try to compete rather than give up immediately and look for other ways to monetize. It’s interesting that Google’s core revenue generator (search) happens to be a possible application area of LLMs, the fastest-progressing and most promising area of AI research right now. I don’t think OpenAI pursued LLMs for this reason (to compete with Google); they pursued them because they’re promising. But it is interesting that search and LLMs are both bets on language being the thing to bet on.
Maybe posts themselves should have separate agree/disagree vote.
I am imagining a hoverable [i] info button, rather than putting it in the terms themselves, as people often don’t bother to even open the terms because they know they’ll be long and legalistic.
There could be a short, more accessible summary next to the terms of use that explains the implications, e.g. as you have done here.
I would also be interested in knowing who/which org was “owning” the relationship with FTX...
Not to assign blame, but to figure out what the right institutional responsibility/oversight should have been, and what needs to be put in place should a similar situation emerge in future.
Are people downvoting because they believe this is not relevant enough to the FTX scandal? I understand it is only tangentially relevant (i.e. FTX abused its customers’ money; it did not start a Ponzi scheme). Or maybe because it is insensitive or wrong to share critical pieces about the wider area at a time like this, in case people’s emotions about the event get overgeneralised to related wider debates? If people disagreed with my view that the video has good arguments or is educational, I’d have expected them to disagree-vote instead. My intention in sharing it was that, as someone who doesn’t know much about crypto, watching this video helped me understand some of the wider claims I had heard made, so I thought it could be helpful to others in a similar position of not knowing much. Additionally, I thought that at a time of reflection and reckoning, a healthy dose of the more skeptical material is worthwhile, and this came to mind. It feels silly to be justifying simply sharing a video, but I happened to be reading this thread, which made me feel it was worth asking downvoters whether they’d be willing to explain their reasons (I don’t feel too personally upset about it, but I am a bit concerned about community voting norms).
Some people are saying this is no surprise, as all of crypto was a Ponzi scheme from the start.
Earlier this year, when it went semi-viral, I watched ‘Line Goes Up – The Problem With NFTs’, which I found pretty educational (as an outsider). Despite the title, it’s about more than NFTs, covering crypto/blockchain/DLT/so-called ‘web3’ stuff more broadly. It is a critical/skeptical take on the whole space with lots of good arguments (in my view).
I was going to ask if you had considered integrity failure or failure by capture, but I think what I had in mind with these already overlaps to a large extent with what you have under rigor failure.
It seems to me Jack believes that they are impactful and is wondering why they are therefore absent from EA literature. I could be wrong here; he could instead be unsure how impactful it is and assuming that if EA hasn’t indexed it, it’s not impactful (fwiw I think this general inference pattern is pretty wrong). He additionally seems to be wondering whether he should work there, and to be taking into account the views people from this community might have when making his decision.
I also don’t get this. I can’t help thinking about the Inner Ring essay by C.S. Lewis. I hope that’s not what’s happening.
Why isn’t anyone talking about the Israel-Gaza situation much on the EA Forum? I know it’s a big time for AI, but I just read that the number of Palestinian deaths, the vast majority of whom are innocent people and 65% of whom are women and children, is approaching, in just the last 3-4 weeks, the number of civilians killed in Ukraine since the Russian invasion 21 months ago.