Every time I’ve used VR (including the latest headsets), I feel sick and dizzy afterwards. I don’t think this issue is unique to me. I find it hard to imagine that most people would want to spend significant daily time in something that has this effect, and nothing in this post addresses the issue. Your prediction feels wildly wrong to me.
Great development. Does this mean GovAI will start contributing to more government consultations on AI and algorithms? The UK government recently published a call for input on its AI regulation strategy; is GovAI planning to respond to it? On regulation: there are a lot of different areas of regulation (financial, content, communication infrastructure, data protection, competition and consumer law), and the UK government is taking a decentralised approach, relying on individual regulators’ areas of expertise rather than creating a central body. How will GovAI stay on top of these different subject matter areas?
Just to add to UK regulator stuff in the space: the DRCF has a stream on algorithm auditing. Here is a paper with a short section on standards. Obviously it’s early days, and focused on current AI systems, but it’s a start: https://www.gov.uk/government/publications/findings-from-the-drcf-algorithmic-processing-workstream-spring-2022/auditing-algorithms-the-existing-landscape-role-of-regulators-and-future-outlook
Well I disagree but there’s no need to agree—diverse approaches to a hard problem sounds good to me.
AI doesn’t exist in a vacuum, and TAI won’t either. AI has messed up, is messing up, and will mess up more as it gets more advanced. Security will never be a 100% solved problem, and aiming for zero breaches of all AI systems is unrealistic. I think we’re more likely to have better AI security with standards; do you disagree with that? I’m not a security expert, but here are some relevant considerations from one, applied to TAI. See in particular the section “Assurance Requires Formal Proofs, Which Are Provably Impossible”. Given the provably impossible nature of formal guarantees (not to say we shouldn’t try to get as close as possible), it really does seem worthwhile to leverage whatever institutional and coordination mechanisms have worked in the past. I consider SSOs to be one such set of mechanisms, all things considered.
Here is a section from an article written by someone who has worked in SSOs and security for decades:
> Most modern encryption is based on standardised algorithms and protocols; the use of open, well-tested and thoroughly analysed encryption standards is generally recommended. WhatsApp, Facebook Messenger, Skype, and Google Messages now all use the same encryption standard (the Signal protocol) because it has proven to be secure and reliable. Even if weaknesses are found in such encryption standards, solutions are often quickly made available thanks to the sheer number of adopters.
I can respond to your message right now via a myriad of potential software because of the establishment of a technical standard, HTTP. Additionally, all major web browsers run and interpret JavaScript the same way, in large part thanks to SSOs like Ecma (for the language itself) and the IETF and W3C (for the protocols and web platform around it). By contrast, on mobile we have two languages for the duopoly and a myriad of issues I won’t go into, but suffice it to say that SSOs in that space have failed to replicate what happened with web browsing and the early internet. It may be that TAI presents novel and harder challenges, but in some of the hardest technical coordination challenges to date, SSOs have been very useful. I’m not as worried about defection as you are if we get something good going: the leaders will likely have significant resources, will therefore be under greater public scrutiny, and will want to show they are also leading on participating in standard setting. I am hopeful that there will be significant innovation in this area in the next few years. [Disclaimer: I work in this area, so I’m naturally biased.]
Thank you kindly for the summary! Just today, as the paper was making the rounds, I was thinking that I’d really like a summary of it whilst I wait to find the time to read it in full. So this is really helpful for me.
I work in this area, and can attest to the difficulty of getting resources towards capability building for detecting trends towards future risks, as opposed to simply firefighting the ones we’ve been neglecting. However, I think the near vs long term distinction is often unhelpful and limited, and I prefer to try to think about things in the medium term (next 2-10 years). There’s a good paper on this by FHI and CSER.
I agree with you that the approach outlined in the paper is generally good, and with your caveats/risks too. I also think it’s nice that there is variation amongst nations’ approaches; hopefully they’ll be complementary and borrow pieces from each other’s work.
Sorry, I meant more like a finite budget and proportions of it, not probabilities.
Agreed that in aggregate it’s good for a collection of people to pursue many different strategies, but would you personally weight all of these equally? If so, maybe you’re just uncertain? My guess is that you don’t weight them all equally. Another framing is to put a probability on each and then dedicate the corresponding proportion of resources to it, as in the toy sketch below. This is a very top-down approach, though, and in reality people will do what they will! As an individual, it seems hard to me to span more than two adjacent positions on any axis, and when I look at my own work and beliefs, that checks out.
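To make that framing concrete, here is a toy sketch in Python (the strategy names and credences are made up purely for illustration): given a credence in each strategy and a fixed budget, it just allocates resources in proportion to credence.

```python
# Toy illustration: split a fixed budget across strategies in proportion
# to your credence in each. Strategies and numbers are invented.
credences = {
    "strategy_a": 0.5,
    "strategy_b": 0.3,
    "strategy_c": 0.2,
}

budget = 100  # e.g. 100 units of money or staff-time

total = sum(credences.values())
allocation = {name: budget * p / total for name, p in credences.items()}

for name, share in allocation.items():
    print(f"{name}: {share:.1f} units")
```

Of course this ignores diminishing returns and comparative advantage; it is only meant to illustrate the proportions-not-probabilities point above.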
Could you elaborate on what you mean by “as ad tech gets stronger”? Is that just because all tech gets stronger with time, or is it in response to current shifts, like Privacy Sandbox?
Yeah I also had a strong sense of this from reading this post. It reminded me of this short piece by C. S. Lewis called The Inner Ring, which I highly recommend. Here is a sentence from it that sums it up pretty well I think:
> In the whole of your life as you now remember it, has the desire to be on the right side of that invisible line ever prompted you to any act or word on which, in the cold small hours of a wakeful night, you can look back with satisfaction?
I found this to be an interesting way to think about this that I hadn’t considered before—thanks for taking the time to write it up.
On the paragraph about the philosophical side: totally agree; this is why worldview diversification makes so much sense (to me). The necessity of certain assumptions leads to divergent kinds of work, and that is a very good thing, because maybe (almost certainly) we are wrong in various ways, and we want to stay alive and open to new things that might be important. Perhaps on the margin an individual’s most rational action could sometimes be to defer more, but as a whole, a movement like EA would be more resilient with less deference.
Disclaimer: I personally find myself very turned off by the deference culture in EA. Maybe that’s just the way it should be though.
I do think that higher-deference cultures are better at cooperating and getting things done, and these are no easy tasks for large movements. But there have been movements with these properties that accidentally did terrible things, and movements with these properties that did wonderful things.
I’d guess there’s a correlation between thinking there should be more deference and being in the “row” camp, and between thinking there should be less and being in the “steer” camp (or another camp), as described here.
This is not about the EA community, but something that comes to mind which I enjoyed is the essay The Tyranny of Structurelessness, written in the 70s.
I think the issue is that some of these motivations might cause us to just not actually make as much positive difference as we might think we’re making. Goodharting ourselves.
Have you spoken to the Czech group about their early days? I’d recommend it, and can put you in touch with some folks there if you like.
Agreed. One book that made it really clear for me was The Alignment Problem by Brian Christian. I think that book does a really good job of showing how it’s all part of the same overarching problem area.
I’m not Hayden, but I think behavioural science is a useful area for thinking about AI governance, in particular for the design of human-computer interfaces. One example with current widely deployed AI systems is recommender engines (not an HCI example). I’m trying to understand the tendencies of recommenders towards biases like concentration or contamination problems, and how these impact user behaviour and choice. I’m also interested in how far what they optimise for does or does not capture users’ values, whether because of a misalignment of values between the user and the company, or because human preferences are complex and genuinely hard to learn. In doing this, it’s really tricky to distinguish in the wild between the choice architecture (the behavioural part) and the algorithm when attributing users’ actions to one or the other.
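As a rough illustration of the concentration bias I have in mind, here is a minimal sketch (the impression-log format, item ids, and choice of metric are all my own assumptions for the example): it measures how much of a recommender’s total exposure goes to its few most-shown items.

```python
from collections import Counter

def top_k_exposure_share(recommended_item_ids, k=10):
    """Fraction of all recommendation impressions that go to the k most-shown items.

    Values near 1.0 suggest exposure is concentrated on a handful of items;
    lower values suggest exposure is spread more evenly across the catalogue.
    """
    counts = Counter(recommended_item_ids)
    total = sum(counts.values())
    top_k = sum(count for _, count in counts.most_common(k))
    return top_k / total if total else 0.0

# Hypothetical impression log: one item id per recommendation shown to users.
impressions = ["item_1", "item_1", "item_2", "item_1", "item_3", "item_2", "item_1"]
print(top_k_exposure_share(impressions, k=2))  # ~0.86: highly concentrated
```

Even with a metric like this in hand, the attribution problem above remains: a high concentration score could come from the ranking algorithm, from the interface nudging users towards the same few items, or from users’ own preferences.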
So from the perspective of the recruiting party these reasons make sense. From the perspective of a critical outsider, these very same reasons can look bad (and are genuine reasons to mistrust the group that is recruiting):
- easier to manipulate their trajectory
- easier to exploit their labour
- free selection: build on top of, and continue, the rich-get-richer effects around ‘talented’ people
- let’s apply a supervised learning approach to high-impact people acquisition; the biases in the training data surely won’t affect it
I am a software engineer who transitioned to tech/AI policy/governance. I strongly agree with the overall message (or at least title) of this article: that AI governance needs technical people/work, especially for the ability to enforce regulation.
However, in the ‘types of technical work’ you lay out, I see some gaping governance questions. You outline various tools that could be built to improve the capability of actors in the governance space, but there are many such actors, and tools are by their nature dual use: where is the piece on who would wield these tools, and how they could be used responsibly? I would be more excited about new initiatives in this space that clearly set out which actors they work with, for which kinds of policy issues, and which not, and why. There is also a big gap around independence and conflicts of interest. Plenty of unavoidable legal issues crop up as soon as such tools are used in any context beyond a company’s voluntary initiative (which gives fewer guarantees than things that apply to all current and future companies, like regulation or, to some extent, standards). There is, and increasingly will be, huge demand for companies with practical AI auditing expertise; this is a big opportunity to start filling that gap.
I think the section on ‘advising on the above’ could be fleshed out a lot more. In my experience, because this area is very new, there is a lot of talking to do with lots of different people, and a lot of translation, before you get to actually do these things. It helps to be the kind of technical person who is willing to learn how to communicate with a non-technical audience, to learn from people with other backgrounds about the constraints and complexities of the policymaking world, and who derives satisfaction from this. I think this is hugely worthwhile, though, and if you’re that kind of person and looking for work in the area, do get in touch, as I have some opportunities (in the UK).
Finally, I’ll now highlight more explicitly the risk of technical people being used for the aims of others (which may or may not lead to good outcomes) in this space. In my view, if you really want to work at this intersection, you should be asking the questions above about anything you build: who will use this thing and how, what are the risks, and can I reduce them? And when you advise powerful actors, bringing your technical knowledge and expertise, do not be afraid to also give decision-makers your opinion on what is likely to lead to which kinds of real-world outcomes, to ask questions about the aims of the application, and to push to improve those aims.