UK Civil Servant and prolific tweeter (@EAheadlines)
Kirsten
- Seven Habits of Highly Effective People: Read it in high school and found it's influenced my thinking since, especially the part about keeping promises to yourself.
- Persepolis: Read it when I was 18 and found it a useful fictional introduction to a culture very different from my own. Many other books could do a similar job.
- Getting to Yes: Changed how I think about negotiation.
- The Gospel of John: Changed how I think about everything.
- Deep Work: I read this recently, and while I don't agree with everything (I think the author overreaches occasionally), I do think it would have been useful at 18.
If possible, I’d reduce the reading age of the questions by using simpler words and shorter sentences. I consistently overestimate the reading ability of average citizens.
If these statements were really on a ballot, people would likely have seen advertisements or news clips about the proposal. Right now, people are hearing these proposals for the first time. It's important that they understand what you're asking.
This makes a lot of sense to me—people usually give me a funny look if I mention AI risks. I’ll try mentioning “AI accidents” to fellow public policy students and see if that phrase is more intuitive.
Both creating and sustaining a government agency will likely take more popular support than we currently have, but I still think it's an important long-term goal.
I’m under the impression that agencies are less dependent on the ebb and flow of public opinion than individual policy ideas. However, they would certainly still need some public support. On the other hand, having an agency for catastrophic risk prevention might give the issue legitimacy and actually make it more popular.
I saw the idea in passing and it caught my eye. I’ll look out for this kind of information over the next week.
We'd have to think very carefully about how we frame it. The decision is less obvious than it might appear at first, hard to reverse, and a major factor in how successful we are at improving government responses to risks overall.
They'll expect it to address different issues if it's under Defense rather than Health and Human Services or Homeland Security. If we make it a part of the defense bureaucracy, it's there forever, which has pros and cons. That would likely be a better approach somewhere like the US, where defense is relatively well-funded, than somewhere like Canada, where the defense budget is regularly being cut. It's also a better approach if we're very concerned about nuclear war and bioterrorism and we want to frame AGI as a hostile power. It's a worse option if we want to frame dangerous AGI as domestic enterprise gone wrong and focus on issues like pandemics and climate change. If we decide the creation of government agencies is an important part of our long-term policy strategy, several people should think very hard about where these agencies should be located in each government we lobby.
I think the breadth and interdisciplinary nature of x-risks are the best arguments for a dedicated agency with a mandate to consider any plausible catastrophic risks. It’s too easy to overlook risks without a natural “home” in a particular department right now.
It sounds like your main concerns are creating needless bureaucracy and moving researchers/civil servants from areas where they have a natural fit (eg pandemic research in the Department of Health) to an interdisciplinary group where they can’t easily draw on relevant expertise and might be unhappy over different funding levels.
The part of that I’m most concerned about is moving people from a relevant organisation to a less relevant organisation. It does make sense for pandemic preparation to be under health.
The part of the current system that I’m most concerned about is identification of new risks. In the policy world, things don’t get done unless there’s a clear person responsible for them. If there’s no one responsible for thinking about “What else could go wrong?” no one will be thinking about it. Alternatively, if people are only responsible for thinking “What could go wrong?” in their own departments (Health, Defense, etc) it could be easy to miss a risk that falls outside of the current structure of government departments. Even if an outside expert spots a significant risk (think AI risk), if there’s not a clear department responsible (in the UK: Business, Enterprise, and Innovation? Digital, Culture, Media, and Sport?) then nothing will happen. If we have a clear place to go every time a concern comes up, where concerns can be assessed against each other and prioritised, we would be better at dealing with risks.
In the US, maybe Homeland Security fills this role? Canada has a Minister of Public Safety and Emergency Preparedness, so that's quite clear. The UK doesn't have anything as clear-cut, and that can be a problem when it comes to advocacy.
About your other points:
- I don't like needless bureaucracy either, but it seems like bureaucracy is a major part of getting your issue on the agenda. It might be necessary, even if it doesn't seem incredibly efficient.
- I actually think it would be really good to compare government funding for different catastrophic scenarios. I doubt it's based on anything so rational as weighting current people more highly than future people. :)
- On government risks: hopefully if it's explicitly someone's job to consider risks, they can put forward policy ideas to mitigate those risks. I want those risks to be easy to notice and make policy around.
This was very informative. Thank you for taking the time to share it.
“I assume she makes this judgement solely based on the author’s willingness to explore hypotheses besides the discrimination one”—This seems like a very uncharitable assumption to make. I can easily think of multiple other reasons why she might consider him an asshole.
I would agree that the comments will likely come from a small subset of real opinions, because this topic can be quite emotionally charged. From a look at the comment landscape right now (in particular, the number of posts that seem to question the existence of sexism), I think it's plausible that a woman who had experienced sexism in EA would feel little incentive to comment.
I’m seeing a lot of comments questioning the literature around diversity improving performance. EA prizes accuracy, so that’s a good thing.
However, I'm concerned we're falling into two very common traps: requiring women to prove themselves more competent than men, and status quo bias.
In general, I'd expect teams to be diverse unless a non-diverse team can be proven more effective. Because so many EA leaders are currently white men, I can imagine some reasons why we might have less-diverse teams in the short term, but my baseline expectation would always be to prefer a more diverse team, all other things being equal.
“fighting urban food deserts without looking into the.” I think there’s a word or phrase missing.
If you really want to publicize something time-sensitive, perhaps you could pitch to multiple publications with query letters personalized to each? You could end up writing 2+ articles or op-eds on the same topic (open letter about factory farming) but with different angles and tones (focus on famous people who are concerned vs focus on animal rights vs focus on risks to humans).
I’ve seen this option suggested online eg here: https://www.theadventurouswriter.com/blogwriting/multiple-query-letters-magazines/
[Edited for clarity.]
Hi Wyatt,
I’m a Canadian currently studying public policy in London. I’m planning to write my dissertation on AI policy and gender, so naturally I’m fascinated by your organization.
The topics you’re planning to discuss, especially the risk of a general artificial intelligence, seem quite sensitive. You didn’t say a lot about your background. What relevant experience does your team have at handling sensitive issues or framing political debates? (I mean in your day jobs; I know the nonprofit is new.)
Kirsten
I'm tentatively planning to look at the government's role with regard to AI that discriminates, or is perceived to discriminate, based on sex. For example, if an AI system were only short-listing men for top jobs, should the government respond with regulation, make it easier for offended parties to challenge it in court, try to provide incentives for the company to improve its technology, or something else entirely?
I just started my MA a month ago, though, and won’t be seriously focusing on my dissertation until May, so I will have a much better idea in six months. :)
I’d find this pretty surprising based on my knowledge of the Canadian (Albertan) & British education systems. Does anyone have evidence for standardized exams decreasing “corruption”? (Ben, I’m not sure exactly what you meant by corruption here—do you mean grades that don’t match ability, or lazy teaching, or something else?)
What’s the deal with the stars? What makes a project 1 or 3?
I’m excited about the idea of new funds. As a prospective user, my preferences are:
Limited / well-organised choices. This is because I, like many people, get overwhelmed by too many choices. For example, perhaps I could choose between global poverty, animal welfare, and existential risks, and then choose between options within the category (eg “Low-Risk Global Poverty Fund” or “Food Security Research Fund”).
Trustworthy fund managers / reasonable allocation of funds. There are many reasonable ways to vet new funds, but ultimately I’m using the service because I don’t want to have to carefully vet them myself.