Researching Causality and Safe AI at Oxford.
Previously, founder (with help from Trike Apps) of the EA Forum.
Discussing research etc at https://twitter.com/ryancareyai.
It doesn’t quite ring true to me that we need an investigation into what top EA figures knew. What we need is an investigation more broadly into how this was allowed to happen. We need to ask:
How did EA ideology play into SBF/FTX’s decisions?
Could we have seen this coming, or at least known to place less trust in SBF/FTX?
Can we do anything to mitigate the large harms that have come about?
How can we remove the conditions that allowed this to happen, and that might allow other large-scale harms to occur if left unremedied?
It’s not totally unreasonable to ask what EA figures knew, but it’s unlikely that they knew about the fraud, based on priors (it’s risky to tell people beyond your inner circle about fraudulent plans) and on insider reports. (And for me personally, based on knowledge of their character, although obviously that’s not going to convince a sceptic.)
Should we fund people for more years at a time? I’ve heard that various EA organisations and individuals with substantial track records still need to apply for funding one year at a time, because they either are refused longer-term funding, or they perceive they will be.
For example, the LTFF page asks for applications to be “as few as possible”, but clarifies that this means “established organizations once a year unless there is a significant reason for submitting multiple applications”. Even the largest organisations seem to only receive OpenPhil funding every 2-4 years. For individuals, even if they are highly capable, ~12 months seems to be the norm.
Offering longer (2-5 year) grants would have some obvious benefits:
Grantees spend less time writing grant applications
Evaluators spend less time reviewing grant applications
Grantees plan their activities longer-term
The biggest benefit, though, I think, is that:
Grantees would have greater career security.
Job security is something people value immensely. This is especially true as you get older (something I’ve noticed tbh), and would be even more so for someone trying to raise kids. In the EA economy, many people get by on short-term grants and contracts, and even if they are employed, their organisation might itself not have a very steady stream of income. Overall, I would say that although EA has made significant progress in offering good salaries and great offices, job stability is still not great. Moreover, career security is a potential blind spot for grantmakers, who generally do have ~permanent employment at a stable employer.
What’s more, I think that offering stable income may in many cases be cheaper than improving salaries and offices. Some people have, for years, never been refused a grant, and would likely return any funds that turned out not to be needed. Yet despite the low chance of funding being “wasted”, they still have to apply annually. In such cases, it seems especially clear that the time savings and talent retention benefits would outweigh any small losses.
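To make the trade-off concrete, here is a minimal back-of-envelope sketch in Python. Every parameter is a hypothetical placeholder rather than a real grantmaking figure; the point is only structural: application overhead scales with how often grants are renewed, while the expected “waste” from longer commitments stays small if unused funds are mostly returned.

```python
# Back-of-envelope comparison of 1-year vs 3-year grant cycles.
# Every number here is a hypothetical placeholder, not a real grantmaking figure.

HOURS_PER_APPLICATION = 40   # grantee hours per application (assumed)
HOURS_PER_REVIEW = 10        # evaluator hours per application (assumed)
HOURLY_VALUE = 100           # $ value of an hour of grantee/evaluator time (assumed)
ANNUAL_GRANT = 150_000       # $ granted per year (assumed)
P_WIND_DOWN = 0.05           # chance per year that the project winds down early (assumed)
RECOVERY_RATE = 0.8          # fraction of unused funds returned to the funder (assumed)

def expected_costs(grant_years, horizon_years=6):
    """Expected overhead and waste over a fixed horizon, under the assumptions above."""
    n_applications = horizon_years / grant_years
    overhead = n_applications * (HOURS_PER_APPLICATION + HOURS_PER_REVIEW) * HOURLY_VALUE
    # Crude waste model: if a project winds down mid-grant, roughly half the
    # remaining committed years sit idle, minus whatever gets returned.
    expected_idle_years = P_WIND_DOWN * (grant_years - 1) / 2
    waste = n_applications * expected_idle_years * ANNUAL_GRANT * (1 - RECOVERY_RATE)
    return {"application_overhead": round(overhead), "expected_waste": round(waste)}

for years in (1, 3):
    print(f"{years}-year grants:", expected_costs(years))
# 1-year grants: {'application_overhead': 30000, 'expected_waste': 0}
# 3-year grants: {'application_overhead': 10000, 'expected_waste': 3000}
```

Under these made-up numbers, 3-year grants come out ahead on overhead alone, before counting any career-security benefits.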
like Bostrom’s influential Superintelligence—Eliezer with the serial numbers filed off and an Oxford logo added
It’s not accurate that the key ideas of Superintelligence came to Bostrom from Eliezer, who originated them. Rather, at least some of the main ideas came to Eliezer from Nick. For instance, in one message from Nick to Eliezer on the Extropians mailing list, dated Dec 6th 1998, inline quotations show Eliezer arguing that it would be good to allow a superintelligent AI system to choose its own morality. Nick responds that it’s possible for an AI system to be highly intelligent without being motivated to act morally. In other words, Nick explains to Eliezer an early version of the orthogonality thesis.
Nick was not lagging behind Eliezer on evaluating the ideal timing of a singularity, either: the same thread reveals that they both had some grasp of the issue. Nick said that the fact that 150,000 people die per day must be contextualised against “the total number of sentiences that have died or may come to live”, foreshadowing his piece on Astronomical Waste, which would be published five years later. Eliezer said that having waited billions of years, the probability of success is more important than any delay of hundreds of years.
These are indeed two of the most-important macrostrategy insights relating to AI. A reasonable guess is that a lot of the big ideas in Superintelligence were discovered by Bostrom. Some surely came from Eliezer and his sequences, or from discussions between the two, and I suppose that some came from other utilitarians and extropians.
I think there’s a bit of a misunderstanding—I’m not asking people to narrowly conform to some message. For example, if you want to disagree with Andrew’s estimate of the number of lives that Carrick has saved, go ahead. I’m saying exhibit a basic level of cultural and political sensitivity. One of the strengths of the effective altruism community is that it’s been able to incorporate people to whom that doesn’t always come naturally, but this seems like a moment when it’s required anyway.
My best guess is that without Eliezer, we wouldn’t have a culture of [forecasting and predictions]
The timeline doesn’t make sense for this version of events at all. Eliezer was uninformed on this topic in 1999, at a time when Robin Hanson had already written about gambling on scientific theories (1990), prediction markets (1996), and other betting-related topics, as you can see from the bibliography of his Futarchy paper (2000). Before Eliezer wrote his sequences (2006-2009), the Long Now Foundation already had Long Bets (2003), and Tetlock had already written Expert Political Judgment (2005).
If Eliezer had not written his sequences, forecasting content would have filtered through to the EA community from contacts of Hanson. For instance, through blogging by other GMU economists like Caplan (2009). And of course, through Jason Matheny, who worked at FHI, where Hanson was an affiliate. Matheny ran the ACE project (2010), which led to the science behind Superforecasting, a book that the EA community would certainly have discovered.
Tangentially related: I would love to see a book of career decision worked examples. Rather than 80k’s cases, which often read like biographies or testimonials, these would go deeper on the problem of choosing jobs and activities. They would present a person (real or hypothetical), along with a snapshot of their career plans and questions. Then, once the reader has formulated some thoughts, the book would outline what it would advise, what that might depend on, and what career outcomes occurred in similar cases.
Many fields are taught in a case-based fashion, including medicine, poker, ethics, and law. Often, a reader can make good decisions in problems they encounter by interpolating between cases, even when they would struggle to analyse those problems from first principles. Some of my favourite books have a case-based style, such as An Anthropologist on Mars by Oliver Sacks. It’s not always the most efficient way to learn, but it’s pretty fun.
Several nitpicks:
“2022 was a year of continued growth for CEA and our programs.”—A bit of a misleading way to summarise CEA’s year?
“maintaining high retention and morale”—to me there did seem to be a dip in morale at the office recently
“[EA Forum] grew by around 2.9x this year.”—yes, although a bit of this was due to the FTX catastrophe
“Overall, we think that the quality of posts and discussion is roughly flat over the year, but it’s hard to judge.”—this year, a handful of people told me they felt the quality had decreased (something I hadn’t heard in previous years), and I noticed this too.
“Recently the community took a significant hit from the collapse of FTX and the suspected illegal and/or immoral behaviour of FTX executives.”—this is a very understated way to note that a former board member of CEA committed one of the largest financial frauds of all time.
I realise there are legal and other constraints, so maybe I am being harsh, but overall, several components of this post seemed not very “real” or straightforward relative to what I would usually expect from this sort of EA org update.
I personally would feel excited about rebranding “effective altruism” to a less ideological and more ideas-oriented brand (e.g., “global priorities community”, or simply “priorities community”), but I realize that others probably wouldn’t agree with me on this, it would be a costly change, and it may not even be feasible anymore to make the change at this point. OTOH, given that the community might grow much bigger than it currently is, it’s perhaps worth making the change now? I’d love to be proven wrong, of course.
This sounds very right to me.
Another way of putting this argument is that “global priorities (GP)” community is both more likable and more appropriate than “effective altruism (EA)” community. More likable because it’s less self-congratulatory, arrogant, identity-oriented, and ideologically intense.
More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action, and ideas rather than individual people or their virtues. I’d also say, more controversially, that when introducing EA ideas, I would be more likely to ask the question: “how ought one to decide what to work on?”, or “what are the big problems of our time?” rather than “how much ought one to give?” or “what is the best way to solve problem X?” Moreover, I’d more likely bring up Parfit’s catastrophic risks thought experiment, than Singer’s shallow pond. A more appropriate name could help reduce bait-and-switch dynamics, and help with recruiting people more suited to the jobs that we need done.
If you have a name that’s much more likable and somewhat more appropriate, then you’re in a much stronger position introducing the ideas to new people, whether they are highly-susceptible to them, or less so. So I imagine introducing these ideas as “GP” to a parent, an acquaintance, a donor, or an adjacent student group, would be less of an uphill battle than “EA” in almost all cases.
Apart from likability and appropriateness, the other five of Neumeier’s naming criteria are:
Distinctiveness. EA wins.
Brevity. GP wins. It’s 16 letters rather than 17, and 6 syllables rather than 7 (see the quick check after this list).
Easy spelling and pronunciation. GP wins. In a word frequency corpus, “Global” and “Priorities” feature 93M and 11M times, compared to “Effective” (75M) and “Altruism” (0.4M). Relatedly, “effective altruism” is annoying enough to say that people tend to abbreviate it to “EA”, which is somewhat opaque and exclusionary.
Extendability. GP wins. It’s more natural to use GP than EA to describe non-agents e.g. GP research vs EA research, and “policy prioritisation” is a better extension than “effective policy”, because we’re more about doing the important thing than just doing something well.
Protectability. EA wins, I guess, although note that “global priorities” already leads me exclusively to organisations in the EA community, so probably GP is protectable enough.
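For what it’s worth, the letter-count half of the brevity claim is easy to verify mechanically; a trivial sketch (syllable counts are left out, since they depend on pronunciation):

```python
# Quick check of the letter counts claimed above (spaces excluded).
for name in ("global priorities", "effective altruism"):
    print(f"{name!r}: {len(name.replace(' ', ''))} letters")
# 'global priorities': 16 letters
# 'effective altruism': 17 letters
```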
Overall, GP looks like a big upgrade. Another thing to keep in mind is that it may be more of an upgrade than it seems based on discussions within the existing community, because it consists of only those who were not repelled by the current “EA” name.
Concretely, what would this mean? Well… instead of EA Global, EA Forum, EA Handbook, EA Funds, EA Wiki, you would probably have GP Summit, GP Forum, (G)P Handbook, (G)P Funds, GP Wiki, etc. Obviously, there are some switching costs in terms of the effort of renaming and of name recognition, but as an originator of two of these things, I think the new names would themselves be improvements: it seems much more useful to go to a summit, or read resources, about global priorities than about altruism in the abstract. Orgs like OpenPhil/LongView/80k wouldn’t have to change their names at all.
Moreover, while changing the name to GP would break the names of some existing orgs, it wouldn’t always do so. In fact, the Global Priorities Institute was initially going to be the EA Institute, but the name had to be switched to sound more academically respectable. If the community were renamed the Global Priorities Community, then GPI would get to be named after the community it originated from and be academically respectable at the same time, which would be super-awesome. The fact that prioritisation arises more frequently in EA org names than any phrase except “EA” itself might be telling us something important. Consider: “Rethink Priorities”, “Global Priorities Project”, “Legal Priorities Project”, “Global Priorities Institute”, “Priority Wiki”, “Cause Prioritisation Wiki”.
Another possible disadvantage would be if it made it harder for us to attract our core audience. But to be honest, I think that the people who are super-excited about utilitarianism and rationality are pretty likely to find us anyway, and that having a slightly larger and more respectable-looking community would help with that in some ways anyway.
Finally, renaming can be an opportunity for re-centering the brand and strategy overall. How exactly we might refocus could be controversial, but it would be a valuable opportunity.
So overall, I’d be really excited about a name change!
Comments on Jacy Reese Anthis’ Some Early History of EA (archived version).
Summary: The piece could give the reader the impression that Jacy, Felicifia and THINK played a comparably important role to the Oxford community, Will, and Toby, which is not the case.
I’ll follow the chronological structure of Jacy’s post, focusing first on 2008-2012, then 2012-2021. Finally, I’ll discuss “founders” of EA, and sum up.
2008-2012
Jacy says that EA started as the confluence of four proto-communities: 1) SingInst/rationality, 2) Givewell/OpenPhil, 3) Felicifia, and 4) GWWC/80k (or the broader Oxford community). He also gives honorable mentions to randomistas and other Peter Singer fans. Great—so far I agree.
What is important to note, however, is the contribution that each of these groups made. For the first decade of EA, most key community institutions came from (4), the Oxford community, including GWWC, 80k, and CEA, and secondly from (2), although GiveWell seems to me to have been more of a grantmaking entity than a community hub. Although the rationality community provided many key ideas and introduced many key individuals to EA, the institutions that it ran, such as CFAR, were mostly oriented toward its own “rationality” community.
Finally, Felicifia is discussed at greatest length in the piece, and Jacy clearly has a special affinity for it, based on his history there, as do I. He goes as far as to describe the 2008-12 period as a history of “Felicifia and other proto-EA communities”. Although I would love to take credit for the development of EA in this period, I consider Felicifia to have had the third- or fourth-largest role in “founding EA” among the groups on this list. I understand its role as roughly analogous to the one currently played (in 2022) by the EA Forum, as compared to those of CEA and OpenPhil: it provides a loose social scaffolding that extends to parts of the world that lack any other EA organisation. It therefore provides some interesting ideas and leads to the discovery of some interesting people, but it is not where most of the work gets done.
Jacy largely discusses the Felicifia Forum as a key component, rather than the Felicifia group-blog. However, once again, this is not quite where I would place the focus. I agree that the Forum contributed a useful social-networking function to EA. However, I suspect we will find that more of the important ideas originated on Seth Baum’s Felicifia group-blog, and that more of the big contributors started there. Overall, I think the emphasis on the blog should be at least as great as that on the Forum.
2012 onwards
Jacy describes how he co-founded THINK in 2012 as the first student network explicitly focused on this emergent community. What he neglects to mention is that the GWWC and 80,000 Hours student networks already existed, focusing on effective giving and impactful careers. He also mentions that a forum post dated 2014 discussed the naming of CEA, but fails to note that the events described in that post occurred in 2011, culminating in the name “effective altruism” being selected for that community in December 2011. So steps had already been taken toward an “EA” moniker and an EA organisation before THINK began.
Co-founders of EA
To wrap things up, let’s get to the question of how this history connects to the “co-founding” of EA.
Some people, including me, have described themselves as “co-founders” of EA. I hesitate to use this term for anyone, because this has been a diverse, diffuse convergence of many communities. However, insofar as anyone does speak of founders or founding members, it should be acknowledged that dozens of people have worked full-time on EA community-building and research since before 2012, and that very few ideas in EA have been the responsibility of one thinker, or even a small number of them. We should be consistent in recognising these contributions.
There may have been more, but only three people come to mind who have described themselves as co-founders of EA: Will, Toby, and Jacy. For Will and Toby, this makes absolute sense: they were the main ringleaders of the main group (the Oxford community) that started EA, and they founded the main institutions there. The basis for considering Jacy among the founders, however, is that he was around in the early days (as were a couple of hundred others), and that he started one of the three main student groups, the latest and least important among them. In my view, it’s not a reasonable claim to have made.
Having said that, I agree that it is good to emphasise that as the “founders” of EA, Will and Toby only did a minority, perhaps 20%, of the actual work involved in founding it. Moreover, I think there is a related, interesting question: if Will and Toby had not founded EA, would it have happened otherwise? The groundswell of interest that Jacy describes suggests to me an affirmative answer: a large group of people were already becoming increasingly interested in areas relating to applied utilitarianism, and increasingly connected with one another, via GiveWell, academic utilitarian research, Felicifia, utilitarian Facebook groups, and other mechanisms. I lean toward thinking that something like an EA movement would have happened one way or another, although its characteristics might have been different.
That flag is cool, but here’s an alternative that uses some of the same ideas.
The black background represents the vastness of space, and its current emptiness. The blue dot represents our fragile home. The ratio of their sizes represents the importance of our cosmic potential (larger version here).
It’s also a reference to Carl Sagan’s Pale Blue Dot—a photo taken of Earth, from a spacecraft that is now further from Earth than any other human-made object, and that was the first to leave our solar system.
Sagan wrote this famous passage about the image:
Look again at that dot. That’s here. That’s home. That’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there, on a mote of dust suspended in a sunbeam.
The Earth is a very small stage in a vast cosmic arena. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner, how frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot.
Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves.
The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand.
It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we’ve ever known.
Yes, unfortunately I’ve also been hearing negatives about Conjecture, so much so that I was thinking of writing my own critical post (and for the record, I spoke to another non-Omega person who felt similarly). Now that your post is written, I won’t need to, but for the record, my three main concerns were as follows:
1. The dimension of honesty, and the genuineness of their business plan. I won’t repeat it here, because it was one of your main points, but I don’t think it’s an acceptable way to run a business to sell your investors on a product-oriented vision for the company while telling EAs that the focus is overwhelmingly on safety.
2. Turnover issues, including on the interpretability team. I’ve encountered at least half a dozen stories of people working at, or considering work at, Conjecture, and I’ve yet to hear one that was positive. This is about as negative a set of testimonials as I’ve heard about any EA organisation. Some prominent figures like Janus and Beren have left. In the last couple of months, turnover has been especially high: my understanding is that Connor told the interpretability team that they were to work instead on cognitive emulations, and most of them left. Much talent has been lost, and this wasn’t a smooth breakup. One aspect of this is that Conjecture abruptly cancelled an interpretability workshop they were scheduled to host, after some attendees had already flown to the UK for it.
3. Overconfidence. Some will find Connor’s views very sane, but I don’t, and would be remiss to ignore:
Thinking AGI 99%-likely by 2100 (even though 90%+ can be normal)
Most staff thinking AGI ruin >60% likely, and most expecting AGI in <7 years, and tweeting it (i.e. including non-researchers, which at least makes one wonder about groupthink)
Ranting about the harm of interpretability research at an EAG afterparty, so prominently that I heard about it several times the next day (the impact of interpretability research is hard to judge, and this comes across as unprofessional)
When I put this together, I get an overall picture that makes it pretty hard to recommend people work with Conjecture, and I would also be thinking about how to disentangle things like MATS from it.
No offense to Neel’s writing, but it’s instructive that Scott manages to write the same thesis so much better. It:
is 1/3 the length
intersperses caveats naturally, e.g. “Philosophers shouldn’t be constrained by PR.”
omits extraneous content about Norman Borlaug, leverage, etc.
has a less bossy title
distills the core question using crisp phrasing, e.g. “Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?” (my emphasis)
...and a ton of other things. Long-live the short EA Forum post!
I can think of problems like this with non-EA academics too. There was a famous medic who taught on my undergrad degree and who, iirc, gave weird physical compliments to female students during his lectures, and I can think of at least one non-EA prof who made multiple female students uncomfortable.
Having said that, my personal hunch would be that things are worse in EA. Some of the reasons are unpopular to talk about, but they include the community being quite male, young (including minors), poly, aspie, and less professional, and, as we are discovering, the fact that there can be quite a fine line between consequentialism and amorality. In some of these respects, it resembles the chess community and the atheism community, which have had significant problems.
A case of precocious policy influence, and my pitch for more research on how to get a top policy job.
Last week Lina Khan was appointed as Chair of the FTC, at age 32! How did she get such an elite role? At age 11, she moved to the US from London. In 2014, she studied antitrust topics at the New America Foundation (a centre-left think tank). She got a JD from Yale in 2017, publishing work relevant to the emerging Hipster Antitrust movement at the same time. In 2018, she worked as a legal fellow at the FTC. In 2020, she became an associate professor of law at Columbia. This year, 2021, she was appointed by Biden.
The FTC chair role is an extraordinary level of success to reach at such a young age. But it kind-of makes sense that she was able to get such a role: she has elite academic credentials that are highly relevant for the role, has ridden the hipster antitrust wave, and has experience of, and willingness to work in, government.
I think biosec and AI policy EAs could try to emulate this. Specifically, they could try to gather some elite academic credentials, while also engaging with regulatory issues and working for regulators or, more broadly, in the executive branch of government. Jason Matheny’s success is arguably a related example.
This also suggests a possible research agenda on how people get influential jobs in general. For many talented young EAs, it would be very useful to know. Similar to how Wiblin ran some numbers in 2015 on the chances of a seat in Congress given a background at Yale Law, we could ask about the White House, external political appointments (such as FTC commissioner), and the judiciary. Also, this ought to be quite tractable: all the names are public, e.g. here [Trump years] and here [Obama years], and most of the CVs are in the public domain—it just needs doing.
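To illustrate what this research would actually compute: the Wiblin-style number is just a conditional probability estimated from public rosters. A minimal sketch, where both counts are made-up placeholders that the real project would fill in by scraping appointee lists and CVs:

```python
# Sketch of a base-rate estimate: P(elite appointment | credential X).
# Both counts are hypothetical placeholders, NOT real data.

appointees_with_credential = 25   # e.g. appointees in some period holding a Yale Law JD (assumed)
credential_holders = 200 * 20     # e.g. ~200 Yale Law grads/year over a 20-year window (assumed)

print(f"base rate ≈ {appointees_with_credential / credential_holders:.2%}")
# ≈ 0.6% under these assumptions
```

The interesting work is in conditioning further, e.g. on also having regulator experience, which the public CVs would allow.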
Condolences to the campaign team. Their efforts were not a waste, because we’ve learned a lot from them.
In retrospect, we can see that too many variables were stacked against Carrick.
The candidate had little history in the area, little voting history, few relevant credentials, little political inclination, little practice speaking to a mass audience, and little of the style of a public figure.
The campaign suffered from too few local connections, a hostile media response, and too little local funding.
Outside the campaign, there was an allergic reaction to the crypto backing and the HMP endorsement.
There were some variables in his favour; to name a few: his compelling personal history, talented team and volunteers, and campaign and outside funding. But it wasn’t nearly enough. A good half of these weaknesses are changeable, though. I think a candidate who has a bit more inclination toward politics and mass outreach, who has experience in the state house, and who has some of the connections that come with that, will have good chances.
Precisely. Also, the frugality of past EA creates a selection effect, so probably there is a larger fraction of anti-frugal people outside the community (and among people who might be interested) than we would expect from looking inside it.
A lot of the comments seem fixated on, and keen to object to, the idea of “reputational collapse”, in a way that I find hard to relate to. This wasn’t a particularly load-bearing part of my argument; it was only used to argue that the idea that EA is a particularly promising way to get people interested in x-risk has become less plausible. Which was only one of three reasons not to promote EA in order to promote x-risk. Which was only one of many strategic suggestions.
That said, I find it hard not to notice that the reputation of, and enthusiasm for, EA has changed, to a degree that must affect recruitment via EA to AI safety. If you’re surrounded by EAs, it feels obvious. Trajan had a funereal atmosphere for weeks. Some were depressed for months. In news articles and on the forum, a cascade of PR disasters took up much airtime from Q4 2022 to Q1 2023. There’s been nothing like it in my 15 years around this community. The polling would have to have been pretty extraordinary to convince me that somehow I’ve misperceived what is really a pretty clear social reality.
The polling had some interesting findings, but not necessarily in a good way. The widely touted figure was that people’s recalled satisfaction dropped “only” 0.5 on a ten-point scale. But most people rate their satisfaction around 7 most of the time, so this looks like an effect size of Cohen’s d=0.4 or so (a rough calculation follows below). And this is in the more enthusiastic sample, who were willing to keep answering the EA survey even after these disasters. Scanning over the next few questions, you then see that 55%+ of respondents now have some form of concern about the EA community’s meta organisations, and likewise about the community and its norms, much more than the 25% who had some concerns with the philosophy. Moreover, 39% agree in some way that they want to see the community look very different, and the same number say they are less likely to associate with EA. And 31% substantially lost trust in EA public figures or leadership. Those who were more engaged were in most ways more concerned, which would fit with the selection-effect hypothesis (those of the less engaged EAs who became disaffected simply left, and didn’t respond to the survey). I find it really hard to understand those who would regard these results as “pretty compelling evidence” that EA has not suffered a major hit that would affect its viability as a way of recruiting to AIS.
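To spell out the arithmetic behind that d≈0.4: Cohen’s d is the mean change divided by the standard deviation, and the SD range below is my assumption, since the survey’s actual spread isn’t quoted here.

```python
# Cohen's d for a 0.5-point drop on a ten-point satisfaction scale,
# across a plausible (assumed) range of standard deviations.
drop = 0.5
for sd in (1.0, 1.25, 1.5):
    print(f"SD = {sd}: d = {drop / sd:.2f}")
# SD = 1.0:  d = 0.50
# SD = 1.25: d = 0.40
# SD = 1.5:  d = 0.33
```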
The polling of people outside the EA community is the least convincing to me, for a variety of reasons. Firstly, these people currently know the least, and many of them will hear more in the future, such as when SBF’s trial happens, the Michael Lewis book is released, or some of the nine film/video adaptations come out. Importantly, if any of them become interested in EA, they are likely to hear about such things, and so come to resemble the first cohort to a greater extent. But already, ~5% of them mention FTX in the interview, and ~1% of them mention it in the context of the meaning of EA or how they heard about it. In other words, the “scenario where promoting EA could go badly” is something that a community-builder would likely experience at least once. And those who know about FTX have a much more negative view (d=1.5, with high uncertainty). So although this is the more positive of the two batches of polling, I wouldn’t gloss it as “there’s no big problem”.
I’m sorry you feel that way. I’m a bit confused about the distinction, unless by ‘EA Movement’ you mean ‘EA Community’.
I mean I’ve lost enthusiasm for the community/movement element, at least on a gut level. I’ve no objection to people donating, and living a consequentialist-leaning philosophy—rather I’m in favour of that so long as they’re applying the ideas carefully.
This is probably as good a place as any to mention that whatever people say about this race could very easily get picked up by local media and affect it. As a general principle, if you have an unintuitive idea for how to help Carrick’s candidacy, it might be an occasion to keep it to yourself, or discuss it privately. Generally, here, on Twitter, and everywhere, thinking twice before posting about this topic would be a reasonable policy.