It sounds like, if his org had expected mass emigration, they'd have spent less time on other human capital investments as well, though.
Yes and no—the only concrete thing I see @WillieG having done was “sign[ing] letters of recommendation for each employee, which I later found out were used to pad visa applications.”
Sounds like they did more than this, though the description is vague:
We invested a lot of time and money into training these employees, with the expectation that they (as members of the college-educated elite) would help lead human rights reform in the country long after our project disbanded.
Thanks for providing this summary!
A possible comparison is to the dollar-a-year men: successful business leaders who went to work for the government for a nominal one-dollar salary.
Be the change you want to see in the world.
A visa cap on the number of souls allowed to migrate to Earth per year.
I think the lowest hanging fruit is ‘don’t repeatedly post publicly about how conservatives are odious people that we don’t want to be even vaguely associated with’.
You might also enjoy this longer piece I shared here.
Or maybe you think that abortion bans seem 4 orders of magnitude more tractable than factory farming bans, which seems extremely unlikely to me.
You might be interested in this excellent post by Ariel Simnegar, which argues that mandating fetal anesthesia for late-term abortions could be an effective and tractable intervention.
Thanks for sharing this detailed report, and more importantly for your work keeping a potentially viable anti-pandemic technology legal!
I realized that if you were even arguing about abortion, then you must value human fetuses (which look a lot like chicken fetuses) 8,650 times more than tortured, murdered chickens.
This seems not at all true to me? Quite apart from my being skeptical about your maths, people are allowed to care and argue about things that aren’t as important as factory farming. Very few people spend all their effort on the single most important cause. To be honest, this seems like an isolated demand for rigour.
I think that centralisation (by which I assume you really mean OP-funding-centralisation) is a contingent fact about the EA movement, rather than an inherent one. And it sounds like you agree. But then I’m not sure why we’d use this as an exclusion criterion? If nothing else, if, once centralised, a group being quite independent is alone sufficient for exclusion, then you can basically never decentralise.
Oh wow, I actually think your grandparent comment here was way more misleading than their tweet was! It sounds like they quoted you almost verbatim. Yes, they took out that you set up the experiment… but of course? If I write “John attempted to kill Sally when he was drunk and angry”, and you summarise it as “John attempted to kill Sally, he’s dangerous, be careful!”, that is a totally fair summarisation. Yes, it cuts context, but that is always the case; any short summarisation does this.
And unlike your comment, they never said ‘escape into the wild’. When I read your comment, I assumed they had said this.
Also, their tweet directly quotes your tweet, so users can easily look at the original source. In contrast, your comment here doesn’t link to their tweet; before you linked to it, I assumed they had done something significantly worse.
In response to our recent paper “Alignment Faking in Large Language Models”, they posted a tweet which implied that we caught the model trying to escape in the wild. I tried to correct possible misunderstandings here.
Probably would be easier for people to evaluate this if you included a link?
Thanks for the comment! You’re right that this approach would need modification if ‘dangers that only become apparent after mass deployment’ becomes a major risk factor, and that a ‘trial’ commercialisation period could be a good response. My hope is that the regulatory exam period would be able to catch much more than at present though—the regulator would have ample time to design and deploy more sophisticated tests, with the aid of labs who would presumably love to submit a test their competitor would fail (so long as they themselves pass).
If EA were a broad and decentralised movement, similar to e.g. environmentalism, I’d classify SMA as an EA project. But right now EA isn’t quite that. Personally, I hope we one day get there.
This seems pretty circular to me?
Interesting suggestion! Continuous or pseudo-continuous threshold raising isn’t something I considered. Here are some quick thoughts:
Continuous scaling could make eval validity easier to establish, because the jump between eval-train (n-1) and eval-deploy (n) is smaller.
Continuous scaling encourages training to be done quickly, because you want to get your model launched before it is outdated.
Continuous scaling means you give up on the idea of models being evaluated side-by-side.
They rightly note that protectionism constitutes a sales tax which falls hardest on low-income Americans.
A bit of a nitpick, but no, they don’t? They argue it is similar in many ways to a consumption tax, but consumption taxes are not the same as sales taxes. Sales taxes have unique compliance difficulties which other types of consumption tax, like VAT, do not have. Sales taxes are an unusually hard type of tax to enforce (because shops will increasingly under-report sales), leading to distortions in favour of less compliant businesses, whereas tariffs are unusually easy to enforce, because the government controls the ports and airports. My recollection is that economists generally think well-designed consumption taxes, like VAT, are unusually good taxes. The problem is that neither sales taxes nor tariffs are particularly good examples of consumption taxes.
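To see why VAT is easier to enforce, an illustration with made-up numbers: under a 20% VAT, a wholesaler selling inputs to a shop for $100 charges $20 of VAT, and the shop selling to customers for $150 charges $30 but reclaims the $20 it already paid. Each firm’s reclaim documents the other firm’s sales, so under-reporting leaves a paper trail; a retail sales tax is collected only at the final sale, with no such cross-check.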
Using a very simple ctrl-f methodology (sketched below), I estimated that over 95% of this post is the CC Guidelines. In contrast, you spend less than one sentence arguing for having such a policy, and zero words whatsoever considering tradeoffs. If you want engagement on something, you need to provide some material to engage with, and if you thought the CC Guidelines were inappropriate for EA orgs and required significant modification, you should have said so in the post! I don’t think you can share a lengthy post and then declare, post hoc, that almost the entire post is off-limits and that comments should be restricted to a topic which was barely mentioned in the post, while running the risk that orgs might copy-paste the guidelines without understanding the issues.
Indeed, as far as I can see even you agree with this, since one paragraph after chastising me for getting “bogged down in the specifics of the document” you ask for advice on how to adapt these policies, which necessarily involves engagement with the specifics.
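For concreteness, here is a rough sketch of the kind of estimate I mean; the placeholder strings stand in for the actual text, which you can paste in yourself:

```python
# Rough version of the "ctrl-f" estimate: what share of the post's text
# is the quoted CC Guidelines? The two variables below are placeholders;
# paste in the actual post text and the quoted guidelines text.
post_text = "...full text of the post..."
guidelines_text = "...full text of the quoted CC Guidelines..."

share = len(guidelines_text) / len(post_text)
print(f"Guidelines make up {share:.0%} of the post")
```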
Thanks for working on this; it seems potentially very valuable. Good initiative!
At this point I think we are reading tea leaves that the OP could easily clarify, but FWIW my interpretation was that they invested more than they would have otherwise, e.g. in less specific training, because they thought this training was a secondary route to impact.