AI policy
Aidan
Awesome, thanks Adam, this makes a lot of sense. I'd be excited to see reports on specific thinkers like Gwern and Yuval Noah Harari. I'd be especially excited to look at the track records of institutions, like frontier developers or governments (e.g. the UK Government or its AISI).
I'd prefer to see you pick "people who have made AI predictions who are not famous for those predictions" in some random-ish way. I could just say "Gary Marcus" and be done, but I'd only be saying that because he disagrees with me on AI progress and I think he'd look bad if his track record was examined.
You're probably not trying to be super scientific, but I definitely wouldn't cite anything to a policymaker that was cherry-picked. I also wouldn't cite your tool if you only found an effect because you mainly compared people who are famous for scaling predictions, like Gary Marcus and Gwern.
Thanks for all the work you guys do! And love the new website.
I have really enjoyed using Squiggle AI for estimation tasks, and particularly cost-benefit analyses, since going to your talk about it at Minifest in mid December. Thanks for this post and for building this!
AI: MATS, CAIS, IAPS
Animal welfare: EA Animal Welfare Fund, THL, Animals Aotearoa
As a vegan I agree with Marcus and Jeff's takes, but I also think at least carnitarianism (not eating fish) is justifiable on pure utilitarian grounds. The 5-cent offset estimate is miles off (by a factor of 50–100) for fish and shrimp, and this is where your argument falls down.
I made a rough model suggesting that a 100g cooked serving of farmed carp corresponds to ~1.1 years in a factory farm, and a 100g serving of farmed shrimp to ~6 years. I modelled salmon and it came out much lower, but I expect that figure to grow once I factor in that salmon are carnivorous and farmed fish are used in salmon feed.
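For concreteness, here's a minimal sketch of the conversion the model does. The per-animal parameters below are illustrative placeholders, not the spreadsheet's actual inputs; I've back-filled them so the outputs roughly match the ~1.1 and ~6 year figures above.

```python
# Sketch of the serving -> factory-farm-time conversion described above.
# Parameter values are placeholders, not the spreadsheet's actual inputs.

def farm_years_per_serving(serving_g: float,
                           edible_g_per_animal: float,
                           farmed_lifespan_years: float) -> float:
    """Animal-years spent on a farm per serving eaten."""
    animals_per_serving = serving_g / edible_g_per_animal
    return animals_per_serving * farmed_lifespan_years

# Carp: assume ~230g of edible meat per fish and a ~2.5-year grow-out
print(farm_years_per_serving(100, 230, 2.5))  # ~1.1 years
# Shrimp: assume ~8g of edible meat per shrimp and a ~0.5-year grow-out
print(farm_years_per_serving(100, 8, 0.5))    # ~6.3 years
```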
This is a lot of time, and it's more expensive to pay for offsets that cover a longer time period. We have two main EA-aligned options for aquaculture "offsets". One is the Fish Welfare Initiative, which (iirc) improves the life of a single fish across its lifetime for a marginal dollar. The other is the Shrimp Welfare Project, which improves the death (a process lasting 3–5 minutes) of 1,000 shrimp per year for a marginal dollar (we don't know how good their corporate campaigns will be yet).
I'm really not sure how good it is for a carp to have a lower stocking density and higher water quality, which is FWI's intervention in India, and essentially the best case for FWI's effectiveness. If we assume it's a 30% reduction in lifetime pain, we can offset a fish meal for roughly $3.33.
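Spelling out the arithmetic behind that $3.33 (a minimal sketch; it assumes a fish meal corresponds to roughly one farmed carp's lifetime, which I haven't stated explicitly above):

```python
# If $1 improves one fish's entire farmed lifetime, and that improvement is
# worth a 30% reduction in lifetime pain, then fully offsetting one
# fish-lifetime of suffering costs 1 / 0.30 dollars.
dollars_per_fish_lifetime = 1.00   # FWI marginal-dollar figure (iirc, above)
pain_reduction = 0.30              # assumed effect of the intervention
print(dollars_per_fish_lifetime / pain_reduction)  # ~$3.33 per fish meal
```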
I don't think it's good to prevent 1 year of shrimp suffocation and then go off and cause shrimp to spend 100 years in farmed conditions (which are really bad, to be clear). Biting the bullet on that, and assuming a stunner lasts 20 years with no discount rate, you'd have to pay $4.60 to offset a single shrimp meal (nearly 100 times more than the estimate you used).
Maybe you could offset using a different species (chicken, through corporate commitments). Vasco Grilo estimates that a marginal dollar averts 2 years of chicken life in factory farms. Naively I'd think that chicken lives are better than shrimp lives, but also that shrimp matter slightly less morally, so the two adjustments roughly cancel. Even then, you probably have to pay $3 to offset a shrimp meal using the easiest species to influence.
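The chicken-based figure follows the same pattern. A sketch, assuming (as above) that the "chicken lives are better" and "shrimp matter less morally" adjustments roughly cancel, so one shrimp-year trades against one chicken-year:

```python
shrimp_years_per_meal = 6.0       # from my model above
chicken_years_per_dollar = 2.0    # Vasco Grilo's marginal-dollar estimate
print(shrimp_years_per_meal / chicken_years_per_dollar)  # ~$3 per shrimp meal
```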
Additionally, the lead time on offsets is long (I'd guess at least five years from a donation to a corporate commitment being implemented). It's not good to have an offset that realises most of its value 20 years from now, when by then there is a much higher chance that lab-grown meat will be cheaper or animal welfare regulations will be better.
I think you should at least be carnitarian: it's incredibly easy, and based on my modelling (second sheet) it accounts for the vast majority (90–95%) of the (morally adjusted) factory-farm time saved by vegetarianism. I doubt any person gets $4 of utility from eating a different kind of meat, and this just adds up over time.
While regulation would be best,[1] commitments still make a case like this incredibly useful. The Humane League has been working on getting corporations to sign the Better Chicken Commitment and switch to better breeds. It might be easier to (1) commit to using better breeds and (2) actually follow through with that commitment[2] if all UK producers have to switch to better breeds.
Yes, EAs are especially altruistic. But although especially altruistic people exist across all economic classes and races, you'd still expect to see more privileged people in EA, because they have the means (alongside other factors, like the cycle of low diversity).
And so EA is not a good measure of who is altruistic, because it incidentally filters out people who are less wealthy, have less spare time, are more risk-averse, or don't want to be in spaces that don't represent them. If you have more privilege, you can (not necessarily want to, but have the means to) do more altruism. And it's important for people who don't have much time or money to have self-serving motivations: they know the best way to spend what little they have.
That leads to my next point, which is that the vast majority of elite white rich people (needlessly) have selfish motivations, and can't exactly be expected to altruistically set up co-ops or start businesses with no expectation of high returns even in the world where things work out. This makes your point irrelevant, because it shows that even when people have the means, they are still mostly not altruistic.
EA is possible because a small minority of people have sufficient means (time or money) and a weird sort of altruism. Anyone who feels this weird altruism is welcome. If you know how to make people more altruistic, that would be fantastic information. Note that there would be many things higher on the to-do list than "socialism".
I think one of the reasons socialism is so unfalsifiable is that it's incredibly easy for socialists to rapidly shift the goalposts to another form of "socialism" upon critique, so thank you for your definition.
You say it's just "the reduction of private capital" or this relatively benign form of anti-imperialism, but the post above cites the USSR's space program and China's economic growth as examples of socialist successes, so it must have something to do with the "socialism" in those economies. Your definition sounds like capitalism to me: you can pay rent to a landlord and have your surplus labour taken; the only condition is that "private capital" is being "reduced" (something like the railways being nationalised or corporations being taxed).
On your institutional point: the IMF and World Bank are intergovernmental organisations, so I don't believe they count as "private capital". Furthermore, the Belt and Road Initiative is run by China, which is listed as a socialist country in the post.
On your intervention point: I would prefer that US intervention done in the name of anti-communism across the late 20th century hadn't been so brutal and destructive. Does that make me any less of a liberal? I think you can be pro-capitalism and anti-imperialism, in the same way you can be pro-socialism and pro-imperialism (China, Venezuela, the USSR). In other words, pro-imperialism and pro-capitalism are independent attributes.
It's important not to feel as if you are "wasting" your life just because people tell you that you are smart. It seems like a pretty good rule of thumb to prioritise the sustainability of your EA actions: making sure you are happy and comfortable in your job, and putting yourself first.
If you are truly intrinsically interested in a career change towards something particularly effective, I wouldn't be too concerned about test scores; they probably aren't the best metric for how you'd do in grad study or fare in your career. Your GPA is great, and being from an "unremarkable" university won't matter.
It seems like you may not be so comfortable in more quantitative fields, but 80k recommends heaps of areas that sound like a great fit: Philosophy and Psychology seem like particularly important areas for EAs!
A quick once-over of their career reviews section reveals:
Population ethics researcher / policy analyst
Journalism
Research management
Non-technical roles in technical AI or biorisk research
Startup employee
Startup founding
Community building
To gauge fit more closely, it could be worth expanding that list and running through this article.
80k has a lot of reflecting to do if what you say about them not being useful to most people is true. In my opinion, though, they do try to frame things in a way that appeals to the average competent person!
It would seem counterproductive, at least to policymakers who think AI is helpful, to place any kind of widespread ban on essay-writing AI, or to somehow regulate ChatGPT and others to ensure students don't use their platforms nefariously. Regulations won't keep up with the times, and won't be well understood by lawmakers and enforcers.
As a student, I've found ChatGPT has made me vastly more productive (especially as a student researcher in a field I don't know much about). This sort of technology seems here to stay, so it seems useful for students to learn to incorporate the tool into their lives. I'm not old enough to remember, but I assume a similar debate took place over search engines.
There are probably myriad ways educational institutions can pick up on cheating. Even if AI isn't used to flag text as AI-generated directly, institutions could use it to perform linguistic analysis of irregularities and writing patterns, like the analysis used against the Unabomber at his trial. Children especially, I assume, would have distinctive writing patterns, though I am not qualified to speak on any of this. Cheaters tend (in a self-reinforcing cycle) not to be so smart, so I would expect schools to find a way around their use of AI.
Overall it seems more plausible and productive for schools to regulate this themselves. Where there is worry about academic misconduct, market-based solutions will appear, as they already exist for plagiarism checking.
Seems like a waste of time to read the news, even if you work in AI policy.
The example you provided of the Axios article illustrates why I think you're incorrect. I remember reading that article and being convinced that the US AISI was all but guaranteed to be scrapped. Yet knowledgeable people I heard from soon after implied they thought it was still a coin toss. Why such a discrepancy? I think the news has a "something is happening" bias: sometimes they twist words to imply something is happening, even when it's not clear-cut.
The way fast news presents issues can lead readers to the wrong conclusion: my summation at the time was incorrect, as yours seems to be.
Really? It seems more appropriate to say, "the Trump White House was maybe planning to fire all probationary employees at NIST, which would have gutted AISI, but the final scope was not determined at that time." And actually, this would have been a great hedge, because it turns out that a month later the US AISI is still here, despite many probationary NIST employees having been fired earlier in the month.