AI governance and animal welfare
Yes, EAs are especially altruistic. Although especially altruistic people exist across all economic classes and races, you'd still expect to see more privileged people in EA because they have the means (alongside other factors like the cycle of low diversity).
And so, EA is not a good measure of who is altruistic, because it incidentally filters out people who are less wealthy, have less spare time, are more risk-averse, or don't want to be in spaces that don't represent them. If you have more privilege, you can (not want to, but have the means to) do more altruism. It's important for people to have self-serving motivations if they don't have much time or money: they know the best way to spend it.
That leads to my next point, which is that the vast majority of rich, white, elite people (needlessly) have selfish motivations, and can't exactly be expected to altruistically set up co-ops or start businesses with no expectation of high returns even in the world where things work out. This makes your point irrelevant, because it shows that even when people have the means, they are still mostly not altruistic.
EA is possible because a small minority of people have sufficient means (time or money) and a weird altruism. Anyone who feels this weird altruism is welcome. If you know how to make people more altruistic, that would be fantastic information. Note that there would be many things with a higher priority on the to-do list than "socialism".
I think one of the reasons socialism is so unfalsifiable is that it's incredibly easy for socialists to rapidly shift the goalposts to another form of "socialism" upon critique, so thank you for your definition.
You say it's just "the reduction of private capital" or this relatively benign form of anti-imperialism, but the post above holds up the USSR's space program and China's economic growth as examples of socialist successes, so those successes must have to do with the "socialism" in those economies. Your definition sounds like capitalism to me: you can pay rent to a landlord and have your surplus labour taken; the only condition is that "private capital" is being "reduced" (something like the railways being nationalised or corporations being taxed).
On your institutional point, the IMF and World Bank are intergovernmental organisations, so I don't believe they count as "private capital". Furthermore, the Belt & Road Initiative is run by China, which is listed as a socialist country in the post.
On your intervention point: I would prefer that US intervention done in the name of anti-communism across the late 20th century hadn't been so brutal and destructive. Does that make me any less of a liberal? I think you can be pro-capitalism and anti-imperialism, in the same way you can be pro-socialism and pro-imperialism (China, Venezuela, the USSR). In other words, the attributes pro-imperialism and pro-capitalism are independent.
It's important not to feel as if you are "wasting" your life because people tell you that you are smart. It seems like a pretty good rule of thumb to prioritise the sustainability of your EA actions: making sure you are happy and comfortable in your job, and putting yourself first.
If you are truly intrinsically interested in a career change towards something particularly effective, I wouldn't be super concerned about test scores; they probably aren't the best metric for how you'd do in grad study or fare in your career. Your GPA is great, and being from an "unremarkable" university won't matter.
It seems like you may not be so comfortable in more quantitative fields, but 80k recommends heaps of areas that sound like a great fit: Philosophy and Psychology seem like particularly important areas for EAs!
A quick once-over of their career reviews section reveals:
Population ethics researcher / policy analyst
Journalism
Research management
Non-technical roles in technical AI or biorisk research
Startup employee
Startup founding
Community building
To gauge fit more closely, it could be worth expanding that list and running through this article.
80k has a lot of reflecting to do if what you say about them not being useful to most people is true. In my opinion, though, they do try to frame things in a way that appeals to the average competent person!
It would seem counterproductive, at least to policymakers who think AI is helpful, to place any kind of widespread ban on essay-writing AI, or to somehow regulate ChatGPT and others to ensure students don't use their platforms nefariously. Regulations won't keep pace with the times, and won't be understood well by lawmakers and enforcers.
As a student (and especially as a student researcher in a field I don't know much about), I've found ChatGPT has made me vastly more productive. This sort of technology seems here to stay, so it seems useful for students to learn to incorporate the tool into their lives. I'm not old enough to remember, but I assume a similar debate took place over search engines.
There are probably myriad ways educational institutions can pick up on cheating. Even if AI isn't used to classify text as AI-generated directly, institutions could use it to perform linguistic analysis of irregularities and writing patterns, like the stylometric analysis used against the Unabomber at his trial (a sketch of the idea is below). Children especially, I assume, would have distinctive writing patterns, though I am not qualified to speak on any of this. Cheaters tend to (in a self-reinforcing cycle) not be so smart, so I would expect schools to find a way around students' use of AI.
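To make the stylometry point concrete, here is a minimal sketch of the general technique, assuming nothing about any school's actual tooling: build function-word frequency profiles for two texts and compare them with cosine similarity. Everything here is illustrative, not a real detection product.

```python
from collections import Counter
import math

# A small set of "function words" whose relative frequencies are a classic
# stylometric fingerprint: authors tend to use them at fairly stable rates.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "was", "it"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two profiles (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# A low similarity between a student's known writing and a submitted essay
# could flag the essay for human review (never as proof of misconduct on its own).
known = profile("text the student is known to have written goes here")
submitted = profile("the newly submitted essay text goes here")
print(f"stylometric similarity: {cosine_similarity(known, submitted):.2f}")
```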
Overall, it seems more plausible and productive for schools to regulate this themselves. Where there is worry about academic misconduct, market-based solutions will emerge, as they already have for plagiarism checking.
While regulation would be best,[1] the existence of corporate commitments still makes a case like this incredibly useful. The Humane League and the Better Chicken Commitment campaign have been working on getting corporations to commit to using better breeds. It might be easier to (1) commit to using better breeds and (2) actually follow through with that commitment[2] if all UK producers have to switch to better breeds.
[1] See API's work on that here
[2] See Lewis Bollard's remarks on that here