Still feeling a bit disillusioned after pursuing academic research up to post-doctoral level, and having spent some time teaching languages and working at a democracy NGO, I feel that I haven’t found a way to do good for the world and sustain myself and my wife at the same time.
Haris Shekeris
Dear Alex,
Fascinating read, both for the title and for the topic tackled and the recommendations.
I think that history is a very good source of ideas and of inspiration on what to do next, something which I believe is neglected by the EA movement in general and by its primary actors specifically. I would even argue that Toby Ord’s answer to you was lukewarm at best, if not merely polite at worst (though I do remember tackling a fascinating book by a historian in my first book-club participation).
As for the topic, I’d like to offer some comments (these are just my ignorant comments, not to be treated as authoritative in any way). First, from what I remember, some of the Fabian ideas, such as at least the vote for women, were around well before the Fabians. According to Wikipedia, “A member of the Liberal Party and author of the early feminist work The Subjection of Women, Mill was also the second member of Parliament to call for women’s suffrage after Henry Hunt in 1832.[5][6]” (here’s the link: https://en.wikipedia.org/wiki/John_Stuart_Mill ). Wikipedia does not seem to consider J.S. Mill or his predecessor Henry Hunt a Fabian.
Furthermore, I think that in the grand scheme of historical progress it may actually be somewhat random which movements’ ideas get applied and which don’t, or even to what extent these ideas are ‘theirs’ rather than already floating in the ether (the zeitgeist, in other words). For example, I think many of the concrete ideas you attribute to the Fabians probably overlapped with those of other socialist and other thinkers of the time; on the other hand, as Ord remarks, their ways of influencing and pushing their agenda forward may have had more to do with peculiarities of British culture (groups of gentlemen, usually rich, in a stratified society, rather than, say, mass revolution; even Science, a previous quintessentially British revolution around 1650, was for many years practiced and propagated only among learned gentlemen).
As for communism, in your introduction I’d be a bit more charitable in recognising many of its positive ideas, such as trade unions, though I wouldn’t hasten to contradict the preceding paragraph by remarking that many of the measures suggested by Marx may also have drawn from the zeitgeist of the mid-to-late 19th century. I’d also remark that it’s possible for communism to have the last laugh after all: I heard some time ago that whereas in the first half of the twentieth century it was the poor and uneducated who supported it, nowadays it’s also championed by many educated people in the US (think, of course in a much more constrained way, of the support for Bernie Sanders).
Anyway, history’s a funny thing, so you never know what may be rediscovered later, where its random walk will take us (random according to me, though that’s one of the huge points of contention I have with EA doctrine, namely longtermism), and which movements will be deemed to have influenced what in the future.
And a note on your recommendations: yes, it sounds good to attract elites, but as a personal life choice I’ve recently decided that for my biggest passions and the struggles I choose to fight, I will go with the non-elite masses. At least that’s the aspiration.
Best Wishes,
Haris
Wow, nice! I think this is a good way of bringing important stakeholders to the table!
Whoops, scrap my previous answer, especially the first point. I now see that you were referring to a specific quote. Let me see.
Ah, yes, you may be right that I may have equivocated in the quote you cite, that it may have been more precise had I used the shorthand LLMs. So thanks for your charity!
However, I would like to point out that the fact that you can find something either trivially true or trivially false under a binary logic may leave the proposition itself not trivial at all under a different interpretation, no? I mean, it’s significant that it is not simply trivially true; it already admits two interpretations. But that’s an aside I’m not much interested in, and I suspect you may not be either.
And now your request for a meaningful definition suddenly makes a lot of sense too! I think what I was trying to express is captured by ‘on their own’. I mean that whereas humans (and maybe animals, though I’m not 100% sure; as I state in my bold capital letters, I may be guilty of anthropomorphism) may sometimes do as others do, and at other times do as they please (judge, choose, etc.), LLMs only have one of these options. At the time of writing I may have thought that LLMs don’t judge or opine without prompts. You could reply that humans always act on prompts too, to which I’d reply that (a) this isn’t so, since humans do sometimes opine unprompted, and (b) I’d rather anthropomorphise in the sense of treating animals as imbued with human traits than treat humans as glorified machines. The latter is a matter of arbitrary (you may say) choice on my part, and I will not offer an argument for it, at least not now; hence the bold capitals.
Once again, many thanks for enlightening me, and apologies if my first post misunderstood your comment. I hope I am more on the ball now!
Best Wishes,
Looking forward to an answer from you!
Haris
Dear Daniel,
First of all, many many thanks for your time, charity and quickness!! I really appreciate it that you deemed my post worthy of a reply!
Now, as for the reply and the specific points that you raise. First, I think I am quite clear and explicit regarding the use of the shorthands LLM and algorithm. Indeed, in the epilogue I end with the example of the YouTube algorithm, which I believe is an algorithm but not an LLM (please correct me if I’m wrong).
Now, on to your second point. I am puzzled by your assertion in brackets that ‘(rules, I might add, that we don’t know)’. Are you saying that not even the coders who code LLMs know these rules (in this case I’d use the word algorithms, as the rules would, in my poor grasp of the matter, take the form of algorithms, such as ‘if you get prompt X, look into dataset Y’, etc.), or do you mean that the rules are not known to the user? I would appreciate it if you could clarify this for me. Finally, could you please explain what specific ‘meaningful definition’ you’re after in your last sentence? I feel a bit lost.
Once again, many thanks for your prompt response. I would love it if my comments elicited another response from you that allows both of us to reach a synthesis :)
Best Wishes,
Haris
Gaming the Algorithms: Large Language Models as Mirrors
Dear friend @titotal
Many many thanks for your measured response, as well as for the link to your article, which is very enlightening to me. I think I agree with your assessment that the transition to an AGI, or something close to it, will not take place overnight, and that it may even never arrive, or at least that there won’t be the kind of AGI existential threat that many prominent commentators, even in this community, assume.
However, as you may see from my own (ok, admittedly a bit polemical) linked post (though from what I see now, I haven’t managed to turn it into a hyperlink), I’m a bit worried about us humans making AI (or computability, anyway) the yardstick of our intelligence, and then being surprised that we may fail at it, or find something better at it, rather than naming the thing as something different from intelligence. A sort of negative performativity in action there.
So, in summary: nailing responses to linguistic prompts in language terms is fine, good, excellent, but let’s not reduce what we humans believe makes us lords of the universe (intelligence; this is a bit tongue-in-cheek, as I also believe that animals have civilisations and intelligences of their own) to responding to prompts, when we can do so much better. I believe intelligence also entails emotion, artistic behaviour, cooking, empathy, and other behaviour not reducible to ‘responding to prompts’.
Best Wishes
Apologies if I was waffling a bit above, I’d be delighted to hear your thoughts!
Haris
PS: The edit is just changing the link to the article into a hyperlink :)
ChatGPT not so clever or not so artificial as hyped to be?
This sounds quite shocking: the absence of an answer, the laughter in the video, the −23 votes here, and the lack of a big discussion.
On the conspiracy theory front, it may be that the guy doesn’t want to create panic. Or that the threat is not there, despite what the main players/experts believe (the Musks and Zuckerbergs of the world).
I think we should take seriously the first possibility: that the key political player thinks the threat is real (and thus agrees with the players/experts) and knows his stuff; it’s just that he doesn’t want to reveal much to the public. What do you think?
Dear friends,
I won’t hide this: I was kindly asked by a friend to take a look at this thread. I have to admit that I was surprised and taken aback by the fact that the discussion focused not on whether this would restore dignity and give independence and a new lease on life to those not so well off, for whatever reason, nor on the reduction of inequality (after all, from what I hear, the US is one of the most unequal societies in the developed world); instead, it gave me the impression of concerning itself too much with minutiae. From the evidence and the history, as this article points out: https://en.wikipedia.org/wiki/Universal_basic_income , it seems that the idea is (a) not new at all, with quite a venerable and ‘universal’ history (from Julius Caesar’s Rome to Ahmadinejad’s Iran), and (b) one that has worked well in various settings (not everywhere, admittedly).
So, with all due respect, I would kindly ask you to see the forest rather than missing it for the trees; in other words, consider whether UBI can help alleviate poverty and reduce inequality (my take would be that it can, by empowering people through guaranteed money; if I remember correctly, in some experiments with UBI there was a surge in entrepreneurship from formerly disempowered sections of the population).
As for numbers (I think EA likes numbers): if a person with an annual income of 5,000 receives 1,000 of annual help, this represents a 20% increase in their revenues. If a person earns 1,000,000 annually, then 1,000 of help is merely (if I’m doing my sums right) a 0.1% revenue increase. However, the difference may be that the first person feeds their whole family milk and bread for the year whilst the second one buys their third Rolex watch. So everybody’s happy.
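The sums above can be sketched in a few lines (the 5,000 and 1,000,000 incomes and the 1,000 transfer are just the hypothetical figures from my example, not data from any real UBI scheme):

```python
def relative_boost(income: float, transfer: float) -> float:
    """Return a flat transfer as a percentage of annual income."""
    return 100 * transfer / income

# The same flat transfer is a far bigger relative boost for a low earner
# than for a high earner.
low_earner = relative_boost(5_000, 1_000)        # 20.0 (%)
high_earner = relative_boost(1_000_000, 1_000)   # 0.1 (%)
print(f"low earner: {low_earner}%, high earner: {high_earner}%")
```

This is just the usual point that a uniform transfer is strongly progressive in relative terms, even though everyone receives the same nominal amount.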
Apologies in advance if this sounds a bit crude and not logical enough, I’m just feeling a bit sentimental today,
Haris
Dear Jon,
Many thanks for this, for your kindness in answering so thoughtfully and for giving me food for thought too! I’m quite a lazy reader, but I may actually spend money to buy the book you suggest (ok, let’s take the baby step of reading the summary as soon as possible first). If you still don’t want to give up on your left leanings, you may be interested in an older classic (if you haven’t already read it): https://en.wikipedia.org/wiki/The_Great_Transformation_(book)
The great takeaway for me from this book was that the ‘modern’ (from a historical perspective) perception of labour is a relatively recent development, and that it is an inherently political one (born out of legislation rather than as a product of the free market). My own politics (or scientopolitics, let’s call them) are that politics and legislation should be above all, so I wouldn’t feel squeamish about political solutions (I know this position has its own obvious pitfalls, though).
Dear friends, you talk about AI generating a lot of riches, and I get the feeling that you mean ‘generate a lot of riches for everybody’. However, I fail to understand this. How will AI generate income for a person with no job, even if the prices of goods drop? Won’t the riches be generated only for those who run the AIs? Can somebody please clarify this for me? I hope I haven’t missed something totally obvious.
Dear @JonCefalu, thanks for this very honest, insightful and thought-provoking article!
You do seem very anxious and you do touch on quite a number of topics. I would like to engage with you on the topic of joblessness, which I find really interesting and neglected (i think) by at least the EA literature that I have seen.
To me, a future where most people no longer have to work (because AI and general robots, or whatever, take care of food production, the production of entertainment programmes, and work in the technoscientific sector) could go both ways: (a) it could indeed be an s-risk dystopia where we spend our time consuming questionable culture at home or at malls (and generally suffer from ill health and associated risks), though with no job to give us money, I don’t know how these transactions would be made, and I’d like to hear some thoughts about this; or (b) it could be a utopia and a virtuous circle where we produce new ways of entertaining ourselves, producing quality time (family, new forms of art or philosophy, etc.), or keeping ourselves busy; the AI/AGI saturates the market, we react (in a virtuous way, nothing sinister), the AGI catches up, and so on.
So, to sum up, the substance of the above all-too-likely thought experiment is: in the event of AGI taking off, what will happen to (free) time, and what will happen to money? Regarding the latter, given that the most advanced technology lies with companies whose motive is money-making, I would be a bit pessimistic.
As for the other thoughts about nuclear weapons and Skynet, I’d really love to learn more as it sounds fascinating and like stuff which mere mortals rarely get to know about :)
Flagging a potential problem for longtermism and the possibility of expanding human civilisation to other planets: what will the people eat there? Can we just assume that technoscience will give us the answer? Or is that too quick and too optimistic a question? Can one imagine a situation where humanity goes extinct because the Earth finally becomes uninhabitable and, on the first new planet we step on, the technology either fails or the settlers miss the opportunity window to develop their food supply? I’m sure there must be some such examples in the history of settlers arriving in new worlds; I don’t know if anybody is working on this in the context of longtermism, though.
Just some food for thought hopefully
https://www.theguardian.com/environment/2023/jan/07/holy-grail-wheat-gene-discovery-could-feed-our-overheated-world
https://www.theguardian.com/commentisfree/2022/nov/30/science-hear-nature-digital-bioacoustics
What happens if, in the future, we discover that all life on Earth (especially plants) is sentient, but at the same time (a) there are a lot more humans on the planet waiting to be fed and (b) synthetic food/proteins are deemed dangerous to human health?
Do we go back to eating plants and animals again? Do we farm them? Do we continue pursuing technologies for food given the past failures?
Hey hello,
Thanks, let’s digest stuff a bit in the next few days and see how it goes. Thanks for the offer, same goes for me, at the moment I’ve got time! :)
Best Wishes,
Haris
Hey hello,
Wow, that sounds really interesting, the lobster evidence! Though if you ask most people, they’ll probably say that humans are ‘something more’ than just animals, whether as God’s images or as uniquely rational beings (suggesting that other beings are less rational or not rational at all).
Best Wishes,
Haris
Hey hello,
As you may have found out by now, I’m sometimes a bit sceptical about such ‘scientific principles’. Also, you may have seen what I hinted at before: there have been human societies which didn’t have hierarchies, so it’s not totally impossible.
Best Wishes,
Haris
Dear Miguel,
Excellent!! I saw you’ve posted on another topic, but I haven’t read it yet. At the moment I have no further work beyond what you can see on my ResearchGate profile (where the link to the presentation you saw is). I’m mostly interested in decision-making and governance, though also in the AI/AGI alignment problem (though I don’t have the background in computing). I’m taking the policy-making course by an EA-affiliated entity (it starts the week after next) and the EA course (it was supposed to start yesterday, but will start next Tuesday), and in the meanwhile I’m just scoping around for ideas.
Best Wishes,
Haris
Dear Miguel,
First of all, it seems to me that your first paragraph on expertise goes against the firefighters (and free speech) example. As for money, I take home the standardising function (what you label the simplification of measurement); however, about that, and then about capitalism, I think we can agree to disagree as to what the facts and truths are in this discussion. I don’t think these facts matter to lions, for example, so maybe we can try to live like lions? Or like foragers? :) (Especially if we don’t all have the space constraint of being confined to Earth.)
Many thanks for your ideas too, they’re good food for thought and a reality check for me!
Best Wishes,
Haris
Dear Alex,
Many thanks for getting back to me. Yes, I agree that having a method to actually implement your wishful revolutionary thinking is very important, and yes, that could be the basis of a long and fascinating discussion I’d like to have over a glass of wine and/or good food :)
Best Wishes,
Haris