Still feeling a bit disillusioned after pursuing academic research up to postdoctoral level, and having spent some time teaching languages and working at a democracy NGO, I feel that I haven’t found a way to do good for the world and sustain myself and my wife at the same time.
Haris Shekeris
Whoops, scrap my previous answer, especially the first point. I now see that you were referring to a specific quote. Let me take another look.
Ah, yes, you may be right that I equivocated in the quote you cite, and that it would have been more precise had I used the shorthand LLMs. So thanks for your charity!
However, I would like to point out that if something can be read as either trivially true or trivially false under a binary logic, the proposition itself may not be trivial at all under a different interpretation, no? The very fact that it admits two readings is significant. But that’s an aside I’m not much interested in, and I suspect you may not be either.
And now your request for a meaningful definition suddenly makes a lot of sense too!!!! I think what I was trying to express is captured by ‘on their own’. Whereas humans (and maybe animals, though I’m not 100% sure; as I state in my bold capitals, I may be guilty of anthropomorphism) may sometimes do as others do, and at other times do as they please (judge, choose, etc.), LLMs have only one of these options. At the time of writing I may have thought that LLMs don’t judge or opine without prompts. You could of course reply that humans always act on prompts too, to which I’d answer that a) this isn’t so, since humans do sometimes opine unprompted, and b) I’d rather anthropomorphise in the sense of treating animals as imbued with human traits than treat humans as glorified machines. This is a matter of arbitrary (you may say) choice on my part, and I will not offer an argument for it, at least not now; hence the bold capitals.
Once again, many thanks for enlightening me, and apologies if my first post misunderstood your comment. I hope I am more on the ball now!
Best Wishes,
Looking forward to an answer from you!
Haris
Dear Daniel,
First of all, many, many thanks for your time, charity and quickness!! I really appreciate that you deemed my post worthy of a reply!
Now, on to your reply and the specific points that you raise. First of all, I think I am quite clear and explicit regarding my use of the shorthands LLM and algorithm. Indeed, in the epilogue I end with the example of the YouTube algorithm, which I believe is an algorithm but not an LLM (please correct me if I’m wrong).
Now, on to your second point. I am puzzled by your assertion in brackets that ‘(rules, I might add, that we don’t know)’. Are you saying that not even the coders who build LLMs know these rules (in which case I’d use the word algorithms, as the rules would, in my poor grasp of the matter, take the form of algorithms, such as ‘if you get prompt X, look into dataset Y’, etc.), or do you mean that the rules are not known to the user? I would appreciate it if you could clarify this for me. Finally, could you please explain what specific ‘meaningful definition’ you’re after in your last sentence? I feel a bit lost.
Once again, many thanks for your prompt response. I would love it if my comments elicited another response from you that allowed both of us to reach a synthesis :)
Best Wishes,
Haris
Dear friend @titotal
Many, many thanks for your measured response, as well as for the link to your article, which is very enlightening to me. I think I agree with your assessment that the transition to AGI, or something close to it, will not take place overnight, and that it may never arrive, or at least that there won’t be the kind of AGI existential threat that many prominent commentators, even in this community, assume.
However, as you may see from my own (admittedly a bit polemical) linked post (which, I now notice, I haven’t managed to turn into a hyperlink), I’m a bit worried about us humans making AI (or computability, anyway) the yardstick of our intelligence, and then being surprised when we fail at it or find something better at it, rather than naming that thing something other than intelligence. A sort of negative performativity in action there.
So, in summary: ok, nailing responses to linguistic prompts in language terms, fine, good, excellent; but let’s not reduce what we humans believe makes us lords of the universe (intelligence; this is a bit tongue-in-cheek, as I also believe that animals have civilisations and intelligences of their own) to responding to prompts, when we can do so much better. I believe intelligence also entails emotions, artistic behaviour, cooking, empathy, and other behaviour not reducible to ‘responding to prompts’.
Best Wishes
Apologies if I was waffling a bit above, I’d be delighted to hear your thoughts!
Haris
PS: The edit is just changing the link to the article into a hyperlink :)
This sounds quite shocking: the absence of an answer, the laughter in the video, the −23 votes here, and the lack of a big discussion.
On the conspiracy-theory front, it may be that he doesn’t want to create panic. Or the threat may not be there, despite what the main players/experts (the Musks and Zuckerbergs of the world) believe.
I think we should take seriously the first possibility, that the key political player thinks the threat is real (and thus agrees with the players/experts) and knows his stuff, it’s just that he doesn’t want to reveal much to the public. What do you think?
Dear friends,
I won’t hide this: I was kindly asked by a friend to take a look at this thread. I have to admit I was surprised and taken aback that the discussion focused not on whether this would restore dignity, give independence and a new lease on life to the not-so-well-off for whatever reason, and reduce inequality (after all, from what I hear, the US is one of the most unequal societies in the developed world), but instead gave me the impression of concerning itself too much with minutiae. From the evidence and the history, as this article points out: https://en.wikipedia.org/wiki/Universal_basic_income , it seems that the idea is a) not new at all, with quite a venerable and ‘universal’ history (from Julius Caesar’s Rome to Ahmadinejad’s Iran), and b) that it has worked well in various settings (not everywhere, admittedly).
So, with all due respect, I would kindly ask you to see the forest rather than the trees; in other words, consider whether UBI can help alleviate poverty and reduce inequality (my take is that it can, by empowering people through guaranteed money; if I remember correctly, some UBI experiments saw a surge in entrepreneurship among formerly disempowered sections of the population).
As for numbers (I think EA likes numbers): if a person with an annual income of 5,000 receives 1,000 in annual help, that represents a 20% increase in their revenue. If a person earns 1,000,000 annually, then 1,000 in help is merely (if I’m doing my sums right) a 0.1% increase. The difference may be that the first person feeds their whole family milk and bread for the year, while the second buys their third Rolex. So everybody’s happy.
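For the numerically inclined, the sums above can be checked with a minimal sketch (the incomes and the flat 1,000 payment are the hypothetical figures from my example, not real data):

```python
def relative_boost(income: float, ubi: float = 1_000) -> float:
    """Return a flat UBI payment as a percentage of annual income.

    Rounded to two decimal places; illustrative arithmetic only.
    """
    return round(ubi / income * 100, 2)

print(relative_boost(5_000))      # 20.0  -> a fifth of the low earner's income
print(relative_boost(1_000_000))  # 0.1   -> a rounding error for the high earner
```

The same flat transfer is worth two hundred times more, in relative terms, to the low earner.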
Apologies in advance if this sounds a bit crude and not logical enough, I’m just feeling a bit sentimental today,
Haris
Dear Jon,
Many thanks for this, for your kindness in answering so thoughtfully and giving me food for thought too! I’m quite a lazy reader, but I may actually spend money to buy the book you suggest (ok, let’s take the baby step of reading the summary as soon as possible first). If you still don’t want to give up on your left leanings, you may be interested in an older classic (if you haven’t already read it): https://en.wikipedia.org/wiki/The_Great_Transformation_(book)
The great takeaway for me from this book was that the ‘modern’ (from a historical perspective) perception of labor is a relatively recent development, and an inherently political one (born out of legislation rather than as a product of the free market). My own politics (or scientopolitics, let’s call them) hold that politics and legislation should come above all, so I wouldn’t feel squeamish about political solutions (I know this position has its own obvious pitfalls, though).
Dear friends, you talk about AI generating a lot of riches, and I get the feeling that you mean ‘generate a lot of riches for everybody’; however, I fail to understand this. How will AI generate income for a person with no job, even if the prices of goods drop? Won’t the riches be generated only for those who run the AIs? Can somebody please clarify this for me? I hope I haven’t missed something totally obvious.
Dear @JonCefalu, thanks for this very honest, insightful and thought-provoking article!
You do seem very anxious, and you touch on quite a number of topics. I would like to engage with you on the topic of joblessness, which I find really interesting and neglected (I think) by at least the EA literature I have seen.
To me, a future where most people no longer have to work (because AI and general-purpose robots, or whatever, take care of food production, the production of entertainment programmes, and work in the technoscientific sector) could go both ways: a) it could indeed be an s-risk dystopia where we spend our time consuming questionable culture at home or in malls (and generally suffer from ill health and associated risks), though with no job to give us money, I don’t know how these transactions would be made, and I’d like to hear some thoughts about this; or b) it could be a utopia and a virtuous circle, where we produce new ways of entertaining ourselves, producing quality time (family, new forms of art or philosophy, etc.), or keeping ourselves busy, the AI/AGI saturates the market, we react (in a virtuous way, nothing sinister), the AGI catches up, and so on.
So to sum up, the substance of the above all-too-likely thought experiment is: in the event of AGI taking off, what will happen to (free) time, and what will happen to money? Regarding the latter, given that the most advanced technology lies with companies whose motive is money-making, I would be a bit pessimistic.
As for the other thoughts about nuclear weapons and Skynet, I’d really love to learn more as it sounds fascinating and like stuff which mere mortals rarely get to know about :)
Flagging a potential problem for longtermism and the possibility of expanding human civilisation to other planets: what will people eat there? Can we just assume that technoscience will give us the answer? Or is that too quick and too optimistic a question? Can one imagine a situation where humanity goes extinct because the Earth finally becomes uninhabitable, and on the first new planet we set foot on the technology either fails or the settlers miss the window of opportunity to develop their food supply? I’m sure there must be some such examples in the existing history of settlers in new worlds; I don’t know if anybody is working on this in the context of longtermism, though.
Just some food for thought hopefully
https://www.theguardian.com/environment/2023/jan/07/holy-grail-wheat-gene-discovery-could-feed-our-overheated-world
https://www.theguardian.com/commentisfree/2022/nov/30/science-hear-nature-digital-bioacoustics
What happens if in the future we discover that all life on Earth (especially plants) is sentient, while at the same time a) there are many more humans on the planet waiting to be fed and b) synthetic food/proteins are deemed dangerous to human health?
Do we go back to eating plants and animals again? Do we farm them? Do we continue pursuing technologies for food given the past failures?
Hey hello,
Thanks, let’s digest stuff a bit in the next few days and see how it goes. Thanks for the offer, same goes for me, at the moment I’ve got time! :)
Best Wishes,
Haris
Hey hello,
Wow, that sounds really interesting, the lobster evidence! Though if you ask most people, they’ll probably say that humans are ‘something more’ than just animals, whether as God’s images or as uniquely rational beings (suggesting that other beings are less rational, or not rational at all).
Best Wishes,
Haris
Hey hello,
As you may have found out by now, I’m sometimes a bit sceptical about such ‘scientific principles’. Also, as I hinted before, there have been human societies which didn’t have hierarchies, so it’s not impossible.
Best Wishes,
Haris
Dear Miguel,
Excellent!! I saw you’ve posted on another topic, but I haven’t read it yet. At the moment I have no further work beyond what you can see on my ResearchGate profile (where the link to the presentation you saw is). I’m mostly interested in decision-making and governance, though also in the AI/AGI alignment problem (though I don’t have the background in computing). I’m taking the policy-making course by an EA-affiliated entity (it starts the week after next) and the EA course (it was supposed to start yesterday, but will start next Tuesday), and in the meanwhile I’m just scoping around for ideas.
Best Wishes,
Haris
Dear Miguel,
First of all, it seems to me that your first paragraph on expertise goes against the firefighters (and free speech) example. As for money, I take home the standardising function (what you label the simplification of measurement); however, on that, and then on capitalism, I think we can agree to disagree about what the facts and truths in this discussion are. I don’t think these facts are important for lions, for example, so maybe we can try to live like lions? Or like foragers? :) (especially if we don’t all face the space constraint of being confined to Earth).
Many thanks for your ideas too, they’re good food for thought and a reality check for me!
Best Wishes,
Haris
Dear Miguel, thanks I’ll check them out!
I’m here also because I want my life to have a bit more of an impact on others and on the future. It’s a journey I have only recently undertaken (with EA, I mean); we’ll see where it leads me. You never know, we may end up working together, hehe :)
Yup, let me know your ideas; they may even affect our discussion above about experts and problems.
Best Wishes,
Haris
Dear Miguel, so this Project Ideas of the Future Fund gives out money for things such as a constitution for the future or other decision-making tools? You’re tempting me there; I’ll have to check it out.
As for wicked problems, here’s the Wikipedia article; it’s a bit more specific than what you describe there, let’s see: https://en.wikipedia.org/wiki/Wicked_problem This could be a good start.
Best Wishes,
Haris
Dear Miguel,
Do you think there are unsolvable problems? As for the specific one, I think it can be solved by a mix of selection methods, or just by random selection of candidates (in the context of public decision-makers).
Mm, to answer your second question first: I struggle to see the connection between fairness and the exchange of goods for money. For example, I could well envisage a gift economy, or an exchange of time instead of credit and material goods. Then again, I’m not sure all of this would be fairer rather than just an alternative. I’ll have to look into the notion of fairness a bit more. I remember a bit of Rawls and justice as fairness, but I’ll have to spend some time dusting that off in order to have something interesting to say.
As for why I said exchange for money only produces inequality: I guess what I meant was that if one medium of exchange (money) can be used for anything, while the other has specific uses, then the person who ends up with the use-for-anything good is better off, especially if they start specialising in collecting this universal good and yardstick. In plain words, if some people end up with more money and others with less, then that for me is inequality enough.
Best Wishes,
Haris
Dear Miguel,
Many thanks for this, quite thought-provoking. Looking forward to your blog post; I hope I can brag a bit that I played a role in it by giving you some food for thought :)
As for the PowerPoint, would you mind telling me what you mean by flow? As unfortunately I don’t have the video for that one, we may have to go either with focused questions or with a video session where I walk you through it. One thing to get you started: do you know what ‘wicked problems’ are? You can get a bit of an idea from Wikipedia; I trust the wisdom of the crowd on that (otherwise I can refer you to the original paper).
Best Wishes,
Haris (not Miguel, see above, hehe)
Wow, nice! I think this is a great way of bringing important stakeholders to the table!