This post makes me feel very positive about GWWC and its future! It's hard to overrate the value of focus. One thing I would love to learn is what you are doubling down on as a result.
No idea, it's probably worth reaching out to ask them and alert them in case they aren't already mindful of it! I personally am not the least bit interested in this concern, so I will not take any action to address it.
I am not saying this to be a dick (I hope), but because I don't want to give you a mistaken impression that we are currently making any effort to address this consideration at Screwworm Free Future.
I think people are far too happy to give an answer like: "Thanks for highlighting this concern, we are very mindful of this throughout our work", which, while nice-sounding, is ultimately dishonest and designed to avoid criticism. EA needs more honesty, and you deserve to know my actual stance.
I don't mind at all someone looking into this and I am happy to change my mind if presented with evidence, but my prior for this changing my mind is so low that I don't currently consider it worthwhile to spend time investigating or even encouraging others to investigate.
Wishing Centre For the best of luck!
I do a lot of writing at my job, and find myself using AI more and more for drafting. I find it especially helpful when I am stuck.
Like any human assigned a writing task, Claude cannot magically guess what you want. I find that when I see other people get lackluster writing results with AI, it's very often because they provided almost no context for the AI to work with.
When asking for help with a draft, I will often write out a few paragraphs of thoughts on the draft. For example, if I were brainstorming ideas for a title, I might write out a prompt like:
"I am looking to create a title for the following document: <document>.
My current best attempt at a title is: 'Why LLMs need context to do good work'.
I think this title does a good job at explaining the core message, namely that LLMs cannot guess what you want if you don't provide sufficient context, but it does a poor job at communicating <some other thing I care about communicating>.
Please help brainstorm ten other titles, from which we can ideate."
Perhaps Claude comes up with two good titles, or one title has a word I particularly like. Then I might follow up saying: "I like this word, it captures <some concept> very well. Can we ideate a few more ideas using this word?"
From this process, I will usually get out something good, which I wouldn't have been able to think of myself. Usually I'll take those sentences, work them into my draft, and continue.
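For illustration, here is roughly what the same "provide context, then iterate" pattern would look like if you scripted it with the Anthropic Python SDK instead of the chat interface. This is just a minimal sketch: the model name, the `document` variable, and the prompt text are placeholders, not anything from my actual workflow.

```python
# Minimal sketch of the "give lots of context, then iterate" prompting pattern
# using the Anthropic Python SDK (pip install anthropic).
# The model name and document are placeholders.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

document = "..."  # the draft you want a title for

prompt = (
    f"I am looking to create a title for the following document: {document}\n\n"
    "My current best attempt at a title is: 'Why LLMs need context to do good work'.\n"
    "I think this title explains the core message, but it does a poor job at "
    "communicating <some other thing I care about>.\n"
    "Please help brainstorm ten other titles, from which we can ideate."
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

# Read the brainstormed titles, then send a follow-up message narrowing in on
# whichever word or idea you liked, exactly as described above.
print(response.content[0].text)
```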
Really incredible job; it's exciting to see so many great projects come out of Catalyze. Hopefully people will consider funding not just the projects, but also the new incubator which created them!
On a side note, I am especially excited about TamperSec and see their work as the most important technical contribution that can be made to AI governance currently.
Don't put all of your savings into shady cryptocurrencies. If it sounds too good to be true, it probably is.
When I wrote this comment, the post was at -4 upvotes!
Why on earth are people downvoting this post?
Figuring out how to respond to the USAID freeze (and then doing it) is probably the most important question in global health and development right now. That there has been virtually no discussion on the forum so far has frankly been quite shocking to me.
Have a fat upvote, wishing you the best of luck
More dakka means pouring more firepower onto a problem. Two examples:
Example: "bright lights don't help my seasonal depression". More dakka: "have you tried even brighter lights?"
Example: we brainstormed ten ideas, and none of them seem good. More dakka: "Try listing 100 ideas."
Thank you for pursuing this line of argument, I think the question of legal rights for AI is a really important one. One thought I've had reading your previous posts about this is whether legal rights will matter not only for securing the welfare of AI but also for safeguarding humanity.
I haven't really thought this fully through, but here's my line of thinking:
We are currently on track to create superintelligence, and I don't think we can say with much confidence whether the AI we create will value the same things as us. It might therefore be important to set up mechanisms which make peaceful collaboration with humanity the most attractive option for superintelligent AI(s) to get what they want.
If your best bet for getting what you want involves eliminating all humans, you are a lot more likely to eliminate all humans!
Screwworm Free Future is hiring for a Director
I have pushed the idea on the CE research team to the point I'm sure they're sick of hearing me rant about it!
To my knowledge it's on their list of ideas to research for their next round of animal welfare charities.
Launching Screwworm-Free Future – Funding and Support Request
Merry Christmas and happy holidays :)
I haven't looked into this at all, but the effect of eradication efforts (whether through gene drives or the traditional sterile insect technique) is that screwworms stop reproducing and cease to exist, not that they die anguishing deaths.
I'm in San Francisco this weekend and next week. If you're in the Bay Area and want to meet, don't hesitate to reach out :) (happy to meet both 1-1 and at social events)
Absolutely do so! In the eyes of the vast majority of employers, organizing a university group centered around charity shows character and energy, highly positive qualities in an employee.
How big a deal is the congressional commission? What is the historical track record of Congress implementing the commission's top recommendation?
With hindsight, this comment from Jan Kulveit looks prescient.
I decided to drown a puppy in a local pond. Hopefully, doing so would toughen its character, rather than allowing it to succumb to modern frailty.
Laughed out loud for a good minute after reading this!
no, thanks