The EA movement is chock-full of people who are good at programming. What about open-sourcing the EA source code and outsourcing development of new features to volunteer members who want to contribute?
MathiasKB
no, thanks
This post makes me feel very positive about GWWC and its future! It's hard to overstate the value of focus. One thing I would love to learn is what you are doubling down on as a result.
No idea, it's probably worth reaching out to ask them and alert them in case they aren't already mindful of it! I personally am not the least bit interested in this concern, so I will not take any action to address it.
I am not saying this to be a dick (I hope), but because I don't want to give you a mistaken impression that we are currently making any effort to address this consideration at Screwworm Free Future.
I think people are far too happy to give an answer like: "Thanks for highlighting this concern, we are very mindful of this throughout our work", which, while nice-sounding, is ultimately dishonest and designed to avoid criticism. EA needs more honesty, and you deserve to know my actual stance.
I don't mind at all someone looking into this, and I am happy to change my mind if presented with evidence, but my prior for this changing my mind is so low that I don't currently consider it worthwhile to spend time investigating, or even encouraging others to investigate.
Wishing Centre For, the best of luck!
I do a lot of writing at my job, and find myself using AI more and more for drafting. I find it especially helpful when I am stuck.
Like any human assigned a writing task, Claude cannot magically guess what you want. When I see other people get lackluster writing results with AI, it's very often because they provided almost no context for the AI to work with.
When asking for help with a draft, I will often write out a few paragraphs of thoughts on the draft. For example, if I were brainstorming ideas for a title, I might write out a prompt like:
"I am looking to create a title for the following document: <document>.
My current best attempt at a title is: 'Why LLMs need context to do good work'.
I think this title does a good job at explaining the core message, namely that LLMs cannot guess what you want if you don't provide sufficient context, but it does a poor job at communicating <some other thing I care about communicating>.
Please help brainstorm ten other titles, from which we can ideate."
Perhaps Claude comes up with two good titles, or one title has a word I particularly like. Then I might follow up, saying: "I like this word, it captures <some concept> very well. Can we ideate a few more ideas using this word?"
From this process, I will usually end up with something good which I wouldn't have been able to think of myself. Usually I'll take those sentences, work them into my draft, and continue.
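The workflow above can be sketched as a small helper that assembles a context-rich prompt before it is sent to a model. This is purely illustrative: the function name and its fields are my own, not any library's API, and the sketch stops at building the prompt string (actually sending it to Claude is left out).

```python
def build_title_prompt(document: str, current_title: str,
                       strengths: str, gaps: str, n: int = 10) -> str:
    """Assemble a brainstorming prompt that carries real context.

    The structure here is illustrative. The point is simply that the
    prompt includes the document, your best attempt so far, and an
    explicit critique of that attempt, rather than a bare
    "suggest a title" request.
    """
    return (
        f"I am looking to create a title for the following document: {document}\n\n"
        f"My current best attempt at a title is: \"{current_title}\"\n"
        f"I think this title does a good job at {strengths}, "
        f"but it does a poor job at {gaps}.\n"
        f"Please help brainstorm {n} other titles, from which we can ideate."
    )

# Example usage with placeholder content:
prompt = build_title_prompt(
    document="<document text>",
    current_title="Why LLMs need context to do good work",
    strengths="explaining the core message",
    gaps="communicating <some other thing I care about>",
)
```

The follow-up step ("I like this word, can we ideate more?") is just another message in the same conversation, again carrying the reasoning for the preference rather than a bare instruction.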
Really incredible job, really exciting to see so many great projects come out of Catalyze. Hopefully people will consider funding not just the projects, but also consider the new incubator which created them!
On a side note, I am especially excited about TamperSec and see their work as the most important technical contribution that can be made to AI governance currently.
Don't put all of your savings into shady cryptocurrencies. If it sounds too good to be true, it probably is.
Note that when I wrote the comment, the post was at −4 upvotes!
Why on earth are people downvoting this post?
Figuring out how to respond to the USAID freeze (and then doing it) is probably the most important question in global health and development right now. That there has been virtually no discussion on the forum so far has frankly been quite shocking to me.
Have a fat upvote, wishing you the best of luck
More dakka means pouring more firepower onto a problem. Two examples:
Example: "bright lights don't help my seasonal depression". More dakka: "have you tried even brighter lights?"
Example: we brainstormed ten ideas, and none of them seem good. More dakka: "try listing 100 ideas"
Thank you for pursuing this line of argument; I think the question of legal rights for AI is a really important one. One thought I've had reading your previous posts about this is whether legal rights will matter not only for securing the welfare of AI, but also for safeguarding humanity.
I haven't really thought this fully through, but here's my line of thinking:
We are currently on track to create superintelligence, and I don't think we can say anything with much confidence about whether the AI we create will value the same things as us. It might therefore be important to set up mechanisms which make peaceful collaboration with humanity the most attractive option for superintelligent AI(s) to get what they want.
If your best bet for getting what you want involves eliminating all humans, you are a lot more likely to eliminate all humans!
Screwworm Free Future is hiring for a Director
I have pushed the idea on the CE research team to the point I'm sure they're sick of hearing me rant about it!
To my knowledge it's on their list of ideas to research for their next round of animal welfare charities.
Launching Screwworm-Free Future – Funding and Support Request
Merry Christmas and happy holidays :)
I haven't looked into this at all, but the effect of eradication efforts (whether through gene drives or the traditional sterile insect technique) is that screwworms stop reproducing and cease to exist, not that they die anguishing deaths.
I'm in San Francisco this weekend and next week; if you're in the Bay Area and want to meet, don't hesitate to reach out :) (happy to meet both 1-1 and at social events)
Absolutely do so! In the eyes of the vast majority of employers, organizing a university group centered around charity shows character and energy, highly positive qualities in an employee.
(conflict-of-interest note: I'm pretty good friends with Apart's founder)
One thing I really like about Apart is how meritocratic it is. Anyone can sign up for a hackathon, and if your project is great, you win a prize. They then help prize winners turn their projects into publishable research. This year, two prize winners even ended up presenting their work orally at ICLR (!!).
Nobody cares what school you went to. Nobody is looking at your gender, age, or resume. What matters is the quality of your work, and nothing but.
And it turns out that when you look just at the quality of the work, you'll find that it comes from all over the world, often from countries that are otherwise underrepresented in the EA and AI safety communities. I think that is really, really cool.
I think Apart could do a much better job at communicating just how different their approach is from that of the vast majority of AI upskilling programmes, which rely heavily on evaluating your credentials to decide whether you're worthy of doing serious research.
I don't know anything about the cost per participant, or whether it justifies funding Apart over other AI safety projects, but to me there is something very beautiful and special about Apart's approach.