Half-baked ideas thread (EA / AI Safety)
[Note that I have also cross-posted a version of this post to LessWrong. This version of the post is for both half-baked EA ideas and half-baked AI Safety ideas, whereas the LessWrong version is for half-baked AI Safety ideas specifically.]
I keep having ideas for projects I would like to see done, but I keep not having enough time to really think through those ideas, let alone try to implement them. Practically, the alternatives for me are either to post something half-baked or not to post at all. I don’t want to spam the group with half-thought-through posts, but I also want to post these ideas, even in their current state, in case some of them do have merit and the post inspires someone to take them up.
Originally I was going to start writing up some of these ideas in my Shortform, but I figured that if I have this dilemma, other people likely do as well. So, to encourage others to at least post their half-baked ideas somewhere, I am putting up this post as a place where people can post their own ideas without worrying about formulating them to the point where they’d merit their own post.
If you have several ideas, please post them in separate comments so that people can consider each of them individually. Unless of course they’re closely related to each other, in which case it might be best to post them together—use your best judgment.
[This post was also inspired by a LessWrong suggestion from Zvi to create something similar to my AGI Safety FAQ / all-dumb-questions-allowed thread, but for ideas / potentially dumb solutions rather than questions. That is why I am bundling half-baked AI Safety ideas together with half-baked EA ideas. If people prefer and I ever do a follow-up to this post, I can split those into two different posts, or keep the AI Safety part of the discussion on LessWrong and the EA ideas part here.]
In direct violation of the instruction to put ideas in distinct comments, here’s a list of ideas, most of which are so underbaked they’re basically raw:
Meta/infrastructure
Buy a hotel/condo/apartment building (maybe the Travelodge?) in Berkeley and turn it into an EA Hotel
Offer to buy EAs no-doctor-required blood tests like this one that test for common productivity-hampering issues (e.g. b12 deficiency, anemia, hypothyroidism)
Figure out how to put to good use some greater proportion of the approximately 1 Billion recent college grads who want to work at an “EA org”
This might look like a collective of independent-ish researchers?
(uncertain) I think a super high-impact thing that a single high school senior could decide to do is to attend a good state honors college or non-Ivy university without a thriving EA club/scene and build one there.
I tentatively think it would be worth, say, turning down Harvard to go to the University of Maryland Honors College and start an EA group there
To incentivize this, make it cool and normal to credibly say that this is what one did
To a first approximation, 0% of students outside of the Ivies and maybe 10 other campuses have heard of EA. The fruit is underground at this point
Animal welfare
(Ok, this one is like 60% baked) A moderately intensive BOTEC says that wild fish slaughter accounts for 250 million years of extreme fish suffering per year from asphyxiation or being gutted alive
The linked BOTEC also includes my conservative, lower-bound estimate that this is morally equivalent to ~5B human deaths per year (an illustrative sketch of this kind of arithmetic appears at the end of this comment)
Idea: idk, like, do something about this. E.g., figure out how to make it cheap and easy to kill fish en masse more quickly and/or more humanely
This one might turn into an actual forum post
Related: can we just raise (farm) a ton of fish ourselves, but using humane practices, with donations subsidizing the cost difference relative to standard aquaculture?
This also might turn into a blog post
Hot takes
There should be way more all-things-considered, direct comparisons between cause areas.
In particular, I don’t think a complete case has been made (even from a total utilitarian, longtermist perspective) that at the current funding margin, it makes sense to spend marginal dollars on longtermist-motivated projects instead of animal welfare projects.
As just one demonstration, the Fish Welfare Initiative says they have a nearly $200k funding gap for 2022
Note (as of 6pm June 24) I may update this comment and/or break parts into their own comments as I recall other ideas I’ve had
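For concreteness, here is a minimal sketch of how a BOTEC of this shape can be put together. Every number in it is a hypothetical placeholder, chosen only so the outputs land near the figures quoted above; the actual linked BOTEC uses its own inputs and moral weights, which may differ.

```python
# Illustrative shape of the wild-fish BOTEC mentioned above. Every number here
# is a hypothetical placeholder, picked only so the outputs land near the
# figures quoted in the bullet; the linked BOTEC uses its own inputs and
# moral weights.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

# Assumption: wild-caught fish per year (commonly cited estimates are on the
# order of a trillion or more).
fish_caught_per_year = 1.0e12

# Assumption: average minutes of extreme suffering per fish while asphyxiating
# or being gutted alive.
suffering_minutes_per_fish = 130

suffering_years_per_year = (
    fish_caught_per_year * suffering_minutes_per_fish / MINUTES_PER_YEAR
)
print(f"Fish-years of extreme suffering per year: {suffering_years_per_year:.2e}")
# -> ~2.5e8, i.e. roughly 250 million years

# Assumption: a moral-exchange rate, i.e. how many years of extreme fish
# suffering one treats as comparably bad to a single human death.
suffering_years_per_human_death_equivalent = 0.05

human_death_equivalents_per_year = (
    suffering_years_per_year / suffering_years_per_human_death_equivalent
)
print(f"Human-death equivalents per year: {human_death_equivalents_per_year:.2e}")
# -> ~5e9, i.e. roughly 5 billion
```

The point of the sketch is only to show where an estimate like this is most sensitive: the per-fish suffering duration and the moral-exchange rate.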
I’m helping create a central platform connecting funders, talent, ideas, and advisors. Let me know if you’d like to be involved or are interested in more info on it!
Would you mind posting a link to it?
From a Twitter thread a few days ago (lightly edited/formatted), with plenty of criticism in the replies there
I’ve often thought that paying automated, narrow-AI systems such as warehouse bots or factory robots a wage, even though they’re not sentient or anything, would help with many of the issues ahead of us as general automation increases. As employment goes down (less tax revenue) and unemployment (voluntary or otherwise), and therefore social welfare spending, go up, it creates a considerable strain. Paying automated systems a ‘wage’ which can then be taxed might help alleviate that. It wouldn’t really be a wage, obviously; more like an ongoing fee for using such systems, paid towards the cost of caring for humans. Bonus if that money actually goes into a big pot which helps reimburse people who suffer harm from automated systems. Might be a good stop-gap until our economy adjusts correctly, as tax revenue wouldn’t dip as far.
Obviously this is MASSIVE spitball territory, not an idea I’ve thought about seriously because I literally don’t have the time, but it could be interesting. The first step would be to check whether automation is actually reducing employment, because I’m not sure there’s evidence of that yet.
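To make the revenue claim concrete, here is a minimal toy sketch under entirely made-up wage, tax-rate, and fee numbers (none of them come from the comment above or from real data); it only illustrates why tax revenue would dip less far with such a fee in place than without it.

```python
# Toy illustration of the 'automation fee' mechanism described above. All
# parameters are made-up assumptions for illustration; none come from the
# comment or from real data.

def payroll_tax_revenue(workers: int, annual_wage: float, tax_rate: float) -> float:
    """Tax collected on human workers' wages."""
    return workers * annual_wage * tax_rate

def automation_fee_revenue(systems: int, annual_fee: float) -> float:
    """Revenue from an ongoing per-system fee on deployed automated systems."""
    return systems * annual_fee

# Assumption: 1,000 warehouse workers earning $40k/year, payroll taxed at 20%.
baseline = payroll_tax_revenue(workers=1_000, annual_wage=40_000, tax_rate=0.20)

# Assumption: 600 of those jobs are automated away by 200 robot systems.
without_fee = payroll_tax_revenue(workers=400, annual_wage=40_000, tax_rate=0.20)

# Assumption: each deployed system pays a $10k/year fee into a public fund.
with_fee = without_fee + automation_fee_revenue(systems=200, annual_fee=10_000)

print(f"Revenue before automation:        ${baseline:,.0f}")     # $8,000,000
print(f"Revenue after automation, no fee: ${without_fee:,.0f}")  # $3,200,000
print(f"Revenue after automation + fee:   ${with_fee:,.0f}")     # $5,200,000
```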
Economists have thought a bit about automation taxes (which is essentially what you’re suggesting). See, e.g., this paper.
Awesome, thanks for the link! :)
This was a top-level LW post from a few days ago aptly titled “Half-baked alignment idea: training to generalize” (that didn’t get a ton of attention):