Former Reliability Engineer with expertise in data analysis, facilitation, incident investigation, technical writing, and more. Currently working with the Communications team at MIRI. Former Teaching Fellow and current facilitator at BlueDot Impact. I volunteer with AI Safety Quest, giving Navigation Calls and running the MAGIS mentorship program.
Joe Rogero
E.g. Ajeya’s median estimate is 99% automation of fully-remote jobs in roughly 6-8 years, 5+ years earlier than her 2023 estimate.
This seems more extreme than the linked comment suggests? I can’t find anything in the comment justifying “99% automation of fully-remote jobs”.
Frankly I think we get ASI and everyone dies before we get anything like 99% automation of current remote jobs, due to bureaucratic inertia and slow adoption. Automation of AI research comes first on the jagged frontier. I don’t think Ajeya disagrees?
It’s often in the nature of thought experiments to try to reduce complicated things to simple choices. In reality, humans rarely know enough to do an explicit EV calculation about a decision correctly. It can still be an ideal that can help guide our decisions, such that “this seems like a poor trade of EV” is a red flag the same way “oh, I notice I could be Dutch booked by this set of preferences” is a good way to notice there may be a flaw in our thinking somewhere.
Impact Colabs started something similar but then abandoned it. They have a forum post and more detailed write-up on why. Our aim is less ambitious (for now, just listing and ranking project ideas with some filtering options) though we do hope to expand the list to include more active volunteer management options eventually. Of note, this database is divided into “quick wins”—roughly, things someone could do with less than a week’s work without being part of a particular organization—and “larger projects”—which typically involve starting a full-time group or supporting an existing one.
If you know of a project not listed, feel free to add it!
I’d like to discuss a similar “metaproject” I have in the works. Currently my goal for a “minimum viable product” is just the list (including smaller “quick win” projects and immediate contributions that could be made), with volunteer matching added later if it works. Would you be willing to share further and discuss lessons learned on this one?
Not sure if prewritten material counts, but I’d like to enter my Trial of the Automaton if it qualifies. I can transfer it to Google docs if need be.
(Cross-posted on the EA Anywhere Slack and a few other places)
I have, and am willing to offer to EA members and organizations upon request, the following generalist skills:
Facilitation. Organize and run a meeting, take notes, email follow-ups and reminders, whatever you need. I don’t need to be an expert in the topic, I don’t need to personally know the participants. I do need a clear picture of the meeting’s purpose and what contributions you’re hoping to elicit from the participants.
Technical writing. More specifically, editing and proofreading, which don’t require I fully understand the subject matter. I am a human Hemingway Editor. I have been known to cut a third of the text out of a corporate document while retaining all relevant information to the owner’s satisfaction. I viciously stamp out typos.
Presentation review and speech coaching. I used to be terrified of public speaking. I still am, but now I’m pretty good at it anyway. I have given prepared and impromptu talks to audiences of dozens-to-hundreds and I have coached speakers giving company TED talks to thousands. A friend who reached out to me for input said my feedback was “exceedingly helpful”. If you plan to give a talk and want feedback on your content, slides, or technique, I would be delighted to advise.
I am willing to take one-off or recurring requests. I reserve the right to start charging if this starts taking up more than a couple hours a week, but for now I’m volunteering my time and the first consult will always be free (so you can gauge my awesomeness for yourself). Message me or email me at optimiser.joe@gmail.com if you’re interested.
IIRC edamame is safe, though I have had one bad experience with edamame-based noodles. (I think they had other ingredients, but someone else did the cooking then, so I can’t be sure.) Haven’t had quinoa in a while but I think it’s safe too. That’s a good idea.
Yes, “let’s not fail with abandon” is a good summary of my argument to fellow omnivores.
That’s a really good overview by Rethink Priorities. The Invertebrate Sentience Table shifted my credence a little bit in favor of insects, but I think I tend to weight more highly the argument that some sentience criteria can prove too much. I’m not super impressed by a criterion that shares a “Yes” answer with plants and/or prokaryotes. In the same vein, contextual learning sounds impressive, but if I’m understanding that description correctly then it also applies to the recommendation feature of Google Search. I do, however, agree we should take the possibility seriously and continue looking for hard evidence either way.
Here’s a thought: is anyone currently testing where language models like GPT-4 fall on the sentience table?
Thanks, those are some great resources! I can read the post on insect sentience but the link to the paper throws an error. I’d love to read the definitions they use for their criteria.
Getting technical: soy is a different branch of the legume family tree. The one I’m most allergic to seems to be Hologalegina (galegoids), which includes broad beans, peas, and chickpeas.
Tofu is always fine and soy is I think fine, but I’ve had reactions to a few things containing soy + something else (soy protein shakes = very bad day). Soybeans are phaseoloids, the same sub-family as black/brown beans, but only the latter reliably causes me problems. I haven’t tested all the phaseoloids but it’s obviously kinda unpleasant to do so.
Part of the problem with this allergy profile is the uncertainty it spawns; many foods have 2 or 3 ingredients that could be the cause of a reaction and it can be hard to tell which is the culprit. To complicate matters further, cooking helps at least some of them (fish is 50-50, egg yolk is fine).
I’ve been through formal allergy testing, but they only had tests for a few of my problem foods because come on, who would be allergic to celery? IIRC the scale they used is 1 to 5 where 5 is “don’t f*ck with this ever”.
Strong reaction: fish mix (4), egg yolk (4), catfish (5), English pea (5)
Weak reaction: trout (3), green bean (3)
No reaction: shellfish (although the allergist mentioned I could be allergic to the shells, which aren’t tested, and I’ve definitely reacted to every shellfish I’ve tried in the last two decades)
Thanks, Fai! I’m still on the fence about this, but assuming it were true—
what does the evidence look like for suffering? It seems like it might be better to eat an animal that’s lived a relatively normal life compared to e.g. farmed chickens. I know some fish farms can get pretty bad, but how common is that? Edit: Pete’s comment had a useful source here. I’m curious what evidence convinced you about fish. So far I haven’t seen much on the subject of consciousness specifically, though I have seen some arguments around pain nerves and aversive stimuli.
Conditional on insects having conscious experiences, I’d agree with you. I’m not convinced they do, and I don’t find stimulus-response alone to be sufficient for giving a creature nonzero moral weight. Plenty of people may disagree with me on that, though, and I certainly wouldn’t recommend anyone attempt a diet substitute that they think causes more harm.
I enjoyed this post. Short and to the point.
I’d like to add that the stakes are high enough to justify pushing resources into every angle we might reasonably have on the problem. Even if foundational research has only a sliver of a chance of impacting future alignment, that sliver contains quite a lot of value. And I do think it’s in fact quite a bit more than a sliver.
Great advice, Yonatan! This is actually baked into the original plan—build a minimum viable product, find some users, find the sticking points, iterate and improve. “Build a feedback form” is on the to-do list, and I’m always open to suggestions for better design and sharing.
Also, honestly, even if nobody else benefits from this, I’ll be glad to have it available. Initially the thing that drew me to David’s post was my frustration at not knowing where to look for quick-win opportunities to benefit EA. I figured someone had done something like what I wanted, and I was delighted to find that someone did. Even if it completely flops, I won’t regret the time spent collecting and organizing project ideas, and I’ll probably keep using the list as a reference for years.
Thank you for your suggestions! I’ve added them to the table. I’ll be in touch about editing shortly.
I don’t suppose there’s a way to tag shortforms with this?
An anecdote I sometimes share: during my undergraduate college search, I experienced what you would call “polarizing techniques” at one university and their antithesis at another. I had previously attended a summer camp at a university in my home state; in my senior year of high school, they invited me back for a short seminar and proceeded to spend an hour talking about how wonderful they are, how privileged I would be to attend, how much of an honor it was to be invited to join [insert pithy university collective name]. They were, in fact, a decent school. They were also my backup option. Big fish, small pond.
I attended a different university’s program not long after. The program director’s welcome speech, by contrast, said in essence “we want you here, we think we’d be good for you, but you should go to the school that will bring out the best in you; if you think that’s not us, go elsewhere with our blessing.”
I attended the second school and never regretted it. While my decision was pretty overdetermined, the stark contrast between the pushy, snobbish diatribe at the first school and the encouraging, welcoming, confident-but-not-arrogant tone at the second was a definite influence.
Respect your audience, and they respect you back.
How hard do you suppose it might be to use an AI to scrub the comments and generate something like this? It may be worth doing manually for some threads, even, but it’s easier to get people to adopt if the debate already exists and only needs tweaking. There may even already exist software that accepts text as input and outputs a Kialo-like debate map (thank you for alerting me that Kialo exists, it’s neat).
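For what it’s worth, the pipeline I’m imagining could be sketched roughly like this. Everything here is hypothetical: the stance classifier is stubbed out with a crude keyword heuristic standing in for whatever AI model you’d actually use, and the function and field names are all made up for illustration.

```python
# Toy sketch: turn a flat list of comments into a Kialo-style pro/con tree.
# The hard part (stance classification) is a keyword-heuristic placeholder;
# a real version would swap in an AI classifier here.

def classify_stance(comment: str) -> str:
    """Crude stand-in for an AI stance classifier (hypothetical)."""
    lowered = comment.lower()
    if any(marker in lowered for marker in ("disagree", "however", "not convinced")):
        return "con"
    return "pro"

def build_debate_map(thesis: str, comments: list[str]) -> dict:
    """Group comments under the thesis as supporting or opposing claims."""
    tree = {"claim": thesis, "pro": [], "con": []}
    for comment in comments:
        tree[classify_stance(comment)].append(
            {"claim": comment, "pro": [], "con": []}
        )
    return tree

comments = [
    "This matches my experience with meeting facilitation.",
    "I disagree; the evidence seems weaker than claimed.",
]
debate = build_debate_map("Facilitation improves meetings", comments)
print(len(debate["pro"]), len(debate["con"]))  # → 1 1
```

The nesting (each claim carrying its own "pro"/"con" children) is what would let the output grow into a proper debate map rather than a flat list, though handling replies-to-replies would take more bookkeeping than shown here.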
Hello! I have, and am willing to offer to EA Forum members, the following generalist skills:
Facilitation. Organize and run a meeting, take notes, email follow-ups and reminders, whatever you need. I don’t need to be an expert in the topic, I don’t need to personally know the participants. I do need a clear picture of the meeting’s purpose and what contributions you’re hoping to elicit from the participants.
Technical writing. More specifically, editing and proofreading, which don’t require I fully understand the subject matter. I am a human Hemingway Editor. I have been known to cut a third of the text out of a corporate document while retaining all relevant information to the owner’s satisfaction. I viciously stamp out typos.
Presentation review and speech coaching. I used to be terrified of public speaking. I still am, but now I’m pretty good at it anyway. I have given prepared and impromptu talks to audiences of dozens-to-hundreds and I have coached speakers giving company TED talks to thousands. A friend who reached out to me for input said my feedback was “exceedingly helpful”. If you plan to give a talk and want feedback on your content, slides, or technique, I would be delighted to advise.
I am willing to take one-off or recurring requests. I reserve the right to start charging if this starts taking up more than a couple hours a week, but for now I’m volunteering my time and the first consult will always be free (so you can gauge my awesomeness for yourself).
Honestly this writeup did update me somewhat in favor of at least a few competent safety-conscious people working at major labs, if only so the safety movement has some access to what’s going on inside the labs if/when secrecy grows. The marginal extra researcher going to Anthropic, though? Probably not.