Don’t know where you are, but there might be enough people here in the bay for it to make sense.
Alvea Wind Down Announcement [Official]
Hi! I’ve struggled loads with addictions to alcohol and other drugs, spending large chunks of my 20s and 30s totally in thrall to one substance or another. I spent several years trying and failing to get sober, and finally succeeded 2.5 years ago. I’m sorry you’re going through it; it’s fucking agonizing.
One thing I found indispensable in early sobriety was fellowship, and I think this is true of a very large % of people who successfully recover. 12 Step programs can be an awkward fit, but also have huge fellowships in many areas, and I was able to get a fair amount out of them eventually. Buddhist recovery was also fairly helpful for me. In my experience, unfortunately, existing secular programs suck.
Feel free to message me if you’d like to talk. Wherever you are in your journey, there’s a very good chance I’ve been there, and have known many other people who have as well. I’ve occasionally thought about trying to organize some kind of recovery fellowship within EA and would be open to doing that if there are others interested, as well.
Thanks so much for your post here! I spent 5ish years as a litigator and couldn’t agree more with this. As an additional bit of context for non-lawyers, here’s how discovery works in a large civil case, from someone who used to do it:
You gather an ocean of potentially relevant documents from a wide range of sources
You spend a ton of time sifting through them looking for quotes that, at least if taken out of context, might support a point you want to make
You gather up all these potentially useful materials and decide what story you want to tell with them
Lawyers are like birds building nests at a landfill: it’s hard to know what throwaway comment they might make something out of.
I don’t think this matters FYI—all funds came directly or indirectly from a bankrupt entity or person
Quick thoughts—this isn’t intended to be legal advice, just pointing in a relevant direction. There are a couple types of “clawbacks” under bankruptcy law:
Preference action (11 USC 547): Generally allows clawback of most transfers by an insolvent entity or person made within 90 days of filing for bankruptcy. The concept here is that courts don’t want people to be able to transfer money away to whoever they want to have it just before filing for bankruptcy. My GUESS (this really really isn’t legal advice, I’m really not a bankruptcy lawyer) is that any money transferred to a grantee before ~early August won’t be able to be clawed back in a preference action. Caveat: There are special rules around transfers to insiders, so the situation might be more complicated for grantees that have multiple types of relationships to FTX.
Fraudulent transfer action (11 USC 548): Generally allows clawback of transfers made within 2 years of a bankruptcy filing in cases where the transfer was meant to help conceal or perpetrate the fraud (very rough characterization—trying to balance precision and comprehensibility here). This is the classic Madoff/Ponzi case, where a person/company will pay out some creditors in order to encourage others to invest, meaning that the clawed-back transfers were ones that helped the fraudulent scheme itself work. There’s a special provision (subsection (a)(2)) that says charitable donations usually can’t be clawed back this way, but it doesn’t seem like grant money was necessarily flowing through a 501c3 entity, so I wouldn’t assume this applies. Still, my GUESS (this really really isn’t legal advice, I’m really not a bankruptcy lawyer) is that grants won’t be treated as the kinds of transfers addressed by section 548 because they don’t help perpetuate fraud.
This leaves the situation somewhat unclear for grantees who received funds between August and now. I would GUESS (this really really REALLY isn’t legal advice) that disbursed grants being clawed back isn’t super likely because of a cluster of factors that I won’t be able to clearly describe. This is not very helpful but it’s all I can offer. If I get a better understanding of the situation in the next few days I will post an update.
Because of the specter of a bankruptcy proceeding looming over all of this, I would be surprised if additional grant funds were disbursed in the near future. I’m not sure if any bankruptcy petition has already been filed (I’ve heard conflicting things), but once it is, money is effectively frozen.
I dunno, man. I just want to be able to afford a house and a family while working, like, every waking hour on EA stuff. Sure, I’d work for less money, but I would be significantly less happy and healthy as a result — I know, having recently worked for significantly less money. There’s some term for this—“cheerful price”? We want people to feel cared for and satisfied, not to test their purity by seeing how much salary punishment they’ll take against the backdrop of “EA has no funding constraints.” I apologize for the spicy tone, but I think this attitude, common in EA in my experience, is an indication of bias against people over 25 — and it largely accounts for why there are so few skilled, experienced operators and entrepreneurs in EA.
Hmm, for some reason I feel like this will get me downvoted, but: I am worried that an AI with “improve animal welfare” built into its reward function is going to behave a lot less predictably with respect to human welfare. (This does not constitute a recommendation for how to resolve that tradeoff.)
Thank you for the labor of writing this post, which was extremely helpful to me in clarifying my own thinking and concerns. I plan to share it widely.
“I think it would be tempting to assume that the best of these people will already have intuited the importance of scope sensitivity and existential risk, and that they’ll therefore know to give EA a chance, but that’s not how it works.” This made my heart sing. EA would be so much better if more people understood this.
Happy to see this being discussed :) I may come back and write more later, but a couple quick points:
I’ve been having lots of convos with different people in this vein, and am feeling optimistic there’s growing momentum behind recognizing the importance of recruiting mid+ career professionals—not as a matter of equity and diversification, but as one of bringing critical and missing talent into the movement. I think EA has, on the whole, significantly overvalued “potential” and significantly undervalued “skills” and “capacities” in the past.
One of the advantages of the funding overhang is that it creates an opportunity to “remove the excuse” for people not to switch to direct work. It is a very big mistake, IMO, to arbitrarily limit salaries for talented professionals to well under market (for what purpose?). It seems like this is increasingly recognized and orgs are developing more context-sensitive approaches to compensation—accordingly, I think it’s something of a mistake to provide an anticipated salary range (generally below what I would expect mid-career professionals to be paid, I’d add) and describe it as a potential downside of moving to direct work. It’s mentioned in the post but bears repeating: IF MONEY IS THE ONLY THING HOLDING YOU BACK FROM A CAREER IN DIRECT WORK, TALK TO PEOPLE IN THE COMMUNITY ABOUT THIS—there is flexibility here.
Hi all—Cate Hall from Alvea here. Just wanted to drop in to emphasize the “we’re hiring” part at the end there. We are still rapidly expanding and well funded. If in doubt, send us a CV.
Thanks so much for your detailed comment, and sorry for not seeing it earlier!
I’m a bit unclear on what’s going on in the Thermo Fisher example: The second question from the initial letter makes it sound like TF had been granted a license to export under the EAR, but I don’t see a claim that the technology was covered by the Commerce Control List, and the response from Ross seems to suggest otherwise (from what I can tell; I’m behind the WSJ paywall).
In any event, I think this is just the same issue that comes up generally with regulation of dual-use technologies. There’s a question of whether technology with dual-use potential can be restricted from export under the CCL, and I think the answer to that is clearly yes (see, e.g., the restriction on software for DNA synthesizers). Then there’s the separate question of whether it should be restricted, and that’s going to require a context-dependent analysis of each case, with consideration of the balance of offensive and defensive uses of the tech. This is often a difficult question, but I think the analysis from a GCBR/advocacy perspective is going to be the same as it is for, say, differential development of technologies.
The concern about multilateral controls is a good one in general, though I think unilateral controls still pack a lot of punch when it comes to, e.g., publication of research by researchers at American universities.
Using Export Controls to Reduce Biorisk
Hiya—EA lawyer here. While the US legal system is generally a mess and you can find examples of people suing over all sorts of stuff, I think the risk of giving honest feedback (especially when presented with ordinary sensitivity to people you believe to be average-or-better-intentioned) is minimal. I’d be very surprised if it contributed significantly to the bottom-line evaluation here, and I’d be interested to hear the reasoning of any lawyer who disagrees.
I just totally missed that the info was in the job ads—so thank you very much for providing that information, it’s really great to see. Sorry for missing it the first time around!
Just a quick note in favor of putting more specific information about compensation ranges in recruitment posts. Pay is by necessity an important factor for many people, and it feels like a matter of respect for applicants that they not spend time on the application process without having that information. I suspect having publicly available data points on compensation also helps ensure pay equity and levels some of the inherent knowledge imbalance between employers and job-seekers, reducing variance in the job search process. This all feels particularly true for EA, which is too young to have standardized roles and compensation across a lot of organizations.
I’ve been on the EA periphery for a number of years but have been engaging with it more deeply for about 6 months. My half-in, half-out perspective, which might be the product of missing knowledge or missing arguments (all the usual caveats, but stronger):
Motivated reasoning feels like a huge concern for longtermism.
First, a story: I eagerly adopted consequentialism when I first encountered it for the usual reasons; it seemed, and seems, obviously correct. At some point, however, I began to see the ways I was using consequentialism to let myself off the hook, ethically. I started eating animal products more, and told myself it was the right decision because not doing so depleted my willpower and left me with less energy to do higher impact stuff. Instead, I decided, I’d offset through donations. Similar thing when I was asked, face to face, to donate to some non-EA cause: I wanted to save my money for more effective giving. I was shorter with people because I had important work I could be doing, etc., etc.
What I realized when I looked harder at my behavior was that I had never thought critically about most of these “trade-offs,” not even to check whether they were actually trade-offs! I was using consequentialism as a license to do whatever I wanted to do anyway, and it was easy because it’s harder for everyday consequentialist decisions to be obviously incorrect, the way deontological ones can be. Hand-wavy, “directionally correct” answers were just fine. It just so happened that nearly all of my rough cost-benefit analyses turned up the answers I wanted to hear.
I see a similar issue taking root in the longtermist community: It’s so easy to collapse into the arms of “if there’s even a small chance X will make a very good future more likely …” As with consequentialism, I totally buy the logic of this! The issue is that it’s incredibly easy to hide motivated reasoning in this framework. Figuring out what’s best to do is really hard, and this line of thinking conveniently ends the inquiry (for people who want that). My perception is that “a small chance X helps” is being invoked not infrequently to justify doing whatever work the invoker wanted to do anyway, and to excuse them internally from trying to figure out impact relative to other available options.
Longtermism puts an arbitrarily heavy weight on one side of the scales, so things look pretty similar no matter what you’re comparing it to. (Speaking loosely here: longtermism isn’t one thing, not all people are doing this, etc. etc.) Having the load-bearing component of a cost-benefit analysis be effectively impossible to calculate is a huge downside if you’re concerned about “motivational creep,” even if there isn’t a better way to do that kind of work.
I see this as an even bigger issue because, as I perceive it, the leading proponents of longtermism are also sort of the patron saints of EA generally: Will MacAskill, Toby Ord, etc. Again, the issue isn’t that those people are wrong about the merits of longtermism — I don’t think that — it’s that motivated reasoning is that much easier when your argument pattern-matches to one they’ve endorsed. I’m not sure if the model of EA as having a “culture of dissent” is accurate in the first place, but if so it seems to break down around certain people and certain fashionable arguments/topics.
As the former COO and briefly co-CEO of Alvea, I also endorse Kyle’s reflections!