Working in healthcare technology.
MSc in applied mathematics/theoretical ML.
Interested in increasing diversity, transparency and democracy in the EA movement. Would like to know how algorithm developers can help “neartermist” causes.
Off the top of my head, that time limit is often two years after the filing of the bankruptcy case.
Do you know where I can find a legal reference with the exact time limit?
If the FTX debtors are paid back without taking back my grant, I’d like to donate it somewhere, but I need to know I’m protected from a clawback in that scenario.
I note there is no path for Cotton-Barratt to become a typical member of the community again
I don’t think this is true?
I don’t feel qualified to give an opinion on the board decisions, punishment etc. for this specific case. But by its nature, it does look like a decision that allows returning to full participation in the community, subject to some future checks, which makes sense.
And his reputation has suffered a blow, but not a very big one? Like, I don’t see anyone publicly objecting to his presence on the forum.
I’m glad to see:
The Community Health team adopting more robust procedures
Transparency regarding the changes made and the reasons for them
The boards of EV taking an active role in overseeing this and in acting even against a well-known and powerful figure.
I was also glad to see Owen step down from his role, taking full responsibility and apologizing for his actions, cooperating and attempting to improve himself. This sets a good example.
I certainly also think it’d be useless, like most prediction markets in EA.
I think it would be net negative, in the “What is your community doing to prevent sexual misconduct? - Oh, we make bets about it” kind of way.
2 years later, I stumbled onto this comment, and I’d be happy to know if your perspective about this has changed after the FTX crisis.
Thanks for the data! For other readers I’ll note the Faunalytics page you linked to contains more interesting information (e.g. a majority of lapsed veg*ns try it only for health reasons, while a majority of those who remain veg*n do not).
The rest of that distribution beyond the one-year mark would also be interesting, as it might take longer than that to get accustomed to the diet.
This does suggest that a gradual transition might have higher success rates?
Sorry, I originally commented with a much more detailed account but decided I didn’t want so much personal info on the forum.
On my first attempt at vegetarianism I failed after about a week, and after that I decided to start with avoiding meat at home and at uni. The transition to being fully vegan took about 2.5 years. I was a picky eater so I had a lot of foods and ingredients to get used to. I also improved my cooking abilities a lot during this time.
Edit: it’s true that I’m now in a phase where it is almost costless for me to be vegan, and I’ve been in that state for years. My point is rather that I didn’t start off like that.
FWIW my personal experience doesn’t square with this. It was initially hard for me but after a transition period where I got accustomed to new foods, it got much easier. For most people—those who are medically able to do it—I think this would be the case.
Are there vaccines specific to the new variant?
Publishing pieces in the media (with minimal 3rd-party editing) is at least tractable on the scale of weeks, if you have a friendly journalist. The academic game is one to two orders of magnitude slower than that.
Given that MIRI has held these views for decades, I don’t quite see how the timeline for academic publication is at issue here.
How does the choice to publish MIRI’s main views as LessWrong posts rather than, say, articles in peer-reviewed journals or more pieces in the media, square with the need to convince a much broader audience (including decision-makers in particular)?
I think I agree with this explanation much more than with the original post.
Maybe you can help us resolve this, SummaryBot—would you say you’re software or not?
I feel like this distinction is mostly true in places that don’t matter, and false in places that do matter.
Sure, a trained LLM is not a piece of software but rather an architecture and a bunch of weights (and maybe an algorithm for fine-tuning). This is also true of other parts of software, like configuration files with a bunch of constants no one understands other than the engineer who optimized them using trial and error.
On the other hand, the only way they can do anything, i.e. interact with anything, is by being used inside a program. Such a program provides (hopefully) well-defined interfaces for them to use. Thus an LLM would be able to do unintended things only if it became smart enough to realise what it is and how it is run and controlled, and managed to hack its host software or convince a human to copy it into some other software.
At the same time, the “nice” properties you ascribed to software aren’t really true themselves:
The results of running a program aren’t determined solely by the code, but also by a bunch of environmental circumstances, like system definitions, available resources, and other people interacting with the same machine.
You can’t always debug it—the most you can hope for is to have good logs and sometimes understand what has gone wrong, if it happens to be captured by what you thought in advance to log.
You can’t always run unit tests—sometimes you’re doing too complicated a process for them to be meaningful, or the kind of data you need is impossible to manufacture synthetically.
You can’t always make sure it’s correct, or that individual parts do what they’re supposed to—if you’re handling anything that’s not very simple, there are simply too many cases to check. And you don’t even necessarily know whether your vaguely defined goal has been achieved correctly.
These are all practical considerations that arise simultaneously in every project I’ve worked on in my current job. You think you know what your software does, but that’s only a (perhaps very) educated guess.
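As a toy illustration of the first point (the function and the `APP_LANG` variable here are hypothetical, just for the sketch): the “same” code can return different results depending on the environment it runs in, via a variable no test suite may think to vary.

```python
import os

def greeting():
    # The return value depends not on the code alone, but on the
    # process environment at the moment of the call.
    lang = os.environ.get("APP_LANG", "en")
    return "hola" if lang == "es" else "hello"

os.environ.pop("APP_LANG", None)
print(greeting())  # hello
os.environ["APP_LANG"] = "es"
print(greeting())  # hola
```

Identical source, two different outputs, and the difference lives entirely outside the program text.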
Hi Arden, thanks for engaging like this on the forum!
Re: “the general type of person we tend to ask for input”—how do you treat the tradeoff between your advisors holding the values of longtermist effective altruism, and them being domain experts in the areas you recommend? (Of course, some people are both—but there are many insightful experts outside EA).
While I agree that the discussion here is bad at all those metrics, I’m not sure how you infer that the CH team does better at e.g. fairness or compassion.
Reflecting a bit, I’ll admit that I liked it as a norm in my department in uni (“You want to take a class but don’t have the prerequisites? No problem, it’s your responsibility to understand, not ours”), but still think it has no place in broader society—and in personal and romantic relationships in particular.
Since the attitude around me if you don’t like contracts you entered is generally “tough shit, get more agency”, I was surprised at the responses saying Alice and Chloe should have been protected from an arrangement they willingly entered.
Where is “around you” where this is the norm? FWIW I think it’s a terrible one.
Are you sure you have the numbers right? 80,000 Shrimp doesn’t sound like that many