Talk to me about cost-benefit analysis!
Charlie_Guthmann
Do people here think there is a correct answer to this question?
I feel this. It would be cool if you could drop a post and put a Zoom link at the bottom to discuss it in, say, 24 or 48 hours. That way there can still be a discussion, but it maybe skirts around some of this obsessive forum-checking ego stuff.
Re the “EAs should not should” debate about whether we can use the word “should”, which pops up occasionally, most recently on the “university groups need fixing” post.
My take is that you can use “should/ought” as long as your target audience has sufficiently grappled with meta-ethics and both parties are clear about what ethical system you are using.
“Should” (to an anti-realist) is shorthand for “the best action under X moral framework”. I don’t mind it being used in this context (though I agree with Ozzie’s previous shortform that it seems unnecessarily binary), but it’s problematic to use this word around people you don’t know, or non-philosophy heads. It’s completely absurd to tell an 18-year-old, or anyone else who doesn’t know what utilitarianism and virtue ethics are, that they “should” do anything, and if they believe you, then you tricked them into that view (unless you are a moral realist, which I think is also absurd).
If your target audience does not know what the is-ought problem is, it’s better to stick to output-based cost-benefit analysis and not enter into this “cause agnostic” tier-list type thing, since inter-output rankings rely on arbitrary metaethical functions that aren’t well known by most people or standardized for quick and reliable reference.
However, among my friends we use “should” all the time, because we know what we generally mean (our relatively shared utilitarian-ish meta-ethical worldview), and we feel comfortable clarifying this if it seems to be the crux of the debate. But at that point, “should” loses all of its emotional oomph, and maybe it’s just not worth the hassle to shorthand a 7-word sentence.
I don’t know if they’re doing the ideal thing here, but they are doing way better than I imagined from your comment.
Yep, after walking through it in my head plus re-reading the post, it doesn’t seem egregious to me.
I think you might have replied on the wrong subthread, but a few things.
This is the post I was referring to. At the time of extension, they claim they had ~3k applicants. They also infer that they had way fewer (in quantity or quality) applicants for the fish welfare and tobacco taxation projects but I’m not sure exactly how to interpret their claim.
Did you end up accepting late applicants? Did they replace earlier applicants who would otherwise have been accepted, or increase the total class size? Do you have a guess for the effects of the new participants?
Using some pretty crude math, and assuming both applicant pools are the same, each additional applicant has a ~0.7% chance of being one of the 20 best applicants (I think they take 10 or 20). So it takes roughly 150 extra applicants to get one accepted candidate replaced. If they had to internalize the costs to the candidates, and let’s be conservative and say $20 per candidate, that would be about $3k per extra candidate replaced.
And this doesn’t include the fact that the returns consistently diminish. They also have to spend more time reviewing candidates, and even if a candidate is actually better, that doesn’t guarantee they will correctly pick them. You can probably add another couple thousand for these considerations, so maybe we go with ~$5k?
Then you get into issues of fit vs. quality: grabbing better-quality candidates might help CE’s counterfactual value, but it doesn’t help the EA movement much, since you’re pulling from the same talent pool. And lastly, it’s sort of unfair to the people who applied on time, but that’s hard to quantify.
And I think $20 per candidate is really, really conservative. I value my time closer to $50 an hour than $2, and I’d bet most people applying would say something above $15. So my very general and crude estimate is that they are implicitly saying they value replacing a candidate at $2k-100k, most likely somewhere between $5k-50k. I wonder what they would have said if, at the time they extended, we had asked them how much they would pay for one candidate getting replaced.
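For concreteness, here’s the crude math above written out. All the inputs are my rough guesses from the comment (applicant count, class size, value of a candidate’s time), not real data:

```python
# Back-of-envelope sketch of the applicant-replacement cost estimate.
n_applicants = 3000   # ~3k applicants at the time of the extension
n_accepted = 20       # assume they take 20

# Chance a marginal late applicant lands in the top 20, assuming the
# late pool is drawn from the same distribution as the early pool.
p_replace = n_accepted / n_applicants        # ~0.7%
applicants_per_replacement = 1 / p_replace   # ~150 extra applicants

# If each applicant's time is worth $20 (very conservative), the
# implied cost per accepted candidate replaced:
cost_per_applicant = 20
cost_per_replacement = applicants_per_replacement * cost_per_applicant  # ~$3k
```

Swapping in $50/hour instead of $20 total is what pushes the estimate toward the higher end of the $2k-100k range.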
If anyone thinks I missed super obvious considerations or made a mistake, lmk.
Hi Peter, thanks for the response. I am/was disappointed in myself also.
I assumed RP had thought about this, and I hear what you are saying about the trade-off. I don’t have kids or anything like that, and I can’t really relate to struggling to sit down for a few hours straight, but I totally believe this is an issue for some applicants, and I respect that.
What I am more familiar with is doing school during COVID. My experience left me with a strong impression that even relatively high-integrity people will cheat in this version of the prisoner’s dilemma. Moreover, it will cause them tons of stress and guilt, but they are way less likely to bring it up than someone who has issues caused by having to take the test in one sitting, because no one wants to out themselves as a cheater, or even as someone thinking about cheating.
I will say, in school there is something additionally frustrating or tantalizing about seeing your math tests that usually have a 60% average come back in the 90%s, and having that confirmation that everyone in your class is cheating. But given that the people applying are thoughtful and smart, they probably would assign this a high probability anyway.
If I had to bet, I would guess a decent chunk (>20%) of the current employees at RP who took similar tests did go over the time limits, but of course this is pure speculation on my part. I just do think a significant portion of people will cheat in this situation (10-50%), and given a random split between cheaters and non-cheaters, the people who cheat are going to have better essays, so you are more likely to select them.
(to be clear I’m not saying that even if the above is true that you should definitely time the tests, I could still understand it not being worth it)
Two (barely) related thoughts that I’ve wanted to bring up. Sorry if this is super off topic.
Rethink Priorities’ application for a role I applied for two years ago told applicants it was a timed application and not to take over two hours. However, there was no actual verification of this; it was simply a Google Form. In the first round I “cheated” and took about 4 hours, and I made it to the second round. I felt really guilty about this, so I made sure not to go over in the second round; I didn’t finish all the questions and did not get to the next round. I was left with the unsavory feeling that they were incentivizing dishonest behavior, and it could easily have been solved by doing something similar to tech companies, where a timer starts when you open the task. I haven’t applied for other stuff since, so maybe they fixed this.
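The server-side fix I have in mind is simple: record a timestamp the first time an applicant opens the task, and flag or reject submissions past the limit. A minimal sketch (hypothetical, not RP’s actual system; the function names are mine):

```python
import time

TIME_LIMIT_SECONDS = 2 * 60 * 60  # the stated two-hour limit

start_times = {}  # applicant id -> timestamp when they first opened the task

def open_task(applicant_id):
    # Start the clock the moment the task is first opened;
    # reopening the page does not reset the timer.
    start_times.setdefault(applicant_id, time.time())

def submit(applicant_id, now=None):
    # Return True if the submission is within the limit, False if late.
    now = time.time() if now is None else now
    elapsed = now - start_times[applicant_id]
    return elapsed <= TIME_LIMIT_SECONDS
```

Google Forms alone can’t enforce this, which is presumably why the honor system was used; the point is only that the enforcement logic itself is trivial.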
Charity Entrepreneurship made a post a couple of months back extending the deadline for their incubator because they thought it was worth it to get good candidates. I decided to apply and made it a few rounds in. I would say I spent 10-ish hours doing the tasks. I might be misremembering, but at the time of the extension I’m pretty sure they already had 2000-4000 applicants. Considering the time it took me, assuming other applicants were similar, and given how many applicants they already had, I’m not sure extending the deadline was actually positive EV.
Neither of these things is really that big of a deal, but I thought I’d share.
Curious how it would do on Chess960.
Would be interesting to compare my likes on the EA Forum with other people’s. I feel like what I up/downvote is way more honest than what I comment. If I could compare with someone the posts/comments where we had opposite reactions (i.e., they upvoted and I downvoted), I feel like it could start some honest and interesting discussions.
Fantastic post/series. The vocab words have been especially useful to me. A few mostly disjunctive thoughts, even though I overall agree.
I wonder what you think would happen if an economically valuable island popped up in the middle of the ocean today?
My guess is it would become international territory in some way; no country would let, or want, another country to claim the land.
I don’t think this is super analogous, but I think there is some crossover.
The generalization of the first bullet point is that under the right political circumstances, HV (or otherwise) governments can prevent unlicensed outward colonization from within their society without themselves colonizing.
There are some obvious objections here, like: as soon as the government can’t lock things down for a period of time, it could become impossible to stop the outward expansion.
But this honestly depends a lot on the technology levels of the relevant players.
Governments could also theoretically do this to other civilizations. They could deploy a military version of von Neumann probes, locking down areas and stopping evolution from occurring while not actually colonizing the land in any sentience-adding sense.
I’m concerned that it’s easy to handwave a lot of stuff with claims of AGI being able to do XYZ. While I often buy these claims myself, it would be nice to condition this question on 5-10 different levels of maximum technology, or of the technology differential between a ruling state and everyone else. I think that’s where a lot of the disconnect comes from between the current-day island scenario and your post.
At the very least, it would be nice to have a section where you say roughly what your estimate for the tech level is.
The Expanse is a show about a similar concept. I don’t think it’s necessarily a great prediction of what life will be like, but it’s cool to see a fleshed-out version of the tension between the expanders and the non-expanders.
Seeing it fleshed out might give you a slightly different perspective, or show that there are perhaps a few more details or considerations needed.
If the PU society isn’t asymmetric on the action-omission axis, then they should still have some level of concern about just expanding like crazy, since they need to consider the fact that they are locking in a worse conversion of physical resources to positive utility than might otherwise be possible.
I don’t fully agree with Will’s claim about deleting the lightcone. It depends on the ratio at which the suffering-focused agents value pleasure to pain, and where they fall on the action-omission axis. Nonetheless, if spreading good lives is nearly as easy as just spreading, then spreading while destroying everything as you go is probably in between the two in difficulty, and if something like false vacuum decay is possible, that could be even easier than spreading.
Yep, I was about to comment on the same thing. Would like to see what OP has to say
3. If humans become grabby, their values are unlikely to differ significantly from the values of the civilization that would’ve controlled it instead.
I think this is phrased incorrectly. I think the correct phrasing is:
3. If humans become grabby, their values (in expectation) are ~ the mean values of a grabby civilization.
Not sure if it’s what you meant, but let me explain the difference with an example. Let’s say there are three societies:
[humans | zerg | Protoss]
For simplicity, let’s say the winner takes all of the lightcone.
EV[lightcone | zerg win] = 1
EV[lightcone | humans win] = 2
EV[lightcone | protoss win] = 3
Then if humans become grabby, their values are guaranteed to differ from whoever else would have won, yet for utilitarian purposes we don’t care, because the expected value is the same, given that we don’t know whether the zerg or the protoss would win.
I think you might have meant this? But it’s somewhat important to distinguish because my updated (3) is a weaker claim than the original one, yet still enough to hold the argument together.
Moreover, even in the face of strong selection pressure, systems don’t seem to converge on similar equilibria in general.
I like this thought, but to push back a bit: nearly every species we know of is incredibly selfish, or at best only cares about its very close relatives. Sure, crabs are way different from lions, but OP is describing a much lower-dimensional property, which seems more likely to generalize regardless of context.
If you asked me to predict what (animal) species live in the rainforest just by showing me a picture of the rainforest I wouldn’t have a chance. If you asked me if the species in the rainforest would be selfish or not that would be significantly easier. For one, it’s easier to predict one dimension than all the dimensions, and second, some dimensions we should expect to be much less elastic to the set of possible inputs.
I stopped being vegetarian almost 2 years ago. One of the biggest reasons I’m not a vegetarian is that I stay up late pretty much every day and don’t always feel like cooking or eating snacks, so I will go to whatever is open near me. During university, nothing really stayed open after 10 anyway, because Evanston is a lame place. So I would often eat at or before 10, and if I was eating out, there were still vegetarian options at this time (stir fry with tofu, Chipotle, etc.).
Now I live in a predominantly Eastern European and Mexican area of Chicago. There isn’t much vegetarian food in this neighborhood in general, although there is some. However, the vegetarian restaurants here seem to serve a wealthier demographic than the non-vegetarian ones: they close earlier, are more expensive, etc. The cheap and late-night options are fast food and taquerias, which have essentially no quality vegetarian items. But since this stuff is open, it actually makes me lazier, and I’ll often eat at 11:00 PM because I can. Getting into this routine means I eat more meat.
I’m pretty sure that if there were a decent, cheap vegetarian restaurant that stayed open till 2:00 AM, I would eat at least 1 fewer meat meal a week, probably 2-3.
Why aren’t there any vegetarian late-night options near me? Probably the normal reasons: no one around here wants to or can open one, or there isn’t enough demand.
In either case, it got me wondering: if there is enough demand to recoup, say, ~95% of costs for a late-night falafel stand, would it be a cost-effective intervention (compared to whatever other things ACE recommends) to fund that last 5%? I might think more about this, unless it’s super obvious to someone that this is orders of magnitude worse than other options.
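A back-of-envelope version of the question, with made-up numbers: the 95% cost recovery is from the comment above, but the operating cost and the number of meals shifted are pure guesses for illustration:

```python
# Hypothetical numbers for a late-night vegetarian stand.
annual_operating_cost = 300_000   # guess: rent, staff, food for late hours
revenue_share = 0.95              # the stand recoups ~95% of cost itself
annual_subsidy = annual_operating_cost * (1 - revenue_share)  # ~$15k/yr

# Guess: the stand shifts some meals from meat to vegetarian across
# all its customers, not just me.
meat_meals_averted_per_week = 200
meals_averted_per_year = meat_meals_averted_per_week * 52

# Cost to the funder per meat meal averted (~$1.44 under these guesses).
cost_per_meat_meal_averted = annual_subsidy / meals_averted_per_year
```

Whether this beats ACE-recommended interventions then comes down to how that cost-per-meal compares to their estimates, and how sensitive the answer is to the guessed inputs.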
Have you thought about not doing interviews?
Assume there are two societies that passed the great filter and are now grabby: society EA and society NOEA.
Society EA, you could say, is quite similar to our own. The majority of the dominant species is not concerned with passing the great filter, and most individuals are inadvertently increasing the chance of the species’ extinction. However, a small contingent has become utilitarian rationalists and specced heavily into reducing x-risk. Since the group passed the great filter, you can assume this was in large part due to this contingent of EAs/guardian angels.
Now, society NOEA is a species that passed the filter, but they didn’t have EA rationalists. The only way they were able to pass the filter was because, as a species, they are overall quite careful and thoughtful. The whole species, rather than a divergent few, has enough of a security mindset that there was no special group that “saved” them.
Which species would we prefer to get more control of resources?
The punchline is that the very fact that we “need” EA on earth might provide evidence that our values are worse than the species that didn’t need EA to pass the filter.
I think this would be good. One thing: in many situations, if you can write p(success) in a meaningful way, then you should consider running a competition instead of grantmaking. It’s not going to work in every situation, but I find this the most fair and transparent option when possible.
I definitely have very little idea what I’m talking about, but I guess part of my confusion is that inner alignment seems like a capability of AI? Apologies if I’m just confused.
I don’t remember the specifics, but he was looking at whether you could make certain claims about models acting a certain way on data outside the training set, based on the shape and characteristics of the training data. I know that’s vague, sorry; I’ll try to ask him and get a better summary.
Probably many people know these, and I wouldn’t say any of them are extremely aligned, but since there are no comments:
The various ARPA orgs
Congressional Budget Office
Institute for Progress
Market Shaping Accelerator
Ethical Humanist Society