You can give me anonymous feedback here: https://forms.gle/q65w9Nauaw5e7t2Z9
Charles He
Every sentence is about how comparing one’s level of depression to someone who has it even worse isn’t helpful or justified.
I can see how you could interpret this comment as “the OP is putting up norms by measuring their depression and I discourage this”, in an intellectual sense, but reading the comment, this read is still marginal; the comment doesn’t really make the case clear.
Whatever their experiences are, the writer of the comment isn’t likely writing while currently in a state of depression, and they are responsible for communicating. Their comment is disorganized and reads like a stream of consciousness (because it probably is), and it’s self-involved.
Note that I didn’t actually downvote the comment, by the way.
Moderate-importance bug, which probably applies to various situations where user-uploaded images are presented:
This user’s thumbnail is enormous, because an image inside of it is resized to fill the entire window.
Also, in comments/posts, I am having trouble controlling the size of images; they just resize horizontally to fit the screen, which isn’t ideal for many situations. This should be controllable.
I think there should be a lot more writing on the EA Forum that:
- is direct, brief, and not trying to win the game of politeness/EA rhetoric;
- assuming it draws on and directly communicates critical skills/experience that EAs don’t have, is lower effort and doesn’t cover every base.
This would bring in a lot more knowledge, and more people who have expertise but face real time costs, instead of the lousy situation the Forum is in now.
But your comment is really bad; I can’t even work out what your point is, even after close reading.
The performance of the org you’re describing is wildly bad by itself. What you wrote seems credible (besides the style things I mentioned).
You have also experienced serious misconduct/abuse through vote manipulation against you.
Can you say whether the org you’re criticizing is an “EA org”, or “EA funded”, or has people known to the community? Can you say if the people who abusively mass-voted against you are part of the org you criticized?
EA isn’t some big corporation, or a circle of friends who get funding for each other. It seems good to communicate this.
This article is hard to read and the tone is strong, which makes it come off as ranty. This is a shame because it seems to have substantive content that you’ve thought out.
For example, the second paragraph has a few good ideas, but these have been chopped up. This makes it laborious to read:
What do you want from a hiring process? A good hire. Crucially, no bad hire. And for those people whom you haven’t hired to be mostly happy with how things went. Because you care for them.
The issues continue in the third paragraph. Even though the ideas are good, you’re coming across as overbearing. This is distracting, which is especially bad since this paragraph gives the overview/purpose of the article and introduces the key org (Hirely) that you’re talking about.
Sadly, you’re at risk of making a bad hire and disgruntling your other applicants if you don’t know what you’re doing. If you don’t know what you’re doing, outsourcing isn’t a solution, either, because you don’t know how to judge the actions of those you’re outsourcing to. I will demonstrate this by example of a hiring process I’ve observed as an outsider, in which the hiring firm (call them Hirely) acted in a way that would have seemed sensible to the average founder who knows little about hiring, but to me looked like blundering. Even if you don’t plan to outsource hiring, the following points are worth thinking about.
Other comments:
After this, there seem to be multiple sections of meta (“Added 2022-10-16” and “This is a repurposed article with a history”). These suggest serious misconduct by someone hostile to you, but this is sort of buried.
Terms like “Manager Tools” and “Hirely” are really important to you, but it takes closer reading to figure out what they really mean, and most people won’t push past this.
Your views on, and promotion of, Manager Tools seem pretty disjoint from the other issues in this post.
You’ve given each hyperlink its own custom tag. This convention/process seems wildly unusual and seems to make a lot of extra work for you?
A lot of your content is thoughtful and thinks from the perspective of the “users”/”customers”.
It sounds like you have good perspectives, and small tweaks, like a summary up front, would add a lot.
He applied for funding to run the project but got rejected.
Is this true? At face value, that seems really disappointing.
Quiet, diligent work, like stats or census taking, or contributing to wikis or other epistemic projects is valuable and underrated.
There’s some really strong talent in this org; whatever they do, I hope it’s very impactful and well funded.
Yes, understood, thanks, I was just confused.
Yes, raising the bar would make the interviews more useful. This is a good thought that makes a lot of sense to me.
I think what you said makes sense and is logical.
Since I’m far away and uninformed, I’m more reluctant to say anything about the process, and there could be other explanations.
For example, maybe Ben or his team wanted to meet with many applicants because he/they viewed them highly and cared about their EA activities beyond CEA, and this interview had a lot of value, like a sort of general 1on1.
The “vision” for the hiring process might be different. For example, maybe Ben’s view was to pass anyone who met resume screening. For the interview, maybe he just wanted to use it to make candidates feel there was appropriate interest from CEA, before asking them to invest in a vigorous trial exercise.
Ben seems to think hard about issues of recruiting and exclusivity, and has used these two posts to express and show a lot of investment in making things fair.
The arguments opening this post aren’t really biting or that relevant.
Thomas Hurka’s St Petersburg Paradox:
The Very Repugnant Conclusion:
I made a “contest”, where I’ll write “answers” and challenge anyone to reply to me.
This seems related to this post.
I’m not sure, but my guess is that the post is framed/motivated in a way that makes its ideas seem much less tangential than they really are.
I’m writing to look for people interested in participating in this contest:
Ban Charles He Contest
Sort of inspired by this post, I’m disappointed by a lot of my forum writing, because my content is minor, mentally easy, and not that meaningful. While more forum writing probably isn’t the ideal solution, there is more difficult writing I can do:
Basically, I have “answers” to a number of arguments or viewpoints that are either “used against EA” or just occupy a lot of time on EA online media. I think this is bad.
These arguments or viewpoints are:
- Newcomb’s paradox
- The repugnant conclusion
- This variation of the St Petersburg paradox
- The “general St Petersburg paradox” (I think I have an answer to this too, so let’s throw that in)
I think my four answers are pretty obvious.
I’m worried this sounds like just another blogger, so I want to put some skin in the game.
If I’m “very wrong”, I’ll get banned, as determined in the following way:
I’m looking for people willing to critique my answers, especially in a ruthless, crisp way: for example, by showing that my answer is flat-out logically wrong, highly irrelevant, or has been dealt with so thoroughly that about 50% of experienced EAs could respond with good knowledge of the content of my answer.
Note that “discursive” sorts of replies, with an “additional consideration” sort of flavor, should not count. Replies should undermine the substance of my answer and generally prop up the targeted argument as a valid project or objection.
This is subjective, so I’ll implement this judgment of banning in this way:
I’ll post my “answers”. After I post my answer to each of the four topics, anyone can critique it with a “reply”.
If any single “reply” gets an agreement score of 10 or more, I’ll be banned for that number of days.
Note that strong votes are fine, so just 2 people can trigger this condition.
The ban adds up across replies with distinct arguments, e.g. I could be banned for 60 days if there are 6 qualifying replies across my four “answers”.
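For concreteness, here’s a minimal sketch of how the total ban length would be computed under this rule (the function name and inputs are just illustrative, and “that number of days” is read as the reply’s agreement score):

```python
def ban_days(reply_agreement_scores, threshold=10):
    """Total ban length in days: each reply whose agreement score reaches the
    threshold adds that score to the ban, summed across all answers."""
    return sum(score for score in reply_agreement_scores if score >= threshold)

# Example: six qualifying replies across the four answers, each with an
# agreement score of exactly 10, gives the 60-day ban mentioned above.
print(ban_days([10, 10, 10, 10, 10, 10]))  # 60
```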
I’m writing because I’m looking for people who are interested in replying to my answers, making my task harder.
I will wait until I get a few replies to this comment, which I’ll take as a sign of interest and a real chance of my ban, and then write up the above at some point.
(Maybe my stats/prob/econometrics is rusty, feel free to stomp this comment)
Yeah, you guys have a 94% pass rate for one dataset you use in one regression.
So you could only be getting any inference from the literally 3 people who failed the screening interview.
So, like, in a logical, “Shannon information” sense, that is all the info you have to go with to get magnitudes and statistical power for that particular regression. Right?
So how are you getting a whole column of coefficients for it?
So, uh, you guys/girls have n=7 samples of people in this FAANG group, and you’re using this to get coefficients for one of the regressions. Then, for the next regression, where the FAANG people make it a cut further, you probably only have 3 observations in that regression?
So I think the norm here is to show “summary stats”-style data, e.g. a table that says, for the 7 FAANG applicants, how many of them made it. I think this table would be better.
Basically, a regression model doesn’t add a lot, with this level of data.
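To illustrate (a minimal sketch, using only the applicant counts quoted in this comment; the column names and the split of failures across groups are made up, not taken from your analysis), this is the kind of summary table I mean:

```python
import pandas as pd

# Hypothetical data matching the quoted numbers: ~50 applicants with a 94% pass
# rate (3 failures), of whom 7 are in the FAANG group. The assignment of which
# rows passed or failed is arbitrary and only for illustration.
df = pd.DataFrame({
    "passed_screen": [1] * 47 + [0] * 3,
    "faang":         [1] * 7 + [0] * 43,
})

# The "summary stats" alternative to a regression: pass counts and rates by group.
summary = df.groupby("faang")["passed_screen"].agg(
    n="count", passed="sum", pass_rate="mean"
)
print(summary)
```

With this few failures, a table like that carries essentially all the information the regression coefficients could.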
Also, at this extremely low amount of data, I’m unsure, but there might be weird “degrees of freedom” sorts of things, where, due to an interaction, the signs/magnitudes explode/implode.
Can you share your code for the regressions that made this table?
Assets aren’t showing up:
I think you like to make these lists. Are you manually constructing the content in these comments? Does this take a lot of time?
This seems precise and well structured.
Would it be interesting to have a minor service/script that does this for you?
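For example, here’s a rough sketch of the kind of helper script I have in mind (the input format and field names are hypothetical, not whatever you’re actually working from):

```python
# Hypothetical input: a list of items with a title and a link, which the script
# turns into the kind of structured markdown list you have been writing by hand.
items = [
    {"title": "Example post A", "url": "https://forum.effectivealtruism.org/posts/aaa"},
    {"title": "Example post B", "url": "https://forum.effectivealtruism.org/posts/bbb"},
]

for item in items:
    print(f"- [{item['title']}]({item['url']})")
```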
This was criticised by several people, both for ignoring flow-through effects (like existential risks, wild animal suffering, or long run growth, or population ethics)
There is a lot going on here, but basically none of these concerns are mainstream (so generally are neglected from an EA standpoint).
On the other hand, family planning and access to contraception seem to be almost universally and conventionally promoted, because they empower women and reduce poverty.
For example, see https://www.unfpa.org/family-planning:
Umm, there is a lot going on here.
advocated for reducing human populations in the third world in order to reduce meat consumption. This was criticised by several people, both for ignoring flow-through effects (like existential risks, wild animal suffering, or long run growth, or population ethics) and for seeming dishonest about your true motivations / resembling eugenics
Is the view that this “resembles eugenics” your personal view? I can’t find this claim in your linked comments, besides Ben Millwood’s feeling that this could produce negative reactions.
Millwood’s concerns are fine and welcome, but your comment seems much, much stronger. Do we want to encourage a norm that stops discussions/projects because, in a contrived, remote way, they could lead to people slipping in implausible, extremely negative associations (often to the disadvantage of conservative viewpoints, since the coastal left is heavily overrepresented in EA)?
now deleted from the internet...seeming dishonest about your true motivations...neglecting these concerns
You say the initial presentation of the idea is “dishonest”, but it’s not clear why. You state their agenda is the mission of reducing animal suffering, and then you state that this ignores flow-through effects. That is not dishonesty.
I see you have continued to do work on charities that would reduce human populations, though without making as explicit that the original motivation was not so much to help people directly but rather to reduce their number.
Charity Entrepreneurship has developed literal standout EA charities, including LEEP, which is actively promoted by Will MacAskill, and Fortify Health, which is funded by GiveWell. Both of these improve the welfare, almost solely, of people in developing countries.
This takes vast amounts of effort and dedication. Karolina, Joey, and many members of the team have worked in developing countries and built deep competencies there.
It seems like family planning would be more in line with this side of CE’s work, rather than some covert eugenics program?
was criticised by several people, both for ignoring flow-through effects (like existential risks, wild animal suffering, or long run growth, or population ethics)
...
neglecting these concerns, even though they had caused others to reach the opposite conclusion, because of time limitations
...
You suggested that you “prefer to discuss it in conversation rather than in writing”; have you published such a report on population ethics and other flow-through effects since?
Does anyone here think that a full analysis of (checks notes) “existential risks, wild animal suffering, or long run growth, or population ethics”, especially to a degree that would satisfy EA Forum discussion norms, is going to be a practical use of time, when they could create more charities?
Of all the sorts of “decision theory”-style discussions in EA, I think anthropics (e.g. the fact that we exist tells us something about the nature of successful intelligence and x-risk) seems like one of the most useful things that could arrive just from pure thought. This is sort of amazing.
The blog posts I’ve seen written in 2021 or 2020 seem sort of unclear and tangled (e.g. there are two competing theories and empirical arguments are unclear).
Is there a good summary of anthropic ideas? Are there updates on this work? Is there someone working on this? Do they need help (e.g. from senior philosophers or cognitive scientists)?
I use a lot of ideas from Leviathan (Hobbes) all the time, but my knowledge comes just from reading the title and the first paragraph of the Wikipedia page[1]. I’m worried I look dumb in front of smart people.
Does anyone have a good approachable summary of Leviathan, or even better, a tight, well written overview of the underlying and related ideas from a modern viewpoint?
- ^
(“Bellum omnium contra omnes” is just so cool to say)
Some thoughts, not entirely related:
There was another post about blinding karma (maybe not names), at the post level (so no one can see the karma). This might have some good effects on norms and experiences about voting.
IIRC, this idea about post-level blinding produced a disagreement about practicalities or transparency, and the conversation stopped.
This objection about transparency/practicalities is solved by a system that blinds karma/names for a fixed, limited time, say 1/2/7 days, after which everything is revealed.
Also, you can just have a user option (maybe requiring a token effort, like strong voting requires an effort) to unblind.
Reddit actually implements this temporary system, so that you can’t see recent karma.
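A minimal sketch of that temporary-blinding logic, with illustrative field names and a made-up reveal window (not the Forum’s actual data model):

```python
from datetime import datetime, timedelta, timezone

REVEAL_AFTER = timedelta(days=2)  # e.g. 1, 2, or 7 days, as suggested above

def karma_visible(posted_at: datetime, reader_opted_out: bool) -> bool:
    """True if the post's karma (and names, if those are blinded too) should be
    shown to this reader: either the reveal window has passed, or the reader
    used the opt-out/unblind option. posted_at is assumed timezone-aware."""
    now = datetime.now(timezone.utc)
    return reader_opted_out or (now - posted_at) >= REVEAL_AFTER
```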
There are many other details that are important.
But basically if you implement a post level system as something authors can opt into, that seems like a win and another way to roll out this feature.