Egg producers vary in their treatment of hens; as eggs are a major part of your diet, I think it’s worth looking into whether there’s a brand meeting your ethical standards that you could switch to before deciding to cut them out entirely.
Holly herself believes standards of criticism should be higher than what (judging by the comments here, without being familiar with the overall situation) she seems to have employed here; see “Criticism is sanctified in EA, but, like any intervention, criticism needs to pay rent.”
I was extremely disappointed to see this tweet from Liron Shapira revealing that the Center for AI Safety fired a recent hire, John Sherman, for stating that members of the public would attempt to destroy AI labs if they understood the magnitude of AI risk. Capitulating to this sort of pressure campaign is not the right path for EA, which should focus on seeking the truth rather than playing along with social-status games; it is not even the right path for PR, since it makes you look like you think the campaigners have valid points (which in this case they do not). This makes me think less of CAIS’ decision-makers.
Another Jhourney attendee, Nadia Asparouhova, has done a writeup of her experiences with some instructions for hopefully achieving the jhanas, which seems potentially helpful; the instructions are nicely succinct and concrete.
Why aren’t there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.
There was an attempt at that in rationalism, Dragon Army, though it didn’t ultimately succeed; you can find the postmortem at https://medium.com/@ThingMaker/dragon-army-retrospective-597faf182e50.
I’m skeptical of metrics like “x% of people involved said they were satisfied” for estimating cost-effectiveness. Customer satisfaction doesn’t really connect very well to any of the things I care about; in most cases I’m happier with a rough estimate of lives saved/units of suffering prevented/QALYs purchased/etc. per dollar than with a more precise accounting of things that touch less directly on the end goal.
Thanks for including the fish-per-dollar estimate! I know it doesn’t account for the value of your speculative work, but having the number at all makes it a lot easier for me to reason about it.
(Also also, it isn’t only the poster who has to worry about the truth of what they say? It’s everyone? Comments also receive criticism all the time. I don’t think this poster/commenter divide cuts reality at the joints.)
(Also, I understand the comment was not phrased helpfully to you, but for my part I felt that it communicated the errors clearly enough that I could understand them easily, and I appreciated having the false dichotomy in particular pointed out without having to discover it myself.)
Thank you for sharing, but I’ve read your post and am not convinced (either in this instance or in general). I think it was a fine comment to which you reacted with unwarranted negativity. Or, in short: no, you’re wrong.
I think if your arguments are locally invalid, that is something important about your post. High standards of accuracy and quality are something I value about Less Wrong and EA, and to me part of having high standards is trying to avoid even small mistakes.
Dunno if it’s still helpful, but https://www.highimpactprofessionals.org/talent-directory is a directory of EAs looking for work; on a quick search it contained several lawyers and several accountants.
I think speculating about what exactly constitutes the most good is perfectly on-topic. While ‘murdering meat-eaters’ is perhaps an overly direct phrasing (and of course under most ethical frameworks murder raises additional issues as compared to mere inaction or deprioritization), the question of whether the negative utility produced by one marginal person’s worth of factory farming outweighs the positive utility that person experiences—colloquially referred to as the meat-eater problem—is one that has been discussed here a number of times, and that I feel is quite relevant to the question of which interventions should be prioritized.
My main observation is that he and his people really do think the election was stolen from them.
That sounds to me like a reason not to elect him? Self-deceiving for personal gain (endemic though it is 😔) is not a positive trait for a president to have.
I don’t think illusionism is an accurate view, so I’d be opposed to adopting it.
If you don’t think their arguments are convincing, I consider it misleading to attempt to convince other people with those same arguments.
The claim that they can’t be moral patients doesn’t seem to me to be well-supported by the fact that their statements aren’t informative about their feelings. Can you explain how you think the latter implies the former?
Of course if we can’t ascertain their internal states we can’t reasonably condition our decisions on them, but that seems to me to be a different question from whether, if they have internal states, those are morally relevant.
I agree that LLM output doesn’t convey useful information about their internal states, but I’m not seeing the logical connection from inability to communicate with LLMs to it being fine to ignore their welfare (if they have the capacity for welfare); could you elaborate?
Was titotal’s post not a critique of 2027’s models of the bad timeline? I had not interpreted it in the way you’re describing.