Oh man, happy to have come across this. I’m a bit surprised people remember that article. I was one of the main people who set up the system; that was a while back.
I don’t know specifically why it was changed. I left 80k in 2014 or so and haven’t discussed this with them since. I could imagine some reasons why they stopped it though. I recommend reaching out to them if you want a better sense.
This was done when the site was a custom Ruby/Rails setup. This functionality required a fair bit of custom coding to set up. Writing quality was more variable then than it is now; there were several newish authors and it was much earlier in the research process. I also remember that originally the scores disagreed a lot between evaluators, but over time (the first few weeks of use) they converged a fair bit.
After I left, they migrated to Wordpress, in which I assume it would have taken a fair bit of effort to set up a similar system. The blog posts seem to have become less important than they used to be, in favor of the career guide, coaching, the podcast, and other things. Also, the quality has become a fair bit more consistent, from what I can tell as an onlooker.
The ongoing costs of such a system are considerable. First, it just takes a fair bit of time from the reviewers. Second, unfortunately, the internet can be a hostile place for transparency. There are trolls and angry people who will actively search through details and then point them out without the proper context. I think this review system was kind of radical, and I can imagine it not being very comfortable to maintain unless it clearly justified the effort.
I’m of course sad it’s no longer in place, but can’t really blame them.
I think I’m pretty torn on this. I agree that this was a failure, but going too far in the other direction seems like a loss of opportunity. I think my ideal would be something like a very competent and large CEA, or another competent and large organization, spearheading a bunch of new EA initiatives. I think there’s enough potential work to absorb an additional 30-1000 full-time people. I’d prefer small groups to do this over a poorly managed big group, but in general I don’t trust small groups all that much for this kind of work in the long run. Major strategic action requires a lot of coordination, and that is really difficult with a lot of small groups.
I think my take is that the failures mentioned were mostly failures of expectations, rather than decisions that were bad in the ideal. If CEA could have done all these things well, that would have been the ideal scenario to me. The projects often seemed quite reasonable; it just seemed like CEA didn’t quite have the necessary capabilities at those points to deliver on them.
Referencing the comments above: I think “Let’s make sure that our organization runs well before thinking too much about expanding dramatically” is a very legitimate strategy. My guess is that, given the circumstances around it, it’s a very reasonable one as well. But I also have some part of me inside screaming, “How can we get EA infrastructure to grow much faster?”
Perhaps more intense growth, or at least bringing in several strong new product managers, could be more of a plan in 1-2 years or so.
I think these comments could look like an attack on the author here. That may not be the intention, but I imagine many readers may take it that way.
Online discussions are really tricky. For every 1,000 reasonable people, there could be one who isn’t, and whose definition of “holding them accountable” is much more intense than the rest of ours.
In the case of journalists, this is particularly bad even from a selfish standpoint; it would be quite bad for any of our communities to get them upset.
I also think this is very standard stuff for journalists, so I really don’t think this difficulty is specific to the author here.
I’m all for discussion of the strengths and weaknesses of content, and for broad understanding of how toxic the current media landscape can be. I’d just encourage us to stay very much on the civil side when discussing particular individuals.
I feel like it’s quite possible that the headline and tone were changed a bit by the editor; it’s quite hard to tell with articles like this.
I wouldn’t single out the author of this specific article. I think similar issues happen all the time. It’s a highly common risk when allowing for media exposure, and a reason to often be hesitant about it (though there are significant benefits as well).
Agreed, though the suggestions are appreciated!
VOI calculations in general seem like a good approach, but figuring out how best to apply them seems pretty tough.
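For concreteness, here’s a minimal sketch of the simplest kind of VOI calculation (the expected value of perfect information); the options, values, and probabilities are all made up for illustration:

```python
# Hypothetical sketch: expected value of perfect information (EVPI).
# Option A has a certain value; option B's value is uncertain.
value_a = 10.0
b_outcomes = {0.0: 0.5, 30.0: 0.5}  # possible value of B -> probability

# Without information: commit to whichever option has the higher expected value.
ev_b = sum(v * p for v, p in b_outcomes.items())  # 15.0
ev_no_info = max(value_a, ev_b)                   # 15.0

# With perfect information: learn B's value first, then pick the best option.
ev_with_info = sum(max(value_a, v) * p for v, p in b_outcomes.items())  # 20.0

print(f"Value of perfect information: {ev_with_info - ev_no_info}")  # 5.0
```

The hard part, as ever, is that in practice the outcome space and the probabilities are themselves rough guesses.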
I’m a bit surprised that recusal seems to be treated as a last resort in this document. Intuitively, I would have expected that because there are multiple members of the committee, many in very different locations, it wouldn’t be that hard to have the “point of contact” be different from the “one who makes the decision.” It’s similar to how, if one person recommends a candidate for employment, it can be easy enough to just have different people interview them.
Recusal seems really nice in many ways. Like, it would also make some things less awkward for the grantors, as their friends wouldn’t need to worry about being judged as much.
Any chance you could explain a bit how the recusal process works, and why it’s preferred not to use it? Do other team members often feel unable to make decisions about candidates they don’t know? Is it common that the candidates are known closely by many of the committee members, such that collective recusal would be infeasible?
Kudos for writing up a proposal here and asking for feedback publicly!
Companies and nonprofits obviously have boards for similar situations; it would seem pretty reasonable to me for these funds to have boards that function in similar ways. I imagine it may be tricky to find people who are both really good and really willing. Having a board defers some amount of responsibility to it, and I imagine a lot of people wouldn’t be excited to take on that responsibility.
I guess one quick take would be that the current proposed COI policy seems quite lax, and I imagine potential respected board members may be kind of uncomfortable if they were expected to “make it respectable.” So I think a board may help, but I wouldn’t expect it to help that much, unless perhaps it did something much more dramatic, like work with the team to come up with much larger changes.
I would personally be more excited about ways of eventually securing enough resources to support a less lax policy without it being too costly; for instance, by taking actions to grow the resources dedicated to funding allocation. I realize this is a longer-term endeavor, though.
Would it have been reasonable for you to have been secretly part of the process or something? For example:
You write in that, if you win, you just don’t accept the cash prize.
Or: you write in that, if you win, they tell you but don’t tell anyone else, and select the next-best person for the official prize.
I’d be curious how the signaling or public value of the explanation “Person X would have won 1st place, but removed themselves from the running” would compare to “Person X won 1st place, but gave up the cash prize.”
I think that in theory, if things were being done quite well and we had a lot of resources, we should be in a situation where most EAs really don’t need much beyond maybe 20-200 hours of EA-specific information, after which focusing more on productivity and career-specific skills would yield greater gains.
Right now things are messier. There’s no one great textbook, and the theory is very much still in development. As such, it probably does require spending more time, but I’m not sure how much more.
I don’t know if you consider these “EA” concepts, but I do have a soft spot for many things that have somewhat come out of this community but aren’t specific to EA. These are more things I really wish everyone knew, and they could take some time to learn. Some ideas here include:
- “Good” epistemics (this is vague, but the area is complicated)
- Applied Stoicism (very similar to managing one’s own emotions well)
- Cost-benefit analyses and related thinking
- Pragmatic online etiquette
If we were in a culture firmly attached to beliefs around the human-sacrificing god Zordotron, I would think that education to carefully remove both the belief and many of the practices caused by it would be quite useful, but also quite difficult. Doing so may be decently orthogonal to learning about EA, but would generally seem like a good thing.
I believe that common culture taught in schools and media is probably not quite as bizarre, but definitely substantially incorrect in ways that are incredibly difficult to rectify.
Sounds good, best of luck with that! Writing posts on the EA forum or LessWrong on things you find interesting and partaking in the conversation can be a good way of getting up to speed and getting comfortable with ongoing research efforts.
I just want to point out that this seems very, very difficult to me, and I would not recommend trusting that you’ll “be safe” unless you really have no other choice.
I know of multiple very smart people who have tried to stay anonymous, got caught, and had bad things happen. (For instance, see the many books on “top hackers.”)
After more thought on matters of definition, I’ve come to believe that the presumption of authority can be a bit misleading.
I’m all for proposing and encouraging definitions of effective altruism and other important topics, but the phrase “The definition of effective altruism” can be seen to presuppose authority and unanimity.
I’m sure that even now that this definition has been proposed, alternative definitions will still be used.
Of course, if there were to be one authority on the topic, it would be William MacAskill. But I think that even if there were only one main authority, the use of pragmatic alternative definitions could only be discouraged, not prevented. It would be difficult to call them incorrect or invalid; dictionaries typically follow use, they don’t create it.
Also, to be clear, I have this general issue with a great deal of literature, so it’s not that I’m singling this piece out for being particularly bad; rather, I’m pointing it out because this piece is particularly important.
Maybe there could be a name like “The academic definition...”, “The technical definition”, or “The definition according to the official CEA Ontology”. Sadly these still use “The”, which I’m hesitant about, but they are at least narrower.
Also, I’m not sure if you used a pseudonym here, but I hope you did. I’d suggest being pretty wary of posting details like that online; I imagine they could easily be found later on if you did anything along those lines.
I would quite strongly recommend against lying, and even more strongly against anything that could give you the death penalty! I realize EAs are kind of intense, but that seems far over the line. Please do not engage in activities that could directly put you in danger.
It sounds like you’re in a challenging situation. I definitely empathize, with you and with others in similar situations.
One thing to consider: if creating an EA group comes with so many local disadvantages, then even once you do create it, it may continue to face severe limits on growth, limiting its potential anyway.
I think I’d encourage you to feel fine about not contributing anything useful right away, and instead to focus on your long-term efforts. Learning a lot and working to secure a great career can go a long way, though it will of course take a while and can be frustrating for that reason. I know 80,000 Hours has written a lot about related topics.
Thanks! This is interesting; I’ll spend some time thinking about it.
Please don’t worry much about embarrassing yourself! It’s definitely a challenge with forums like this, but it would be pretty unreasonable for anyone to expect that post/comment authors have degrees in all the possibly relevant topics.
Low-effort thoughts can be pretty great, they may be some of the highest value-per-difficulty work.
Nice find! This seems like a useful step, though of course likely considerably different from what I imagine consequentialists would aim for.
I think that, personally, I’d mostly advocate for attempts to decouple motivation from total impact magnitude, rather than for attempts to argue that high impact magnitude is achievable, when trying to improve motivation.
If you attach your motivations to a specific magnitude like “$2,000 per life saved”, then you can expect them to fluctuate heavily when estimates change. But ideally, you would want your motivations to stay pretty optimal, and thus consistent, for your goals. I think this ideal is somewhat possible and can be worked towards.
The main goal of a consequentialist should be to optimize a utility function; it really shouldn’t matter what the specific magnitudes are. If the greatest thing I could do with my life were to keep a small room clean, then I should spend my greatest effort on that thing (my own wellbeing aside).
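As a toy illustration (the actions and numbers here are entirely invented), the action that maximizes a utility function doesn’t change when the magnitudes are rescaled:

```python
# Toy sketch: the best action under a utility function is unchanged by
# positive rescaling, so the achievable *magnitude* of impact shouldn't
# change which action gets picked. Actions and values are hypothetical.
utilities = {"keep_room_clean": 3.0, "write_blog_post": 2.0, "do_nothing": 0.0}

best = max(utilities, key=utilities.get)
scaled = {a: u / 1000 for a, u in utilities.items()}  # magnitudes shrink 1000x...
best_scaled = max(scaled, key=scaled.get)

assert best == best_scaled  # ...but the optimal choice is the same
print(best)  # keep_room_clean
```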
I think that most people aren’t initially comfortable with re-calibrating their goals to arbitrary utility-function magnitudes, but I think doing so can be learned gradually, similar to learning Stoic philosophy.
It’s similar to learning how to be content no matter one’s conditions (aside from extreme physical ones), as discussed in The Myth of Sisyphus.
I think that makes sense. Some of it is a matter of interpretation.
From one perspective, the optimizer’s curse is a dramatic and challenging dilemma facing modern analysis. From another, it’s a rather obvious and simple artifact of poorly done estimates.
I.e., it’s sometimes said that once mathematicians realize something is possible, they consider the problem trivial. Here, the optimizer’s curse is considered a reasonably well-understood phenomenon, unlike some other estimation-theory questions currently being faced.
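A quick simulation makes this concrete; this is a hypothetical sketch with made-up parameters, not anyone’s actual estimation setup. Picking the option with the highest noisy estimate systematically overstates that option’s true value:

```python
import numpy as np

# Hypothetical sketch of the optimizer's curse: when you choose the option
# with the highest *estimated* value, that estimate is biased upward.
rng = np.random.default_rng(0)
n_trials, n_options, noise_sd = 10_000, 20, 1.0

gaps = []
for _ in range(n_trials):
    true_values = rng.normal(0.0, 1.0, n_options)                   # true values
    estimates = true_values + rng.normal(0.0, noise_sd, n_options)  # noisy estimates
    best = np.argmax(estimates)                                     # apparent winner
    gaps.append(estimates[best] - true_values[best])                # estimate - truth

# Clearly positive on average (roughly 1.3 with these parameters): selecting
# on noisy estimates systematically inflates the winner's apparent value.
print(f"Average overestimate of the chosen option: {np.mean(gaps):.2f}")
```

The standard remedy is to shrink estimates toward a prior before selecting, which is part of why the phenomenon is considered reasonably well understood.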