Potentially it's held in less liquid forms? So it could be difficult to get the money out fast enough.
I like this idea. One example of it within the EA sphere was the AI Safety Distillation Contest.
I would be interested in a Minimum Viable Product version of what you describe above. Perhaps where a group of individuals each attempt to make a mini summary of a paper/post of interest—holding each other accountable. If it gets sufficient traction, a more robust system as you describe above could be put in place. Would you be interested?
For motivation—Lizka writes a good breakdown of why things like this might be useful in Distillation and research debt.
Thanks for writing this Jack! This is a really helpful collection of summarized papers, and I wish there were more work like it.
Thanks for the detailed response. It's great hearing about the care and consideration that go into forming these surveys!
Given “last year about 50% of respondents started the extra credit section and about 25% finished it”, this still feels like free info even if people don’t finish. But I guess there are also reputation risks in becoming The Survey That None Can Finish.
I note that previous surveys listed some of the information I suggested as useful, and I think that's why I'd be so excited to see it carried over across the years—especially with the rapid growth of EA.
I don't feel like any substantial change should be made based on the views I've expressed here, but I did want to iron out a few points to make my feedback clearer. Your point about follow-up surveys probably catches most of my worries about sufficient information being collected. Thanks again David and team :)
I think there should be more questions under the 'extra credit' section. I was willing to spend more time on this, and there are other views of the average EA that I would be interested in understanding.
A low-effort attempt at listing a few things which come to mind:
moral views
biggest current uncertainties with EA
community building preferences
identification with EA label
best and worst interactions with EA
Hi Froolow, thanks for taking the time to write up this piece. I found your explanations clear and concise, and the worked examples really helped to demonstrate your point. I really appreciate the level of assumed knowledge and abstraction—nothing too deep is required. I wish there were more posts like this on the forum!
Here are some questions this made me think about:
Do you have any recommended further reading? Three examples of things I'd like to hear about:
(1a) Really well-done applications of uncertainty analysis which changed long-standing decisions
(1b) Theoretical work, or textbook demonstrations, for giving foundational understanding
(1c) The most speculative work you know of that works with uncertainty analysis
I think (1c) would be particularly useful for porting this analysis to longtermist pursuits. There is little evidence in these fields, and little ability to get evidence, so I would want to consider similar case studies, though perhaps on a larger scale than common-use health economics.
Are there levels above PSA for handling uncertainty in model formation or parameter covariance? Every level seems to potentially suffer from underlying structural flaws in the model. PSA probes parameter uncertainty via Monte Carlo, but if, for example, there were covarying parameters, are there methods for assigning model or 'hyperparameter'[^hyp] uncertainty?
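To make the covariance worry concrete, here is a minimal sketch in Python. The toy ICER model, all the parameter values, and the 0.8 correlation are invented by me for illustration—nothing here is taken from the post:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Toy cost-effectiveness model: ICER = incremental cost / incremental effect.
# All means, spreads, and the 0.8 correlation are made-up illustrative numbers.
mean = np.array([1000.0, 0.70, 0.50])   # cost, effect_treated, effect_control
sd = np.array([200.0, 0.05, 0.05])
corr = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.8],       # the two effect parameters covary
                 [0.0, 0.8, 1.0]])
cov = np.outer(sd, sd) * corr

# PSA that respects the covariance: sample jointly from a multivariate normal.
cost, eff_t, eff_c = rng.multivariate_normal(mean, cov, size=n).T
icer = cost / (eff_t - eff_c)

# Naive PSA sampling the same marginals independently (ignores the covariance;
# near-zero incremental effects can also make this interval unstable).
cost_i, eff_t_i, eff_c_i = rng.normal(mean, sd, size=(n, 3)).T
icer_i = cost_i / (eff_t_i - eff_c_i)

print("95% ICER interval, correlated: ", np.percentile(icer, [2.5, 97.5]))
print("95% ICER interval, independent:", np.percentile(icer_i, [2.5, 97.5]))
```

In this toy setup, ignoring the positive correlation between the two effect parameters noticeably widens the ICER interval; with a negative correlation the distortion would run the other way. That is the kind of mis-statement of uncertainty I'm asking about.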
Somewhat relatedly:
- I'm concerned that in thresholding a single parameter, what's actually happening is that a separate, more pivotal parameter's effects are being loaded onto this parameter. This would be more of a problem in scenario analysis, since nothing else is varying. But under PSA, perhaps this could arise through non-representative sampling distributions?
- I think something funky might be happening under this form of risk adjustment. Variance of outcome has been adjusted by pulling out the tails, but I don't think this mimics the decision-making of a risk-averse individual. Instead, I think you would want to form the expected return and compare it to the expected return under a risk-averse utility function (a minimal sketch of what I mean follows below).
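Here is the sketch, in Python with made-up numbers. The lognormal outcome distribution and the CRRA utility with eta = 2 are placeholder assumptions of mine, not anything from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical distribution of net benefit from an intervention (made-up).
outcomes = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# "Risk adjustment" by pulling out the tails: trim the extreme 5% at each end.
lo, hi = np.percentile(outcomes, [5, 95])
trimmed_mean = outcomes[(outcomes >= lo) & (outcomes <= hi)].mean()

# What I'd expect from a risk-averse decision maker instead: a concave utility
# function (CRRA with eta != 1; eta = 2 is an arbitrary choice) and its
# certainty equivalent, i.e. the sure amount with the same expected utility.
def crra(x, eta=2.0):
    return x ** (1 - eta) / (1 - eta)

def certainty_equivalent(x, eta=2.0):
    eu = crra(x, eta).mean()
    return ((1 - eta) * eu) ** (1 / (1 - eta))

print(f"raw mean:             {outcomes.mean():.3f}")
print(f"tail-trimmed mean:    {trimmed_mean:.3f}")
print(f"certainty equivalent: {certainty_equivalent(outcomes):.3f}")
```

At least in this toy case, the tail-trimmed mean and the certainty equivalent give quite different answers, which is why I suspect trimming doesn't stand in for genuine risk aversion.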
Meta: I hope these questions don't come across as suggesting reduced use of uncertainty analysis! I'm just wondering how this is dealt with in normal health economics practice :)
[^hyp]: I don't think 'hyperparameter' is the correct term here; I mean some sort of adjustment of the sampling distribution.
Note that in the context of OP's original question, this story demonstrates that discussing realistic depictions may increase the chance of the risk!
Gwern elaborates on this idea here.
Mirror of 'Effective Altruism' Is Neither, the article in question. As it is a non-direct mirror, it should not affect readership numbers.
I think these spectrum arguments are doing much more of point (1), 'The "moral intuition" is clearly not generated by reliable intuitions', than of point (2), 'proving too much'.
As such, I think these are genuinely useful thought experiments, as we can then discuss the issues and biases that fall under (1). For example, I too would be willing to bite the bullet on Cowen's St Petersburg Paradox, Persistence edition—as I can point to the greater value each time. I think many people find it counter-intuitive due to risk aversion, which I think is also a fine point and can be discussed readily! Or maybe someone doesn't like transitivity—also an interesting point worth considering!
I do not think that means we can throw these thought experiments out the window, or point to them being unfair. The moral views we are defending are necessarily optimising, so it makes sense to point out when this optimisation process makes people think that a moral harm has been committed. That is exactly what spectrum arguments set out to do.
I think this Everettian framing is useful and really probes at how we should think about probabilities outside of the quantum sense as well. So I would suggest your reasoning holds for the standard coin-flip case too.
I think this post is excellent for a number of reasons:
discusses something underexplored on EA Forum (day-to-day operations)
raises perspectives from outside the EA lens (military)
raises why this post might be wrong*
*Perhaps there should be more elaboration on this second point about different types of ops. I would guess that research EA organizations have less 'reactive' operations requirements, though I think working at an 'in the field' organization would be closer to the experience you describe. Of note, research organizations may still have 'in the field' elements (planning events, meetings, logistics), but I would expect less non-systematized logistics—less chance for human error.
I would love to see another post about your experience of operations within the navy: any key lessons learned, or advice you could give from what I would imagine is a unique and effective work environment.
I really enjoyed this post! Thanks for sharing it.
I would be keen to understand which of the points raised in 'Concluding thoughts' should be most seriously considered by the average EA. I feel like (3) is often considered, and (1) is maybe a push that many individuals get from appeals to novelty. However, (2) I think is under-discussed, and I would like to see more modelling using it. For example, models could attempt to apply it to: the mental health of an EA, the motivation of an EA, the persistence of a charity, or the persistence of an EA group. I wonder if this is in line with others' intuitions? Or if there are other ways these points could be explored?
As an aside, I think I would have appreciated the three summarised points in 'Concluding thoughts' being included in the TLDR, or somehow at the beginning of the article. I think they are really well worded, and it might help individuals get a grasp of the key ideas without having to engage with the entire post. This doesn't detract from the article as a whole, but seems a small cost that could benefit future skim-readers.
Hope you are having a wonderful day!
I would be hesitant about directly noting comparisons. I think the first clause of your sentence could come off as suggesting other charities are poor, which would be especially bad if anyone else on the show mentions a charity. I like Aaron Gertler's plug at DreamHack (from his answer on this post), as it specifically notes the analytic and methodical side of showing expected returns from charitable investment. Just getting good bang for your buck.
I personally find the concept of "the best charities are 100 times more effective than others" motivating, and it may be what we imply when saying 'the most effective charities', but I think it encourages an immediate knee-jerk response in many people to take the view as something self-righteous, showy, or pompous.
I like the second clause of your sentence.
Zapier has a guide on using their service for filtering an RSS feed. In your case you should be able to filter based off the `link` field (i.e. link contains forum.effectivealtruism.org) to create a new RSS feed. DM me if you attempt this and are having any issues. Or, if sufficient people upvote your comment (say 5+ votes), I'm happy to make a public RSS feed for this.
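If you'd rather avoid Zapier, here's roughly the filtering step as a minimal self-hosted sketch in Python. The source feed URL is a placeholder of mine, and feedparser is a third-party library you'd need to install:

```python
import feedparser

# Placeholder URL for the combined feed you want to filter.
SOURCE_FEED = "https://example.com/combined-feed.rss"

feed = feedparser.parse(SOURCE_FEED)

# Keep only entries whose link points at the EA Forum.
kept = [e for e in feed.entries
        if "forum.effectivealtruism.org" in e.get("link", "")]

for entry in kept:
    print(entry.get("title", "(no title)"), "->", entry.link)
```

You'd still need to re-serialize the kept entries as a new RSS document and host it somewhere—that's the part Zapier handles for you.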
Slightly confused by the large number of disagree-voters here? Like, are the people disagree-voting saying they prefer to rely on billionaires?
I understand that it might be the most effective way to direct money at this point in time. But people aren't commenting to say that—just multiple people strong-disagreeing with this. I personally would encourage reaching out to more HNWIs. Though, to be clear, I am concerned about relying on a small number of the super-super rich.
Note: I can understand the downvoting (karma), as maybe this doesn't have the style of communication one prefers on the EA Forum, nor does it explain, nor does it follow 'be kind'. The latter two are advised under the commenting guidelines.