Congrats on launching! This is super interesting so I have a bunch of questions. I’ve split them up into multiple comments for ease of threading.
First, from your website:
We merge expert judgment with statistical models of project success. We used our expertise and the expertise of our advisers to determine a set of variables that is likely to be positively correlated with project success. We then utilize a multi-criteria decision analysis framework which provides context-sensitive weightings to several predictive variables. Our framework adjusts the weighting of variables to fit the context of the projects and adjusts the importance of feedback from different evaluators to fit their expertise.
How are the weights actually computed? How was the model fit? On what dataset? How does the score influence your recommendations?
I’m going to be posting the full equation on the website in the near future. It’ll be easier to answer in-depth questions about the process after that has been posted.
The evaluation process includes an assessment of how important each characteristic is to the project at hand, and that assessment determines the weightings. So, if our raters assess persuasion as being particularly important to a given project, the weighting of persuasion in the overall score will be greater. This allows our weightings to adapt to the details of each project. We also weight each evaluator's assessment of a variable's importance by that evaluator's expertise.
Right now, a good score in the evaluation process is necessary but not sufficient for a project to be funded. This is because I expect to significantly update the details of the evaluation process as we review our inaugural round of applicants.
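If I'm reading the description correctly, the scoring might work roughly like the sketch below. Every name, scale, and formula detail here is my guess at the scheme, not EA Ventures' actual model:

```python
def project_score(ratings, importances, expertise):
    """Hypothetical importance- and expertise-weighted score.

    ratings[i][j]:     evaluator i's rating of criterion j (e.g. 1-10)
    importances[i][j]: evaluator i's judgment of criterion j's importance
    expertise[i]:      weight given to evaluator i's judgments
    """
    n_criteria = len(ratings[0])
    total_expertise = sum(expertise)
    score = 0.0
    total_weight = 0.0
    for j in range(n_criteria):
        # Criterion weight: expertise-weighted average of importance judgments.
        w = sum(e * imp[j] for e, imp in zip(expertise, importances)) / total_expertise
        # Criterion rating: expertise-weighted average of the evaluators' ratings.
        r = sum(e * rat[j] for e, rat in zip(expertise, ratings)) / total_expertise
        score += w * r
        total_weight += w
    # Normalize so the score stays on the original rating scale.
    return score / total_weight
```

Under this reading, if every evaluator rates every criterion 5, the project scores 5 regardless of the importance judgments; the importances only matter once criteria receive different ratings.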
I imagine Ben would give robust criticism for this before or after it is posted. Presumably it’s better for that to happen before?
I’m picturing a simple linear model based on arbitrary weights. I haven’t read the literature here, but if that can improve decision-making (as fitted models certainly can), it would be an impressive fact.
I think it would be great to discuss it on the EA forum, both from the point of view of transparency, and because it’s a much better medium for multi-threaded discussion. (But I understand if you’d rather keep it private if it’s not very refined right now.)
We use simple linear models all the time in investing; they are actually quite good, and best of all they are robust. Like Owen, I would love to discuss this.
For example, today I was trying to predict some property of companies. I came up with 5 signals I can easily calculate which all capture some information about the underlying property, turned them into 5 binary indicators, and just added them together for a composite signal. No attempt at producing weights, but for various reasons I’m pretty happy with this approach, and I’m confident my boss would endorse it too if we went into details.
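A toy version of that kind of composite, with invented indicator names and cutoffs, is just:

```python
def composite_signal(company):
    """Sum of five equal-weight binary indicators (0-5).

    The signals and thresholds below are made up for illustration;
    the point is the structure: binarize, then add, with no fitted weights.
    """
    indicators = [
        company["revenue_growth"] > 0.10,
        company["margin"] > 0.15,
        company["debt_to_equity"] < 1.0,
        company["insider_buying"],
        company["positive_guidance"],
    ]
    return sum(indicators)  # bools count as 0/1
```

The appeal is exactly the robustness mentioned above: there are no fitted weights to overfit, and an error in any one indicator moves the composite by at most 1.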
It looks like there’s evidence for using them to predict continuous variables from continuous inputs, which might be your case. Also, if you’re using the model to supplement your personal decision-making rather than substitute for it, then on the face of it, that’s more likely to work well.
The book linked on the evaluation process page of the website suggests that a linear model in which the signs are determined but the weights are random beats expert judgment.
Thanks. Looks like the original experiment is here.
Just looking at the abstract, it seems like the article is describing a situation where you have numerical inputs, which doesn’t map perfectly to EA Ventures: “This article presents evidence that even such improper linear models are superior to clinical intuition when predicting a numerical criterion from numerical predictors.”
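The random-weights claim is easy to check by simulation: generate a criterion that depends linearly on some predictors, fix the signs of the weights correctly, draw their magnitudes at random, and see how well the improper score tracks the criterion. The setup below is entirely synthetic and purely illustrative:

```python
import random

random.seed(0)

n, k = 500, 4
true_w = [1.0, 0.8, 0.6, 0.4]  # true weights, all positive (signs known)
X = [[random.gauss(0, 1) for _ in range(k)] for _ in range(n)]
y = [sum(w * x for w, x in zip(true_w, row)) + random.gauss(0, 1) for row in X]

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

# Improper linear model: correct signs, random magnitudes, no fitting.
rand_w = [random.random() for _ in range(k)]
pred = [sum(w * x for w, x in zip(rand_w, row)) for row in X]
print(round(corr(pred, y), 2))  # positive, and usually sizable, despite arbitrary weights
```

Which is the point being cited: once the signs are right, the exact weights matter much less than intuition suggests.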
I’d love to be a part of the discussion of this equation. I was just going to wait patiently but am speaking up in case it’s taken to email. :)
If it’s taken to email I’ll include you on the list :-)
I can’t get to the book. Is there any more information about the experiment?
You can read it here. The money pages are 63-64.