This is great! I find this extremely important, and I agree that we have a lot of room to improve. Thank you for the clear explanation and the great suggestions.
Further ideas:
A global research agenda / roadmap.
Bounties for specific requests.
Perhaps someone can set up a (capped) 1:1 match for individual requesters.
Better yet, give established researchers or organizations credit to use for their requests.
A peer review mechanism in the forum. A concrete suggestion:
Users submitting a “research post” can request peer review, which is displayed on the post [a big blue “waiting for reviewers”].
A reviewer volunteers to review and presents their qualifications (and a statement of no conflict of interest) to a dedicated board of “EA experts”, which can approve them to review.
There are strict-ish guidelines on what is expected from a good post, and a guide for reviewers.
The reviewers submit their reviews anonymously but publicly.
They can accept the post [a big green “peer reviewed”].
They can also ask to fix some errors and improve clarity [a big yellow “in revision”].
They can decide that it is just not good enough or irrelevant [a big red “rejected”].
(The above is problematic in several ways: the reviewers are not randomized, so there is inherent bias; the incentive to review is unclear; and being rejected can be tough.)
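To make the flow above concrete: the colored statuses form a small state machine. A minimal sketch, assuming the status names and transitions I read into the suggestion (they are illustrative, not a spec):

```python
# Hypothetical sketch of the review-status flow described above.
# Status names and allowed transitions are illustrative assumptions.
ALLOWED_TRANSITIONS = {
    "waiting for reviewers": {"in revision", "peer reviewed", "rejected"},
    "in revision": {"waiting for reviewers", "peer reviewed", "rejected"},
    "peer reviewed": set(),  # terminal: accepted
    "rejected": set(),       # terminal: not good enough / irrelevant
}

def advance(status: str, new_status: str) -> str:
    """Move a post to a new review status, enforcing the allowed flow."""
    if new_status not in ALLOWED_TRANSITIONS[status]:
        raise ValueError(f"cannot go from {status!r} to {new_status!r}")
    return new_status
```

For example, a post in revision can return to the reviewer pool after edits, but an accepted or rejected post stays put.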
Better norms for linking to previous research and for asking about it. Better norms for suitable exposition. These norms need not be strict for “non-research” posts.
The forum itself can contain many further innovations (Good luck, JP!):
Polls and embedded prediction tools.
Community editable wiki posts.
Suggested templates.
Automated suggestions of related posts while editing (as on Stack Exchange).
An EA tag on LessWrong/Alignment Forum (or vice versa) so that posts can be displayed on both sites (like the LW/AF workflow).
A mechanism for highlighting and commenting, like on Medium. (Not sure I like it.)
Suggestions that appear only to the editor, as in Google Docs.
Some great things are already on their way, too :)
Regarding a wiki, Viktor Petukhov wrote a post about it, with some discussion following on the post and in private communication.
More research mentorships. Better support for researchers at the start of their path.
Better expository and introductory materials, and guides to the literature.
Better norms and infrastructure for partnering.
A supportive infrastructure to coordinate projects globally, between communities. This can make it easier to set up large-scale, volunteer-led projects for better epistemic institutions. Local communities are important here as a vetting mechanism.
On 4., in addition to the incentive problem, there's also the problem of matching the right reviewer to each post so that the counterfactual value generated is high enough, which depends greatly on both the post and the reviewer. I think this is harder than the incentive problem. If the matching problem isn't solved, people may spend time reviewing posts when that time would have been better spent elsewhere; or promising posts that need review may get reviewed by whoever is most incentivized or has time on their hands, after which people assume the post has already been reviewed and the price of a second review goes up.