Misha_Yagudin
Long list of AI questions
(From an email.) Some questions I am interested in:
- What’s the size of alexithymia (A)?
- Does it actually make MH issues more likely or more severe? This mashes together a few plausible claims that need to be disentangled carefully, e.g., (a) given A, is MH more likely to develop in the first place; (b) given MH, will A (even if acquired as a result of MH) make MH issues last longer or be more severe? A neat causal model might be helpful here, separating A acquired with MH vs. A pre-existing MH.
- How treatable is A? Does treating A improve MH? Is there any research (esp. RCTs) on this? Does this depend on subgroups, e.g., A acquired with MH vs. A pre-existing MH vs. A without MH…
- How would treating A fit into the MH treatment landscape? Can it be integrated with ongoing MH efforts in general (like people seeing a therapist or doing CBT with a book or an app)? Can it be integrated with existing, seemingly effective solutions (e.g., the MH orgs incubated by https://www.charityentrepreneurship.com/our-charities or the charities recommended by https://www.happierlivesinstitute.org/)?
- Yes, the mechanism is likely not alexithymia directly causing undesirable states like trauma, but rather diminishing one’s ability to get unstuck after traumatic events have happened.
[Linkpost] Alexithymia — The Most Overlooked Emotional Health Problem
Ah, I didn’t notice the forecasting links section… I was thinking of hyperlinking the platform’s name, at the highlighted place, to the question.
Also, maybe expanding into the full question when you hover over the chart?
I think this is great!
https://funds.effectivealtruism.org/funds/far-future might be a viable option to get funding.
As for suggestions:
- Maybe link to the markets/forecasting pools you use for the charts, like this: “… ([Platform](link-to-the-question))”.
- I haven’t tested, but it would be great for links to your charts to have snappy social media previews.
Suppose you think there is a 50% chance that your credence will, say, go from 10% to 30%+. Then you believe that with 50% probability you live in a “30%+ world.” But then your current credence should already be at least 50% × 30% = 15%, rather than the 10% you originally stated.
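The arithmetic above can be sketched as a tiny consistency check (the helper function and its name are mine, not from the source):

```python
def min_current_credence(p_shift: float, future_low: float) -> float:
    """If you assign probability p_shift to your credence later being at
    least future_low, conservation of expected evidence forces your
    current credence to be at least p_shift * future_low (the other
    branch contributes at least 0)."""
    return p_shift * future_low

bound = min_current_credence(0.5, 0.30)
print(bound)  # 0.15 > 0.10, so the stated 10% credence is inconsistent
```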
FWIW, different communities treat it differently. It’s a no-go to ask for upvotes at https://hckrnews.com/ but is highly encouraged at https://producthunt.com/.
Good luck; would be great to see more focus on AI per item 4!
Is Rebecca still a fund manager, or is the LTFF page out of sync?
So it’s fair to say that FFI supers were selected and evaluated on the same data? This seems concerning. Specifically, on which questions were the top 60 selected, and on which questions were the scores below calculated? Did these sets of questions overlap?
The standardised Brier scores of FFI superforecasters (–0.36) were almost perfectly similar to that of the initial forecasts of superforecasters in GJP (–0.37). [17] Moreover, even though regular forecasters in the FFI tournament were worse at prediction than GJP forecasters overall (probably due to not updating, training or grouping), the relative accuracy of FFI’s superforecasters compared to regular forecasters (–0.06), and to defence researchers with access to classified information (–0.1) was strikingly similar.[18]
Hey, I think a fourth column was somehow introduced… You can see it by searching for “Mandel (2019)”.
Thank you very much, Dane and the tech team!
More as food for thought… but maybe “broad investor base” is a bit of an exaggeration? Index funds likely control a significant fraction of these corporations, and it’s unclear whether the board members they appoint would represent ordinary people, especially since owning an ETF != owning the actual underlying stocks.
From an old comment of mine:
Due to the rise of index funds (they “own” > 1⁄5 of American public companies), it seems that an alternative strategy might be trying to rise in the ranks of firms like BlackRock, Vanguard, or SSGA. It’s not unprecedented for them to take action (partly for selfish reasons); here are examples of BlackRock taking stances on environmental sustainability and coronavirus cure/vaccine.
Here is a paper exploring the potential implications of the rise of index funds and their stewardship: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3282794
The table here got all messed up. Could it be fixed?
Thanks for highlighting Beadle (2022), I will add it to our review!
I wonder how FFI superforecasters were selected? It’s important to first select forecasters who are doing well and only then evaluate their performance on new questions, to avoid the issue of “training and testing on the same data.”
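The worry can be illustrated with a toy simulation (all numbers and names here are invented for illustration): if every forecaster has identical true skill, the "top 60" selected on one question set still look impressively better on that same set, but their edge vanishes on held-out questions.

```python
import random

random.seed(0)

# Every forecaster has the same true skill; scores differ only by noise.
# Lower Brier score = better.
N_FORECASTERS, N_QUESTIONS, N_TOP = 300, 100, 60

def mean_brier(n_questions: int) -> float:
    # Average score over one batch of questions; pure noise around 0.25.
    return sum(random.gauss(0.25, 0.05) for _ in range(n_questions)) / n_questions

selection_scores = [mean_brier(N_QUESTIONS) for _ in range(N_FORECASTERS)]
holdout_scores = [mean_brier(N_QUESTIONS) for _ in range(N_FORECASTERS)]

# "Supers" = top 60 on the selection set.
top = sorted(range(N_FORECASTERS), key=lambda f: selection_scores[f])[:N_TOP]

on_same_data = sum(selection_scores[f] for f in top) / N_TOP
on_new_data = sum(holdout_scores[f] for f in top) / N_TOP
everyone = sum(holdout_scores) / N_FORECASTERS

# On the questions they were selected on, the "supers" look clearly better;
# on held-out questions their edge disappears (they were never more skilled).
print(round(on_same_data, 3), round(on_new_data, 3), round(everyone, 3))
```

This is just regression to the mean; it's why selection and evaluation questions should not overlap.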
How much of the objection would be addressed if the Windfall Clause required the donations to be under the board’s oversight?
Thank you, Hauke! Just contributed an upvote to the visibility of one good post; doing my part!
Alternatively, is there a way to apply field customization (like hiding community posts and up-weighting/down-weighting certain tags) to https://forum.effectivealtruism.org/allPosts?
Is there a way to only show posts with ≥ 50 upvotes on the Frontpage?
Related: https://www.clearerthinking.org/post/can-you-experience-enlightenment-through-sam-harris-waking-up-meditation-app