Could be interesting to see some more thinking about investments that are correlated over the short-to-medium term with long-term-upside-capturing/mission-hedging stocks, but that don’t themselves have those features (as potential complementary shorts).
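To make the screening idea a bit more concrete, here’s a minimal sketch (synthetic returns, hypothetical asset names, not investment advice) of ranking candidate complementary shorts by their correlation with a mission-hedging position over a roughly one-year window:

```python
# Minimal sketch: screen candidate complementary shorts by their
# short-to-medium-term correlation with a mission-hedging position.
# All returns below are synthetic and the asset names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_days = 252  # ~one trading year of daily returns

# In practice these would be daily returns derived from price data.
mission_hedge = rng.normal(0.0005, 0.02, n_days)  # e.g. an AI-upside basket
candidates = {
    "broad_tech_etf": 0.8 * mission_hedge + rng.normal(0, 0.010, n_days),
    "hardware_supplier": 0.6 * mission_hedge + rng.normal(0, 0.015, n_days),
    "unrelated_utility": rng.normal(0.0002, 0.01, n_days),
}

# Higher correlation over this window suggests a better match for the
# short-to-medium-term swings, while (by assumption) lacking the long-term upside.
for name, returns in sorted(
    candidates.items(),
    key=lambda kv: -np.corrcoef(mission_hedge, kv[1])[0, 1],
):
    corr = np.corrcoef(mission_hedge, returns)[0, 1]
    print(f"{name}: correlation = {corr:.2f}")
```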
From the discord: “Manifold can provide medium-term loans to users with larger invested balances to donate to charity now provided they agree to not exit their markets in a disorderly fashion or engage in any other financial shenanigans (interpreted very broadly). Feel free to DM for more details on your particular case.”
I DM’d yesterday; today I received a mana loan for my invested amount, for immediate donation, due for repayment Jan 2, 2025, with a requirement to not sell out of large positions before May.
There’s now a Google form: https://forms.gle/XjegTMHf7oZVdLZF7
A stray observation from reading Scott Alexander’s post on his 2023 forecasting competition:
Scott singles out some forecasters who had particularly strong performance both this year and last year (he notes that being near the very top in one year seems noisy, with a significant role for luck), or otherwise seem likely to have strong signals of genuine predictive outperformance. These are:
- Samotsvety
- Metaculus
- possibly Peter Wildeford
- possibly Ezra Karger (Research Director at FRI).
I note that the first three above all have higher AI catastrophic/extinction risk estimates than the average superforecaster (I include Ezra given his relevance to the topic at hand, but don’t know his personal estimates).
Obviously, this is a low n sample, and very confounded by community effects and who happened to catch Scott’s eye (and confirmation bias in me noticing it, insofar as I also have higher risk estimates). But I’d guess there’s at least a decent chance that both (a) there are groups and aggregation methods that reliably outperform superforecasters and (b) these give higher estimates of AI risk.
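For a concrete sense of what a different ‘aggregation method’ can do, here’s a toy sketch (made-up forecasts, not from Scott’s data) comparing a simple mean of probabilities with a geometric mean of odds, one aggregation approach that has been argued to perform better:

```python
# Toy illustration with made-up forecasts: two ways of aggregating
# individual probability estimates into a single group estimate.
import math

forecasts = [0.02, 0.05, 0.10, 0.30]  # hypothetical individual risk estimates

# (a) Simple mean of probabilities
mean_prob = sum(forecasts) / len(forecasts)

# (b) Geometric mean of odds, converted back to a probability
log_odds = [math.log(p / (1 - p)) for p in forecasts]
avg_log_odds = sum(log_odds) / len(log_odds)
geo_odds = math.exp(avg_log_odds)
geo_prob = geo_odds / (1 + geo_odds)

print(f"mean of probabilities:  {mean_prob:.3f}")  # ~0.118
print(f"geometric mean of odds: {geo_prob:.3f}")   # ~0.078
```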
Post links to Google Docs as quick takes, if writing full posts feels like too high a bar?
I haven’t thought about this a lot, but I don’t see big tech companies working with existing frontier AI players as necessarily a bad thing for race dynamics (compared to the counterfactual). It seems better than them funding or poaching talent to create a viable competitor that may not care as much about risk; I’d guess the key question is how likely they’d be to succeed in doing so (given that Amazon is not exactly at the frontier now).
Agree this seems bad. Without commenting on whether this would still be bad, here’s one possible series of events/framing that strikes me as less bad:
- Org: We’re hiring a temporary contractor and opening this up to international applicants
- Applicant: Gets the contract
- Applicant: Can I use your office as a working space during periods I’m in the States?
- Org: Sure

This then maybe just seems like the sort of thing the org and the applicant would want good legal advice on (I presume the applicant would in fact look for a B1/B2 visa that allows business activities during their trip, rather than just tourism).
For completeness, here’s what OpenAI says in its “Governance of superintelligence” post:
Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.
If there were someone well-trusted by the community (in or outside of it) whom you trusted not to doxx you, you might ask if they’d be willing to endorse a non-specific version of events as accurate. I do accept there’s an irony in suggesting this, given your bad experience with something similar previously!
This may or may not be relevant to your situation, but I’d be more willing to accept non-specific claims at face value if a trusted third party was vouching for that interpretation.
Tl;dr—my (potentially flawed or misguided) attempt at a comment that provides my impression of Catherine as a particularly trustworthy and helpful person, with appropriate caveats and sensitivity to Throwaway’s allegation.
Note: I haven’t written this sort of comment before, and I appreciate that it would be easy for this sort of comment to contribute to a chilling effect on important allegations of wrongdoing coming to light, so I would welcome feedback on this comment or any norms that would have been useful for me to adhere to in making it or deciding to make it.

First things first: I’m sorry that Throwaway has had a bad experience with Catherine! Notwithstanding the lack of further detail, I recognize that, given this comment, there’s a reasonable chance that there was miscommunication or that Catherine made mistakes around confidentiality, including some chance this was in poor faith, part of a meaningful pattern, or involved misjudgement that could call her position into question. Throwaway has my empathy and I wish them the best with their forthcoming post, which seems courageous and selfless to write given their experience. I appreciate that what I say below would be very frustrating and disheartening to read for someone in the position they mention in the comment. I’ll certainly do my best to read anything further from them with my best attempt at good faith and unbiasedness, and would need to apologize to them and downgrade my confidence in my ability to judge people’s trustworthiness if the picture I paint below turns out to have been unhelpful in hindsight.
With this said, it makes me feel uncomfortable to see a fully anonymous/uncorroborated/non-specific allegation of wrongdoing prominently in the comments to this sort of post. I’m not sure I like the incentive structures where anyone can costlessly cast a significant shadow on someone’s reputation, given the costs involved in dissuading people from talking to someone whose role is to provide community health support. I definitely agree with Lorenzo’s impression that it would be great to have an appointed independent/external person or body that someone could take these sorts of allegations to with confidence.
I feel compelled to provide my own impression of Catherine (for context, we first met 5+ years ago through the Effective Altruism community in New Zealand; my early experiences involved sharing some thoughts with Catherine based on my having been involved with EA a couple of years prior to her, and I helped her put on a Giving Game; we have subsequently continued to have conversations when we have the chance to see each other, out of mutual good feeling and an interest in EA community health).
My impression of Catherine is that she is a particularly kind, trustworthy and virtuous person, with an interest in doing right by people and not breaking commonly accepted ethical norms. Concretely, I’d give at least 5-to-1 in odds that a year from now, I’ll continue to both recommend her as someone particularly helpful and trustworthy to talk to, and endorse her meaningfully remaining in her current role (of course, in the hypothetical where I’m telling someone this, I would out of transparency note that someone has called into question her upholding of confidentiality in one instance).
Any updates around the likelihood/timing of a discussion course? :)
[Update 26 Jul ’22: the website should be operational again. Sorry again to those inconvenienced!]
Hello,
I’ve recently taken over monitoring the donation swaps. There have historically been a handful of offers listed each month, but it looks like the system broke at some point over the past few weeks; thanks to Oscar below for emailing to bring this to our attention. I’m sorry for the inconvenience to anyone who has been trying to use the service, and I will hopefully provide a further update in the not-too-distant future!
Thanks for organising :)
When do you expect decisions on applications to be made?
Thanks for writing this—it seems worthwhile to be strategic about potential “value drift”, and this list is definitely useful in that regard.
I have the tentative hypothesis that a framing with slightly more self-loyalty would be preferable.
In the vein of Denise_Melchin’s comment on Joey’s post, I believe most people who appear to have value “drifted” will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think. In the vein of The Replacing Guilt series, I don’t think that attempting to override these other values is generally sustainable for long-term motivation.
This hypothesis would point away from pledges or ‘locking in’ (at least for the sake of avoiding value drift) and, I think, towards a slightly different framing of some suggestions: for example, rather than spending time with value-aligned people to “reduce the risk of value drift”, we might instead recognize that spending time with value-aligned people is an opportunity to both meet our social needs and cultivate our impactfulness.
In the same vein as this comment and its replies: I’m disposed to framing the three as expansions of the “moral circle”. See, for example: https://www.effectivealtruism.org/articles/three-heuristics-for-finding-cause-x/
I’m weakly confident that EA thought leaders who would seriously consider the implications of ideas like quantum immortality generally take a less mystical, more reductionist view of quantum mechanics, consciousness and personal identity, along the lines of the following:
It seems that the numbers in the top priority paragraph don’t match up with the chart
I’ll throw in Bostrom’s ‘Crucial Considerations and Wise Philanthropy’, on “considerations that radically change the expected value of pursuing some high-level subgoal”.
A thought: EA Funds could be well-suited for inclusion in wills, given that they’re somewhat robust to changes in the charity effectiveness landscape.
Nice one!
A nitpick (h/t @Agustín Covarrubias): the English translation of the US-China cooperation question (‘How much do you agree with this statement: “AI will be developed safely without cooperation between China and the US”?’) reads as ambiguous.
ChatGPT and Gemini suggest the original can be translated as ‘Do you agree that the safe development of artificial intelligence does not require cooperation between China and the United States?’, which would strike me as less ambiguous.