Would be interested in seeing the cause of the general problem here, and some possible solutions.
Peter Berggren
[Question] What’s causing mentorship bottlenecks?
I know some people in this category, mostly because they are extremely uncertain about what the best work on AI risk is.
I amend my previous comment to replace the phrase “seriously considered” with “considered.” Also, while some states do have laws against human reproductive cloning, many states have no such laws:
https://www.thenewatlantis.com/publications/appendix-state-laws-on-human-cloning
I think that it’s good that this proposal was seriously considered. I don’t think it currently beats other megaprojects on impact/solvability/neglectedness, especially since quite a bit of genetic engineering research is already legal in the US (I am once again reminding everyone that human reproductive cloning is legal in many US states, and it seems unlikely that blue states will enact new laws against reproductive autonomy in a post-Roe era). Still, I think it’s good that this proposal was seriously considered, and there should be, on the margin, more proposals like it (in terms of large scale, outside-the-box thinking, potential “weirdness,” etc.).
Last I checked, Tetlock’s result on the efficacy of superforecasters vs. domain experts wasn’t apples-to-apples: it was comparing individual domain expert forecasts vs. superforecaster forecasts that had been aggregated.
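To make the confound concrete, here is a toy simulation (hypothetical numbers and noise levels of my own choosing, not Tetlock’s actual data): ten forecasters of identical skill, once their probabilities are averaged, outscore any one of them individually, so “aggregated superforecasters vs. individual experts” mixes forecaster skill with the benefit of aggregation.

```python
# Toy illustration (hypothetical setup, not Tetlock's data): averaging several
# equally noisy forecasts reduces error, so an aggregate can beat an individual
# forecaster even when no one in the aggregate is more skilled.
import random

random.seed(0)

def brier(prob, outcome):
    """Squared error of a probability forecast against a 0/1 outcome."""
    return (prob - outcome) ** 2

n_questions = 1000
true_probs = [random.random() for _ in range(n_questions)]
outcomes = [1 if random.random() < p else 0 for p in true_probs]

def noisy_forecast(p, noise=0.2):
    """A forecaster who sees the true probability plus symmetric noise."""
    return min(1.0, max(0.0, p + random.uniform(-noise, noise)))

individual_scores = []
aggregate_scores = []
for p, y in zip(true_probs, outcomes):
    # One individual forecaster vs. the average of ten equally skilled ones.
    individual_scores.append(brier(noisy_forecast(p), y))
    panel = [noisy_forecast(p) for _ in range(10)]
    aggregate_scores.append(brier(sum(panel) / len(panel), y))

print("Mean Brier, individual:", sum(individual_scores) / n_questions)
print("Mean Brier, aggregate: ", sum(aggregate_scores) / n_questions)
# The aggregate comes out lower (better), purely from averaging out noise.
```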
Do they really control the narrative in the “mainstream media,” though, or just a few far-left content mills that tend to get clicks by being really outrageous?
As I understand it, there are regulations on what sorts of grants foreign organizations are allowed to make to people within those countries. I'm not an expert; I'm just half-remembering something from a similar form.
It seems to me that, while the form/meaning distinction in this paper is certainly fascinating if your interests tend toward philosophy of language, it has very little to say about supposed inherent limitations of language models and does not affect forecasts of existential risk.
As an aside, the idea that we should prioritize optics over intellectually honest exploration of the epistemic landscape is deeply harmful to effective altruism as a whole.
I never denied that they have published their arguments in many places. I just can’t find any such arguments that are object-level.
I didn’t mean to imply that Emile Torres didn’t think that this was an extinction risk. I’m sorry that I misspoke on that part.
BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism?
I think I read somewhere that GiveWell tends not to report these figures because the QALY assessment system is so subjective; instead, for charities whose focus is something other than averting deaths, they report specific results such as “cost per case of blindness averted” or “cost per additional year of school.”
The Bostrom email situation and the Tegmark grant proposal situation both seem very minor to me, at least compared to many other things that have happened to EA in the past and provoked the same amount of panic or less.
I really appreciate this level of openness about possible changes, even though I disagree with almost every suggestion made here. I think that EA is chronically lacking in coordination and centralized leadership, and that its primary failures of late (obsessive self-flagellation, complete panic over minor incidents) could be resolved by a more coordinated strategy. As such, I feel that the “market” structure will collapse in on itself fairly quickly if we do not fix our organizational culture to stop panic spirals.
However, I do have a suggestion for resolving the monopsony issue. CEA and other movement-building organizations should focus large amounts of active fundraising effort on other billionaires (similarly to what many other charities do behind the scenes), and the community should become more supportive of earning-to-give (as many supposed “talent constraints” can in fact be resolved with enough hiring).
I am. Which organization is lobbying on that? I’d be happy to join.
Bayesed.
This was an announcement by the False Emperor.
Last I checked, the whole point of the Overton window is that you can only shift it by advocating for ideas outside of it.