Upvoted.
Questions:
What’s the definition of expertise in x-risk? Unless someone has an academic background in a field where expertise is well-defined by credentials, there doesn’t appear to be any qualified definition for expertise in x-risk reduction.
What are considered the signs of a value-misaligned actor?
What are the qualities indicating “exceptionally good judgement and decision-making skills” in terms of x-risk reduction orgs?
Where can we find these numerous public lists of project ideas produced by x-risk experts?
Comments:
While ‘x-risk’ as a term is apparently unprecedented in large parts of academia, and may have always been obscure, I don’t believe the concept is unprecedented in academia or in intellectual circles as a whole. The prevention of nuclear war, and once-looming environmental catastrophes like the ozone hole, posed arguably existential risks that were studied academically. The development of game theory was largely motivated by the need for better analysis of war scenarios between the U.S. and the Soviet Union during the Cold War.
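As a toy illustration of the kind of analysis that motivated Cold War game theory, here is a minimal sketch (the payoff numbers are illustrative assumptions, not from any real model) of the classic "Chicken" brinkmanship game, checking which action profiles are pure-strategy Nash equilibria:

```python
# Illustrative 2x2 "Chicken" game, a standard toy model of brinkmanship:
# each side chooses to Escalate or Yield. Payoffs are (row, column);
# mutual escalation is catastrophic for both. All numbers are assumed
# for illustration only.

payoffs = {
    ("Yield", "Yield"):       (0, 0),
    ("Yield", "Escalate"):    (-1, 1),
    ("Escalate", "Yield"):    (1, -1),
    ("Escalate", "Escalate"): (-10, -10),  # mutual catastrophe
}
actions = ["Yield", "Escalate"]

def pure_nash_equilibria(payoffs, actions):
    """Return action profiles where neither player gains by unilaterally deviating."""
    equilibria = []
    for r in actions:
        for c in actions:
            u_r, u_c = payoffs[(r, c)]
            # Check that each player is already playing a best response.
            row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in actions)
            col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in actions)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs, actions))
# → [('Yield', 'Escalate'), ('Escalate', 'Yield')]
```

With these payoffs the only pure equilibria are the asymmetric ones, where one side escalates and the other yields; mutual escalation is never an equilibrium because either side would rather back down than face catastrophe.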
An example of a major funder of small projects in x-risk reduction is the Long-Term Future EA Fund. For a year its management was characterized by Nick Beckstead, a central node in the trust network of x-risk funding, providing little justification for grants made mostly to x-risk projects the average x-risk donor could easily have identified themselves. The way the ‘funding gap’ issue is framed seems to imply that patches to the existing trust network may be sufficient to solve the problem, when the existing trust network appears fundamentally inadequate.
1. I don’t have a definition of x-risk expertise. I think the quality of x-risk expertise is currently ascribed to people with i) a track record of important contributions to x-risk reduction and ii) subjective approval from other recognized experts.
I think a more objective way to evaluate x-risk expertise would be extremely valuable.
2. Possible signs of a value-misaligned actor:
If they don’t value impact maximisation, they may focus on ineffective solutions, perhaps chosen to suit their own interests.
If they don’t value high epistemic standards, they may hold beliefs they cannot rationally justify, and may make more avoidable bad or risky decisions.
If they don’t value the far future, they may make decisions that are high-risk for the far future.
3. See http://effective-altruism.com/ea/1tu/bottlenecks_and_solutions_for_the_xrisk_ecosystem/foo
I also think good judgement and decision-making result from a combination of qualities of the individual and qualities of their social network. Plausibly, people could make much better decisions if they had frequent truth-seeking dialogue with relevant domain experts who hold divergent views.