I have about 60 EA-related ideas right now. This list includes some of the most promising ones, broken down by category. I am interested in feedback on which ideas people like the best.
Plus signs indicate how well thought-out an idea is:
+ = idea seems interesting, but I have no idea what to say about it
++ = partially formed concept, but still a bit fuzzy
+++ = fully-formed concept, just need to figure out the details/actually do it
Fundamental problems
“Pascal’s Bayesian Prior Mugging”: Under “longtermist-friendly” priors, if a mugger asks for $5 in exchange for an unspecified reward, you should give the $5 ++
If causes differ astronomically in EV, then personal fit in career choice is unimportant ++
EAs should focus on fundamental problems that are only relevant to altruists (e.g., infinity ethics yes, explore/exploit no) +++
The case for prioritizing “philosophy of priors” ++
How quickly do forecasting estimates converge on reality? (use Metaculus API) +++
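For the Metaculus item, here is a minimal sketch of what the convergence analysis could look like, operating on forecast histories of resolved binary questions (the kind of data the Metaculus API exposes). The data below is synthetic, and Brier scoring is just one possible choice:

```python
# Sketch: how quickly do community forecasts converge on outcomes?
# Each question is ([(days_before_resolution, community_probability), ...], outcome).
# The histories below are synthetic; real ones would come from the Metaculus API.
from statistics import mean

def brier(p: float, outcome: int) -> float:
    """Squared error of a probability forecast against a 0/1 outcome."""
    return (p - outcome) ** 2

def score_at_lead_time(questions, days: int) -> float:
    """Mean Brier score across questions, using each question's most recent
    forecast made at least `days` days before resolution."""
    scores = []
    for history, outcome in questions:
        eligible = [p for (d, p) in history if d >= days]
        if eligible:
            scores.append(brier(eligible[-1], outcome))
    return mean(scores)

questions = [
    ([(90, 0.40), (30, 0.55), (7, 0.70), (1, 0.85)], 1),
    ([(90, 0.60), (30, 0.45), (7, 0.30), (1, 0.15)], 0),
]
for d in (90, 30, 7, 1):
    print(f"{d:>3} days out: mean Brier = {score_at_lead_time(questions, d):.3f}")
```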
Investing for altruists
Alternate version of How Much Leverage Should Altruists Use? that assumes EMH (toy calculation below) +++
How risk-averse should altruists be (and how does it vary by cause)? +
Can patient philanthropists take advantage of investors’ impatience? +
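As a toy calculation for the leverage-under-EMH item: with EMH-style assumptions, the classic Merton rule pins down the optimal risky-asset share from the equity premium, volatility, and risk aversion. The parameter values here are illustrative assumptions, not recommendations:

```python
# Merton rule: optimal risky-asset share f* = (mu - r) / (gamma * sigma^2)
# for an investor with constant relative risk aversion gamma.

def merton_fraction(equity_premium: float, sigma: float, gamma: float) -> float:
    """Optimal fraction of wealth in the risky asset."""
    return equity_premium / (gamma * sigma ** 2)

premium, sigma = 0.05, 0.16  # assumed: 5% equity premium, 16% annual volatility
for gamma in (1.0, 2.0, 3.0):  # risk aversion; ties into the next item, since
    # altruists' gamma may differ from individual investors'
    print(f"gamma={gamma}: optimal leverage = {merton_fraction(premium, sigma, gamma):.2f}x")
```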
Giving now vs. later
Reverse-engineering the philanthropic discount rate from observed market rates (sketch after this list) +++
Optimal behavior in extended Ramsey model that allows spending on cash transfers or x-risk reduction +++
If giving later > now, what does that imply for talent vs. funding constraints? +
Is movement-building an expenditure or an investment? +
Fermi estimate of the cost-effectiveness of improving the EA spending rate +++
Prioritization research might need to happen now, not later ++
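For the reverse-engineering item above, a minimal sketch of the core step: the Ramsey rule r = δ + ηg links the observed market rate r to pure time preference δ, risk aversion η, and consumption growth g, so δ can be backed out from market data. All inputs here are illustrative assumptions:

```python
# Ramsey rule r = delta + eta * g, solved for the pure time-preference rate.

def implied_pure_time_preference(r: float, eta: float, g: float) -> float:
    """delta = r - eta * g, from the Ramsey rule."""
    return r - eta * g

r, g = 0.05, 0.02  # assumed: 5% real return, 2% real consumption growth
for eta in (1.0, 1.5, 2.0):
    delta = implied_pure_time_preference(r, eta, g)
    print(f"eta={eta}: implied delta = {delta:.1%} per year")
```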
Long-term future
If technological growth linearly increases x-risk but logarithmically increases well-being, then we should stop growing at some point (toy model after this list) ++
Estimating P(existential catastrophe) from a list of near-catastrophes (sketch after this list) +++
Thoughts on doomsday argument +
Value of the future is dominated by worlds where we are wrong about the laws of physics ++
If x-risk reduction is permanent and people aren’t longtermist, we should give later +++
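Here is a toy model for the growth item: consumption grows until a chosen stop time, the per-period catastrophe hazard is proportional to the current growth rate, and per-period well-being is log(consumption). All parameters are made up; the point is only that an interior optimal stop time can emerge:

```python
import math

def expected_value(stop: int, horizon: int = 500, g: float = 0.03,
                   alpha: float = 0.5) -> float:
    """Expected total well-being if growth halts at period `stop`."""
    c, survival, total = 1.0, 1.0, 0.0
    for t in range(horizon):
        growth = g if t < stop else 0.0
        survival *= 1 - alpha * growth   # x-risk hazard linear in growth
        c *= 1 + growth                  # consumption compounds while growing
        total += survival * math.log(c)  # well-being logarithmic in consumption
    return total

# An interior optimum beats both "never grow" and "grow forever"
# under these (made-up) parameters.
best = max(range(0, 501, 20), key=expected_value)
print(f"Best stop time among those tried: t = {best}")
```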
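And a minimal sketch of the near-miss estimator: treat near-catastrophes as a Poisson process with a rate estimated from historical counts, assume each would have escalated with some probability q, and compute the implied catastrophe probability over a horizon. The numbers below are illustrative assumptions, not actual historical counts:

```python
import math

def p_catastrophe(k: int, T: float, q: float, t: float) -> float:
    """P(catastrophe within t years), given k near-misses observed over T years,
    each of which would have escalated with probability q."""
    lam = k / T                      # estimated near-miss rate per year
    return 1 - math.exp(-lam * q * t)  # Poisson survival with rate lam * q

# e.g. 5 near-misses in 75 years, 10% escalation chance, 100-year horizon
print(f"P(catastrophe in 100y) = {p_catastrophe(5, 75, 0.10, 100):.1%}")
```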
Other
How should we expect future EA funding to look? +
Can we use prediction markets to enfranchise future generations? (Predict what future people will want, and then the government has to follow the predictions) +
Altruistic research might have increasing marginal utility ++
“Suspicious convergence” is not that suspicious because people seek out actions that look good across multiple assumptions +++
I like that these generally seem quite clear and focused.
In terms of decision relevance and benefit, I get the impression that several funders and meta EA orgs feel a crunch from not having great prioritization research, and that if better work emerges, they may redirect funding fairly quickly. I'm less optimistic about career-change-type work, mainly because it seems like it would take several more years to pay off (there's a long lag between convincing someone and them actually producing research).
I'm skeptical that research into investing will change actual investment behavior in the next 2-10 years. I don't get the impression that OpenPhil or other big donors are paying close attention to these topics.
Therefore I’m more excited about the Giving Now/Later and Long-Term Future work.
Another way of phrasing this: I think we should apply a fairly steep discount rate to this kind of work (maybe 10% a year), and I think high-level research prioritization is a particularly useful field if done well.
A few years back, a relatively small amount of investigation into AI safety (maybe 20 person-years?) led to a huge response from OpenPhil and a bunch of EA talent.
I would be curious to hear directly from them. I think that work that influences the big donors is the highest leverage at this point, and I also get the impression that there is a lot of work that could change their minds. But I could be wrong.
I’d really like to see “If causes differ astronomically in EV, then personal fit in career choice is unimportant”.
I’d be interested in basically all of the Giving Now vs Later but especially: