That definitely matches my intuition too.
Is there a postmortem somewhere on Certificates of Impact & the challenges faced in implementing them?
I think causes that are more robust to cluelessness should be higher priority than causes that are less so.
I feel pretty uncertain about which cause in the “robust-to-cluelessness” class should be second priority.
If I had to give an ordered list, I’d say:
1. AI alignment work
2. Work to increase the number of people who are both well-intentioned & highly capable
Got it. So this would go something like:
There’s a prize!
I’m going to do X, which I think will win the prize!
Do you want to buy my rights to the prize, once I win it after doing X?
Seems like this will select for sales & persuasion ability (which could be an important quality for successfully executing projects).
So the prize money gets paid out in 2022, in the tl;dr example? (I’m a little unclear about that from my quick read.)
This means that the Impact Prize wouldn’t help teams fund their work during the 2019–22 period. Am I understanding that correctly?
Could you say a little more about how you decide what size each pot of money should be?
If someone’s already applied to the Fund for this round, do they need to take any further action? (in light of the new donation & deadline extension)
The whole thread around the comment you linked to seems relevant to this.
Oh yeah, good call. Forgot about the Pareto Fellowship.
Paradigm Academy comes to mind. Curious about how you see your proposal as being different from that.
Thanks for all that you’re doing to make REACH happen!
Is there a quick way to use the agenda to see GPI’s research prioritization? (e.g. perhaps the table of contents is ordered from high-to-low priority?)
Comments on any issue are generally welcome, but naturally you should try to focus on major issues rather than minor ones. If you post a long line of arguments about education policy, for instance, I might not get around to reading and fact-checking the whole thing, because the model only gives a very small weight to education policy right now (0.01), so it won’t make a big difference either way. But if you say something about immigration, no matter how nuanced, I will pay close attention, because it has a very high weight right now (2).
I think this begs the question.
If modeler attention is distributed in proportion to the model’s current weighting (such that discussion of high-weighted issues receives more attention than discussion of low-weighted issues), it’ll be hard to identify mistakes in the current weighting.
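To make that feedback loop concrete, here’s a minimal sketch (the issue names and numbers are hypothetical, just echoing the 2 vs. 0.01 weights quoted above):

```python
# Hypothetical current weights (illustrative values only).
weights = {"immigration": 2.0, "education": 0.01}

def allocate_attention(weights, total_reviews=100):
    """Distribute reviewer attention in proportion to each issue's current weight."""
    total = sum(weights.values())
    return {issue: round(total_reviews * w / total) for issue, w in weights.items()}

print(allocate_attention(weights))  # {'immigration': 100, 'education': 0}
# A mistakenly low weight attracts ~0 reviews, so the mistake is unlikely
# to be caught: the current weighting reinforces itself.
```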
YC 120 isn’t quite a funding source, but getting in would connect you with a bunch of possible funders. Applications close on Feb 18th.
For sure. Also check with Tyler before applying because there’s some stuff he definitely won’t fund (and he replies to his email).
Eh, but nowadays we’re “responsible” in a way that carries dark undertones.
Many US elderly aren’t embedded in multigenerational communities, but are instead warehoused in nursing homes (where they aren’t in regular contact with their families & don’t have a clear role to play in society).
Hard to say whether this is an improvement over how things were 100 years ago. I do know that I’m personally afraid of ending up in a nursing home & plan to make arrangements to reduce the probability of that happening.
Seems like a real shift. (Perhaps driven by the creation of a nursing home industry?)
Thanks! This is from the Oxford Handbook of Happiness?
This is great – thank you for taking the time to write it up with such care.
I see overlap with consequentialist cluelessness (perhaps unsurprising, as that’s been a hobbyhorse of mine lately).