I would love to see priority.wiki succeed, especially given its high-quality design & the broad sweep of content already on there.
This seems like a dangerous period where the project is likely to be abandoned (more likely than during the post-launch honeymoon, at least).
My current guess is that priority.wiki falls into a state of disuse & disrepair over the next 6 to 18 months, if no one intervenes. Does that match the views of others?
Perhaps tighter integration with the Forum and/or effectivealtruism.org would help invigorate the wiki. I’ll try to think of other things that could help too.
Heh, just came here to ask the same.
Curious if you have data on number of contributors, number of edits, number of unique visitors, etc.
Here’s a clickable version of the “notify me about next round of EA Grants” form.
The EA recommendation I tend to see for physicists is to not do physics.
I’ve been understanding this to mean that under the current institutional paradigm, more physics research on the margin probably isn’t very helpful.
Achieving a fundamental breakthrough seems obviously great (though hard to do in the current paradigm), and meaningfully reforming the current paradigm would probably be very high-value as well (though tricky to do).
Robin Hanson has more to say on this (a):
Previously, physics foundations theorists were disciplined by a strong norm of respecting the theories that best fit the data. But with less data, theorists have turned to mainly judging proposed theories via various standards of “beauty” which advocates claim to have inferred from past patterns of success with data. Except that these standards (and their inferences) are mostly informal, change over time, differ greatly between individuals and schools of thought, and tend to label as “ugly” our actual best theories so far.
See also “What does any of this have to do with physics?” (a).
I tend to think person-affecting views are the least-bad of the options
Whether or not one holds a person-affecting view seems like a big crux for prioritizing mental health (especially if the altruism cascade consideration doesn’t seem compelling).
If it’s quick, could you say a bit more about why you hold a person-affecting view?
(My guess is that many forum readers follow Nick Beckstead in thinking that a totalist view makes more sense, so they won’t find mental health compelling to the extent that it rests on a person-affecting view.)
Analogous “altruism cascade” arguments could be made for interventions like bed nets & GiveDirectly, though my intuition is that the cascade would be stronger for mental health interventions as they more directly attack unhappiness. (Related GiveWell discussion on flow-through effects here.)
I don’t see analogous cascades coming out of x-risk prevention or fundamental philosophical research.
Thanks, Michael, for continuing to push forward on this important front!
Another reason that mental health interventions seem promising: improving someone’s mental health may increase their ability to help others. This could create large indirect impacts, where people whose mental health improves are able to bring their newly unlocked energy to bear on their own altruistic efforts (which impact other people, who perhaps also go on to help others in a cascade of altruism).
This argument assumes that people tend to grow (at least somewhat) more altruistic as they become happier. This has been true in my experience & observations; I’m not sure how far it generalizes.
I suppose this could be rolled into the Google Brain section, as it looks like most Distill contributors have a Google affiliation.
(A) is a great point.
Curious why Distill wasn’t included.
Their stuff on interpretability seems like it has implications for alignment, and their work seems high-quality in general.
Thanks, I was also curious about how you sourced the newsletter :-)
Why do you think Twitter is degrading?
Thanks, I found this very helpful!
What process do you use to stay on top of the new literature as it comes out?
I have a rough model of what to do to track organizational output: sign up for newsletters & RSS feeds, check their websites occasionally, ask them if I’ve missed anything near the end of the year.
I have no idea what to do to track the work coming out of academia (i.e. the stuff in your “Other Research” section) - arxiv seems like a morass to navigate. How do you stay on top of that?
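(To make my “rough model” concrete, here’s a minimal sketch of the RSS-polling half of it. The feed URLs and the use of the feedparser library are just illustrative assumptions on my part, not a claim about how your newsletter is actually sourced.)

```python
# Minimal sketch: poll a few RSS feeds and surface items not seen before.
# The feed URLs are illustrative placeholders; swap in whatever orgs/categories you track.
import feedparser

FEEDS = [
    "https://rss.arxiv.org/rss/cs.AI",   # example arxiv category feed (assumed URL)
    "https://intelligence.org/feed/",    # example organizational blog feed (assumed URL)
]

seen_links = set()  # in practice, persist this between runs (e.g. in a small file)

def poll_feeds():
    """Print the title and link of any feed entry we haven't seen yet."""
    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            if entry.link not in seen_links:
                seen_links.add(entry.link)
                print(f"{feed.feed.get('title', url)}: {entry.title}\n  {entry.link}")

if __name__ == "__main__":
    poll_feeds()
```

(None of which solves the “arxiv is a morass” problem, of course — the hard part is filtering, not fetching.)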
Curious how you’re thinking about efforts that are intended to reduce x-risk but instead end up increasing it.
e.g. public-facing aerosol injection research:
Given this strategic landscape, the effects of calling attention to stratospheric aerosol injection as a cause are unclear. It’s possible that further public-facing work on the intervention results in international agreements governing the use of the technology. This would most likely be a reduction in existential risk along this vector.
However, it’s also possible that further public-facing work on aerosol injection makes the technology more discoverable, revealing the technology to decision-makers who were previously ignorant of its promise. Some of these decision-makers might be inclined to pursue research programs aimed at developing a stratospheric aerosol injection capability, which would most likely increase existential risk along this vector.
...followed by a runoff vote to resolve ties.
Is the runoff vote also approval voting?
A little strange that a post’s karma isn’t part of the prize evaluation process.
(I guess it is implicitly; it would be interesting to see an explicit karma component.)
I really like the prize idea as a method of content curation – I wasn’t planning to read Sanjay’s Cool Earth post (because I didn’t understand what it was about & it didn’t seem relevant to my interests at first brush), but now I will.
Thanks all for making this happen :-)
For myself, I would regard those gains to be sufficiently small that I would think it irrational for an egoist to focus much of their attention on earning more money at that point, rather than fostering strong relationships, a sense of purpose, or improving their self-talk.
I agree with this.
The main takeaway I’m pushing here is something like:
“After a certain point, making more money has severe diminishing returns re: your happiness, as does donating lots of money.
So don’t lean on making lots of money to make you happy, and don’t lean on giving away lots of money to make you happy.”
There’s a temptation to use “donate a lot of money to effective causes” to scratch the “sense of purpose” itch, which I don’t think works very well (due to the diminishing returns).
Whether this counts as “extra income continuing to affect happiness quite a bit” or “extra income not affecting happiness that much” I guess is for readers to judge.
I notice I have some difficulty thinking through the implications of a 0.5 bump in life satisfaction on a 0.0 to 10.0 scale, especially when the 0.5 increase is in aggregate across an entire lifetime.
On one view, 0.5 doesn’t seem like that much. “7.5 instead of 8.0? That’s a negligible effect. Once you’re at 7.5 life-satisfaction-wise, time to focus on other things.”
On another view, the 0.5 bump is quite a lot. If 10.0 on the scale is “most satisfying life possible”, going from 7.5 to 8.0 could be a big frickin’ deal. Also could be a big deal if the 0.5 bump cashes out to something like “one less terrible day per month, for the rest of your life”.
This consideration is probably dominated by measurement problems though. When I subjectively assess my life satisfaction, I have trouble discerning the difference between a 7 and an 8 on a 0-10 scale (though I’m benchmarking on 10 being “best out of the ways my life has tended to go”, not “most satisfying life possible”).
I’ve started using a 0-5 scale because of this granularity consideration. It’s much easier for me to tell the difference between a 3 and a 4 on a 0-5 scale than between a 7 and an 8 on a 0-10 scale.
This is all to say that a 0.5 bump on a 0.0-10.0 scale might not be subjectively detectable at all to most people. (Though a 0.5 in-aggregate effect could still cash out to large subjective gains for many people.)
I think paper books are an under-appreciated technology these days; very excited that paper versions of Rationality are coming out!
Thanks for making this happen!
I think you’re approaching an interesting point on the earnings curve. Probably making $200k and giving away $100k would be more satisfying than earning $100k and giving away $1,000 (wildly speculating here).
I’m not sure that making $100k and giving away $50k of it would be more satisfying than earning $50k and giving away $1,000 though. After reflecting on it for a minute, I think it probably would be, but nowhere near 50x. And you’d be leaving a lot of other happy-making possibilities on the table if you gave away $50k out of $100k (which seems less true for giving away $100k out of $200k).