I still don’t think that works out, given the energy cost of transmission and the distances involved.
This could either be a new resource or an extension of an existing one. I expect that improving an existing resource would be faster and require less maintenance.
My suggestion would be to improve the AI Governance section of aisafety.info.
cc: @melissasamworth / @Søren Elverlin / @plex
...but interstellar communication is incredibly unlikely to succeed: they are far away, we don’t know in which direction, and the required energy is enormous.
To possibly strengthen the argument made: redirecting already-effective money to a more effective cause or donation has a smaller counterfactual impact, because those donors are already looking at the question and could easily come to the same conclusion on their own. Moving money in a “normie” foundation, on the other hand, can have knock-on effects of getting them to think more about impact at all, and changing their trajectory.
I meant that I don’t think it’s obvious that most people in EA working on this would agree.
I do think it’s obvious that most people overall would agree, though most would either not agree, or be unsure, that a simulation matters at all. It’s also very unclear how to count person-experiences in the first place, as Johnston’s Personite paper argues (https://www.jstor.org/stable/26631215); I’ll also point to the general double-counting problem (https://link.springer.com/article/10.1007/s11098-020-01428-9) and suggest that it could apply here.
I need to write a far longer response to that paper, but I’ll briefly respond (and flag to @Christian Tarsney) that my biggest crux is that they picked weak objections to causal domain restriction, and that far stronger objections apply. Secondly, on axiological weights, the response that egalitarian views lead to rejecting different axiological weights seems to beg the question, and the next part ignores the fact that any acceptable response to causal domain restriction also addresses the issue of large background populations.
I recently discussed this on twitter with @Jessica_Taylor, and think that there’s a weird claim involved that collapses into either believing that distance changes moral importance, or that thicker wires in a computer increase its moral weight. (Similar to the cutting-dominos-in-half example in that post, or the thicker pencil, but less contrived.) Alternatively, it confuses the question by claiming that identical beings at time t_0 are morally different because they differ at time t_n, which is a completely different claim!
I think the many worlds interpretation confuses this by making it about causally separated beings which are, in my view, either only a single being, or different because they will diverge. And yes, different beings are obviously counted more than once, but that’s explicitly ignoring the question. (As a reductio: if we ask “is 1 the same as 1?” the answer is yes, they are identical platonic numbers, but if we instead ask “is 1 the same as 1 plus 1?” the answer is no, because the second is, by assumption, different!)
I don’t think that’s at all obvious, though it could be true.
That’s a fair point, and I agree that it leads to a very different universe.
At that point, however (assuming we embrace moral realism and an absolute moral value of some non-subjective definition of qualia, which seems incoherent), it also seems to lead to a functionally unsolvable coordination problem for maximization across galaxies.
a PhD applicant could ask their prospective supervisor’s current grad students what it’s like to work with the supervisor. Yet, at least when I was applying to grad school, this was not very common.
I often advise doing this, albeit slightly differently—talk to their recently graduated former PhD students, who have a better perspective on what the process led to and how valuable it was in retrospect. I think similar advice plausibly applies in corresponding cases—talk to people who used to work somewhere, instead of current employees.
if the value of welfare scales something-like-linearly
I think this is a critically underappreciated crux! Even accepting the other parts, it’s far from obvious that the intuitive approach of scaling value linearly in the near term and locally remains correct far out of distribution; simulating the same wonderful experience a billion times certainly isn’t a billion times better than simulating it once.
You can’t know with certainty, but any decision you make is based on some implicit guesses. This seems to be pretending that the uncertainty precludes doing introspection or analysis—as if making bets, as you put it, must be done blindly.
Strongly both agree and disagree: it’s incredibly valuable to have savings, it should definitely be prioritized, and despite being smart, it’s not a donation!
So if you choose to save instead of fulfilling your full pledge, I think that’s a reasonable decision, though I’d certainly endorse trying to find other places to save money instead. But given that, don’t claim it’s charitable; say you’re making a compromise. (Moral imperfection is normal and acceptable, if not inevitable. But claiming that such compromises are actually fully morally justified is, in my view, neither OK nor ever necessary.)
Yeah, now that I’m doing payroll donations I have not been recording the data. I guess it would be good to fill in the data for GWWC’s records?
The way I managed this in the past was to have a separate bank account for charity and split my income when I was paid, then make donation decisions later, often at year end, when there was a counterfactual match, etc.
Understood, and reasonable. The problem is that I’m uncomfortable with “the most good” as the goal anyway, as I explained a few years ago: https://forum.effectivealtruism.org/posts/f9NpDx65zY6Qk9ofe/doing-good-best-isn-t-the-ea-ideal
So moving from ‘doing good better’ to ‘do the most good’ seems explicitly worse on dimensions I care about, even if it performs better on approval.
I would be careful with this. It might be an improvement, but are we sure that optimizing for short-term messaging success is the right way to promote ideas that are meant to be important long-term conceptual changes in how people approach life and charity?
Lots of other factors matter, and optimizing one dimension, especially using short-term approval, implicitly minimizes other important dimensions of the message. Also, as a partial contrast to this point, see “You get about five words.”
Strongly agree based on my experiences talking to political operatives, in case additional correlated small n anecdata is helpful.
There’s also an important question about which EA causes are differentially more or less likely to be funded. If you think Pause AI is good, Anthropic’s IPO probably won’t help. If you think mechanistic interpretability is valuable, it might help to fund more training in relevant areas, but you should expect an influx of funding soon anyway. And if you think animal welfare is important, funding new high-risk startups that can take advantage of a wave of funding in a year may be an especially promising bet.