The way I understood his post was that even a few hundred thousand or a few million dollars, if invested pre-explosive growth, might become astronomical wealth post-explosive growth, whereas people without those investments may end up with nothing due to labor displacement. It's an interesting theory. Maybe we need a hedge fund for EAs to invest in AI lol, though that would create hairy conflicts of interest!
That was the point I had meant to convey, Aaron. Thanks for clarifying that.
This seems like an important critique, Tobias, and I thank you for it. It was a useful readjustment to realise that doing this wouldn't make me exceptionally wealthy, either relative to society at large or within the EA community. My sense is still that even entering this at the 92nd percentile of UK wealth would be really valuable: not world-changing, but life-changing for many. Given how hard it is to predict how the future will pan out, it's plausible that everything gets solved by technology and richer people. I see this strategy mainly as a backstop to mitigate the awfulness of the most S-risk-intensive ways this could go.