You wrote,
“Maybe I really should do the thing where I sit down for a few dozen hours and formulate my empirical and moral uncertainties into precise questions whose answers are cruxy to me working on alignment and then try to answer those questions.”
It sounds like you have a plan. If you have the time, you could follow it as an exercise. It’s not clear to me that we all have a buffet of areas to contribute to directly, but you might be an exception.
Speaking for myself, I develop small reserves of information that I consider worth sharing, but not much beyond that.
When deciding whether to pursue a common goal, I look for a quick answer: whatever rules out my personally causing the solution.
Here are some quick answers for a list of EA-ish goals:
Work on alignment? It will fail because we rationalize our behavior after the fact and remain open to influence; we shouldn’t go there with conscious machines; and expert systems are the safe (and well-studied) alternative for our selfish purposes. No.
Work on reversing climate change? Climate change is now self-amplifying; a muddle-through solution requires changes in human society (for example, voluntary birth-rate reduction and going vegan) that I cannot personally cause; and the only tech option for innovating our way out of it (nanotech manufacturing) is plausibly worse than human extinction. No.
Maximize personal altruism, minimize personal anti-altruism? Good idea, but I wonder whether modern life caps the ratio of altruistic to anti-altruistic consequences that my actions can have. I will check; until then, no.
Become vegan? Been there, done that. I would like a cheap means of grinding vegan protein powders to a much finer mesh than is sold commercially, if anyone knows of one. Otherwise, no.
Develop ethical models? Not my interest; ethical standards are orthogonal to selfishness and are already well explored. So no.
Support worthwhile causes? Yes, though only with small donations to high-impact charities unless I ever become wealthy.
Explore updating and other EA mind tricks? I did, a while ago, unintentionally. Probabilistic weighting of my own beliefs strikes me as foolish and distracting. I will try to find value in other parts of EA.
Work for a charity or nonprofit? Subject to the usual prerequisites, I look at job listings and include nonprofits/charities in that search.
My only point is that you too could write such a list and look for the first thing that removes an item from it. I could painstakingly develop some idealized but context-bound model of why to do or not do something, only to find that I am not in that context, and “poof”, all that work is for nothing.
BTW, I like your post, but you might not have reason to value my opinion: I don’t get much karma or positive feedback myself, and I am new on the forum.