I think humans are probably more likely in expectation than aliens to generate high value as EAs understand it, simply because our idea of what "high value" is, is, well, a human idea of it.
This seems like a strange viewpoint. If value is something about which one can make 'truthy' and 'falsey' claims (or something that we would converge to given enough time and intelligent thought, if you prefer), then it's akin to maths, and aliens would be a priori as likely to 'discover' it as we are. If it's arbitrary, then longtermism has no philosophical justification beyond the contingent caprices of people who like to imagine a universe filled with life.
Also if it’s arbitrary, then over billions of years even human-only descendants would be very unlikely to stick with anything resembling our current values.
Yes. See also Yudkowsky's novella Three Worlds Collide for an illustration of why finding aliens could be very bad from our perspective.