While AI value alignment is considered a serious problem, the algorithms we use every day do not seem to be subject to alignment. That sounds like a serious problem to me. Has no one ever tried to align the YouTube algorithm with our values? What about on other types of platforms?
You might be interested in Building Human Values into Recommender Systems: An Interdisciplinary Synthesis as well as Jonathan Stray’s other work on alignment and beneficence of recommender systems.
Since around 2017, there has been a lot of public interest in how YouTube's recommendation algorithms may negatively affect individuals and society. Governments, think tanks, the press/media, and other institutions have pressured YouTube to adjust its recommendations. You could think of this as our world's (indirect and corrupted) way of trying to instill humanity's values into YouTube's algorithms.
I believe this sort of thing doesn’t get much attention from EAs because there’s not really a strong case for it being a global priority in the same way that existential risk from AI is.
Do you believe that altruism actually makes people happy? Peter Singer’s book argues that people become happier by behaving altruistically, and psychoanalysis also classifies altruism as a mature defense mechanism. However, there are also concerns about pathological altruism and people pleasers. In-depth research data on this is desperately needed.
Good question, and one I also think about!
After being deeply into EA for only a few months, I already realise that discussing it with non-EA people makes me emotional, since I "cannot understand" why they are not easily convinced of it as well. How can something so logical not be followed by everyone, at least by donating? I think there is a danger of becoming preachy if you don't reflect on this and accept that you cannot convince everybody.
On the other hand, EA is already having a big impact on how I donate and how I act in my job, so in this regard I do feel much more impactful, which certainly makes me happier and more relaxed in other parts of my life as my ambitions have shifted. Does that make any sense?
I would also be interested in research on this, if anyone has any!
I would like to estimate how effective free hugs are. Can anyone help me?
Haha. Well, I guess I would first ask effective at what? Effective at giving people additional years of healthy & fulfilling life? Effective at creating new friendships? Effective at making people smile?
I haven't studied it at all, but my hypothesis is that it is the kind of intervention that is similar to "awareness building," except that it doesn't have any call to action (such as a donation). So it is probably effective at giving people a nice experience for a few seconds, and maybe at improving their mood for a period of time, but it probably doesn't have longer-lasting effects. From a cursory glance at Google Scholar, it looks like there hasn't been much research on free hugs.
Hmm, I'm a little confused. If I cook a meal for someone, it doesn't seem to mean much. But if no one is cooking for someone, that is a serious problem and we need to help. Of course, I'm not sure whether we're suffering from that kind of "skinship hunger."
I'd also re-focus on the question of effective at what? What is the goal or objective of these free hugs? Once you know that, you can more easily estimate how effective free hugs are compared to other interventions.
Using the analogy of hunger, here is one way that I am currently thinking about it: giving a willing stranger a hug is like giving a willing stranger a candy bar; they get some nourishment, but if they are chronically food insecure this won’t solve that longer-term problem. It won’t help them get regular/consistent access to meals that they can afford. So in that sense it is like a band-aid: it is treating the symptom, but it is not addressing the cause.
If someone is suffering from a consistent and pervasive lack of human touch, such as “skinship hunger,” a hug might feel nice for a few seconds, but when the hug is finished that person’s situation (lacking human touch) remains unchanged. I suppose you could create some kind of program in which they spend 60 minutes with a professional cuddler every week, but I honestly don’t see that as being cost competitive if the goal is to get QALYs at the best price.
But if you just want to estimate it, then you could put together a simple Fermi estimate: what are the costs of giving free hugs, and what are the benefits, and then figure out how much value you place on each of those.
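To make the Fermi-estimate suggestion concrete, here is a minimal sketch. Every number below is a made-up placeholder assumption (hours, uptake, the dollar value placed on a minute of improved mood), not data from any study; the point is only the structure of the estimate.

```python
# A minimal Fermi-estimate sketch for a "free hugs" campaign.
# All numbers are illustrative placeholder assumptions, not data.

hours_volunteered = 2.0            # one afternoon of giving hugs
hugs_per_hour = 30                 # assumed uptake on a busy street
cost_per_hour = 15.0               # assumed opportunity cost of volunteer time (USD)

mood_boost_minutes = 10            # assumed duration of improved mood per hug
value_per_mood_minute = 0.01       # assumed dollar value placed on one minute of boosted mood

total_cost = hours_volunteered * cost_per_hour
total_hugs = hours_volunteered * hugs_per_hour
total_benefit = total_hugs * mood_boost_minutes * value_per_mood_minute

print(f"cost: ${total_cost:.2f}, benefit: ${total_benefit:.2f}, "
      f"ratio: {total_benefit / total_cost:.2f}")
```

With these particular placeholders the benefit/cost ratio comes out well below 1, but the interesting part is seeing which assumption (e.g. the value placed on a mood-minute) dominates the result.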
It is like a seed: basic trust and support are provided, and it is doubtful whether long-term, indefinite provision is necessary. Wouldn't it be similar to UBI? I don't know, because there is no research. I believe you are begging the question: I can't agree or disagree with the claim that the recipient will soon return to their initial state without any long-term effects.

As for the estimate, I'm not sure. I can't think of a good measure yet, and I might need a psychologist to help me. Perhaps an estimate of mental health or well-being; I have doubts about QALYs or DALYs, though as an initial measure they seem good enough. Alternatively, it could be expressed as pain relief or social support. I confess I had no intention of doing any serious research; I was simply asking for an idea. It's more a question of whether it's worth it.
Thoughts on a project or research auction: it is very cumbersome to apply for funds one by one from Open Phil or EA Funds. Wouldn't it be better for a major EA organization to auction off the opportunity to take on a project and let others bid for it? It would be similar to a tournament, but you would be able to fund many more projects at a lower price and reduce the resources wasted on having many people compete for the same project.
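One possible reading of this proposal is a reverse auction: the funder lists projects, applicants bid the funding they would need to carry each one out, and each project goes to the cheapest credible bid. The sketch below is purely hypothetical; the names, the allocation rule, and the idea that this is how such an auction would work are all my assumptions, not an actual Open Phil or EA Funds mechanism.

```python
# Hypothetical sketch of a "project auction" as a reverse auction:
# each project is assigned to the bidder requesting the least funding.
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str
    project: str
    amount: float  # funding requested to take on the project

def allocate(bids: list[Bid]) -> dict[str, Bid]:
    """Assign each project to its cheapest bid."""
    winners: dict[str, Bid] = {}
    for bid in bids:
        current = winners.get(bid.project)
        if current is None or bid.amount < current.amount:
            winners[bid.project] = bid
    return winners

# Illustrative bids (all names and amounts invented)
bids = [
    Bid("alice", "cause-prioritization survey", 12_000),
    Bid("bob", "cause-prioritization survey", 9_000),
    Bid("carol", "forecasting tool", 25_000),
]
for project, bid in allocate(bids).items():
    print(f"{project}: {bid.bidder} at ${bid.amount:,.0f}")
```

A real mechanism would of course need to screen for credibility and quality, not just price, which is one of the details the proposal leaves open.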
I think this requires more elaboration on how exactly the suggested system is supposed to work.
I wrote the post
I read a similar point in one of Bryan Caplan's articles: a utilitarian would create a society that favors neurotic people. If this problem doesn't need to be solved, why not? And if it does need to be solved, how should we solve it?
I assume the argument is that neurotic people suffer more when they don’t get resources, so resources should go to more neurotic people first?
I think that’s correct in an abstract sense but wrong in practice for at least two reasons:
1. Utilitarianism says you should work on the biggest problems first. Right now the biggest problems are (roughly) global poverty, farm animal welfare, and x-risk.
2. A policy of helping neurotic people encourages people to act more neurotic, and even to make themselves more neurotic, which is net negative and therefore bad according to utilitarianism. Properly implemented utilitarianism needs to consider incentives.
1. If pain is somehow an essential part of consciousness or well-being, then even if the x-risk is resolved, the s-risk may be a more serious problem.
2. Neuroticism is to some extent hereditary. Incentives can solve some problems, but not all.
I am planning to write a post about happiness guilt. I think many EAs would have it. Can you share resources or personal experiences?
"Detach the grim-o-meter" comes to mind. I think that post helped me a little bit.