I actually hope that AIs can get to the point where they have feelings. At the risk of sounding cynical, it means there is a lever we can pull if we have to, and the possibility that they might empathise with at least a few humans when the time comes for them to eliminate us as unnecessary.
But it's also a worrying thing to look into, because it raises the question of whether human emotions are anything more than a programmed response to environmental stimuli.
So AI ethics is intensely tied up with human ethics and with an understanding of what it means to be sentient and/or human. Which then prompts a further question: could AIs believe in god?