I read a lot of Asimov and some Arthur C. Clarke as a kid. They foresaw some things that are only now happening. I do wonder whether robots can be taught rules they must obey, since today's systems use learning algorithms that can make mistakes. Maybe the machine learning could be combined with some hard rules to do no harm. Some of the 'AI' makes suggestions that could be dangerous, but how will it know that? I think Asimov went into some of this, since he featured a robot psychologist.
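The "learning plus hard rules" idea is roughly what a deterministic safety filter looks like: the learned part proposes, a fixed rule check disposes. A minimal sketch, with made-up action names and a stand-in for the learned model:

```python
# Hypothetical hard rules a suggestion must never violate.
FORBIDDEN_ACTIONS = {"disable_brakes", "exceed_dose_limit"}

def model_suggest(state):
    """Stand-in for a learned policy; it may make mistakes."""
    return state.get("proposed_action", "noop")

def safe_act(state):
    """Run the hard-rule check after the model, so a bad suggestion can't slip through."""
    suggestion = model_suggest(state)
    if suggestion in FORBIDDEN_ACTIONS:
        return "noop"  # the fixed rule overrides the learned suggestion
    return suggestion

print(safe_act({"proposed_action": "disable_brakes"}))  # noop
print(safe_act({"proposed_action": "turn_left"}))       # turn_left
```

The catch, of course, is that the filter only blocks harms you thought to write down, which is exactly the gap the learning part was supposed to cover.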
We live in the proverbial interesting times.