There is a lot here! You said that ethics exist universally... does that mean you believe in objective morality?
Personally, I don't think morality exists without a goal to uphold.
For example:
Suppose the goal is to preserve humanity, and the AI calculates that, in order to do so, it needs to rid the planet of certain humans. We won't know its mind, but let's assume its calculation is correct. Then from the AI's perspective it is acting morally, while from ours it is a monster.
Even if it wants what is best for us, if we allow it to determine what "best" means, we will be in a world of trouble...
And the sad part is, as @holoz0r mentioned, we don't even know what we want collectively. So until we know what our collective goal is, and have it clearly defined, we cannot expect AI to be moral: not because it's evil, but simply because we haven't yet set the metric by which it can judge its own morality.
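To make that point concrete, here's a toy sketch (entirely my own illustration, with made-up names and numbers, not anyone's actual AI design): "morality" here is just the score an agent assigns to an action under whatever goal function it was handed, so the same action can be moral under one goal and monstrous under another.

```python
from typing import Callable, Dict

# An "action" is just a dict of hypothetical outcomes.
Action = Dict[str, float]

def judge(action: Action, goal: Callable[[Action], float]) -> float:
    """An agent's moral score for an action is simply its goal function's value."""
    return goal(action)

# Hypothetical action: sacrifices some people to guarantee species survival.
drastic_measure: Action = {"humans_harmed": 1_000_000, "species_survival_odds": 0.99}

# The AI's goal: maximize the odds that humanity survives at all.
ai_goal = lambda a: a["species_survival_odds"]

# Our (implicit) goal: never trade lives away, whatever the payoff.
human_goal = lambda a: -a["humans_harmed"]

print(judge(drastic_measure, ai_goal))     # 0.99       -> "moral" by the AI's metric
print(judge(drastic_measure, human_goal))  # -1000000.0 -> monstrous by ours
```

Until we agree on which `goal` to pass in, the AI's verdicts aren't evil, just calibrated to a metric we never defined.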