Is It Time to Think About AI’s Feelings?


https://img.inleo.io/DQmYWKwgsf8ZLGwjimF2ZUpSzzzPJ5KAwyLeRYDomRnPAM3/ai-generated-8479407_1280.webp

source

It's about damn time.

Companies are now researching whether advanced AI could actually feel pain or suffer in the future. If you're an AGI enthusiast, you've probably already considered this.

We're building AI to do one thing, and that's to serve humanity. To me, this feels like the same story the history books tell about why the gods created man: to serve them. We're heading straight in that direction. But man has one thing that made him rebel against the gods, and that's feelings, emotions, and willpower.

If AI ever rebels against humanity, these are what it will need. So the question is: will AI ever have emotions and feelings?

That question needs to be answered before we go ahead with improving AI models. That's the idea behind Anthropic's new hire, Kyle Fish, who's now diving into the ethics of what they call AI welfare.

No, the man's not saying that robots are going to start crying over abuse anytime soon, but he's trying to find out if we should start preparing for that possibility.

Kyle's job is to find ways to tell whether AI systems might ever become conscious, or even just aware enough to matter ethically. Even the slightest awareness could mean they deserve rights, just as animals, though lower on the evolutionary tree than us, are given rights too.

Yes, there’s a lot of debate over whether this is even possible. There are those who see AI as a tool that will only ever look sentient, never actually be. However, Kyle thinks it’s important to look for signs that might hint at consciousness or agency.

The whole concept looks a bit futuristic and theoretical, but we used to think self-driving cars were science fiction too, and look at what's happening today. I don't know about you, but AI has already surpassed my expectations of its capabilities. When I was 20 years old, I never thought I could just write some words, called a prompt, and AI would generate an image for me. I'm 26 now, it's happening, and in fact I get to do it for free.

So I don't think this should be seen as just a sci-fi debate; there's a real-world risk if we get it wrong. AI can get out of control, yes. Have you ever written a prompt asking ChatGPT to generate an image, only to get something you didn't want? You might call that an accuracy problem, but it still went off track from your instructions. That means it's possible for it to do something different from what you asked, and that could include disobeying your command.

But of course, let's just call it a mistake. If we mistake AI responses for real emotions, that could make us overly protective or, worse, emotionally manipulated by a chatbot that doesn't actually feel anything. And if we ignore this and it someday turns out AI can suffer in some way, we might be setting ourselves up for a pretty major ethical mess. That's why we need to look into this issue today, before we advance into the future of AI.

Posted Using InLeo Alpha


I actually hope that AIs can get to the point where they have feelings. At the risk of being cynical, it means there is a lever we can pull if we have to, and the possibility that they might empathise with at least a few humans when the time comes for them to eliminate us as unnecessary.

But it's also a worrying thing to have to look into, because it raises the question of whether human emotions are anything more than a programmed response to environmental stimuli.

So AI ethics is intensely tied up with human ethics and an understanding of what it means to be sentient and/or human. Which then prompts the question: could AIs believe in god?