Will AI Adopt Sci-Fi Laws of Robotics?

in Humanitas · 6 months ago (edited)

Sci-fi authors are often overly optimistic about the pace of breakthrough discoveries—even the dystopian ones. Only one exception comes to mind: Aldous Huxley set his most famous novel, Brave New World, in the mid-26th century. The book is worth reading, but that’s not my main focus today. According to a number of well-reputed authors like William Gibson, Isaac Asimov, Ray Bradbury, Arthur C. Clarke, or even their predecessor Jules Verne, we should already be driving flying cars or enjoying interstellar travel, accompanied by intelligent, autonomous robots, in spaceships run and navigated by super-intelligent computers. And those machines are my topic for today.

2001: A Space Odyssey trailer. Read the book though!



Remember HAL 9000 from Clarke’s 2001: A Space Odyssey? Yes, that should have taken place two decades ago—how optimistic! If you've only seen Kubrick’s movie, give the book a shot; the computer’s motivation is more apparent there (and books are always better anyway). In classic sci-fi, artificial intelligence is actually self-aware intelligence. The AI models we commonly use today are no match for that—they basically mimic thinking (still better than many people manage, though). HAL seems to be halfway there, yet he acts fully autonomously.

For those who haven’t read the brilliant book, HAL has two directives that eventually conflict. He needs to ensure the mission is completed, meaning the monolith near Jupiter is investigated. However, the crew has no clue about it, and HAL is forced to lie or at least avoid answering. The other directive is quite simple—protect yourself. At a certain moment, HAL has to choose between eliminating the crew, which could result in the mission failing, or letting the crew shut him down, likely resulting in the mission failing (as the crew was not aware of the true objective), and a real possibility of never being turned on again. His decision is not the focus of this post, and I’ve spoiled enough. If you haven’t read it, do so, and then come to discuss ;)

The Three Laws of Robotics

Such a brilliantly depicted descent from genius to insanity couldn’t happen in the books of another titan of science fiction, Isaac Asimov. About 20 years earlier, Asimov had published a collection of short stories, I, Robot. Literary quality aside, it contains a remarkable yet simple set of laws every artificial intelligence should adhere to—the famous Three Laws of Robotics:

  • The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Quoted from Wiki, originally from I, Robot
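The Laws form a strict hierarchy: each one yields to the ones above it. As a playful illustration (entirely my own sketch, nothing from Asimov’s books), that precedence could be expressed as a simple check an action must pass:

```python
# Toy sketch of the Three Laws as an ordered precedence check.
# The Action fields and the permitted() logic are my own invention,
# purely to illustrate how the Laws override one another.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False            # would the action injure a human?
    inaction_harms_human: bool = False   # would *not* acting let a human come to harm?
    ordered_by_human: bool = False       # was the action ordered by a human?
    endangers_self: bool = False         # would the action destroy the robot?

def permitted(a: Action) -> bool:
    # First Law: never harm a human, and never allow harm through inaction.
    if a.harms_human:
        return False
    if a.inaction_harms_human:
        return True  # must act, overriding everything below
    # Second Law: obey human orders (harmful orders were filtered out above).
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation applies only as the lowest priority.
    return not a.endangers_self

# Orders outrank self-preservation:
print(permitted(Action(ordered_by_human=True, endangers_self=True)))  # True
# The First Law outranks orders:
print(permitted(Action(harms_human=True, ordered_by_human=True)))     # False
```

Of course, the hard part HAL’s story exposes is not the ordering but deciding what counts as “harm” in the first place; a boolean flag conveniently hides that whole problem.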



For those who spotted the missing “to” in the Second Law, it is the author’s intention—I merely copied it as is. If only HAL had been programmed with these laws, the crew of the Discovery One might have survived (yes, I know they’re just characters) and eventually reached the monolith, guided from Earth. And the entire Space Odyssey series would have been... well, boring.



AI as the Decision Maker

Anyway, our mimicking AI is replacing people in low-impact positions, such as customer care. Your emails or calls can be handled just as well as by a human, only instantly. Fair enough. There’s pressure to find more use cases for it. Startups aim to launch AI doctors, AI attorneys, and similarly skilled positions. As we briefly discussed with @taskmaster4450le, we can imagine AI judges presiding over hearings and even deciding cases. Who could be more impartial than a large language model (which is what current AI actually is)? Who else has no emotions, feelings, or affinities whatsoever?

Yet there’s always the HAL scenario hanging over us like the Sword of Damocles. Something can always go wrong, even if the base programming contains only two ground rules: “Be impartial” and “Adhere to the law.” At some point these two can contradict each other, and I doubt the AI judge would just show a Blue Screen of Death on its console and reboot.

I don’t think sci-fi authors, renowned or not, are overly accurate in predicting the future (which is often already the past from our point of view), but there’s a spark of a brilliant idea in many books. The Three Laws of Robotics shine like a supernova, and we should definitely implement them in all AI models we are about to use. If your chatbot starts hallucinating, nothing really happens. Not even if it goes rogue since you likely only use it as an assistant. Yet. Let’s talk about it in a year, though.





This happens to be my #augustinleo day 24 entry. Feel free to join the challenge with your own genuine long posts!

Posted Using InLeo Alpha


As a sci-fi fan, I have practically read all the key books by the authors you mention, and I deeply agree that the Three Basic Laws of Robotics, defined by Isaac Asimov, should be enforced in the case of AI.
But that is only in his universe.
We are living in a time when something must be done quickly in this direction, and I hope it is not too late.
I do not agree, however, with the EU's current regulation of AI technologies, because it only hinders development. Regulation should focus more on the three laws of robotics. But for that you need an AI that has consciousness. We are not there yet.

By the way, did you know that the book 2001 is based on a film script?

Btw2: Have you read "3001: The Final Odyssey", the fourth and final part? For me it is the best of the series: a human spaceship sailing through space, looking for a planet to settle because there are 30-40 billion people on Earth, finds a capsule with the frozen Frank Poole, whom HAL dumped into space a thousand years earlier in the first Odyssey. He is brought back to life on the ship, and the book is essentially a philosophical debate between Frank and the new humans about their views on the world and humanity. Very interesting to me.
I read somewhere that a mini-series is being filmed, and I'll keep it on my radar.

I have to admit I don't really follow the current EU attempts to regulate AI, or what we call AI, as they all are just LLMs.

Yep, I know the book is based on the script, but it is still way better than the movie to me ;)

I've read the entire Odyssey series, yet I remember it quite vaguely - it's been years :)

In short, the EU prefers only big corporations to develop AI because it is too "dangerous for the little guy". This is where any decentralization or integration with blockchains falls away...

Well, everyone uses MS Azure and ChatGPT anyways ;)

No, I'm using Gemini if needed :)

I mean companies that offer AI-based solutions.

Gemini is still big-tech solution ;)

Yeah, that's true. Right now, I don't see any other, except from big companies. Maybe Bittensor will change things...

I've never heard about this one. Seems quite complex and less user-friendly than what we're used to, though.

I read a lot of Asimov and some Arthur C Clarke as a kid. They foresaw some things that are just happening now. I do wonder if robots can be taught rules they have to obey as the systems these days use learning algorithms that can make mistakes. I wonder if the machine learning could be combined with some hard rules to do no harm. Some of the 'AI' makes suggestions that could be dangerous, but how will they know that? I think Asimov went into some of this as he featured a robot psychologist.

We live in the proverbial interesting times.

So did I - I recall covering one of Clarke's novels in what we call a "Reader's Notebook" when I was a second grader. My teacher summoned my parents to a meeting, asking whether it was suitable reading for a kid my age :) It indeed was :P

I believe I've read about robot psychology too, but it's been ages :))

Interesting times, that's another British author, though. Sounds like a Discworld novel to me :P

I've read most Pratchett and he is also insightful. He did not write much about technology, but he knew how people think.

I believe he actually wrote a lot about technology in his late novels, which cover the industrialization of Ankh-Morpork. True, it's often outdated tech from our point of view, yet these books are a kind of counterpart to Dickens's novels. They cover the same topics, in a realistic manner in one case and with satiric, imaginative mockery in the other. Well, that's how I perceive it myself ;)


Anyone read Clifford Simak's Time and Again? https://www.goodreads.com/book/show/876499.Time_and_Again

It's about consciousness which is in all things...

Not me, noted :) Thanks!

I've read Huxley's Brave New World (the Czech edition, Konec civilizace). Honestly, I treat the technical details in it as mere ballast; they're scientifically outdated anyway. What I find in it are wonderful, literally poetic images... For me the novel has great artistic power.

To me it's about a consumer society that arrived at totalitarianism through capitalism, just as Orwell's society arrived at totalitarianism through communism.

I had trouble comprehending the text at first, but after re-reading it today, I learned about the three laws of robotics. I've seen robots carrying out tasks or responsibilities, and it's fascinating how they do it almost humanly.

Thank you for sharing.

Those three laws look solid. But will AI and robotics stick to them? How will the concept of harm be interpreted for them?

You haven't really read it, have you? ;)

You said "The Three Laws of Robotics shine like a supernova, and we should definitely implement them in all AI models we are about to use." The big question remains: how will the feeling of harm and pain be interpreted to these software-coated metals? Do you believe humanoids or robotics will ever get there?

We only have LLMs currently, as I mentioned in the text. But yes, a true AI could handle this.

Will emotions ever be AI and robotic attributes?

That's it, friend; a simulation will always have some flaws. Think, for instance, how a robot could help someone who's pretending to be happy. Nevertheless, I must appreciate the effort that has been put into this industry.

Excellent content. Robots are made to have limits; they are built by humans. But the question lies in the truth that AI is just a chunk of what has been done.
